GKEntity/Component update cycle best practices - sprite-kit

The subject
My question is about how to divide the update cycle when combining Apple's frameworks so as to respect the typical patterns and good practices on the subject, since most of the documentation and example code has not been adapted to Swift yet (or at least I can't find it anywhere).
There are so many ways to manage the update cycles in GameplayKit that I'm not sure what a good way of combining everything looks like.
The elements
First and foremost: every class in the Entity/Component architecture (GKComponent and GKEntity (sub)classes) has an update() method that you can override to perform per-frame updates. That call has to come from the update cycle of the current GKScene/SKScene.
Then you have GKComponentSystem, which you can use to fire the update() methods of every component of a given type that has been added to it. I see the point; it's very handy.
But I also want to use the state machine system, and it has an update cycle of its own... Combining all of that got me confused.
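To make the moving parts concrete, here is a minimal sketch of the second mechanism (my own example, with GKAgent2D standing in for any per-frame component type): the scene owns the systems and ticks them once per frame.

import SpriteKit
import GameplayKit

class GameScene: SKScene {
    // One GKComponentSystem per component type that needs a per-frame tick.
    let agentSystem = GKComponentSystem(componentClass: GKAgent2D.self)
    private var lastUpdateTime: TimeInterval = 0

    override func update(_ currentTime: TimeInterval) {
        let deltaTime = currentTime - lastUpdateTime
        lastUpdateTime = currentTime
        // A single call updates every registered GKAgent2D,
        // regardless of which entity owns it.
        agentSystem.update(deltaTime: deltaTime)
    }
}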
My situation
I have a subclass of GKEntity with an instance of GKStateMachine created on initialization. The state machine has a few states (for the moment: Spawn, Normal, Stunned and Death).
Right now, I'm creating a big "cookie cutter" with my GKEntity subclasses and creating all the components they are going to use during initialization, but it's becoming very impractical. For instance, I have a MovementComponent, which is a subclass of GKAgent2D. I created a singleton that manages entity creation; after an instance is created, it loops through all the entity's components and adds them to the related GKComponentSystems. The singleton has an update() method of its own that passes the call on to the GKComponentSystems. Some of the components I use do not need per-frame updates, so no GKComponentSystem has been created for them and I update them manually as required.
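Roughly, the singleton looks something like this (simplified, and calling it EntityManager just for the sake of the example):

import GameplayKit

// My MovementComponent is a GKAgent2D subclass (stubbed here).
class MovementComponent: GKAgent2D {}

class EntityManager {
    static let shared = EntityManager()
    private(set) var entities = Set<GKEntity>()
    // Only component types that need per-frame updates get a system.
    let moveSystem = GKComponentSystem(componentClass: MovementComponent.self)

    func add(_ entity: GKEntity) {
        entities.insert(entity)
        // Registers the entity's MovementComponent (if it has one) with the system.
        moveSystem.addComponent(foundIn: entity)
    }

    func update(deltaTime: TimeInterval) {
        moveSystem.update(deltaTime: deltaTime)
    }
}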
If I come back to my entity: since I create everything at once and use GKComponentSystems to update the components, my components' update methods are loaded with guard and if-let statements, because I need to access the entity's state machine, check whether it's in a state where the entity can move (the Normal state), and either do the work or bail out of the function. This is not efficient in my view: the movement component doesn't need to be updated at all while the entity is spawning, stunned, or dying.
On top of that, it makes my use of GKStateMachine feel completely overkill, since my states' update methods are empty: the components get updated by the GKComponentSystems anyway.
My ideas
Option 1: Drop the GKComponentSystems completely and simply loop through all my entities (maybe sorting them into different collections at some point, if need be) and call their update() methods, dispatching the updates to the state machine, which in turn updates the components involved in that state.
Option 2: Keep the GKComponentSystems and use the state machine to juggle the components, for example by adding and removing the MovementComponent from the component systems when entering and exiting the Normal state (sketched below, after my concerns about both options).
Option 1 is straightforward but could cause problems in the long run as my structure gets more complex, because some components may need to be updated before others; having each entity update its own components would scatter the update process.
Option 2 can get confusing too in a way, but my biggest concern is the creation/removal of the components. Do I only take them out of the GKComponentSystems, or do I take them out of the entity completely? What is the most efficient way of doing it?
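For reference, option 2 would look something like this (a sketch only, reusing the MovementComponent stub from above; the component stays attached to the entity the whole time, and only its system registration toggles):

import GameplayKit

class NormalState: GKState {
    unowned let entity: GKEntity
    unowned let moveSystem: GKComponentSystem<MovementComponent>

    init(entity: GKEntity, moveSystem: GKComponentSystem<MovementComponent>) {
        self.entity = entity
        self.moveSystem = moveSystem
        super.init()
    }

    override func didEnter(from previousState: GKState?) {
        // Start receiving per-frame updates while in the Normal state.
        if let move = entity.component(ofType: MovementComponent.self) {
            moveSystem.addComponent(move)
        }
    }

    override func willExit(to nextState: GKState) {
        // Stop per-frame updates without removing the component from the entity.
        if let move = entity.component(ofType: MovementComponent.self) {
            moveSystem.removeComponent(move)
        }
    }
}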
The actual question
Which one of my options would be the best? Is there a better way of doing it?

If you are using GKComponentSystem to perform updates, then I would go with just that. I think a component should update only once per frame.
It's not clear from your question what you need to do with state machines, but there is no reason not to have them involved, either directly in your GKEntity or contained within a GKComponent like the DemoBots IntelligenceComponent.
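For instance, a component that owns a state machine can look something like this (a sketch along the lines of DemoBots' IntelligenceComponent, not its actual code):

import GameplayKit

class IntelligenceComponent: GKComponent {
    let stateMachine: GKStateMachine

    init(states: [GKState]) {
        self.stateMachine = GKStateMachine(states: states)
        super.init()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func update(deltaTime seconds: TimeInterval) {
        super.update(deltaTime: seconds)
        // A GKComponentSystem drives this component, so the state machine
        // gets its per-frame update through the normal component cycle.
        stateMachine.update(deltaTime: seconds)
    }
}

The states themselves can then add or remove other components from their systems as they enter and exit, which keeps per-state checks out of the components' update methods.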

Related

Drools 6 Fusion Notification

We are working on a very complex solution using Drools 6 (Fusion), and I would like your opinion about the best way to read the objects created as correlation results over time.
My first basic approach was to poll Working Memory at fixed intervals, looking for new objects and reporting them to an external service (REST).
AgendaEventListener does not seem to be the "best" approach, because I don't care about most of the objects being inserted into working memory. So maybe the best approach would be to inject the particular "object" into some sort of service from inside the DRL. Is this a good approach?
You have quite a lot of options. In decreasing order of my preference:
AgendaEventListener is probably the solution requiring the smallest amount of LOC. It might be useful for other tasks as well; all you have on the negative side is one additional method call and a class test per inserted fact. Peanuts.
You can wrap the insert macro in a DRL function and collect inserted facts of class X in a global List. The problem you have here is that you'll have to pass the KieContext as a second parameter to the function call.
If the creation of a class X object is inevitably linked with its insertion into WM, you could add the registration of new objects to a static List inside class X, done in a factory method (or the constructor).
I'm putting your "basic approach" last because it requires many more cycles than the listener (#1) and tons of overhead for maintaining the set of X objects that have already been put to REST.

Recreate a graph that change in time

I have an entity in my domain that represents a city's electrical network. Currently my model is an entity with a List that contains breakers, transformers, lines, etc.
The network changes every time a breaker is opened/closed, users can change connections, and so on.
In all examples of CQRS the EventStore is queried with Version and aggregateId.
Do you think I have to implement events only for the "network" aggregate or also for every "Connectable" item?
In this case, when I have to replay all events to get the "current" status (as of a given date), I can have nearly 10,000-20,000 events to process.
Should an event modify one property, or do I need an event that modifies a whole object (containing all the properties of the object)?
There's always an exception to the rule, but I think you need to have an event for every command handled in your domain. You can get around the problem of processing so many events by making use of snapshots.
http://thinkbeforecoding.com/post/2010/02/25/Event-Sourcing-and-CQRS-Snapshots
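In code, the idea is roughly this (illustrative Swift; every type name here is made up for the example): persist the aggregate's state every N events, then rebuild from the latest snapshot plus only the events recorded after it.

import Foundation

protocol NetworkEvent {}

struct NetworkState {
    // Breakers, transformers, lines, etc. would live here.
    mutating func apply(_ event: NetworkEvent) { /* fold the event into the state */ }
}

struct Snapshot {
    let version: Int
    let state: NetworkState
}

protocol EventStore {
    func latestSnapshot(for id: UUID) -> Snapshot?
    func events(for id: UUID, fromVersion: Int) -> [NetworkEvent]
}

func loadNetwork(id: UUID, store: EventStore) -> NetworkState {
    // Start from the latest snapshot if there is one, otherwise from scratch.
    let snapshot = store.latestSnapshot(for: id)
    var state = snapshot?.state ?? NetworkState()
    // Replay only the tail of the event stream, not all 10,000-20,000 events.
    for event in store.events(for: id, fromVersion: (snapshot?.version ?? 0) + 1) {
        state.apply(event)
    }
    return state
}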
I assume you mean that currently your "connectable items" are part of the "network" aggregate, and you are asking whether they should be their own aggregates? That really depends on the nature of your system and problem, and is more of a DDD issue than simply a CQRS one. However, if the nature of your changes is typically to operate on the items independently of one another, then they should probably be aggregate roots themselves. Regardless, in order to answer that question we would need to know much more about the system you are modeling.
As for the challenge of replaying thousands of events, you certainly do not have to replay all your events for each command. Snapshotting is an option, but even better is caching the aggregate root objects in memory after they are first loaded, to ensure that you do not have to source from events with each command. If the system crashes, you can rely on snapshots for quicker recovery, though with caching you may not need them, since you only pay the loading penalty once.
Now if you are distributing this system across multiple hosts or threads there are some other issues to consider but I think that discussion is best left for another question or the forums.
Finally, you asked (I think) whether an event can modify more than one property of an object's state. Yes, if that is what makes sense based on what the event represents. The idea of an event is simply that it represents a state change in the aggregate; however, these events should also represent concepts that make sense to the business.
I hope that helps.

How to keep track of objects deleted from an ObservableCollection in CRUD scenarios?

In our multi-tier business application we have ObservableCollections of Self-Tracking Entities that are returned from service calls.
The idea is we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database.
Self-Tracking Entities, as their name might suggest, track their state themselves.
When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state. It can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior, you need to code it yourself.
In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them.
Something along the lines of:
// Maps a collection's hash code to the list of entities removed from it.
protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
{
    var deletedEntities = new List<TEntity>();
    DeletedCollections[collection.GetHashCode()] = deletedEntities;

    collection.CollectionChanged += (o, a) =>
    {
        // Items removed from the collection show up in OldItems.
        if (a.OldItems != null)
        {
            deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
        }
    };
}
Now if the user decides to save his changes to the server, I can get the list of removed items, and send them along:
ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
customers.RemoveAt(0);
MyServiceProxy.UpdateCustomers(customers);
At this point the UpdateCustomers method will check my shadow collection to see whether any items were removed, and send them along to the server side.
This approach works fine until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected, there is no way of knowing that we need to remove the shadow collection from our dictionary.
I came up with a somewhat complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection, and every few seconds I check whether the reference is still alive; if not, I remove the shadow collection.
But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution.
EDIT:
In the end I decided to go with subclassing the ObservableCollection. The service proxy code is generated so it was a relatively simple task to change it to return my derived type.
Thanks for all the help!
Instead of rolling your own "weak reference + poll Is it Dead, Is it Alive" logic, you could use the HttpRuntime.Cache (available from all project types, not just web projects).
Add each shadow collection to the Cache, either with a generous timeout, or a delegate that can check if the original collection is still alive (or both).
It isn't dreadfully different to your own solution, but it does use tried and trusted .Net components.
Other than that, you're looking at extending ObservableCollection and using that new class instead (which I'd imagine is no small change), or changing/wrapping the UpdateCustomers method to remove the shadow collection from DeletedCollections.
Sorry I can't think of anything else, but hope this helps.
BW
If replacing ObservableCollection is a possibility (e.g. if you are using a common factory for all the collection instances), then you could subclass ObservableCollection and add a Finalize method which cleans up the deleted items that belong to this collection.
Another alternative is to change the way you compute which items are deleted. You could maintain the original collection and give the client a shallow copy. When the collection comes back, you can compare the two to see which items are no longer present. If the collections are sorted, the comparison can be done in time linear in the size of the collection. If they're not sorted, the modified collection's values can be put in a hash table, which is then used to look up each value in the original collection. If the entities have a natural id, then using that as the key is a safe way of determining which items are not present in the returned collection, that is, which have been deleted. This also runs in linear time.
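The comparison itself is only a few lines; for example (sketched in Swift for brevity, with an assumed integer id, but the shape is identical in C#):

struct Customer: Hashable {
    let id: Int      // natural id (assumed)
    var name: String
}

// Returns the original items whose ids are absent from the returned
// collection, i.e. the ones deleted on the client. Linear time:
// hash the returned ids once, then filter the originals.
func deletedItems(original: [Customer], returned: [Customer]) -> [Customer] {
    let returnedIds = Set(returned.map { $0.id })
    return original.filter { !returnedIds.contains($0.id) }
}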
Otherwise, your original solution doesn't sound that bad. In Java, a WeakReference can register a callback that gets called when the reference is cleared. There is no similar feature in .NET, but polling is a close approximation. I don't think this approach is so bad, and if it's working, why change it?
As an aside, aren't you concerned about GetHashCode() returning the same value for distinct collections? Using a weak reference to the collection might be more appropriate as the key, then there is no chance of a collision.
I think you're on a good path; I'd consider refactoring in this situation. My experience is that in 99% of cases the garbage collector makes memory management awesome - almost no real work needed.
But in the 1% of cases, it takes someone to realize that they've got to up the ante and go "old school" by firming up their caching/memory management in those areas. Hats off to you for realizing you're in that situation and for trying to avoid the IDisposable/WeakReference tricks. I think you'll really help the next guy who works in your code.
As for getting a solution, I think you've got a good grip on the situation:
- be clear when your objects need to be created
- be clear when your objects need to be destroyed
- be clear when your objects need to be pushed to the server
Good luck! Tell us how it goes :)

Proper Object Oriented application set up

I am trying to learn my first programming language. Unfortunately I picked the iPhone to start. Thought it would be easy... oops. Anyway, 4 weeks later I've actually got a couple of working apps! Kind of...
One of my apps has a couple of text boxes and a couple of labels.
Each person has a button that starts a timer that decrements a label's text for a countdown.
I have 2 separate timers that fire two separate methods that increment variables, play a song, update some labels, etc., relative to each person. Not War and Peace in the amount of code, but enough not to want to have to change both sets every time I figure out how to do something new.
What is a better way to set this up so that I can work with just one set of code per person? From all the reading I've done, I get the whole "person" as an object idea, that it should be its own class, and that I should probably be subclassing. I just don't know how to practically apply that with actual code.
I think the first place to start is to realize that timers are (almost always) part of the interface and not part of the data model. Unless you have a very strange set of requirements, the timers shouldn't be related to your person data objects at all.
You want to maintain strict separation between data and the interface. This is the key idea behind the badly misnamed Model-View-Controller pattern. It should be called Model-Controller-View to reflect that the controller mediates between the model and the view.
In your case, it sounds like you have a person object that is your data model. Ideally, that model will work without any direct links to any particular interface. A good data model will work with standard views, a web view or even a text based command line interface. It shouldn't matter to your model because it is only concerned with storing or manipulating data without regard to how it will be displayed or used.
Timers that update the interface belong in the controller because they have nothing to do with the data. The same data displayed in different interfaces would need different timers. You probably only need one timer that simply calls a method in the controller that updates all the interface elements as needed. In that method, the controller then fetches the data it needs to display from the appropriate objects in the data model (Instances of the person class in your case).
For example, suppose you have multiple person objects, each with its own countdown number. You would store the countdown value in the person object as a property. A single timer in the controller would fire once per second and call a method in the controller. That method would then ask each person object for its countdown value. Upon being asked for the value, the person object would automatically decrement the countdown value.
With this design, you could handle an arbitrarily large number of person objects, each with its own countdown attribute, with just a single timer and one method in the controller.
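A sketch of that design (in Swift for brevity, with names of my own; the same shape works in Objective-C):

import Foundation

// Model: holds data only, knows nothing about timers or labels.
class Person {
    let name: String
    var countdown: Int
    init(name: String, countdown: Int) {
        self.name = name
        self.countdown = countdown
    }
}

// Controller: a single timer ticks every person and refreshes the view.
class CountdownController {
    var people: [Person] = []
    private var timer: Timer?

    func start() {
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
            self?.tick()
        }
    }

    private func tick() {
        for person in people where person.countdown > 0 {
            person.countdown -= 1
            // Update the label (or any other view) for this person here.
        }
    }
}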
We could talk about object-oriented design for hours, maybe months and years. Design principles exist, but they are best learned and mastered through experience and a lot of practice.
If you need 2 timers, each one calling 1 method that plays a unique role, then you are probably stuck with the 2 timers. If there are common tasks/responsibilities in your two timers, you could create an abstract timer that implements these common tasks and extend it for more specific behavior (method implementation).
I have found role-based design helpful in many situations, but as I said, you will have to practice and of course know the basics of object orientation and inheritance.

What is the most practical Solution to Data Management using SQLite on the iPhone?

I'm developing an iPhone application and am new to Objective-C as well as SQLite. That being said, I have been struggling w/ designing a practical data management solution that is worthy of existing. Any help would be greatly appreciated.
Here's the deal:
The majority of the data my application interacts with is stored in five tables in the local SQLite database. Each table has a corresponding Class which handles initialization, hydration, dehydration, deletion, etc. for each object/row in the corresponding table. Whenever the application loads, it populates five NSMutableArrays (one for each type of object). In addition to a Primary Key, each object instance always has an ID attribute available, regardless of hydration state. In most cases it is a UUID which I can then easily reference.
Until a few days ago, I would simply access the objects via these arrays by tracking down their UUIDs. I would then proceed to hydrate/dehydrate them as I needed. However, some of the objects also maintain their own arrays which reference other objects' UUIDs. In the event that I must track down one of these "child" objects via its UUID, it becomes a bit more difficult.
In order to avoid having to enumerate through one of the previously mentioned arrays to find a "parent" object's UUID, and then proceed to find the "child's" UUID, I added a DataController w/ a singleton instance to simplify the process.
I had hoped that the DataController could provide a single access point to the local database and make things easier, but I'm not so certain that is the case. Basically, what I did is create multiple NSMutableDictionaries. Whenever the DataController is initialized, it enumerates through each of the previously mentioned NSMutableArrays maintained in the Application Delegate and creates a key/value pair in the corresponding dictionary, using the given object as the value and its UUID as the key.
The DataController then exposes procedures that allow a client to call in w/ a desired object's UUID to retrieve a reference to the actual object. Whenever there is a request for an object, the DataController automatically hydrates the object in question and then returns it. I did this because I wanted to take control of hydration out of the clients' hands, to prevent dehydrating an object that is being referenced in multiple places.
I realize that in most cases I could just make a mutable copy of the object and then, if necessary, replace the original object down the road, but I wanted to avoid that scenario if at all possible. I therefore added an additional dictionary to monitor which objects are hydrated at any given time, using the object's UUID as the key and a fluctuating count representing the number of hydrations w/out an offsetting dehydration as the value. My goal w/ this approach was to have the DataController automatically dehydrate any object once its "hydration retainment count" hits zero. But this could easily lead to significant memory leaks, as it currently relies on the caller to later call a procedure that decreases the hydration retainment count of the object. There are obviously many cases where this is just not obvious or maybe not even easily accomplished, and if only one calling object fails to do so properly, I end up with the exact opposite of the scenario I was trying to prevent in the first place. Ironic, huh?
Anyway, I'm thinking that if I proceed w/ this approach, it will just end badly. I'm tempted to go back to the original plan, but doing so makes me want to cringe, and I'm sure there is a more elegant solution floating around out there. As I said before, any advice would be greatly appreciated. Thanks in advance.
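Boiled down, the hydration-count scheme I described looks something like this (sketched in Swift for clarity; my actual code is Objective-C, and the names here are illustrative):

import Foundation

// Tracks how many callers are currently using each hydrated object.
class HydrationController<Object> {
    private var hydrated: [UUID: (object: Object, count: Int)] = [:]

    func checkOut(_ id: UUID, hydrate: () -> Object) -> Object {
        if var entry = hydrated[id] {
            entry.count += 1
            hydrated[id] = entry
            return entry.object
        }
        let object = hydrate()   // load the row from SQLite (assumed)
        hydrated[id] = (object, 1)
        return object
    }

    // The fragile part: every checkOut must be paired with a checkIn,
    // or the object is never dehydrated (the leak described above).
    func checkIn(_ id: UUID, dehydrate: (Object) -> Void) {
        guard var entry = hydrated[id] else { return }
        entry.count -= 1
        if entry.count <= 0 {
            dehydrate(entry.object)
            hydrated[id] = nil
        } else {
            hydrated[id] = entry
        }
    }
}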
I'd also be aware (as I'm sure you are) that CoreData is just around the corner, and make sure you make the right choice for the future.
Have you considered implementing this via the NSCoder interface? Not sure that it wouldn't be more trouble than it's worth, but if what you want is to extract all the data out into an in-memory object graph, and save it back later, that might be appropriate. If you're actually using SQL queries to limit the amount of in-memory data, then obviously, this wouldn't be the way to do it.
I decided to go w/ Core Data after all.