Creating SKNodes in a background task - sprite-kit

Is it ok to create a hierarchy of SKNodes and SKSpriteNodes in a background process as long as I process the addChild call for that hierarchy in the main thread?
I'm a bit worried about the texture cache etc. Maybe it's a bad idea?
I construct dynamic text blocks with hundreds of characters, and sometimes that results in a minor FPS dip. I would therefore like to build them in a background task.

Yes, it is OK to load and build objects in a background task as long as you add them to the scene only on the main thread.
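A minimal sketch of that pattern using GCD (the buildTextBlock method and the scene variable are illustrative, not from the original question):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Expensive construction of the node hierarchy happens off the main thread.
    SKNode *textBlock = [self buildTextBlock];

    dispatch_async(dispatch_get_main_queue(), ^{
        // Touch the scene's node graph only on the main thread.
        [scene addChild:textBlock];
    });
});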
Pool of objects
I construct dynamic text blocks with hundreds of characters [...]
If you are continuously creating and destroying many nodes/sprites, you should consider creating a pool of objects.
This way you can reuse the nodes you remove from the scene and avoid many allocation/deallocation operations, which are very expensive.
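A rough sketch of such a pool, assuming the pooled nodes are interchangeable label nodes (the class and method names here are made up for illustration):

#import <SpriteKit/SpriteKit.h>

// A very simple node pool: nodes removed from the scene go back into the
// pool instead of being deallocated, and get reused for the next text block.
@interface NodePool : NSObject
@property (nonatomic, strong) NSMutableArray *freeNodes;
- (SKLabelNode *)dequeueNode;
- (void)recycleNode:(SKLabelNode *)node;
@end

@implementation NodePool

- (instancetype)init {
    if ((self = [super init])) {
        _freeNodes = [NSMutableArray array];
    }
    return self;
}

- (SKLabelNode *)dequeueNode {
    SKLabelNode *node = [self.freeNodes lastObject];
    if (node) {
        [self.freeNodes removeLastObject];
    } else {
        // Allocate only when the pool is empty.
        node = [SKLabelNode labelNodeWithFontNamed:@"Helvetica"];
    }
    return node;
}

- (void)recycleNode:(SKLabelNode *)node {
    [node removeFromParent];
    [self.freeNodes addObject:node]; // keep it around for reuse instead of deallocating it
}

@end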

Related

Optimizing Disabled Game Objects in Unity

I am creating an infinite terrain generation system in Unity that instantiates "chunks" of terrain around the player and deactivates them once they are no longer visible (see images below).
After the player wanders around for a bit, many of these chunks will have been instantiated and later had their GameObjects deactivated. My problem is that all of the deactivated chunks sitting in the scene are taking up computational resources.
Now the obvious solution would be to destroy the chunks that are no longer visible so that Unity no longer loads them; however, the chunks need to exist in memory in order to be re-enabled when the player is close enough to them again. Destroying and re-creating them would work, except that each chunk that gets loaded has data associated with it that needs to stay persistent.
Is there any way to optimize disabled GameObjects or Destroy GameObjects and keep their data?
I assume that by disabled GameObjects you mean they're disabled with GameObject.SetActive(false), which unfortunately is the most practical/optimal way of doing it while keeping them in RAM. What you'll have to do is serialize the chunk so you can save it to disk and load it again later when the player gets close to it.
There is no simple way of doing this; you'll just have to figure out exactly what you need to save, write a function that saves a chunk with this data, and a different function that generates a chunk from this data.
You can use object pooling. At startup, create two or more objects of each chunk type (as many as you need) and keep them disabled. Write a PoolManager to keep track of the chunks, and add distance-based logic that tells the PoolManager which chunk needs to be enabled, when, and at what place.

Method to create a list of taskings for resource to work on when resource becomes idle

(Image illustrating the point of freezing.)
Context:
Creating a scalable model for a production line to increase Man Machine Optimization ratio. Will be scaling the model for an operator (resource) to work on multiple machines (of the same type). During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
Problem:
Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
Thoughts:
Is there a way to create a list to which taskings are added whenever the resource is currently seized? The resource would then work on the list of taskings whenever it becomes idle. Any other methods to resolve this issue are also appreciated!
If this is going to become a complex model, you may want to consider using a pure agent-based approach.
Your resource has a LinkedList of JobRequest agents that are created and sent by the machines when necessary. They are sorted by some priority.
The resource then simply does one JobRequest after the next.
No ResourcePools or Seize elements required.
This is often the more powerful and flexible approach as you are not bound to the process blocks anymore. But obviously, it needs good control and testing from you :)
Problem: Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
You need to explain your problem better: it is not possible to "seize the same operator at multiple seize blocks concurrently" (unless you are using a resource choice condition or similar to try to 'force' seizing of a particular resource --- even then, this is more accurately framed as 'I've set up resource choice conditions which mean I end up having no valid resources available').
What does your model "freezing" represent? For example, it could just be a natural consequence of having no resources available, especially if you have long delay times or are using Delay blocks with "Until stopDelay() is called" set --- i.e., you are relying on events elsewhere in your model to free agents (and seized resources) from blocks, which an incorrect model design might mean never happen in some circumstances. (If your model is "freezing" because of no resources being available, it should 'unfreeze' when one does.)
During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
You can just do this bit by breaking down the actions at a machine into a number of Seize/Delay/Release actions with different characteristics (or a process flow that loops around a set of these driven by some data if you want it to be more flexible / data-driven).

Unity Memory Management: Sprites In Loaded Scriptable Objects --- How Does Memory Use Scale?

I have been using scriptable objects as the model (data) layer for my game. I store things like unit stats there. I also started putting the sprites for the unit in this as well.
My question is: If I load a scriptable object which has a reference to a sprite, is the sprite automatically loaded as well? If I load 1000 scriptable objects with sprite references but I am not using those sprites (e.g. no GameObject is using that sprite to render), is there still a memory penalty for these sprite references? Or does the memory use only occur once a GameObject starts using the sprite to actually render? If so, if I have multiple scriptable objects with references to the same sprite, does this increase the memory as well?
I tried doing some memory inspection using the inspector, but I was getting sporadic results and the memory footprint was changing too much between runs, so I couldn't figure out what was going on (without changing anything I would get 2.2 GB in use, then 3.1 GB, then 2.6 GB).
Just as Dave said in his comment, Sprite is a reference type, so Unity loads the sprite into memory once, and all other Objects that reference that specific Sprite point to its memory address (just as a C pointer would). So in your case, even if you create 1000 Objects from your Scriptable Object, all of them would point to the same Sprite.
Generally, if you're using Resources.Load() on an Object, then every loaded Object will use the same variable reference that the parent Scriptable Object had. But, if you're creating a new Object each time you want to create one, say with a class constructor, then each Object (and subsequently, their variables) will have their own space in the memory. In your case, that would happen if you were to use Sprite mySprite = new Sprite() every time you created a Scriptable Object.
So, there is no memory penalty for adding multiple instances of the same Sprite in your Scriptable Objects (CPU/GPU usage is another issue, not related to your question but worth mentioning nonetheless). Your memory inspection might have fluctuating values if you are performing it on your game, together with all the other operations that are being performed, so I suggest you create a new Project and then try to measure the values from the Inspector. If you do that, please share your findings here, since this is an interesting topic.
Also, check out this post from the GameDev stack exchange for more info on your question.

iOS5 NSManagedObjectContext Concurrency types and how are they used?

Literature seems a bit sparse at the moment about the new NSManagedObjectContext concurrency types. Aside from the WWDC 2011 vids and some other info I picked up along the way, I'm still having a hard time grasping how each concurrency type is used. Below is how I'm interpreting each type. Please correct me if I'm understanding anything incorrectly.
NSConfinementConcurrencyType
This type has been the norm over the last few years. Each MOC is confined to its own thread. So if thread A's MOC wants to merge data from thread B's MOC via a save message, thread A would need to subscribe to thread B's MOC save notification.
NSPrivateQueueConcurrencyType
Each MOC tree (parent & children MOCs) shares the same queue no matter what thread each is on. So whenever a save message from any of these contexts is sent, it is put in a private queue made specifically for this MOC tree.
NSMainQueueConcurrencyType
Still confused by this one. From what I gather it's like NSPrivateQueueConcurrencyType, only the private queue is run on the main thread. I read that this is beneficial for UI communication with the MOC, but why? Why would I choose this over NSPrivateQueueConcurrencyType? I'm assuming that since NSMainQueueConcurrencyType executes on the main thread, doesn't that rule out background processing? Isn't this the same as not using threads?
The queue concurrency types help you to manage multithreaded Core Data:
For both types, the actions only happen on the correct queue when you do them using one of the performBlock methods, e.g.
[context performBlock:^{
    dataObject.title = @"Title";
    NSError *error = nil;
    [context save:&error];   // do real error handling here instead of ignoring the error
}];
The private queue concurrency type does all its work on a background thread. Great for processing or disk I/O.
The main queue type just does all its actions on the main (UI) thread. That's necessary when you need to do things like bind an NSFetchedResultsController to it, or any other UI-related tasks that need to be interwoven with processing that context's objects.
The real fun comes when you combine them. Imagine having a parent context of the private queue type that does all I/O on a background thread, and then doing all your UI work against a child context of the main queue type. That's essentially what UIManagedDocument does. It lets you keep your UI queue free from the busywork that has to be done to manage data.
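A minimal sketch of that parent/child setup (the coordinator variable is assumed to be an existing NSPersistentStoreCoordinator; names are illustrative):

// The parent context owns the store and works on a private background queue;
// the child context runs on the main queue and is the one the UI talks to.
NSManagedObjectContext *parentContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
parentContext.persistentStoreCoordinator = coordinator;

NSManagedObjectContext *mainContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
mainContext.parentContext = parentContext;

// Saving the child pushes its changes up into the parent; saving the parent
// then writes to the store on its own background queue, off the main thread.
[mainContext performBlock:^{
    // ... edit objects driven by the UI ...
    [mainContext save:NULL];          // real code should check the result and the NSError
    [parentContext performBlock:^{
        [parentContext save:NULL];
    }];
}];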
I think the answers are in these release notes:
Core Data Release Notes for Mac OS X Lion
http://developer.apple.com/library/mac/#releasenotes/DataManagement/RN-CoreData/_index.html
For NSPrivateQueueConcurrencyType, I think you are not right.
A child context created with this concurrency type will have its own queue.
The parent/child context is not entirely related to threading.
The parent/child seems to simplify communication between contexts.
I understand that you just have to save changes in the child contexts to bring them back in the parent context (I have not tested it yet).
Usually the parent/child context pattern is related to the main queue/background queue pattern, but it is not mandatory.
[EDIT] It seems that access to the store (save and load) is done via the main context (in the main queue). So it is not a good solution for performing background fetches, as the query behind executeFetchRequest will always be performed on the main queue.
For NSMainQueueConcurrencyType, it is the same as NSPrivateQueueConcurrencyType, but as it is tied to the main queue, I understand that you can perform operations with the context without necessarily using performBlock, as long as you are already on the main queue, for example in view controller delegate code (viewDidLoad, etc.).
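For example, a rough sketch of that direct use from a view controller (self.mainContext is assumed to be a context created with NSMainQueueConcurrencyType, and the "Item" entity is made up):

- (void)viewDidLoad {
    [super viewDidLoad];

    // Already on the main queue, so the main-queue context can be used directly.
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
    NSError *error = nil;
    NSArray *items = [self.mainContext executeFetchRequest:request error:&error];
    NSLog(@"Fetched %lu items", (unsigned long)[items count]);
}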
midas06 wrote:
Imagine having a parent context of the private queue type that does all I/O on a background thread, and then doing all your UI work against a child context of the main queue type.
I understood it to be the other way around: you put the parent context on the main thread using NSMainQueueConcurrencyType and the child context on the background thread using NSPrivateQueueConcurrencyType. Am I wrong?

Is this a memory management problem when using multiple threads?

Example: In my main thread (the thread that's just there without doing anything special) I call a selector to be performed in a background thread. So there's a new thread, right? And now, inside this background thread I create a new object which holds image data. Next I use that object and want to keep it around for a while. How would I store a reference to that object so I can release it later? Or will the thread be alive as long as this object exists? How does a thread relate to objects which were created in it?
Maybe someone can explain this in clear words :-)
Storage location and thread creation are two separate concepts. It doesn't matter which thread created an object in terms of who will finally 'own' it or when it will be released later.
Unfortunately, there's no clear-cut answer to your question, but I'd start by thinking about whether this object is a singleton, or whether it's a cache item that can be purged, or whether it's a result object that you need to pass back to some other selector asynchronously.
If it's a singleton, put it in a static var and never release it (or consider doing so in response to a low-memory warning).
If it's a cache, then have the cache own the item when it's inserted, so you don't have to worry about it. You might need an autorelease to do this, but be careful when using autorelease and threads since each thread may have its own autorelease pool which gets drained at different times.
If it's an object that you need to pass back to another caller, you want to autorelease it as you return and let the caller pick up the ownership of the item.
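For the last case, here is a rough pre-ARC sketch of handing an object created on a background thread back to the main thread, which then takes ownership (ImageData, didLoadImageData: and path are made-up names for illustration):

- (void)loadInBackground {
    @autoreleasepool {
        // Created and autoreleased on the background thread.
        ImageData *data = [[[ImageData alloc] initWithContentsOfFile:path] autorelease];
        // performSelectorOnMainThread: retains its argument until the selector
        // has been performed, so the object survives this pool being drained.
        [self performSelectorOnMainThread:@selector(didLoadImageData:)
                               withObject:data
                            waitUntilDone:NO];
    }
}

- (void)didLoadImageData:(ImageData *)data {
    // A retaining property takes ownership; release it later (e.g. in dealloc).
    self.imageData = data;
}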