Protect Core Data calls with NSOperationQueue addOperationWithBlock - iphone

I have a process with several steps I need to perform, and each one needs to be completed before the next (synchronous, serial, etc.). I want to use the idea of queues and have one for my DB updates to protect Core Data. What's the best way to fire off something that updates/accesses Core Data, but ensure I can do what I need to serially once it's done? I've got the below, but how do I do something when it's done? Do I even need to bother with this to "protect" my Core Data, or, if the whole thing is serial, can I just access it directly?
[databaseQueue addOperationWithBlock:^{
    // access the MOC to update the DB
}];

If you just need to serialize your access, the new managed object context architecture (private queue concurrency) can be used for exactly that.
You define a managed object context like so:
NSManagedObjectContext *context = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
context.persistentStoreCoordinator = coordinator;
// any other initialization you need
[ContextProvider setWorkerContext:context];
This context will do its work on a private background queue managed by GCD.
You need to retain this context (say in your app delegate, or any other place that makes sense to you, as long as you have access to it when you need it).
This context can now function as a serial queue for all the operations you need (you may be able to accomplish what you need without NSOperation at all).
To issue a task to the queue you simply call:
[[ContextProvider workerContext] performBlock:^{
    // Do whatever you need to do asynchronously.
    // The context will serialize tasks in the order they were queued.
    // You will probably want to save at the end of each task.
}];
You can add wrappers to call a completion block if you like, or listen to context change/save notifications.
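For example, a minimal sketch of such a wrapper (the performTask:completion: method, and the ContextProvider class it hangs off, are illustrative names, not an existing API):
// Hypothetical helper: run a task on the worker context, save, and call a
// completion block back on the main queue once the save has finished.
+ (void)performTask:(void (^)(NSManagedObjectContext *context))task
         completion:(void (^)(NSError *error))completion
{
    NSManagedObjectContext *context = [ContextProvider workerContext];
    [context performBlock:^{
        task(context);

        NSError *error = nil;
        [context save:&error];

        if (completion) {
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(error);
            });
        }
    }];
}
Bouncing the completion back to the main queue means UI code can react as soon as the serialized task and its save are done.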
You will want NSOperation and NSOperationQueue if you need dependencies between your tasks, or if you would like a convenient way to cancel them (you will still need to check whether the task was cancelled inside the block), but in that case you will most likely want more than one context.
In any case, only one context can write to the store at any given moment (the store serializes writes); other contexts that write or read will be blocked from accessing the store for the duration of the write.

Related

Prevent a Race Condition using dispatch_sync

I am having a race condition with an NSManagedObjectContext. I was trying out various ways to prevent this using a lock on the NSManagedObjectContext. Using dispatch_sync seems to be a better approach, as suggested by Apple. But I am unable to figure out whether an object (being used inside a block executed via dispatch_sync) can be protected from being accessed by two different threads.
Here is a more clear picture of what I am trying to ask:
[[*Some Singleton class* instance].managedObjectContext executeFetchRequest:request error:&err];
// After fetching results do something in DB
Let's say the above code is passed in a block executed using dispatch_sync like this:
dispatch_sync(someConcurrentQueue, ^{
    [[*Some Singleton class* instance].managedObjectContext executeFetchRequest:request error:&err];
    // After fetching results do something in DB
});
Can any other thread access [Some Singleton class instance].managedObjectContext before this block has completely executed?
AFAIK it can be accessed. If this is true, is applying a lock on the NSManagedObjectContext the only way to prevent this race condition?
As always: It depends
dispatch_sync (and its even safer cousin dispatch_barrier_sync) submits the block to the target queue and blocks the calling thread until the block has finished executing. This makes one case potentially safe: access from the same thread. What you have to worry about in this scenario is that any reads you do may take place before the block has executed on the queue. Plan accordingly.
But this is ignoring a huge flaw in your code. Managed object contexts should never be shared across threads, or even dispatched off to a thread other than the one they were created on. You can fix your concurrent access problems by simply spawning child contexts, as sketched below.
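A minimal sketch of that child-context approach, assuming mainContext is the singleton's main-queue context and request is the fetch request from the question:
// Background work happens against a private-queue child of the main context.
NSManagedObjectContext *childContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
childContext.parentContext = mainContext;

[childContext performBlock:^{
    NSError *error = nil;
    NSArray *results = [childContext executeFetchRequest:request error:&error];

    // After fetching results, do something with them in the child context...
    (void)results;

    // Saving the child pushes the changes up into the parent (main) context.
    [childContext save:&error];
}];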

iOS5 NSManagedObjectContext Concurrency types and how are they used?

Literature seems a bit sparse at the moment about the new NSManagedObjectContext concurrency types. Aside from the WWDC 2011 vids and some other info I picked up along the way, I'm still having a hard time grasping how each concurrency type is used. Below is how I'm interpreting each type. Please correct me if I'm understanding anything incorrectly.
NSConfinementConcurrencyType
This type has been the norm over the last few years. MOCs are confined to the thread that created them. So if thread A's MOC wants to merge data from thread B's MOC via a save message, thread A would need to subscribe to thread B's MOC save notification.
NSPrivateQueueConcurrencyType
Each MOC tree (parent & child MOCs) shares the same queue no matter what thread each is on. So whenever a save message is sent from any of these contexts, it is put on a private queue made specifically for this MOC tree.
NSMainQueueConcurrencyType
Still confused by this one. From what I gather it's like NSPrivateQueueConcurrencyType, only the private queue runs on the main thread. I read that this is beneficial for UI communication with the MOC, but why? Why would I choose this over NSPrivateQueueConcurrencyType? Since NSMainQueueConcurrencyType executes on the main thread, does this not allow for background processing? Isn't this the same as not using threads at all?
The queue concurrency types help you to manage multithreaded Core Data:
For both types, the actions only happen on the correct queue when you do them using one of the performBlock methods, e.g.:
[context performBlock:^{
    dataObject.title = @"Title";
    [context save:nil]; // Do actual error handling here
}];
The private queue concurrency type does all its work on a background thread. Great for processing or disk I/O.
The main queue type does all its work on the main (UI) thread. That's necessary when you need to do things like bind an NSFetchedResultsController to it, or for any other UI-related tasks that need to be interwoven with processing that context's objects.
The real fun comes when you combine them. Imagine having a parent context that does all I/O on a background thread (a private queue context), and then you do all your UI work against a child context of the main queue type. That's essentially what UIManagedDocument does. It lets you keep your UI queue free from the busywork that has to be done to manage data.
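A sketch of that stack (coordinator here is assumed to be your NSPersistentStoreCoordinator):
// Parent: private queue, talks to the persistent store in the background.
NSManagedObjectContext *parentContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
parentContext.persistentStoreCoordinator = coordinator;

// Child: main queue, used by view controllers and NSFetchedResultsController.
NSManagedObjectContext *uiContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
uiContext.parentContext = parentContext;

// Saving the child pushes changes to the parent; saving the parent hits the store.
[uiContext performBlockAndWait:^{
    NSError *error = nil;
    [uiContext save:&error];
}];
[parentContext performBlock:^{
    NSError *error = nil;
    [parentContext save:&error];   // written to disk on the background queue
}];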
I think the answers are in these release notes:
Core Data Release Notes for Mac OS X Lion
http://developer.apple.com/library/mac/#releasenotes/DataManagement/RN-CoreData/_index.html
For NSPrivateQueueConcurrencyType, I think you are not right.
A child context created with this concurrency type will have its own queue.
The parent/child relationship is not directly related to threading.
The parent/child setup mainly seems to simplify communication between contexts.
I understand that you just have to save changes in the child context to bring them back into the parent context (I have not tested it yet).
Usually the parent/child context pattern goes together with the main queue/background queue pattern, but it is not mandatory.
[EDIT] It seems that access to the store (save and load) is done via the main context (on the main queue). So it is not a good solution for performing background fetches, as the query behind executeFetchRequest: will always be performed on the main queue.
For NSMainQueueConcurrencyType, it is the same as NSPrivateQueueConcurrencyType, but since it is tied to the main queue, I understand that you can use the context without necessarily calling performBlock:, as long as you are already on the main queue, for example in view controller delegate code (viewDidLoad, etc.).
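For instance, a sketch of that direct use in a view controller, assuming self.uiContext is a main-queue context and an "Item" entity exists in your model:
- (void)viewDidLoad {
    [super viewDidLoad];

    // Already on the main queue, so a main-queue context can be used directly.
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
    NSError *error = nil;
    NSArray *items = [self.uiContext executeFetchRequest:request error:&error];
    NSLog(@"Fetched %lu items", (unsigned long)items.count);

    // From any other queue you would wrap the same calls in performBlock:.
}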
midas06 wrote:
Imagine having a parent context that does all io on a background
thread that is a private queue context, and then you do all your ui
work against a child context of the main queue type.
I understood it to be the other way around: you put the parent context on the main thread using NSMainQueueConcurrencyType and the child context on the background thread using NSPrivateQueueConcurrencyType. Am I wrong?

Faster way than -forwardInvocation: to perform messages on a specific thread

To improve responsiveness, some synchronous methods that used FMDB to execute SQLite queries on the main thread were rewritten to be asynchronous, and run in the background via -performSelectorInBackground:withObject:. SQLite not being thread-safe, however, each of these methods would ultimately call -[FMDatabase open], decreasing overall performance.
So, I wrote a proxy for the FMDB classes, which overrode -forwardInvocation: to perform -[NSInvocation invokeWithTarget:] on one specific thread, via -performSelector:onThread:withObject:waitUntilDone:. This solved the problem of too many calls to -[FMDatabase open], but -forwardInvocation: itself is rather expensive.
Is there a good way to solve this performance issue without rewriting all of the code that calls FMDB methods?
You've found the problem: don't call -performSelectorInBackground:withObject:! There's no guarantee what it'll do, but it probably won't do the right thing.
If what you want is a single "database thread" for background ops, then there are several options:
Create a new database thread with its own run loop and use -performSelector:onThread:... instead (see the sketch after this list).
Create an NSOperationQueue with maxConcurrentOperationCount=1 and use NSOperation (NSInvocationOperation, perhaps?), or a serial dispatch queue. This isn't quite right: the operations/blocks are not guaranteed to be executed on the same thread, which might break SQLite (IIRC you can only move a DB handle between threads after freeing all statements).
Use NSOperationQueue, but save a thread-local reference to the database in [[NSThread currentThread] threadDictionary]. This is a bit messy, since you have little control over when the database disappears. It also might violate the NSOperation contract (you're supposed to return the thread to its original state when the operation finishes).
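A minimal sketch of the first option, a long-lived database thread whose run loop is kept alive so -performSelector:onThread:... always lands on the same thread (the DatabaseThread class and runQuery: selector are illustrative, not part of FMDB):
// Add a port so the run loop has a source and keeps running until cancelled.
@interface DatabaseThread : NSThread
@end

@implementation DatabaseThread
- (void)main {
    @autoreleasepool {
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        while (![self isCancelled]) {
            [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                     beforeDate:[NSDate distantFuture]];
        }
    }
}
@end

// Usage: every database call is funneled onto the same thread.
// [proxy performSelector:@selector(runQuery:) onThread:databaseThread withObject:query waitUntilDone:NO];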

Multithreaded use of Core Data (NSOperationQueue and NSManagedObjectContext)

In Apple's Core Data documentation for Concurrency with Core Data, they list the preferred method for thread safety as using a separate NSManagedObjectContext per thread, with a shared NSPersistentStoreCoordinator.
If I have a number of NSOperations running one after the other on an NSOperationQueue, will there be a large overhead creating the context with each task?
With the NSOperationQueue having a max concurrent operation count of 1, many of my operations will be using the same thread. Can I use the thread dictionary to create one NSManagedObjectContext per thread? If I do so, will I have problems cleaning up my contexts later?
What’s the correct way to use Core Data in this instance?
The correct way to use Core Data in this case is to create a separate NSManagedObjectContext per operation or to have a single context which you lock (via -[NSManagedObjectContext lock] before use and -[NSManagedObjectContext unlock] after use). The locked approach might make sense if the operations are serial and there are no other threads using the context.
Which approach to use is an empirical question that can't be answered without data. There are too many variables to have a general rule. Hard numbers from performance testing are the only way to make an informed decision.
Operations started using NSOperationQueue using a maximum concurrent operation count of 1 will not run all operations on the same thread. The operations will be executed one after the other, but a new thread will be created every time.
So creating objects in the thread dictionary will be of little use.
While this question is old, it's actually at the top of Google's search results for 'NSManagedObjectContext threading', so I'll just drop in a new answer.
The new 'preferred' method is to use initWithConcurrencyType: and tell the MOC whether it's a main-thread MOC or a background MOC. You can then use the new performBlock: and performBlockAndWait: methods on it, and the MOC will take care of serializing operations on its 'native' queue.
The issue then becomes how do you intelligently handle merging the data between the various MOCs your application may spawn, along with a thousand other details that make life 'fun' as a programmer.
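One common way to handle that merging, sketched here under the assumption of a main-queue mainContext and a private-queue backgroundContext that share the same persistent store coordinator:
// Merge saves made on the background context into the main-queue context.
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSManagedObjectContextDidSaveNotification
                object:backgroundContext
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                [mainContext mergeChangesFromContextDidSaveNotification:note];
            }];

[backgroundContext performBlock:^{
    // Import or process data here...
    NSError *error = nil;
    [backgroundContext save:&error];   // triggers the merge above
}];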

Is this a memory management problem when using multiple threads?

Example: In my main thread (the thread that's just there without doing anything special) I call a selector to be performed in a background thread. So there's a new thread, right? And now, inside this background thread I create a new object which holds image data. Next I use that object and want to keep it around for a while. How would I store a reference to that object so I can release it later? Or will the thread be alive as long as this object exists? How does a thread relate to objects which were created in it?
Maybe someone can explain this in clear words :-)
Storage location and thread creation are two separate concepts. It doesn't matter which thread created an object in terms of who will finally 'own' it or when it will be released later.
Unfortunately, there's no clear-cut answer to your question, but I'd start by thinking about whether this object is a singleton, or whether it's a cache item that can be purged, or whether it's a result object that you need to pass back to some other selector asynchronously.
If it's a singleton, put it in a static var, and never release it (or, consider doing so in response to a low memory warning)
If it's a cache, then have the cache own the item when it's inserted, so you don't have to worry about it. You might need an autorelease to do this, but be careful when using autorelease and threads since each thread may have its own autorelease pool which gets drained at different times.
If it's an object that you need to pass back to another caller, you want to autorelease it as you return and let the caller pick up the ownership of the item.
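A rough sketch of that hand-off under manual reference counting (the method names here are illustrative):
// Runs on a background thread, e.g. via -performSelectorInBackground:withObject:.
- (void)loadImageInBackground:(NSString *)path {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];

    // -performSelectorOnMainThread: retains its argument until the selector
    // has run, so this thread can safely release its own reference afterwards.
    [self performSelectorOnMainThread:@selector(imageDidLoad:)
                           withObject:image
                        waitUntilDone:NO];
    [image release];

    [pool drain];
}
On the main thread, the receiving method retains the object (or assigns it to a retaining property) if it wants to keep it around beyond the call.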