I'd like to know the proper way to dealloc an NSOperationQueue ivar when it still has some operations running, which can typically occur when the user suddenly quits the app. In some examples I've seen, waitUntilAllOperationsAreFinished was used, like this:
- (void)dealloc {
[_queue cancelAllOperations];
[_queue waitUntilAllOperationsAreFinished];
[_queue release];
...
However, many suggest avoiding this since it can hang the run loop. So what is the proper way to release _queue? And what happens if I don't wait for the operations to finish and just go ahead with the release?
In almost all cases, calling cancelAllOperations will be sufficient. The only time you need to call waitUntilAllOperationsAreFinished is if you actually need to ensure that those operations are done before you move on.
For example, you might do so if the operations are accessing some shared memory, and if you don't wait then you'll end up with two threads writing to that shared memory at the same time. However, I can't think of any reasonable design that would protect shared memory by causing a blocking delay in a dealloc method. There are far better synchronization mechanisms available.
So the short answer is this: you don't need to wait for all operations to finish unless there's some reason your application specifically needs it.
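As a minimal sketch of the cancel-only form under manual reference counting (assuming the same _queue ivar as in the question):

- (void)dealloc {
    // Cancelling is usually sufficient; in-flight operations observe the
    // cancellation and wind down on their own without blocking dealloc.
    [_queue cancelAllOperations];
    [_queue release];
    [super dealloc];
}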
According to the documentation of CLLocationManagerDelegate
The methods of your delegate object are called from the thread in which you started the corresponding location services. That thread must itself have an active run loop, like the one found in your application’s main thread.
I am not clear as to whether this means that to receive location manager updates on a background thread, we must instantiate the location manager on that background thread or simply call the startUpdatingLocation() method on that thread.
In any event, this explains an issue when a CLLocationManagerDelegate does not receive any events from a CLLocationManager which was started on a background thread:
That thread must itself have an active run loop
If I understand run loops correctly, every NSThread has a run loop available, but that run loop only runs if you give the thread some work to do. Therefore, to have a CLLocationManager deliver events correctly on a background thread, we need to keep that thread's run loop running permanently so that it can process the CLLocationManager's calls as they arrive.
A reasonable solution to making sure the run loop is running is suggested in this question but the author implies that this is a processor expensive way of doing it.
Also, according to the threading documentation,
Threading has a real cost to your program (and the system) in terms of memory use and performance
I appreciate that we are all using lots of threading anyway, by using Grand Central Dispatch, but Grand Central Dispatch probably mitigates a lot of this in its internal thread management.
So my first question is, is it worthwhile setting up a background thread with a continuously running run loop, in order to have location events dealt with on a background thread, or will this involve an unreasonable extra amount of processing when compared to leaving the manager on the main thread?
Secondly, if it is worthwhile, is there a good way to do this using Grand Central Dispatch? As I understand the documentation, Grand Central Dispatch manages its own threads and we have no way of knowing which thread a given block will be executed on. I presume we could simply run the usual run loop code to make the run loop of whichever thread our CLLocationManager is instantiated on loop continuously, but might this not then affect other tasks independently assigned to Grand Central Dispatch?
This is a somewhat opinion-based question, but I have a pretty strong opinion on it :D
No.
Just deliver the events to the main queue, and dispatch any work to a background queue if it's non-trivial. Anything else is a lot of complexity for little benefit. CLLocationManager pre-dates GCD, so this was useful information in the days when we occasionally managed run loops by hand and dispatching from one thread to another was a pain. GCD gets rid of most of that, and is absolutely the tool you should use for this. Just let GCD handle it with dispatch_async.
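As an illustration, a minimal sketch assuming the iOS 6-style locationManager:didUpdateLocations: callback; processLocation: and updateMapWithLocation: are hypothetical helpers:

- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations {
    CLLocation *latest = [locations lastObject];
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Hypothetical non-trivial work, done off the main queue.
        [self processLocation:latest];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Hop back to the main queue for any UI updates.
            [self updateMapWithLocation:latest];
        });
    });
}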
You absolutely should not set up your own NSThread for this kind of thing. They're still necessary at times for interacting with C++, but generally if GCD can handle something, you should let it, and avoid NSThread as much as possible.
I have a call that goes to a server. I want the callback here to be run asynchronously in a secondary thread that's not the UI thread. Core Data here freezes up and I'd like to try to make the app feel more responsive. What's the best way to have this callback run in a secondary thread? Code example would be great!
[[SomeServer sharedInstance] doServerCallCallback:^(NSObject *param) {
NSManagedObjectContext *moc = [MYAPPDELEGATE managedObjectContext];
/* do more stuff with param */
[MYAPPDELEGATE saveManagedObjectContext];
}];
The server call itself doesn't need to be in a secondary thread; however, the code executed in the block should be.
Putting some work on a background thread is easy: fire off your block with dispatch_async(), -[NSOperationQueue addOperationWithBlock:], or possibly even something related to the server connection you're using, like +[NSURLConnection sendAsynchronousRequest:queue:completionHandler:]. (Look up any of those in the docs for usage examples.)
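For instance, a rough sketch of the dispatch_async route, keeping the Core Data work on the main queue (see the next paragraph for why) and pushing only the heavy processing to a background queue; heavyProcessingOf: and applyResult:toContext: are hypothetical helpers:

[[SomeServer sharedInstance] doServerCallCallback:^(NSObject *param) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Hypothetical expensive, thread-safe work on the raw response.
        id result = [self heavyProcessingOf:param];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Core Data on the main queue, as in the original code.
            NSManagedObjectContext *moc = [MYAPPDELEGATE managedObjectContext];
            [self applyResult:result toContext:moc];
            [MYAPPDELEGATE saveManagedObjectContext];
        });
    });
}];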
If you're looking to do Core Data stuff on your background thread, it gets nasty unless you're on iOS 5.0 or newer. Apple has a big writeup on Concurrency and Core Data for the pre-5.0 case, but the new stuff, while a whole lot easier for simple uses like you're proposing, isn't as well documented. This question should give you a good start, though.
The block that you're passing is an object that the server will execute at some point. If you want the block to be executed on a different thread, you'll need to change SomeServer's implementation of -doServerCallCallback:.
See the Grand Central Dispatch Reference manual for complete information about using blocks. In short, the server should create a dispatch queue when it starts up. You can then use a function like dispatch_async() to execute the block.
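A rough sketch of what that could look like inside a hypothetical SomeServer implementation (the queue label and ivar name are illustrative):

@implementation SomeServer {
    dispatch_queue_t _callbackQueue;
}

- (id)init {
    if ((self = [super init])) {
        // A serial queue created once, when the server starts up.
        _callbackQueue = dispatch_queue_create("com.example.server.callbacks", NULL);
    }
    return self;
}

- (void)doServerCallCallback:(void (^)(NSObject *))callback {
    // ... perform the network request here, producing `param` ...
    NSObject *param = nil; // placeholder for the real response object
    dispatch_async(_callbackQueue, ^{
        callback(param); // runs the caller's block off the main thread
    });
}

@end

Under manual reference counting you would also dispatch_release(_callbackQueue) in dealloc.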
I have a question. My case is that I have two big SQLite databases and I want to use threads (meaning two tasks running simultaneously). Will this work well? I have written the following code:
NSAutoreleasePool *dbPool;
dbPool = [[NSAutoreleasePool alloc] init];
/* All Database work is performed here */
[dbPool release];
Please guide me. Am I doing this correctly or not? Should I drain the pool or release it?
And does this count as running things concurrently, meaning it's the same behavior as multitasking?
Thanks in advance!
Yep, you're doing it right. Each of your new threads needs its own autorelease pool.
Regarding your question about release vs. drain, the recommended message is drain.
What do you mean by SQLite database? How do you access it? If you access it via Core Data, you have to keep the following in mind:
you need one NSManagedObjectContext per thread,
do not pass NSManagedObjects to another thread, just pass the object ID,
before you pass the object ID to another thread, save the context in the thread where the object was modified or created.
There are more rules, but these are the basic ones (see the sketch below).
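A minimal sketch of that object-ID handoff, where mainContext and myObject stand in for your originating-thread context and object:

// On the thread that created or modified the object:
NSError *saveError = nil;
[mainContext save:&saveError]; // save first so the object ID is permanent
NSManagedObjectID *objectID = [myObject objectID];

// On the other thread, with that thread's own context:
NSManagedObjectContext *backgroundContext = [[NSManagedObjectContext alloc] init];
[backgroundContext setPersistentStoreCoordinator:[mainContext persistentStoreCoordinator]];
NSError *fetchError = nil;
NSManagedObject *localCopy = [backgroundContext existingObjectWithID:objectID error:&fetchError];
// ... work with localCopy here ...
[backgroundContext release];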
Multitasking means that you can run multiple applications at the same time. Multithreading (your case) means that your application uses multiple threads to accomplish its work.
A common approach for user interface or other heavy object management work is to surround your code like you're doing, but you should be using drain:
NSAutoreleasePool *dbPool = [[NSAutoreleasePool alloc] init];
// do your work
[dbPool drain];
A lot of detail on NSAutoreleasePool is available here and a previous Stack Overflow answer here. Basically the work you're doing inside the pool, if set to autorelease, will be released once the pool drains. This can increase performance when working with certain classes that produce autoreleased instances. If you want full and immediate control though, you can release each object you're working with once it's no longer needed and ditch the pool altogether.
As to your multithreading questions I'm not sure if I understand what you're asking but nonetheless using the pool is a solid approach even in a background thread. This is assuming that the objects you're working with in the thread aren't somehow also used in another (since you might have an accidental release).
Currently I'm using NSThread to cache images in another thread.
[NSThread detachNewThreadSelector:@selector(cacheImage:) toTarget:self withObject:image];
Alternatively:
[self performSelectorInBackground:@selector(cacheImage:) withObject:image];
Alternatively, I can use an NSOperationQueue:
NSInvocationOperation *invOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(cacheImage:) object:image];
NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];
[opQueue addOperation:invOperation];
Is there any reason to switch away from NSThread? GCD will be a fourth option once it's released for the iPhone, but unless there's a significant performance gain, I'd rather stick with methods that work on most platforms.
Based on #Jon-Eric's advice, I went with an NSOperationQueue/NSOperation subclass solution. It works very well. The NSOperation class is flexible enough that you can use it with invocations, blocks or custom subclasses, depending on your needs. No matter how you create your NSOperation you can just throw it into an operation queue when you are ready to run it. The operations are designed to work as either objects you put into a queue or you can run them as standalone asynchronous methods, if you want. Since you can easily run your custom operation methods synchronously, testing is trivially easy.
I've used this same technique in a handful of projects since I asked this question and I couldn't be happier with the way it keeps my code and my tests clean, organized and happily asynchronous.
A++++++++++
Would subclass again
In general you'll get better mileage with NSOperationQueue.
Three specific reasons:
You may want to initiate caching of many items at once. NSOperationQueue is smart enough to only create about as many threads as there are cores, queuing the remaining operations. With NSThread, creating 100 threads to cache 100 images is probably overkill and somewhat inefficient.
You may want to cancel the cacheImage operation. Implementing cancellation is easier with NSOperationQueue; most of the work is already done for you (see the sketch below).
NSOperationQueue is free to switch to a smarter implementation (like Grand Central Dispatch) now or in the future. NSThread is more likely to always be just an operating system thread.
Bonus:
NSOperationQueue has some other nice constructs built-in, such as a sophisticated way of honoring operation priorities and dependencies.
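For instance, a minimal sketch of a cancellable caching operation under manual reference counting (class and property names are illustrative):

@interface ImageCacheOperation : NSOperation
@property (retain) UIImage *image;
@end

@implementation ImageCacheOperation
@synthesize image;

- (void)main {
    if ([self isCancelled]) return; // bail out early if the queue cancelled us
    // ... write self.image to the on-disk or in-memory cache here ...
}

- (void)dealloc {
    [image release];
    [super dealloc];
}
@end

// Usage:
ImageCacheOperation *op = [[ImageCacheOperation alloc] init];
op.image = image;
[opQueue addOperation:op];
[op release];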
I would use NSOperationQueue. Under OS 3.2, NSOperationQueue uses threads under the hood, so the two methods should perform similarly. However, under Mac OS 10.6, NSOperationQueue uses GCD under the hood and so has the advantage of not having the overhead of separate threads. I haven't looked at the docs for OS 4, but I'd suspect it does something similar--in any case, NSOperationQueue could swap implementations if/when the performance advantages of GCD become available for the iPhone.
In my application I am executing 10 asynchronous NSURLConnections within an NSOperationQueue as NSInvocationOperations. In order to prevent each operation from returning before the connection has had a chance to finish I call CFRunLoopRun() as seen here:
- (void)connectInBackground:(NSURLRequest*)URLRequest {
TTURLConnection* connection = [[TTURLConnection alloc] initWithRequest:URLRequest delegate:self];
// Prevent the thread from exiting while the asynchronous connection completes the work. Delegate methods will
// continue the run loop when the connection is finished.
CFRunLoopRun();
[connection release];
}
Once the connection finishes, the final connection delegate selector calls CFRunLoopStop(CFRunLoopGetCurrent()) to resume execution in connectInBackground:, allowing it to return normally:
- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
TTURLConnection* ttConnection = (TTURLConnection*)connection;
...
// Resume execution where CFRunLoopRun() was called.
CFRunLoopStop(CFRunLoopGetCurrent());
}
- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
TTURLConnection* ttConnection = (TTURLConnection*)connection;
...
// Resume execution where CFRunLoopRun() was called.
CFRunLoopStop(CFRunLoopGetCurrent());
}
This works well and it is thread safe because I bundled each connection's response and data as instance variables in the TTURLConnection subclass.
NSOperationQueue claims that leaving its maximum number of concurrent operations as NSOperationQueueDefaultMaxConcurrentOperationCount allows it to adjust the number of operations dynamically; however, in this case it always decides that 1 is enough. Since that is not what I want, I have changed the maximum number to 10 and it seriously hauls now.
The problem with this is that these threads (with the help of SpringBoard and DTMobileIS) consume all of the available CPU time and cause the main thread to become latent. In other words, once the CPU is 100% utilized, the main thread is not processing UI events as fast as it needs to in order to maintain a smooth UI. Specifically, table view scrolling becomes jittery.
Process Name    % CPU
SpringBoard     45.1
MyApp           33.8
DTMobileIS      12.2
...
While the user interacts with the screen or the table is scrolling the main thread's priority becomes 1.0 (the highest possible) and its run loop mode becomes UIEventTrackingMode. Each of the operation's threads are 0.5 priority by default and the asynchronous connections run in the NSDefaultRunLoopMode. Due to my limited understanding of how threads and their run loops interact based on priorities and modes, I am stumped.
Is there a way to safely consume all available CPU time in my app's background threads while still guaranteeing that its main thread is given as much of the CPU as it needs? Perhaps by forcing the main thread to run as often as it needs to? (I thought thread priorities would have taken care of that.)
UPDATE 12/23:
I have finally started getting a handle on the CPU Sampler and found most of the reasons why the UI was becoming jittery. First of all, my software was calling a library which had mutual exclusion semaphores. These locks were blocking the main thread for short periods of time causing the scroll to skip slightly.
In addition, I found some expensive NSFileManager calls and md5 hashing functions which were taking too much time to run. Allocating big objects too frequently caused some other performance hits in the main thread.
I have begun to resolve these issues and the performance is already much better than before. I have 5 simultaneous connections and the scrolling is smooth, but I still have more work to do. I am planning to write a guide on how to use the CPU Sampler to detect and fix issues that affect the main thread's performance. Thanks for the comments so far, they were helpful!
UPDATE 1/14/2010:
After achieving acceptable performance I began to realize that the CFNetwork framework was leaking memory occasionally. Exceptions were also randomly (though rarely) being raised inside CFNetwork! I tried everything I could to avoid those problems but nothing worked. I am quite sure the issues are due to defects within NSURLConnection itself. I wrote test programs which did nothing except exercise NSURLConnection and they were still crashing and leaking.
Ultimately I replaced NSURLConnection with ASIHTTPRequest and the crashing stopped entirely. CFNetwork almost never leaks, however, there is still one very rare leak which occurs when resolving a DNS name. I am quite satisfied now. Hopefully this information saves you some time!
In practice you simply cannot have more than two or three background network threads and have the UI stay fully responsive.
Optimize for user responsiveness; it's the only thing a user really notices. Or (and I really hate to say this) add a "Turbo" button to your app that puts up a non-interactive modal dialog and increases concurrent operations to 10 while it is up.
It sounds as though NSOperationQueueDefaultMaxConcurrentOperationCount is set to 1 for a reason! I think you're just overloading your poor phone. You may be able to mess around with threading priorities -- I think the Mach core is available and part of the officially blessed API -- but to me it sounds like the wrong approach.
One of the advantages of using "system" constants is that Apple can tune the app for you. How are you going to tune this to run on an original iPhone? Is 10 high enough for next year's quad-core iPhone?
James, although I haven't experienced your problem, what I've had success with is using the synchronous connection for downloading within an NSOperation subclass.
NSData *response = [NSURLConnection sendSynchronousRequest:request returningResponse:&urlResponse error:&requestError];
I use this approach for grabbing image assets from network locations and updating the target UIImageViews. The download occurs in NSOperationQueue and the method that updates the image-view is performed on the main thread.
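A rough sketch of that pattern inside a hypothetical NSOperation subclass's -main, where request and targetImageView are assumed properties of the operation:

- (void)main {
    NSURLResponse *urlResponse = nil;
    NSError *requestError = nil;
    NSData *response = [NSURLConnection sendSynchronousRequest:self.request
                                             returningResponse:&urlResponse
                                                         error:&requestError];
    if (response == nil || [self isCancelled]) return;

    UIImage *image = [UIImage imageWithData:response];
    // UIKit work must happen on the main thread.
    [self.targetImageView performSelectorOnMainThread:@selector(setImage:)
                                           withObject:image
                                        waitUntilDone:NO];
}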