I have an NSOperationQueue to which I have added instances of my custom class (which inherits from NSOperation), and I have set setMaxConcurrentOperationCount to 1. But when I cancel all the operations like this:
[mOperationQueue cancelAllOperations];
it sometimes takes about 10-15 seconds to cancel them all.
The NSOperationQueue contains about 60-80 tasks (operations).
Initially I was cancelling the operations on the main thread, which obviously blocked it, but I now do that on a different thread. My main issue remains that it takes too long.
Any suggestions?
Your operation should check to see if it's been cancelled more often (e.g. in its implementation of -main).
Beyond that, run a sampler to see what is taking all that time.
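For example, a -main that checks for cancellation between units of work might look like this (just a sketch; itemsToProcess and processItem: are placeholders for whatever your operation actually does):
- (void)main {
    @autoreleasepool {
        for (id item in self.itemsToProcess) {
            // Bail out as soon as cancellation has been requested,
            // instead of finishing the whole batch first.
            if (self.isCancelled) {
                return;
            }
            [self processItem:item];
        }
    }
}
If an operation never checks -isCancelled, cancelAllOperations only marks it as cancelled; the operation that is currently executing still runs to completion, which could explain much of the delay you're seeing.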
I have just started using NSOperation/NSOperationQueue, so forgive me for asking this question. :P
At the start of my app, I want a set of functions to be performed in a queue, so that when one ends, another starts (I have set setMaxConcurrentOperationCount to 1 so that only one operation occurs at a time). All of this should happen in the background, as it's a kind of download/upload of information to a server.
I put the first operation in the queue; it calls another method, which may spawn some new threads to perform other actions.
My question is:
Will the operation queue wait for all the methods/threads started in the first operation to complete before starting the second operation?
There are two kinds of NSOperations, concurrent and non-concurrent.
The non-concurrent operations are implemented in their -main method, and when this method returns, the operation is considered done. If you spawn a thread inside -main and want the operation to run until the thread is finished, you should block the execution in -main until the thread is done (using a semaphore, for example).
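For example, a sketch of that (assuming ARC and a modern SDK where dispatch objects can be passed as Objective-C objects; workerEntryPoint: stands in for whatever the spawned thread actually does):
- (void)main {
    @autoreleasepool {
        dispatch_semaphore_t threadDone = dispatch_semaphore_create(0);

        [NSThread detachNewThreadSelector:@selector(workerEntryPoint:)
                                 toTarget:self
                               withObject:threadDone];

        // The operation is only considered finished when -main returns,
        // so block here until the worker thread signals completion.
        dispatch_semaphore_wait(threadDone, DISPATCH_TIME_FOREVER);
    }
}

- (void)workerEntryPoint:(dispatch_semaphore_t)threadDone {
    @autoreleasepool {
        // ... do the threaded work here ...
        dispatch_semaphore_signal(threadDone);
    }
}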
The concurrent operations have a set of state properties such as -isExecuting and -isFinished, and there's a -start method that starts the operation. This method may just spawn some background processing and return immediately; the whole operation is not considered finished until -isFinished says so.
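A bare-bones sketch of that concurrent style (the work kicked off in -start is a placeholder; the KVO notifications around the state changes are the part that matters):
@interface MyConcurrentOperation : NSOperation
@end

@implementation MyConcurrentOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start {
    if (self.isCancelled) {
        // A cancelled operation must still move to the finished state.
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Kick off the real asynchronous work; call -completeOperation when it is done.
    [self performSelectorInBackground:@selector(startWork:) withObject:nil];
}

- (void)startWork:(id)ignored {
    @autoreleasepool {
        // ... the actual background work goes here (placeholder) ...
        [self completeOperation];
    }
}

- (void)completeOperation {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}
@end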
Now that we have GCD, it's usually a good idea to consider blocks and dispatch queues as a lighter alternative to NSOperation; also see the -addOperationWithBlock: method on NSOperationQueue.
If the NSOperation kicks off an asynchronous task, for example [NSURLConnection sendAsynchronousRequest:...], the thread the operation is running on won't wait for the response. As soon as the last statement of -main (or of the block) has executed, the operation is removed from the queue and the next operation starts.
You can use something like this:
NSOperationQueue *queue = [NSOperationQueue new];
[queue setMaxConcurrentOperationCount:1];

NSBlockOperation *blockOperation1 = [NSBlockOperation blockOperationWithBlock:^{
    // first operation
}];
NSBlockOperation *blockOperation2 = [NSBlockOperation blockOperationWithBlock:^{
    // second operation
}];

[blockOperation2 addDependency:blockOperation1];

[queue addOperation:blockOperation1];
[queue addOperation:blockOperation2];
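If one of these operations itself kicks off asynchronous work (as in the sendAsynchronousRequest: case mentioned above) and the queue should not move on until that work completes, one option is to block the operation's block on a semaphore until the completion handler fires. A sketch (the URL is a placeholder):
NSBlockOperation *downloadOperation = [NSBlockOperation blockOperationWithBlock:^{
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    NSURLRequest *request = [NSURLRequest requestWithURL:
                                [NSURL URLWithString:@"https://example.com/data"]];
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response,
                                               NSData *data,
                                               NSError *error) {
        // ... handle the response here ...
        dispatch_semaphore_signal(done);   // mark the async work as finished
    }];

    // Keep the operation "running" until the completion handler fires,
    // so the queue does not start the next operation prematurely.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}];
[queue addOperation:downloadOperation];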
I have an NSManagedObjectContext declared like so:
- (NSManagedObjectContext *)backgroundMOC {
    if (backgroundMOC != nil) {
        return backgroundMOC;
    }
    backgroundMOC = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    return backgroundMOC;
}
Notice that it is declared with a private queue concurrency type, so its tasks should be run on a background thread. I have the following code:
- (void)testThreading
{
    /* ok */
    [self.backgroundMOC performBlock:^{
        assert(![NSThread isMainThread]);
    }];

    /* CRASH */
    [self.backgroundMOC performBlockAndWait:^{
        assert(![NSThread isMainThread]);
    }];
}
Why does calling performBlockAndWait execute the task on the main thread rather than background thread?
Tossing in another answer, to try to explain why performBlockAndWait will always run in the calling thread.
performBlock is completely asynchronous. It will always enqueue the block onto the queue of the receiving MOC, and then return immediately. Thus,
[moc performBlock:^{
    // Foo
}];

[moc performBlock:^{
    // Bar
}];
will place two blocks on the queue for moc. They will always execute asynchronously. Some unknown thread will pull blocks off of the queue and execute them. In addition, those blocks are wrapped within their own autorelease pool, and also they will represent a complete Core Data user event (processPendingChanges).
performBlockAndWait does NOT use the internal queue. It is a synchronous operation that executes in the context of the calling thread. Of course, it will wait until the current operations on the queue have been executed, and then that block will execute in the calling thread. This is documented (and reasserted in several WWDC presentations).
Furthermore, performBlockAndWait is re-entrant, so nested calls all happen right in that calling thread.
The Core Data engineers have been very clear that the actual thread in which a queue-based MOC operation runs is not important. It's the synchronization by using the performBlock* API that's key.
So, consider 'performBlock' as "This block is being placed on a queue, to be executed at some undetermined time, in some undetermined thread. The function will return to the caller as soon as it has been enqueued"
performBlockAndWait is "This block will be executed at some undetermined time, in this exact same thread. The function will return after this code has completely executed (which will occur after the current queue associated with this MOC has drained)."
EDIT
Are you sure that "performBlockAndWait does NOT use the internal queue"? I think it does. The only difference is that performBlockAndWait will wait until the block's completion. And what do you mean by calling thread? In my understanding, [moc performBlockAndWait] and [moc performBlock] both run on its private queue (background or main). The important concept here is that the moc owns the queue, not the other way around. Please correct me if I am wrong. – Philip007
It is unfortunate that I phrased the answer as I did, because, taken by itself, it is incorrect. However, in the context of the original question it is correct. Specifically, when calling performBlockAndWait on a private queue, the block will execute on the thread that called the function - it will not be put on the queue and executed on the "private thread."
Now, before I even get into the details, I want to stress that depending on internal workings of libraries is very dangerous. All you should really care about is that you can never expect a specific thread to execute a block, except anything tied to the main thread. Thus, expecting a performBlockAndWait to not execute on the main thread is not advised because it will execute on the thread that called it.
performBlockAndWait uses GCD, but it also has its own layer (e.g., to prevent deadlocks). If you look at the GCD code (which is open source), you can see how synchronous calls work - and in general they synchronize with the queue and invoke the block on the thread that called the function - unless the queue is the main queue or a global queue. Also, in the WWDC talks, the Core Data engineers stress the point that performBlockAndWait will run in the calling thread.
So, when I say it does not use the internal queue, that does not mean it does not use the data structures at all. It must synchronize the call with the blocks already on the queue, and those submitted in other threads and other asynchronous calls. However, when calling performBlockAndWait it does not put the block on the queue... instead it synchronizes access and runs the submitted block on the thread that called the function.
Now, SO is not a good forum for this, because it's a bit more complex than that, especially w.r.t the main queue, and GCD global queues - but the latter is not important for Core Data.
The main point is that when you call any performBlock* or GCD function, you should not expect it to run on any particular thread (except something tied to the main thread) because queues are not threads, and only the main queue will run blocks on a specific thread.
When calling the core data performBlockAndWait the block will execute in the calling thread (but will be appropriately synchronized with everything submitted to the queue).
I hope that makes sense, though it probably just caused more confusion.
EDIT
Furthermore, you can see the unspoken implications of this, in that the way in which performBlockAndWait provides re-entrant support breaks the FIFO ordering of blocks. As an example...
[context performBlockAndWait:^{
    NSLog(@"One");

    [context performBlock:^{
        NSLog(@"Two");
    }];

    [context performBlockAndWait:^{
        NSLog(@"Three");
    }];
}];
Note that strict adherence to the FIFO guarantee of the queue would mean that the nested performBlockAndWait ("Three") would run after the asynchronous block ("Two") since it was submitted after the async block was submitted. However, that is not what happens, as it would be impossible... for the same reason a deadlock ensues with nested dispatch_sync calls. Just something to be aware of if using the synchronous version.
In general, avoid sync versions whenever possible because dispatch_sync can cause a deadlock, and any re-entrant version, like performBlockAndWait will have to make some "bad" decision to support it... like having sync versions "jump" the queue.
Why not? Grand Central Dispatch's block concurrency paradigm (which I assume the MOC uses internally) is designed so that only the runtime and operating system need to worry about threads, not the developer (because the OS can do it better than you can, since it has more detailed information). Too many people assume that queues are the same as threads. They are not.
Queued blocks are not required to run on any given thread (the exception being blocks in the main queue must execute on the main thread). So, in fact, sometimes sync (i.e. performBlockAndWait) queued blocks will run on the main thread if the runtime feels it would be more efficient than creating a thread for it. Since you are waiting for the result anyway, it wouldn't change the way your program functioned if the main thread were to hang for the duration of the operation.
This last part I am not sure if I remember correctly, but in the WWDC 2011 videos about GCD, I believe that it was mentioned that the runtime will make an effort to run on the main thread, if possible, for sync operations because it is more efficient. In the end though, I suppose the answer to "why" can only be answered by the people who designed the system.
I don't think that the MOC is obligated to use a background thread; it's just obligated to ensure that your code will not run into concurrency issues with the MOC if you use performBlock: or performBlockAndWait:. Since performBlockAndWait: is supposed to block the current thread, it seems reasonable to run that block on that thread.
The performBlockAndWait: call only makes sure that you execute the code in such a way that you don't introduce concurrency (i.e. on 2 threads performBlockAndWait: will not run at the same time, they will block each other).
The long and the short of it is that you can't depend on which thread a MOC operation runs on, well basically ever. I've learned the hard way that if you use GCD or just straight up threads, you always have to create local MOCs for each operation and then merge them to the master MOC.
There is a great library (MagicalRecord) that makes that process very simple.
I'm reading the docs on dispatch queues for GCD, and they say that the queues are FIFO, so I am wondering what effect this has on async/sync dispatches.
From my understanding, async executes things in the order it receives them, while sync executes things serially.
But when you write your GCD code, you decide the order in which things happen, so as long as you know what's going on in your code, you should know the order in which things execute.
My question is: where is the benefit of async here? Am I missing something in my understanding of these two things?
The first answer isn't quite complete, unfortunately. Yes, sync will block and async will not; however, there are additional semantics to take into account. Calling dispatch_sync() will also cause your code to wait until each and every pending item on that queue has finished executing, also making it a synchronization point for said work. dispatch_async() will simply submit the work to the queue and return immediately, after which it will be executed "at some point", and you need to track completion of that work in some other way (usually by nesting one dispatch_async inside another dispatch_async; see the man page for an example).
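For example, the nested-dispatch_async pattern for tracking completion might look like this (a sketch; where you report completion is up to you):
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_async(queue, ^{
    // ... long-running work runs here, off the main thread ...

    // Completion is tracked by nesting another dispatch_async,
    // here hopping back to the main queue to report the result.
    dispatch_async(dispatch_get_main_queue(), ^{
        // update UI / notify the caller
    });
});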
sync means the function WILL BLOCK the current thread until it has completed; async means it will be handled in the background and the function WILL NOT BLOCK the current thread.
If you want serial execution of blocks, check out creating a serial dispatch queue.
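For example (the queue label is just an identifier you choose):
// A serial queue runs one block at a time, in FIFO order.
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

dispatch_async(serialQueue, ^{ /* first block */ });
dispatch_async(serialQueue, ^{ /* second block, runs only after the first finishes */ });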
From the man page:
FUNDAMENTALS
Conceptually, dispatch_sync() is a convenient wrapper around dispatch_async() with the addition of a semaphore to wait for completion of the block, and a wrapper around the block to signal its completion.
See dispatch_semaphore_create(3) for more information about dispatch semaphores. The actual implementation of the dispatch_sync() function may be optimized and differ from the above description.
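Conceptually, that description amounts to something like this (a sketch of the idea, not the actual libdispatch implementation):
static void conceptual_dispatch_sync(dispatch_queue_t queue, dispatch_block_t block) {
    dispatch_semaphore_t done = dispatch_semaphore_create(0);
    dispatch_async(queue, ^{
        block();                          // run the caller's block on the queue
        dispatch_semaphore_signal(done);  // signal that it has completed
    });
    // Block the calling thread until the block has finished executing.
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
}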
Tasks can be performed synchronously or asynchronously.
A synchronous function returns control to the current queue only after the task is finished. It blocks the queue and waits until the task is finished.
An asynchronous function returns control to the current queue right after the task has been dispatched to a different queue. It doesn't wait until the task is finished, and it doesn't block the queue.
Only with asynchronous dispatch can you add a delay, e.g. asyncAfter(deadline: ...).
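In Objective-C, which the rest of this thread uses, the corresponding call is dispatch_after; a minimal sketch:
// Delayed asynchronous dispatch: run a block on the main queue roughly 2 seconds from now.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    // runs about two seconds later; synchronous dispatch has no delayed variant
});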
I have some updating to do when my application comes back from the background, but some updates depend on a certain function that, although it executes first, finishes after the other update methods (it calls a bunch of chained functions).
How can I ensure that a function tree is finished so that I may then execute the rest of the code?
Have you looked at NSOperationQueue? It enables you to specify dependencies among NSOperations so that you can rely on certain execution orders to be followed.
This might work, with the waitUntilDone flag set to YES. Give it a shot.
- (void)performSelector:(SEL)aSelector
               onThread:(NSThread *)thr
             withObject:(id)arg
          waitUntilDone:(BOOL)wait;
The Apple docs say that setting waitUntilDone to YES will block the current thread until your selector has finished executing.
wait - A Boolean that specifies whether the current thread blocks until after the specified selector is performed on the receiver on the specified thread. Specify YES to block this thread; otherwise, specify NO to have this method return immediately. If the current thread and target thread are the same, and you specify YES for this parameter, the selector is performed immediately on the current thread. If you specify NO, this method queues the message on the thread's run loop and returns, just like it does for other threads. The current thread must then dequeue and process the message when it has an opportunity to do so.
Let me know if it worked.
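A hypothetical usage (workerThread and refreshCache are placeholders; note that the target thread needs a running run loop to process the message when it is not the current thread):
// Run -refreshCache on an existing worker thread and block the current
// thread until it has finished executing there.
[self performSelector:@selector(refreshCache)
             onThread:self.workerThread
           withObject:nil
        waitUntilDone:YES];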
Here is the question:
I have two threads: the main one and another.
On the main thread I perform anything GUI-related.
On the other thread I perform all my calculations.
For some operations I have to stop the second thread for a few seconds, waiting for the first thread to do something.
My question:
Which is the best option, and why?
Being in the second thread:
use sleepUntilDate
use the sleep function
any other option?
The pseudocode is this:
On the second thread:
...
do some calculations
send the results to the first thread and get the number of seconds to wait (let's say K)
wait K seconds
+[NSThread sleepForTimeInterval:] is more likely to do what you want if the time suddenly changes. Being in the second thread doesn't affect things.
However, I don't see why the second thread should have to sleep at all. If you want to wait for new data, use something like NSCondition or NSConditionLock to signal when data has arrived.
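A sketch of that signalling pattern with NSCondition (condition and dataReady are hypothetical properties shared between the two threads):
// Second (worker) thread: wait until the main thread has produced data.
[self.condition lock];
while (!self.dataReady) {
    [self.condition wait];            // releases the lock while waiting
}
// ... consume the new data here ...
self.dataReady = NO;
[self.condition unlock];

// Main thread: publish the data and wake the worker.
[self.condition lock];
self.dataReady = YES;
[self.condition signal];
[self.condition unlock];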
Alternatively, don't use threads directly at all. You can use NSOperation or performSelectorInBackground: or dispatch_*, to name a few.
EDIT: You need to be very careful when writing traditional thread-synchronization code, and you need to be just as careful every time you edit it to add a new feature. I have some threaded code which I need to think about for a couple minutes to figure out what's going on. Deadlocks are no fun.