I've noticed that it's possible for an NSManagedObjectContext with an NSMainQueueConcurrencyType to performBlockAndWait: and execute the block on a queue other than the receiver's (main) queue.
For example, the following code results in my parentContext executing the block on the childContext's queue if my parentContext is of type NSMainQueueConcurrencyType and my childContext is of type NSPrivateQueueConcurrencyType:
[childContext performBlockAndWait:^{
    // Thread 1, Queue: NSManagedObjectContext Queue
    [parentContext performBlockAndWait:^{
        // Thread 1, Queue: NSManagedObjectContext Queue
        // This is the same queue as the child context's queue
    }];
}];
In contrast, the following code works as expected – my parentContext executes the block on the main queue:
[childContext performBlock:^{
    [parentContext performBlockAndWait:^{
        // Thread 1, Queue: com.apple.main-thread
    }];
}];
Is this the expected behavior? It is certainly confusing me since the docs state "performBlockAndWait: synchronously performs a given block on the receiver’s queue."
You should not worry about which thread the blocks are executed on. What performBlock: and performBlockAndWait: guarantee is thread safety. Calling performBlockAndWait: from the main thread therefore does not imply a context switch to a background thread; that would be expensive and is not needed. If, while a block is running (on the main thread), another block is submitted with performBlockAndWait:, that call blocks until the currently executing block has finished. At the end of the day, the result is the same as if a context switch had been performed, only faster. Calling performBlock:, on the other hand, enqueues the block asynchronously, often executing it on a background thread.
In your first example, since you call performBlockAndWait:, your private-queue context executes the block on the main thread (the calling thread), as does the nested main-context block. In your second example, you schedule the outer block asynchronously with performBlock:, so it is executed on a background thread.
You should not judge a thread's queue by its name. To see if you are on the main queue, you can use dispatch_get_current_queue() and test whether it is equal to dispatch_get_main_queue().
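As a rough sketch of that check, reusing the parentContext from the question (note that dispatch_get_current_queue() has since been deprecated, so [NSThread isMainThread] is shown alongside it as an alternative):
[parentContext performBlockAndWait:^{
    // Compare the current queue to the main queue from inside the block.
    BOOL onMainQueue = (dispatch_get_current_queue() == dispatch_get_main_queue());
    BOOL onMainThread = [NSThread isMainThread];
    NSLog(@"main queue: %d, main thread: %d", onMainQueue, onMainThread);
}];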
The question is very simple. Is it guaranteed that, without calling observeOn() - i.e. using CurrentThreadScheduler - the closure passed to subscribe() is executed on the same thread (not queue) as the call?
In the example starting thread == observer thread
// <starting thread>
let observable = ... // an observable
observable.subscribe(onNext: { _ in
    // <observer thread>
})
It's said here that
When we are doing some operations with Rx, by definition it is all done on the same thread. Unless you change the thread manually, the entry point of the chain will begin on the current thread and it will also dispose on the same thread.
Also, the default scheduler is the CurrentThreadScheduler, which schedules on the current thread
{
    dispatch_queue_t myQueue = dispatch_queue_create("com.mycompany.myqueue", 0);
    dispatch_sync(myQueue, ^{
        // Do EXTREME PROCESSING!!!
        for (int i = 0; i < 100; i++) {
            [NSThread sleepForTimeInterval:.05];
            NSLog(@"%i", i);
        }
        dispatch_sync(dispatch_get_main_queue(), ^{
            [self updateLabelWhenBackgroundDone];
        });
    });
}
I am getting a deadlock here. According to Apple documentation
"dispatch_sync": "Submits a block to a dispatch queue for synchronous
execution. Unlike dispatch_async, this function does not return until
the block has finished. Calling this function and targeting the
current queue results in deadlock.".
However, I do the outer dispatch_sync on myQueue and then the inner dispatch_sync on a different queue, namely the main queue.
I cannot figure out the reason for the deadlock. Any comments/help are appreciated.
If you dispatch_sync to myQueue like that and the call happens on the main thread, then dispatch_sync will, if possible, execute the block right there and not on a new worker thread like dispatch_async would. You're not guaranteed to get a separate worker thread for your queue.
The block then runs on the main thread until it hits your second dispatch_sync call, which happens to target the main queue. That queue can't be serviced, since there's already a block running on it, and that's where you end up in a deadlock.
If that's your problem, i.e. the first dispatch_sync is indeed coming from the main thread, then you should switch to dispatch_async. You wouldn't want to block the main thread with the long-running "EXTREME PROCESSING" operation.
You are calling dispatch_sync twice. The first time suspends the main thread waiting for your block to complete. The block then suspends the background thread with the second call which tries to push back to the main thread (which will never process the block from its queue because it's suspended). Both threads are now waiting for each other.
At least one of the calls needs to be dispatch_async.
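Putting both answers together, a sketch of the fixed version (assuming, as above, that the method is invoked on the main thread) might look like this:
dispatch_queue_t myQueue = dispatch_queue_create("com.mycompany.myqueue", 0);
dispatch_async(myQueue, ^{
    // The long-running work now runs off the main thread...
    for (int i = 0; i < 100; i++) {
        [NSThread sleepForTimeInterval:.05];
        NSLog(@"%i", i);
    }
    // ...and only the UI update hops back to the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateLabelWhenBackgroundDone];
    });
});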
I had similar problems and none of these solutions worked. I asked someone smarter than me.
My problem was that I was dispatching an async worker block and then displaying a modal progress window. Calls back into the main thread via
dispatch_sync(dispatch_get_main_queue(), ^{})
failed, as did async calls.
The explanation was that the main run loop was no longer running in the common modes because of the modal window. I replaced my calls to the main thread with this:
CFRunLoopPerformBlock([[NSRunLoop mainRunLoop] getCFRunLoop], (__bridge CFStringRef)NSModalPanelRunLoopMode, ^{
    // Update the UI here.
});
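One hedged addition to that workaround: according to the CFRunLoop documentation, CFRunLoopPerformBlock only enqueues the block and does not wake the run loop by itself, so you may also want to call CFRunLoopWakeUp, roughly like this:
CFRunLoopRef mainRunLoop = [[NSRunLoop mainRunLoop] getCFRunLoop];
CFRunLoopPerformBlock(mainRunLoop, (__bridge CFStringRef)NSModalPanelRunLoopMode, ^{
    // Update the UI here.
});
CFRunLoopWakeUp(mainRunLoop); // make sure the enqueued block is picked up promptly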
I am detaching a thread to do some operation in the background; see the code below:
currentThread = [[NSThread alloc] initWithTarget:contactService selector:@selector(requestForContactBackup:) object:msisdn];
[currentThread start];
This currentThread pointer is declared in the AppDelegate.
I have a button on my view; when it is tapped, the execution of the background thread should stop. See the code below:
- (void)cancelTheRunningTasks {
    if (self.currentThread != nil) {
        [currentThread cancel];
        NSLog(@"IsCancelled: %d", [currentThread isCancelled]); // YES is logged here
        [self removeNetworkIndicatorInView:backUpViewController.view];
    }
}
The problem with the code above is that the background thread still keeps running.
My question: given the thread reference, how do I cancel/stop/kill the background thread from the main thread?
Please suggest a possible solution.
Thanks.
Your background thread needs to check whether it has been cancelled, for example through the isCancelled method...
if ([[NSThread currentThread] isCancelled]) {
    // do cleanup here
    [NSThread exit];
}
You can't kill the thread externally because there is no way to know what state the thread might be in and, thus, killing it would produce indeterminate behavior (imagine if the thread was holding a mutex down in the allocator when it was killed... ouch).
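To make that concrete, here is a minimal sketch of what the worker method from the question might look like with such checks; the method name comes from the question, while the chunked 1000-iteration loop is just a stand-in for the real backup work:
- (void)requestForContactBackup:(NSString *)msisdn {
    @autoreleasepool {
        for (NSUInteger i = 0; i < 1000; i++) {
            if ([[NSThread currentThread] isCancelled]) {
                // Clean up anything this thread owns, then stop. Returning
                // (rather than calling [NSThread exit]) lets the autorelease
                // pool drain normally.
                return;
            }
            // ... perform one unit of backup work here ...
        }
    }
}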
cancel
Changes the cancelled state of the receiver to indicate that it should exit.
exit
Terminates the current thread.
Check NSThread Class Reference
For more information about cancellation and operation objects, see NSOperation Class Reference.
Note: In OS X v10.6, the behavior of the cancel method varies depending on whether the operation is currently in an operation queue. For unqueued operations, this method marks the operation as finished immediately, generating the appropriate KVO notifications. For queued operations, it simply marks the operation as ready to execute and lets the queue call its start method, which subsequently exits and results in the clearing of the operation from the queue.
I resolved the problem. What I wanted to do was stop or kill a running background thread from my main thread (or some other thread). From the Apple documentation and several posts I concluded that you cannot kill one thread from another: all threads share the same memory space and resources, so forcibly terminating a thread from outside is unsafe. (One process can kill another process because processes do not share memory.)
We cannot exit/kill a thread from outside like that, but we can still set the cancelled state of the running thread from another thread (in the code where the user requests that the tasks be cancelled).
So we set the cancelled state there. Inside the background task's code, which is under execution, we periodically check whether that state has been set (after each chunk of work). If it has, we call [NSThread exit] from within the background thread's own code and release any memory the thread allocated, to prevent leaks (the autorelease pool will not take care of freeing those resources here).
This is how I resolved the problem.
In short: call cancel on the thread object whose task you want to stop.
if (self.currentThread != nil && [currentThread isExecuting])
{
    [currentThread cancel];
}
Then, in your background code, monitor the cancelled state; if it is set, exit the thread.
if ([appDelegate.currentThread isCancelled])
{
    [NSThread exit];
}
If someone has a better solution than this, please share it. Otherwise, this works fine.
In reviewing my code, I've noticed that in many places I have been assuming that calling [NSBlockOperationInstance start]; will start the operation on the main thread. I don't know why I thought this, and I shouldn't have been so sure anyway. I checked the documentation but couldn't find any explicit mention of the thread the block would run on. However, asserting assert([NSThread isMainThread]); in the main body of the block does pass every time when using start, so I'm not sure whether this is a coincidence. Does anyone have a more solid understanding of how this works?
I forgot to mention that [op start] is being called on the main thread.
OK, it all depends on where you call start(). While NSBlockOperation will farm out blocks to other threads, start() is synchronous, and will not return until all the blocks that have been given to NSBlockOperation have completed.
While NSBlockOperation will concurrently execute the blocks it is given, NSBlockOperation itself is NOT concurrent (i.e., isConcurrent is false). Thus, according to the documentation, start() will execute in its entirety in the thread of the caller to start().
Since the thread that calls start() will not return until all the blocks have executed, it makes sense to let the calling thread be involved in the thread pool that is executing the concurrent blocks. That is why you will see some blocks executing in the thread that called start().
If you are seeing a block execute in the main thread, then you must have called it from the main thread.
On a related note, if your NSBlockOperation contains a single block, then that block will always execute on the calling thread.
Remember, if you want an NSOperation to be fully concurrent, you must implement the appropriate functionality in a subclass.
Barring that, you can give any NSOperation to an NSOperationQueue and it will execute concurrently, because the operation is given to a queue and it is the queue's thread, not yours, that calls start().
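As a small illustrative sketch of that last point (the queue and operation variables here are made up):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
    // Typically runs on a queue-managed background thread, not the caller's.
    NSLog(@"on main thread? %d", [NSThread isMainThread]);
}];
[queue addOperation:op]; // the queue, not your thread, calls start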
Personally, I do not see any advantage in using NSBlockOperation over dispatch_async() unless I need to use its features. If you are only executing one block, just call
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ });
If you want to utilize the features of NSBlockOperation, but you do not want to wait for them to complete in the current calling thread, it still makes sense to do this...
// Add lots of concurrent blocks
[op addExecutionBlock:^{ /* whatever */ }];

// Execute the blocks asynchronously
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [op start];
    // Now do what you want after all the concurrent blocks have completed...
    // Maybe even tell the UI
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the UI now that all my concurrent blocks have finished.
    });
});
EDIT
To address your comment to tc's answer...
If you call
op = [NSBlockOperation blockOperationWithBlock:^{ assert([NSThread isMainThread]); }];
[op start];
from the main thread, then there are some guarantees, and some high probabilities.
First, you are guaranteed that [op start] will run to completion in the calling thread. That's because NSBlockOperation does not override the default behavior of NSOperation that specifies it is NOT a concurrent operation.
Next, you have a very high probability that if the NSBlockOperation only has one block, that it will run in the calling thread. You have almost the same probability that the first block will run in the calling thread.
However, the above "probabilities" are not guarantees (only because the documentation does not say it). I guess, some engineer may find some reason to spin that single block to one of the concurrent queues, and just have the calling thread join on the operation that is executing in another thread... but I highly doubt that.
Anyway, maybe your confusion comes from the fact that the documentation for NSBlockOperation says it executes block concurrently, which it does. However, the operation itself is not concurrent, so the initial operation is synchronous. It will wait for all blocks to execute, and it may (or may not) execute some of them on the calling thread.
While there is no guarantee, I find it highly unlikely that a NSBlockOperation with only one block will do anything other than execute on the calling thread.
The docs specifically say
Blocks added to a block operation are dispatched with default priority to an appropriate work queue. The blocks themselves should not make any assumptions about the configuration of their execution environment.
I suspect that the following will crash:
NSBlockOperation * op = [NSBlockOperation blockOperationWithBlock:^{ sleep(1); }];
[op addExecutionBlock:^{assert([NSThread isMainThread]); }];
[op start];
What's wrong with simply executing the block?
I get a memory leak when the view controller calls my model class method, at the line where I create my GCD queue. Any ideas?
+ (void)myClassMethod {
    dispatch_queue_t myQueue = dispatch_queue_create("com.mysite.page", 0); // the Leaks instrument points here as the culprit
    dispatch_async(myQueue, ^{});
}
You should change it to ...
dispatch_queue_t myQueue = dispatch_queue_create("com.mysite.page", 0);
dispatch_async(myQueue, ^{});
dispatch_release(myQueue);
... you should call dispatch_release when you no longer need access to the queue. And since myQueue is a local variable, you must call it there.
Read dispatch_queue_create documentation:
Discussion
Blocks submitted to the queue are executed one at a time in FIFO order. Note, however, that blocks submitted to independent queues may be executed concurrently with respect to each other.
When your application no longer needs the dispatch queue, it should release it with the dispatch_release function. Any pending blocks submitted to a queue hold a reference to that queue, so the queue is not deallocated until all pending blocks have completed.
The Leaks tool reports where memory is allocated that no longer has any references from your code.
After that method runs, since there is nothing that has a reference to the queue you created, and dispatch_release() was never called, it's considered a leak.
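For completeness, a sketch of the original class method with that fix applied (this assumes a deployment target before iOS 6 / OS X 10.8; with later targets, dispatch objects are managed by ARC and dispatch_release is unavailable):
+ (void)myClassMethod {
    dispatch_queue_t myQueue = dispatch_queue_create("com.mysite.page", 0);
    dispatch_async(myQueue, ^{
        // ... background work ...
    });
    // The pending block keeps the queue alive, so releasing our reference here
    // is safe; the queue is deallocated after the block completes.
    dispatch_release(myQueue);
}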