I am currently working under the assumption that -performSelector:withObject:afterDelay: does not utilize threading, but schedules an event to fire at a later date on the current thread. Is this correct?
More specifically:
- (void) methodCalledByButtonClick {
    for (id obj in array) {
        [self doSomethingWithObj:obj];
    }
}

static BOOL isBad = NO;

- (void) doSomethingWithObj:(id)obj {
    if (isBad) {
        return;
    }
    if ([obj isBad]) {
        isBad = YES;
        [self performSelector:@selector(resetIsBad) withObject:nil afterDelay:0.1];
        return;
    }
    // Do something with obj
}

- (void) resetIsBad {
    isBad = NO;
}
Is it guaranteed that -resetIsBad will not be called until after -methodCalledByButtonClick returns, assuming we are running on the main thread, even if -methodCalledByButtonClick takes an arbitrarily long time to complete?
From the docs:
Invokes a method of the receiver on the current thread using the default mode after a delay.
The discussion goes further:
This method sets up a timer to perform the aSelector message on the current thread’s run loop. The timer is configured to run in the default mode (NSDefaultRunLoopMode). When the timer fires, the thread attempts to dequeue the message from the run loop and perform the selector. It succeeds if the run loop is running and in the default mode; otherwise, the timer waits until the run loop is in the default mode.
From this we can answer your second question: yes, it's guaranteed, even with a shorter delay, since the current thread is busy executing when -performSelector:withObject:afterDelay: is called. By the time the thread returns to the run loop and dequeues the selector, you'll already have returned from -methodCalledByButtonClick.
performSelector:withObject:afterDelay: schedules a timer on the same thread to call the selector after the passed delay. If you sign up for the default run mode (i.e. don't use performSelector:withObject:afterDelay:inModes:), I believe it is guaranteed to wait until the next pass through the run loop, so everything on the stack will complete first.
Even if you call it with a delay of 0, the selector won't fire until the next pass through the run loop, so it behaves as you want here. For more info, refer to the docs.
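To see the ordering concretely, here is a minimal sketch (the logging and the deferred method are hypothetical, not from the question's code): even with a delay of 0 and a long-running caller, "deferred" always logs after "end".

- (void)methodCalledByButtonClick {
    NSLog(@"begin");
    [self performSelector:@selector(deferred) withObject:nil afterDelay:0];
    [NSThread sleepForTimeInterval:2.0]; // simulate arbitrarily long work on the main thread
    NSLog(@"end"); // still logs before "deferred"
}

- (void)deferred {
    NSLog(@"deferred"); // dequeued on the next pass through the run loop
}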
The code below is used to execute a long-running calculation on a background thread:
enum CalculationInterface {
    private static var latestKey: AnyObject? // Used to cancel previous calculations when a new one is initiated.

    static func output(from input: Input, `return`: @escaping (Output?) -> ()) {
        self.latestKey = EmptyObject()
        let key = self.latestKey! // Made to enable capturing `self.latestKey`'s value.
        DispatchQueue.global().async {
            do {
                let output = try calculateOutput(from: input, shouldContinue: { key === self.latestKey }) // Function cancels by throwing an error.
                DispatchQueue.main.async { if key === self.latestKey { `return`(output) } }
            } catch {}
        }
    }
}
This function is called from the main thread like so:
/// Initiates calculation of the output and sets it to the result when finished.
private func recalculateOutput() {
    self.output = .calculating // Triggers calculation in-progress animation for user.
    CalculationInterface.output(from: input) { self.output = $0 } // Ends animation once set and displays calculated output to user.
}
I'm wondering if it's possible for the closure that's pushed to DispatchQueue.main to execute while the main thread is running my code. Or in other words execute after self.output = .calculating but before self.latestKey is re-set to the new object. If it could, then the stale calculation output could be displayed to the user.
I'm wondering if it's possible for the closure that's pushed to DispatchQueue.main to execute while the main thread is running my code
No, it isn't possible. The main queue is a serial queue. If code is running on the main queue, no "other" main queue code can run. Your DispatchQueue.main.async effectively means: "Wait until all code running on the main queue comes naturally to an end, and then run this on the main queue."
On the other hand, DispatchQueue.global() is not a serial queue. Thus it is theoretically possible for two calls to calculateOutput to overlap. That isn't something you want to have happen; you want to be sure that any executing instance of calculateOutput finishes (and we proceed to grapple with the latestKey) before another one can start. In other words, you want to ensure that the sequence
set latestKey on the main thread
perform calculateOutput in the background
look at latestKey on the main thread
happens coherently. The way to ensure that is to set aside a DispatchQueue that you create with DispatchQueue(label:), that you will always use for running calculateOutput. That queue will be a serial queue by default.
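A minimal sketch of that idea, shown here with the C GCD API that underlies Swift's DispatchQueue (the queue label is illustrative):

// Create the serial queue once and reuse it for every calculation
dispatch_queue_t calcQueue = dispatch_queue_create("com.example.calculation", DISPATCH_QUEUE_SERIAL);

// Blocks submitted to a serial queue run one at a time, in order,
// so two calculateOutput runs can never overlap
dispatch_async(calcQueue, ^{
    // perform calculateOutput here
    dispatch_async(dispatch_get_main_queue(), ^{
        // compare keys and deliver the result on the main thread
    });
});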
In ReactiveX an Observable might invoke the following methods
onNext
onError
onCompleted
according to a very clear contract http://reactivex.io/documentation/contract.html
I am writing the code for the Observable, and I have realized that under certain circumstances I might invoke onNext twice within the same thread of execution.
Is that a mistake?
For example, if I have code like this:
// call onNext twice in the same thread of execution.
o.onNext(event1);
o.onNext(event2);
Should I rewrite it like this:
// call onNext and then schedule the next call via setTimeout
o.onNext(event1);
setTimeout(function() { o.onNext(event2); },0);
= = = = = = = =
Clarification: why am I asking?
In a browser, let's imagine the observer wants to update an HTML element with the content of each onNext call. If onNext is called twice in the same synchronous run, the browser never gets a chance to repaint in between, so code like the following would not work as intended:
function onNext() {
    count++;
    htmlElement.innerHTML = 'count:' + count; // <-- each count is only rendered if onNext is invoked in separate turns of the event loop.
}
This may sound like a newbie question; anyway, I'm new to GCD.
I'm creating and running the two following threads. The first one puts data into the ivar mMutableArray and the second one reads from it. How do I lock and unlock the threads to avoid crashes and keep the code thread-safe?
// Thread for writing data into mutable array
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
if (timer) {
    dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * interval), interval * NSEC_PER_SEC, leeway);
    dispatch_source_set_event_handler(timer, ^{
        ...
        // Put data into ivar
        [mMutableArray addObject:someObject];
        ...
    });
    dispatch_resume(timer);
}

// Thread for reading from mutable array
dispatch_source_t timer1 = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
if (timer1) {
    dispatch_source_set_timer(timer1, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * interval), interval * NSEC_PER_SEC, leeway);
    dispatch_source_set_event_handler(timer1, ^{
        ...
        if (mMutableArray) {
            // Read data from ivar
            SomeObject* someobject = [mMutableArray lastObject];
        }
        ...
    });
    dispatch_resume(timer1);
}
You are using it wrong: by locking access to the variables you simply lose any benefit from GCD. Create a single serial queue which is associated with the variables you want to modify (in this case the mutable array). Then use that queue for both writing and reading, which will happen in a guaranteed serial sequence and with minimal locking overhead. You can read more about it in "Asynchronous setters" at http://www.fieryrobot.com/blog/2010/09/01/synchronization-using-grand-central-dispatch/. As long as your access to the shared variable happens through its associated dispatch queue, you will never have concurrency problems.
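A minimal sketch of that pattern (the queue label and variable names are illustrative):

// One serial queue owns all access to the shared array
dispatch_queue_t arrayQueue = dispatch_queue_create("com.example.array-access", DISPATCH_QUEUE_SERIAL);

// Writer: mutate asynchronously; blocks on a serial queue run one at a time
dispatch_async(arrayQueue, ^{
    [mMutableArray addObject:someObject];
});

// Reader: fetch synchronously through the same queue
__block SomeObject *lastObject = nil;
dispatch_sync(arrayQueue, ^{
    lastObject = [mMutableArray lastObject];
});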
I've used mutexes in my project and I'm quite happy with how it works at the moment.
Create the mutex and initialise it
pthread_mutex_t _mutex;
pthread_mutex_init(&_mutex, NULL);
Then put the lock around your code; once the second thread tries to take the lock, it will wait until the lock is freed again by the first thread. Note that you might still want to check whether the first thread is actually the one writing to the data.
pthread_mutex_lock(&_mutex);
{
    // critical section: code that touches the shared data goes here
}
pthread_mutex_unlock(&_mutex);
You can still use @synchronized on your critical sections with GCD.
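For example, a minimal sketch guarding the same array (assuming mMutableArray is the shared ivar from the question):

// Writer
// @synchronized locks on the object itself, so mMutableArray must be
// non-nil and must be the same object for both reader and writer
@synchronized (mMutableArray) {
    [mMutableArray addObject:someObject];
}

// Reader
@synchronized (mMutableArray) {
    SomeObject *someObject = [mMutableArray lastObject];
}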
I have started a secondary thread on which some operations are carried out. While executing on the secondary thread, I want to call some operations on the main thread. Does anyone have sample code for this? I could not find it on Google.
Here is my sample call:
Glib::thread_init();
Glib::Thread *const myThread = Glib::Thread::create(sigc::mem_fun(*this, &MyClass::MyFunction), true);
myThread->join();

void MyClass::MyFunction()
{
    // here I want to call the function on the main thread
    AnotherFunction();
}

void MyClass::AnotherFunction()
{
}
I have two methods that I need to run; let's call them metA and metB.
When I started coding this app, I called both methods without using threads, but the app started freezing, so I decided to go with threads.
metA and metB are called by touch events, so they can occur at any time, in any order. They don't depend on each other.
My problem is the time it takes for either thread to start running. There's a lag between the time the thread is created with
[NSThread detachNewThreadSelector:@selector(.... bla bla
and the time the thread starts running.
I suppose this time is related to the amount of time required by iOS to create the thread itself. How can I speed this up? If I pre-create both threads, how do I make them do their work only when needed and never terminate? I mean, a kind of sleeping thread that is always alive, works when asked, and sleeps after that?
Thanks.
If you want to avoid the expensive startup time of creating new threads, create both threads at startup as you suggested. To have them only run when needed, you can have them wait on a condition variable. Since you're using the NSThread class for threading, I'd recommend using the NSCondition class for condition variables (an alternative would be to use the POSIX threading (pthread) condition variables, pthread_cond_t).
One thing you'll have to be careful of is if you get another touch event while the thread is still running. In that case, I'd recommend using a queue to keep track of work items, and then the touch event handler can just add the work item to the queue, and the worker thread can process them as long as the queue is not empty.
Here's one way to do this:
typedef struct WorkItem
{
    // information about the work item
    ...
    struct WorkItem *next; // linked list of work items
} WorkItem;

WorkItem *workQueue = NULL;       // head of linked list of work items
WorkItem *workQueueTail = NULL;   // tail of linked list of work items
NSCondition *workCondition = NULL; // condition variable for the queue
...
-(id) init
{
    if((self = [super init]))
    {
        // Make sure this gets initialized before the worker thread starts
        // running
        workCondition = [[NSCondition alloc] init];
        // Start the worker thread
        [NSThread detachNewThreadSelector:@selector(threadProc:)
                                 toTarget:self withObject:nil];
    }
    return self;
}
// Suppose this function gets called whenever we receive an appropriate touch
// event
-(void) onTouch
{
    // Construct a new work item. Note that this must be allocated on the
    // heap (*not* the stack) so that it doesn't get destroyed before the
    // worker thread has a chance to work on it.
    WorkItem *workItem = (WorkItem *)malloc(sizeof(WorkItem));
    // fill out the relevant info about the work that needs to get done here
    ...
    workItem->next = NULL;
    // Lock the mutex & add the work item to the tail of the queue (we
    // maintain that the following invariant is always true:
    // workQueueTail == NULL || workQueueTail->next == NULL)
    [workCondition lock];
    if(workQueueTail != NULL)
        workQueueTail->next = workItem;
    else
        workQueue = workItem;
    workQueueTail = workItem;
    [workCondition unlock];
    // Finally, signal the condition variable to wake up the worker thread
    [workCondition signal];
}
-(void) threadProc:(id)arg
{
    // Loop & wait for work to arrive. Note that the condition variable must
    // be locked before it can be waited on. You may also want to add
    // another variable that gets checked every iteration so this thread can
    // exit gracefully if need be.
    while(1)
    {
        [workCondition lock];
        while(workQueue == NULL)
        {
            [workCondition wait];
            // The work queue should have something in it, but there are rare
            // edge cases that can cause spurious signals. So double-check
            // that it's not empty.
        }
        // Dequeue the work item & unlock the mutex so we don't block the
        // main thread more than we have to
        WorkItem *workItem = workQueue;
        workQueue = workQueue->next;
        if(workQueue == NULL)
            workQueueTail = NULL;
        [workCondition unlock];
        // Process the work item here
        ...
        free(workItem); // don't leak memory
    }
}
If you can target iOS 4 and higher, consider using blocks with a Grand Central Dispatch async queue, which runs them on background threads that the queue manages. Or, for backwards compatibility, as mentioned, use NSOperation objects inside an NSOperationQueue to have bits of work performed for you in the background. You can specify exactly how many background threads an NSOperationQueue should support if both operations have to run at the same time.
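A minimal sketch of both approaches (the method names are illustrative, not from the question):

// GCD (iOS 4+): run work on a managed background thread, then hop back to the main queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self doExpensiveWork]; // background thread managed by GCD
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUI]; // UI work belongs on the main thread
    });
});

// NSOperationQueue: cap the number of background threads explicitly
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 2; // let both operations run at the same time
[queue addOperationWithBlock:^{ [self doExpensiveWork]; }];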