iphone - How do I make a thread run faster?

I have two methods that I need to run; let's call them metA and metB.
When I started coding this app, I called both methods without using threads, but the app started freezing, so I decided to go with threads.
metA and metB are called by touch events, so they can occur at any time and in any order. They don't depend on each other.
My problem is the time it takes for either thread to start running. There's a lag between the time the thread is created with
[NSThread detachNewThreadSelector:@selector(.... bla bla
and the time the thread starts running.
I suppose this time is related to the amount of time iOS needs to create the thread itself. How can I speed this up? If I pre-create both threads, how do I make them just do their work when needed and never terminate? I mean, a kind of sleeping thread that is always alive, works when asked, and sleeps after that?
thanks.

If you want to avoid the expensive startup time of creating new threads, create both threads at startup as you suggested. To have them only run when needed, you can have them wait on a condition variable. Since you're using the NSThread class for threading, I'd recommend using the NSCondition class for condition variables (an alternative would be to use the POSIX threading (pthread) condition variables, pthread_cond_t).
One thing you'll have to be careful of is if you get another touch event while the thread is still running. In that case, I'd recommend using a queue to keep track of work items, and then the touch event handler can just add the work item to the queue, and the worker thread can process them as long as the queue is not empty.
Here's one way to do this:
typedef struct WorkItem
{
    // information about the work item
    ...
    struct WorkItem *next; // linked list of work items
} WorkItem;

WorkItem *workQueue = NULL;        // head of linked list of work items
WorkItem *workQueueTail = NULL;    // tail of linked list of work items
NSCondition *workCondition = NULL; // condition variable for the queue
...
-(id) init
{
    if((self = [super init]))
    {
        // Make sure this gets initialized before the worker thread starts
        // running
        workCondition = [[NSCondition alloc] init];
        // Start the worker thread
        [NSThread detachNewThreadSelector:@selector(threadProc:)
                                 toTarget:self withObject:nil];
    }
    return self;
}
// Suppose this function gets called whenever we receive an appropriate touch
// event
-(void) onTouch
{
    // Construct a new work item. Note that this must be allocated on the
    // heap (*not* the stack) so that it doesn't get destroyed before the
    // worker thread has a chance to work on it.
    WorkItem *workItem = (WorkItem *)malloc(sizeof(WorkItem));
    // fill out the relevant info about the work that needs to get done here
    ...
    workItem->next = NULL;
    // Lock the mutex & add the work item to the tail of the queue. (We
    // maintain the invariant that
    // (workQueueTail == NULL || workQueueTail->next == NULL) is always true.)
    [workCondition lock];
    if(workQueueTail != NULL)
        workQueueTail->next = workItem;
    else
        workQueue = workItem;
    workQueueTail = workItem;
    [workCondition unlock];
    // Finally, signal the condition variable to wake up the worker thread
    [workCondition signal];
}
-(void) threadProc:(id)arg
{
    // Loop & wait for work to arrive. Note that the condition variable must
    // be locked before it can be waited on. You may also want to add
    // another variable that gets checked every iteration so this thread can
    // exit gracefully if need be.
    while(1)
    {
        [workCondition lock];
        while(workQueue == NULL)
        {
            [workCondition wait];
            // The work queue should have something in it, but there are rare
            // edge cases that can cause spurious signals. So double-check
            // that it's not empty.
        }
        // Dequeue the work item & unlock the mutex so we don't block the
        // main thread more than we have to
        WorkItem *workItem = workQueue;
        workQueue = workQueue->next;
        if(workQueue == NULL)
            workQueueTail = NULL;
        [workCondition unlock];
        // Process the work item here
        ...
        free(workItem); // don't leak memory
    }
}

If you can target iOS 4 and higher, consider using blocks with a Grand Central Dispatch async queue, which runs them on background threads that the queue manages. Or, for backwards compatibility, wrap each piece of work in an NSOperation and add it to an NSOperationQueue to have it performed for you in the background. If both operations have to run at the same time, you can specify exactly how many background threads an NSOperationQueue may use (its maxConcurrentOperationCount).
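For illustration, here's a minimal Objective-C sketch of the GCD route, assuming the object keeps a serial dispatch queue in an ivar (the queue label and -onTouchForMetA are illustrative names, not from the question):

// Assumed ivar: dispatch_queue_t workQueue;
- (id) init
{
    if ((self = [super init])) {
        // A NULL attribute yields a serial queue: items run one at a time
        workQueue = dispatch_queue_create("com.example.work", NULL);
    }
    return self;
}

- (void) onTouchForMetA
{
    dispatch_async(workQueue, ^{
        // metA's expensive work runs here, off the main thread. GCD keeps
        // the worker thread alive for you, so there is no per-touch
        // thread-creation lag.
    });
}

Because the queue outlives the touch events, this removes exactly the per-event thread startup cost the question asks about.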

Related

Will an item submitted to the main `DispatchQueue` ever interrupt currently executing code on the main thread?

The code below is used to execute a long-running calculation on a background thread:
enum CalculationInterface {
    private static var latestKey: AnyObject? // Used to cancel previous calculations when a new one is initiated.

    static func output(from input: Input, return: @escaping (Output?) -> ()) {
        self.latestKey = EmptyObject()
        let key = self.latestKey! // Made to enable capturing `self.latestKey`'s value.
        DispatchQueue.global().async {
            do {
                let output = try calculateOutput(from: input, shouldContinue: { key === self.latestKey }) // Function cancels by throwing an error.
                DispatchQueue.main.async { if (key === self.latestKey) { `return`(output) } }
            } catch {}
        }
    }
}
This function is called from the main thread like so:
/// Initiates calculation of the output and sets it to the result when finished.
private func recalculateOutput() {
    self.output = .calculating // Triggers calculation in-progress animation for user.
    CalculationInterface.output(from: input) { self.output = $0 } // Ends animation once set and displays calculated output to user.
}
I'm wondering if it's possible for the closure that's pushed to DispatchQueue.main to execute while the main thread is running my code. Or in other words execute after self.output = .calculating but before self.latestKey is re-set to the new object. If it could, then the stale calculation output could be displayed to the user.
I'm wondering if it's possible for the closure that's pushed to DispatchQueue.main to execute while the main thread is running my code
No, it isn't possible. The main queue is a serial queue. If code is running on the main queue, no "other" main queue code can run. Your DispatchQueue.main.async effectively means: "Wait until all code running on the main queue comes naturally to an end, and then run this on the main queue."
On the other hand, DispatchQueue.global() is not a serial queue. Thus it is theoretically possible for two calls to calculateOutput to overlap. That isn't something you want to have happen; you want to be sure that any executing instance of calculateOutput finishes (and we proceed to grapple with the latestKey) before another one can start. In other words, you want to ensure that the sequence
set latestKey on the main thread
perform calculateOutput in the background
look at latestKey on the main thread
happens coherently. The way to ensure that is to set aside a DispatchQueue that you create with DispatchQueue(label:), that you will always use for running calculateOutput. That queue will be a serial queue by default.
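A minimal sketch of that change (the queue name and label are my own, not from the question):

// Created once; DispatchQueue(label:) is serial by default, so one
// calculateOutput call must finish before the next can begin.
private static let calculationQueue = DispatchQueue(label: "com.example.calculation")

// Inside output(from:return:), submit to this queue instead of the
// concurrent global queue:
calculationQueue.async {
    // ... body unchanged: run calculateOutput, then hop back to DispatchQueue.main ...
}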

Mutex does not work as I expected

My Environment: C++ Builder XE4.
I am using a mutex. In the following code, I expect that while Timer1 holds the mutex, Timer2's processing will be skipped. However, Timer2's processing is not skipped at all.
What is the problem in the code?
Unit1.cpp
//---------------------------------------------------------------------------
#include <vcl.h>
#pragma hdrstop
#include "Unit1.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1;
//---------------------------------------------------------------------------
__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
}
//---------------------------------------------------------------------------
String MutexName = L"Project1";
HANDLE HWNDMutex;

void __fastcall TForm1::FormShow(TObject *Sender)
{
    HWNDMutex = CreateMutex(NULL, false, MutexName.c_str());
    if (HWNDMutex == NULL) {
        String msg = L"failed to create mutex";
        OutputDebugString(msg.c_str());
    }
    Timer1->Enabled = false;
    Timer1->Interval = 1000; // msec
    Timer1->Enabled = true;
    Timer2->Enabled = false;
    Timer2->Interval = 200; // msec
    Timer2->Enabled = true;
}

__fastcall TForm1::~TForm1()
{
    CloseHandle(HWNDMutex);
}

void __fastcall TForm1::Timer1Timer(TObject *Sender)
{
    if (WaitForSingleObject(HWNDMutex, INFINITE) == WAIT_TIMEOUT) {
        return;
    }
    if (CHK_update->Checked) {
        String msg = L"Timer1 " + Now().FormatString(L"yyyy/mm/dd hh:nn:ss.zzz");
        Memo1->Lines->Add(msg);
    }
    for (int loop = 0; loop < 10; loop++) {
        Application->ProcessMessages();
        Sleep(90); // msec
    }
    ReleaseMutex(HWNDMutex);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::Timer2Timer(TObject *Sender)
{
    if (WaitForSingleObject(HWNDMutex, INFINITE) == WAIT_TIMEOUT) {
        return;
    }
    if (CHK_update->Checked) {
        String msg = L">>>Timer2 " + Now().FormatString(L"yyyy/mm/dd hh:nn:ss.zzz");
        Memo1->Lines->Add(msg);
    }
    ReleaseMutex(HWNDMutex);
}
//---------------------------------------------------------------------------
Result
Timer1 2017/11/08 15:20:39.781
>>>Timer2 2017/11/08 15:20:39.786
>>>Timer2 2017/11/08 15:20:40.058
>>>Timer2 2017/11/08 15:20:40.241
>>>Timer2 2017/11/08 15:20:40.423
>>>Timer2 2017/11/08 15:20:40.603
Timer1 2017/11/08 15:20:40.796
>>>Timer2 2017/11/08 15:20:40.799
>>>Timer2 2017/11/08 15:20:41.071
>>>Timer2 2017/11/08 15:20:41.254
>>>Timer2 2017/11/08 15:20:41.436
>>>Timer2 2017/11/08 15:20:41.619
Timer1 2017/11/08 15:20:41.810
>>>Timer2 2017/11/08 15:20:41.811
>>>Timer2 2017/11/08 15:20:42.083
>>>Timer2 2017/11/08 15:20:42.265
>>>Timer2 2017/11/08 15:20:42.448
>>>Timer2 2017/11/08 15:20:42.633
I tried using TMutex with acquire() and release(), but it did not work either.
A mutex has a thread affinity and thus is re-entrant:
A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and nonsignaled when it is owned. Only one thread at a time can own a mutex object, whose name comes from the fact that it is useful in coordinating mutually exclusive access to a shared resource. For example, to prevent two threads from writing to shared memory at the same time, each thread waits for ownership of a mutex object before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex object.
...
After a thread obtains ownership of a mutex, it can specify the same mutex in repeated calls to the wait-functions without blocking its execution. This prevents a thread from deadlocking itself while waiting for a mutex that it already owns. To release its ownership under such circumstances, the thread must call ReleaseMutex once for each time that the mutex satisfied the conditions of a wait function.
TTimer is a message-based timer. You have two timers running in the same thread. Which means their OnTimer events are serialized by default in relation to each other. Only one event can be running at a time (unless you do something stupid like call Application->ProcessMessages(), which is a re-entrant nightmare).
Timer2 will trigger first (4-5 times, actually), acquiring and releasing the mutex lock each time, before Timer1 triggers. Then Timer1 triggers, acquires the lock, runs a loop to pump the main UI message queue, thus allowing Timer2 to trigger again (multiple times) while Timer1Timer() is still running. Timer2 will re-acquire and release the same lock that the UI thread already has, so WaitForSingleObject() exits with WAIT_OBJECT_0 immediately. Then the loop ends and Timer1 releases the lock.
Your mutex is useless in this code. A mutex is meant for inter-thread synchronization, but you have no worker threads in this code! You have a single thread synchronizing against itself, which is redundant, and exactly the kind of deadlock-causing situation that many synchronization objects avoid by supporting re-entry.
A critical section also has a thread affinity and is re-entrant, so that is not going to help you, either:
A critical section object provides synchronization similar to that provided by a mutex object, except that a critical section can be used only by the threads of a single process.
...
When a thread owns a critical section, it can make additional calls to EnterCriticalSection or TryEnterCriticalSection without blocking its execution. This prevents a thread from deadlocking itself while waiting for a critical section that it already owns. To release its ownership, the thread must call LeaveCriticalSection one time for each time that it entered the critical section. There is no guarantee about the order in which waiting threads will acquire ownership of the critical section.
However, a semaphore would work for what you are attempting, as it does not have a thread affinity:
A semaphore object is a synchronization object that maintains a count between zero and a specified maximum value. The count is decremented each time a thread completes a wait for the semaphore object and incremented each time a thread releases the semaphore. When the count reaches zero, no more threads can successfully wait for the semaphore object state to become signaled. The state of a semaphore is set to signaled when its count is greater than zero, and nonsignaled when its count is zero.
The semaphore object is useful in controlling a shared resource that can support a limited number of users. It acts as a gate that limits the number of threads sharing the resource to a specified maximum number. For example, an application might place a limit on the number of windows that it creates. It uses a semaphore with a maximum count equal to the window limit, decrementing the count whenever a window is created and incrementing it whenever a window is closed. The application specifies the semaphore object in a call to one of the wait functions before each window is created. When the count is zero—indicating that the window limit has been reached—the wait function blocks execution of the window-creation code.
...
A thread that owns a mutex object can wait repeatedly for the same mutex object to become signaled without its execution becoming blocked. A thread that waits repeatedly for the same semaphore object, however, decrements the semaphore's count each time a wait operation is completed; the thread is blocked when the count gets to zero. Similarly, only the thread that owns a mutex can successfully call the ReleaseMutex function, though any thread can use ReleaseSemaphore to increase the count of a semaphore object.
If you switch to a semaphore, your code as shown would deadlock itself as soon as Application->ProcessMessages() is called and the semaphore counter drops to 0, because of your use of INFINITE timeouts. So use smaller timeouts to prevent that.
Try this:
//---------------------------------------------------------------------------
#include <vcl.h>
#pragma hdrstop
#include "Unit1.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1;
//---------------------------------------------------------------------------
__fastcall TForm1::TForm1(TComponent* Owner)
    : TForm(Owner)
{
}
//---------------------------------------------------------------------------
HANDLE hSemaphore;

void __fastcall TForm1::FormShow(TObject *Sender)
{
    hSemaphore = CreateSemaphore(NULL, 1, 1, NULL);
    if (hSemaphore == NULL) {
        OutputDebugString(L"failed to create semaphore");
    }
    Timer1->Enabled = false;
    Timer1->Interval = 1000; // msec
    Timer1->Enabled = true;
    Timer2->Enabled = false;
    Timer2->Interval = 200; // msec
    Timer2->Enabled = true;
}

__fastcall TForm1::~TForm1()
{
    if (hSemaphore)
        CloseHandle(hSemaphore);
}

void __fastcall TForm1::Timer1Timer(TObject *Sender)
{
    if (WaitForSingleObject(hSemaphore, 0) != WAIT_OBJECT_0) {
        return;
    }
    if (CHK_update->Checked) {
        String msg = L"Timer1 " + Now().FormatString(L"yyyy/mm/dd hh:nn:ss.zzz");
        Memo1->Lines->Add(msg);
    }
    for (int loop = 0; loop < 10; loop++) {
        Application->ProcessMessages();
        Sleep(90); // msec
    }
    ReleaseSemaphore(hSemaphore, 1, NULL);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::Timer2Timer(TObject *Sender)
{
    if (WaitForSingleObject(hSemaphore, 0) != WAIT_OBJECT_0) {
        return;
    }
    if (CHK_update->Checked) {
        String msg = L">>>Timer2 " + Now().FormatString(L"yyyy/mm/dd hh:nn:ss.zzz");
        Memo1->Lines->Add(msg);
    }
    ReleaseSemaphore(hSemaphore, 1, NULL);
}
//---------------------------------------------------------------------------
On a side note: beware of giving a kernel-based synchronization object a name. That allows other processes to access it and mess around with its state behind your back. Don't name objects that you don't intend to share across process boundaries! Mutexes and semaphores are namable objects.

Memory Leak using Windows ThreadPool API

I am using the Windows thread pool API in my application, and am experiencing a memory leak of 136 bytes for every call to CreateThreadpoolWork(), as seen via UMDH:
+ 1257728 ( 1286424 - 28696) 9459 allocs BackTraceB0035CC
+ 9248 ( 9459 - 211) BackTraceB0035CC allocations
ntdll!RtlUlonglongByteSwap+B52
ntdll!TpAllocWork+8D
KERNEL32!CreateThreadpoolWork+25
... My Code ...
I am using a cleanup group, so per the documentation I am not calling CloseThreadpoolWork().
My code for handling the ThreadPool is:
typedef PTP_WORK ThreadHandle_t;
typedef PTP_WORK_CALLBACK THREAD_ENTRY_POINT_T;

static PTP_POOL pool = NULL;
static TP_CALLBACK_ENVIRON CallBackEnviron;
static PTP_CLEANUP_GROUP cleanupgroup = NULL;

int mtInitialize()
{
    InitializeThreadpoolEnvironment(&CallBackEnviron);
    pool = CreateThreadpool(NULL);
    if (NULL == pool)
    {
        return -1;
    }
    cleanupgroup = CreateThreadpoolCleanupGroup();
    if (NULL == cleanupgroup)
    {
        return -1;
    }
    SetThreadpoolCallbackPool(&CallBackEnviron, pool);
    SetThreadpoolCallbackCleanupGroup(&CallBackEnviron, cleanupgroup, NULL);
    return 0; // Success
}

void mtDestroy()
{
    CloseThreadpoolCleanupGroupMembers(cleanupgroup, FALSE, NULL);
    CloseThreadpoolCleanupGroup(cleanupgroup);
    DestroyThreadpoolEnvironment(&CallBackEnviron);
    CloseThreadpool(pool);
}

// Create thread
ThreadHandle_t mtRunThread(THREAD_ENTRY_POINT_T entry_point, void *thread_args)
{
    PTP_WORK work = CreateThreadpoolWork(entry_point, thread_args, &CallBackEnviron);
    if (NULL == work) {
        // CreateThreadpoolWork() failed.
        return 0;
    }
    SubmitThreadpoolWork(work);
    return work;
}

// Wait for a thread to finish
void mtWaitForThread(ThreadHandle_t thread)
{
    WaitForThreadpoolWorkCallbacks(thread, FALSE);
}
Am I doing something wrong?
Any ideas why I'm leaking memory?
I'm guessing you figured it out, given your comment, but the problem is that you only call CloseThreadpoolCleanupGroupMembers() in mtDestroy().
If you have a persistent thread pool, the memory will not be freed unless you call CloseThreadpoolCleanupGroupMembers() periodically. Your code and comments suggest that you do have a persistent pool, though I can't confirm this without the code responsible for creating and destroying your thread pool.
My recommendation for persistent thread pools is to just call CloseThreadpoolWork() in the callback functions. Microsoft's recommendations work better if you're creating and destroying thread pools, but CloseThreadpoolWork() is simpler and easier than periodically calling CloseThreadpoolCleanupGroupMembers() if you're maintaining one thread pool for the life of your application.
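As a sketch of that recommendation (the callback name and its contents are illustrative, not from the question's code):

// Work callback that closes its own work object when it finishes, so a
// long-lived pool doesn't accumulate per-item allocations.
VOID CALLBACK WorkCallback(PTP_CALLBACK_INSTANCE instance, PVOID context, PTP_WORK work)
{
    // ... do the actual work using `context` ...
    CloseThreadpoolWork(work); // releases the per-item allocation UMDH reported
}

One caveat with this approach: once the callback closes the work object, the handle returned by mtRunThread() must not be used again, e.g. in mtWaitForThread().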
By the way, it's safe to do both as long as you tell CloseThreadpoolCleanupGroupMembers() to cancel any pending callbacks (pass fCancelPendingCallbacks as TRUE) to ensure CloseThreadpoolWork() is called on any cleaned up work items:
You can revoke the work object’s membership only by closing it, which can be done on an individual basis with the CloseThreadpoolWork function. The thread pool knows that the work object is a member of the cleanup group and revokes its membership before closing it. This ensures that the application doesn’t crash when the cleanup group later attempts to close all of its members. The inverse isn’t true: If you first instruct the cleanup group to close all of its members and then call CloseThreadpoolWork on the now invalid work object, your application will crash.
From Windows with C++ - Thread Pool Cancellation and Cleanup

GCD : How to write and read to variable from two threads

This may sound like a newbie question; anyway, I'm new to GCD.
I'm creating and running the following two threads. The first one puts data into the ivar mMutableArray and the second one reads from it. How do I lock and unlock the threads to avoid crashes and keep the code thread-safe?
// Thread for writing data into mutable array
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
if (timer) {
    dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * interval), interval * NSEC_PER_SEC, leeway);
    dispatch_source_set_event_handler(timer, ^{
        ...
        // Put data into ivar
        [mMutableArray addObject:someObject];
        ...
    });
    dispatch_resume(timer);
}

// Thread for reading from mutable array
dispatch_source_t timer1 = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
if (timer1) {
    dispatch_source_set_timer(timer1, dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC * interval), interval * NSEC_PER_SEC, leeway);
    dispatch_source_set_event_handler(timer1, ^{
        ...
        if (mMutableArray) {
            // Read data from ivar
            SomeObject* someobject = [mMutableArray lastObject];
        }
        ...
    });
    dispatch_resume(timer1);
}
You are using it wrong: by locking access to the variables, you lose any benefit of GCD. Create a single serial queue that is associated with the variables you want to modify (in this case the mutable array). Then use that queue for both writes and reads, which will happen in a guaranteed serial sequence and with minimal locking overhead. You can read more about it in "Asynchronous setters" at http://www.fieryrobot.com/blog/2010/09/01/synchronization-using-grand-central-dispatch/. As long as your access to the shared variable happens through its associated dispatch queue, you won't ever have concurrency problems.
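A minimal sketch of that pattern (the queue label is illustrative; the array and object names are from the question):

// Serial queue that owns all access to mMutableArray
dispatch_queue_t arrayQueue = dispatch_queue_create("com.example.array-access", NULL);

// Writer (e.g. in the first timer's event handler): async, returns immediately
dispatch_async(arrayQueue, ^{
    [mMutableArray addObject:someObject];
});

// Reader (e.g. in the second timer's handler): sync, so the value is
// available when the call returns
__block SomeObject *lastObject = nil;
dispatch_sync(arrayQueue, ^{
    lastObject = [mMutableArray lastObject];
});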
I've used mutexes in my project and I'm quite happy with how it works at the moment.
Create the mutex and initialise it:
pthread_mutex_t _mutex;
pthread_mutex_init(&_mutex, NULL);
Then put the lock around your code; once the second thread tries to take the lock, it will wait until the lock is freed again by the first thread. Note that you might still want to check whether the first thread is actually the one writing to it.
pthread_mutex_lock(&_mutex);
{
    // code that touches the shared data goes here
}
pthread_mutex_unlock(&_mutex);
You can still use @synchronized on your critical sections with GCD.

How does -performSelector:withObject:afterDelay: work?

I am currently working under the assumption that -performSelector:withObject:afterDelay: does not utilize threading, but schedules an event to fire at a later date on the current thread. Is this correct?
More specifically:
- (void) methodCalledByButtonClick {
    for (id obj in array) {
        [self doSomethingWithObj:obj];
    }
}

static BOOL isBad = NO;

- (void) doSomethingWithObj:(id)obj {
    if (isBad) {
        return;
    }
    if ([obj isBad]) {
        isBad = YES;
        [self performSelector:@selector(resetIsBad) withObject:nil afterDelay:0.1];
        return;
    }
    // Do something with obj
}

- (void) resetIsBad {
    isBad = NO;
}
Is it guaranteed that -resetIsBad will not be called until after -methodCalledByButtonClick returns, assuming we are running on the main thread, even if -methodCalledByButtonClick takes an arbitrarily long time to complete?
From the docs:
Invokes a method of the receiver on the current thread using the default mode after a delay.
The discussion goes further:
This method sets up a timer to perform the aSelector message on the current thread’s run loop. The timer is configured to run in the default mode (NSDefaultRunLoopMode). When the timer fires, the thread attempts to dequeue the message from the run loop and perform the selector. It succeeds if the run loop is running and in the default mode; otherwise, the timer waits until the run loop is in the default mode.
From this we can answer your second question. Yes, it's guaranteed, even with a shorter delay since the current thread is busy executing when performSelector is called. When the thread returns to the run loop and dequeues the selector, you'll have returned from your methodCalledByButtonClick.
performSelector:withObject:afterDelay: schedules a timer on the same thread to call the selector after the passed delay. If you sign up for the default run mode (i.e. don't use performSelector:withObject:afterDelay:inModes:), I believe it is guaranteed to wait until the next pass through the run loop, so everything on the stack will complete first.
Even if you call it with a delay of 0, it will wait until the next pass through the run loop, and so behaves as you want here. For more info, refer to the docs.
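A toy example (hypothetical, not from the question) makes the ordering visible:

- (void) methodCalledByButtonClick {
    [self performSelector:@selector(later) withObject:nil afterDelay:0];
    NSLog(@"A");
    [NSThread sleepForTimeInterval:2.0]; // even a slow method finishes first
    NSLog(@"B");
    // -later cannot run until this method returns and the run loop spins again
}

- (void) later {
    NSLog(@"C"); // always logged after A and B
}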