Cancelling an NSOperation from an NSOperationQueue causes a crash - iPhone

I'm trying to build a download manager class that wraps each async download operation (every operation has its own thread) in an NSOperation subclass, so they can later be added to an NSOperationQueue. The download manager class (a singleton) also exposes a few methods to work on the queue and cancel operations that match certain requirements. These are the first steps toward a kind of class cluster (abstract factory) that returns different kinds of NSOperation for different types of common operation (upload, download, parse, etc.).
The class seems to work pretty well with download operations, but if in the middle of those operations I call a method to cancel an operation, the operation is successfully cancelled but the app crashes a few operations later. If I don't cancel any operations, everything works fine. All operations are observed using KVO.
The method that removes the operation looks like this:
- (void)cancelDownloadOperationWithID:(NSString *)aUUID {
    @synchronized(self) {
        [self.downloadQueue setSuspended:YES]; // downloadQueue is an NSOperationQueue
        NSArray *downloadOperations = [self.downloadQueue operations];
        // In this predicate, SELF refers to each operation being filtered
        NSPredicate *aPredicate = [NSPredicate predicateWithFormat:@"SELF.connectionID == %@", aUUID];
        NSArray *filteredArray = [downloadOperations filteredArrayUsingPredicate:aPredicate];
        if ([filteredArray count] == 0) {
            [self.downloadQueue setSuspended:NO];
            return;
        }
        [filteredArray makeObjectsPerformSelector:@selector(cancel)];
        NSLog(@"Cancelled %lu operations", (unsigned long)[filteredArray count]);
        [self.downloadQueue setSuspended:NO];
    }
}
The crash log is pretty incomprehensible, but it's an EXC_BAD_ACCESS (a zombie, perhaps); note that I'm using ARC.
0x00a90ea8 <+0393> jle 0xa90d9f <____NSOQSchedule_block_invoke_0+128>
0x00a90eae <+0399> mov -0x38(%ebp),%ecx
0x00a90eb1 <+0402> mov -0x34(%ebp),%esi
0x00a90eb4 <+0405> mov (%esi,%ecx,1),%ecx
0x00a90eb7 <+0408> mov -0x40(%ebp),%esi
0x00a90eba <+0411> cmpb $0x0,(%ecx,%esi,1)
0x00a90ebe <+0415> jne 0xa90d9f <____NSOQSchedule_block_invoke_0+128>
0x00a90ec4 <+0421> mov (%edi,%eax,1),%esi
0x00a90ec7 <+0424> mov (%esi,%edx,1),%ebx
0x00a90eca <+0427> mov %ebx,-0x2c(%ebp)
0x00a90ecd <+0430> mov -0x44(%ebp),%ebx
0x00a90ed0 <+0433> cmpl $0x50,(%esi,%ebx,1)
0x00a90ed4 <+0437> mov %edi,%ebx
0x00a90ed6 <+0439> jne 0xa90e96 <____NSOQSchedule_block_invoke_0+375>
0x00a90ed8 <+0441> mov -0x48(%ebp),%ebx
0x00a90edb <+0444> cmpb $0x0,(%esi,%ebx,1)
0x00a90edf <+0448> mov %edi,%ebx
0x00a90ee1 <+0450> je 0xa90e96 <____NSOQSchedule_block_invoke_0+375>
Can someone give me a suggestion about this?
Thanks, Andrea

Well, the answer was pretty simple. In the overridden -cancel method of the NSOperation subclass I was setting both the finished and executing vars, triggering the corresponding KVO callbacks. The problem is that an operation stays in the NSOperationQueue even if it is cancelled; when the queue then tries to call -start on an NSOperation that has already fired its finished KVO callback, it crashes.
The workaround is as follows: if the operation was cancelled while it was not yet executing, you must set the finished var to YES right at the start of your -start implementation (and return); if it was cancelled while executing, it's fine to set finished to YES and executing to NO.

The accepted answer works for me. Just to help clear this up in case anybody else runs into it: I also experienced this crash by setting isFinished improperly inside my -cancel, before the async operation had begun executing.
Rather than doing that, I switched my -cancel to only change isFinished if the operation was already isExecuting, and then in -start I set isFinished immediately, as suggested here. Voilà, crash gone.

Here's a piece in Swift combining the two previous answers:
override func cancel() {
    super.cancel()
    if executing {
        // Only flip the state vars if the operation actually started.
        executing = false
        finished = true
    }
    task.cancel()
}

override func start() {
    if cancelled {
        // Cancelled before it ever started: mark it finished so the queue
        // can release it, and never touch `executing`.
        finished = true
        return
    }
    executing = true
    main()
}

override func main() {
    task.resume()
}
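Note that this only works if the subclass owns the executing/finished state itself and emits the KVO notifications NSOperationQueue listens for, since the base class properties are read-only. A minimal sketch of the backing storage the snippet above assumes (using the current Swift spellings isExecuting/isFinished where the older snippet writes executing/finished, and a hypothetical class name):

import Foundation

class AsyncDownloadOperation: Operation {   // hypothetical name

    private var _executing = false
    private var _finished = false

    // An asynchronous operation must report its own state changes via KVO,
    // otherwise the queue never learns that it started or finished.
    override var isAsynchronous: Bool { return true }

    override var isExecuting: Bool {
        get { return _executing }
        set {
            willChangeValue(forKey: "isExecuting")
            _executing = newValue
            didChangeValue(forKey: "isExecuting")
        }
    }

    override var isFinished: Bool {
        get { return _finished }
        set {
            willChangeValue(forKey: "isFinished")
            _finished = newValue
            didChangeValue(forKey: "isFinished")
        }
    }

    // The cancel()/start()/main() overrides from the answer above would live here,
    // writing to isExecuting/isFinished through these setters.
}

The key point, matching the accepted answer, is that isFinished must eventually become true even for a cancelled operation, otherwise the queue never releases it.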

Related

Run a thread continuously without blocking the main thread in Swift

I wrote some code that I need to run continuously. Initially I used RunLoop.current.run(), and it works fine. The problem is that it blocks the main thread. How can I run the code continuously in the background without blocking?
Basic class structure:
class Keylogger {
    func start() {
        let observer = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque())

        /* Connected and Disconnected Call Backs */
        IOHIDManagerRegisterDeviceMatchingCallback(manager, Handle_DeviceMatchingCallback, observer)
        IOHIDManagerRegisterDeviceRemovalCallback(manager, Handle_DeviceRemovalCallback, observer)

        /* Input value Call Backs */
        IOHIDManagerRegisterInputValueCallback(manager, Handle_IOHIDInputValueCallback, observer)

        /* schedule */
        IOHIDManagerScheduleWithRunLoop(manager, CFRunLoopGetMain(), CFRunLoopMode.defaultMode.rawValue)
        print("Started")
    }
}
And in main.swift
var logger = Keylogger()
logger.start()
RunLoop.current.run()
// Whatever is written below this will obviously not be executed
I used DispatchQueue before for background tasks (which are just pieces of code), but how do I execute something continuously?
I tried this:
var d = Keylogger()
var ff = {
    d.start()
}
var f = DispatchQueue(label: "Keylogger", qos: .userInteractive, attributes: .concurrent)
f.async(execute: ff)
while true {}
But Keylogger's start() is never executed.
I thought of creating an executable and running it through NSTask. Other than that, is there any other way to do it?
I think you are misunderstanding the purpose of RunLoop.current.run().
Your code only works while your program is actually running, constantly.
Your keylogger code runs on another thread, so to keep that thread alive, the main thread must stay active. That is the reason for RunLoop.current.run().
So, either use callbacks (like the keylogger already does) and schedule them on another thread, or do everything else you need first and place RunLoop.current.run() at the very end of your code.
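A minimal sketch of those two options, assuming the Keylogger class from the question; note that for the background-thread option, start() would have to schedule the IOHIDManager on CFRunLoopGetCurrent() instead of CFRunLoopGetMain(), so the callbacks are serviced by the run loop that is actually running:

import Foundation

let logger = Keylogger()

// Option 1: give the logger its own thread with a live run loop,
// leaving the main thread free.
let loggerThread = Thread {
    logger.start()         // registers the callbacks (on this thread's run loop)
    RunLoop.current.run()  // keeps this background thread alive to service them
}
loggerThread.start()

// ... the main thread can do its own work here; in a command-line tool it still
// has to stay alive (for example with its own RunLoop.current.run() at the end),
// otherwise the process exits ...

// Option 2: keep everything on the main thread and park it last,
// exactly as in the original main.swift:
// logger.start()
// RunLoop.current.run()

Either way avoids the busy while true {} loop, which pins a CPU core without ever letting a run loop process the scheduled callbacks.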

Memory Leak using Windows ThreadPool API

I am using Windows thread pools in my application, and am experiencing a memory leak of 136 bytes for every call to CreateThreadpoolWork(), as seen via UMDH:
+ 1257728 ( 1286424 - 28696) 9459 allocs BackTraceB0035CC
+ 9248 ( 9459 - 211) BackTraceB0035CC allocations
ntdll!RtlUlonglongByteSwap+B52
ntdll!TpAllocWork+8D
KERNEL32!CreateThreadpoolWork+25
... My Code ...
I am using a cleanup group, so per the documentation I am not calling CloseThreadpoolWork().
My code for handling the ThreadPool is:
typedef PTP_WORK ThreadHandle_t;
typedef PTP_WORK_CALLBACK THREAD_ENTRY_POINT_T;

static PTP_POOL pool = NULL;
static TP_CALLBACK_ENVIRON CallBackEnviron;
static PTP_CLEANUP_GROUP cleanupgroup = NULL;

int mtInitialize()
{
    InitializeThreadpoolEnvironment(&CallBackEnviron);

    pool = CreateThreadpool(NULL);
    if (NULL == pool)
    {
        return -1;
    }

    cleanupgroup = CreateThreadpoolCleanupGroup();
    if (NULL == cleanupgroup)
    {
        return -1;
    }

    SetThreadpoolCallbackPool(&CallBackEnviron, pool);
    SetThreadpoolCallbackCleanupGroup(&CallBackEnviron, cleanupgroup, NULL);

    return 0; // Success
}

void mtDestroy()
{
    CloseThreadpoolCleanupGroupMembers(cleanupgroup, FALSE, NULL);
    CloseThreadpoolCleanupGroup(cleanupgroup);
    DestroyThreadpoolEnvironment(&CallBackEnviron);
    CloseThreadpool(pool);
}

// Create thread
ThreadHandle_t mtRunThread(THREAD_ENTRY_POINT_T entry_point, void *thread_args)
{
    PTP_WORK work = NULL;
    work = CreateThreadpoolWork(entry_point, thread_args, &CallBackEnviron);
    if (NULL == work) {
        // CreateThreadpoolWork() failed.
        return 0;
    }
    SubmitThreadpoolWork(work);
    return work;
}

// Wait for a thread to finish
void mtWaitForThread(ThreadHandle_t thread)
{
    WaitForThreadpoolWorkCallbacks(thread, FALSE);
}
Am I doing something wrong?
Any ideas why I'm leaking memory?
I'm guessing you figured it out, given your comment, but the problem is that you only call CloseThreadpoolCleanupGroupMembers() in mtDestroy().
If you have a persistent thread pool, the memory will not be freed unless you call CloseThreadpoolCleanupGroupMembers() periodically. Your code and comments suggest that you do, though I can't confirm this without seeing the code responsible for creating and destroying your thread pool.
My recommendation for persistent thread pools is to just call CloseThreadpoolWork() in the callback functions. Microsoft's recommendations work better if you're creating and destroying thread pools, but CloseThreadpoolWork() is simpler and easier than periodically calling CloseThreadpoolCleanupGroupMembers() if you're maintaining one thread pool for the life of your application.
By the way, it's safe to do both as long as you tell CloseThreadpoolCleanupGroupMembers() to cancel any pending callbacks (pass fCancelPendingCallbacks as TRUE) to ensure CloseThreadpoolWork() is called on any cleaned up work items:
You can revoke the work object’s membership only by closing it, which can be done on an individual basis with the CloseThreadpoolWork function. The thread pool knows that the work object is a member of the cleanup group and revokes its membership before closing it. This ensures that the application doesn’t crash when the cleanup group later attempts to close all of its members. The inverse isn’t true: if you first instruct the cleanup group to close all of its members and then call CloseThreadpoolWork on the now-invalid work object, your application will crash.
From Windows with C++ - Thread Pool Cancellation and Cleanup

PowerShell Custom Cmdlet - Call WriteObject in a Background Thread

Calling the WriteObject method from a background thread is not possible!
Is there a way to invoke/dispatch this method on PowerShell's main thread (like in WPF)?
Code sample:
protected override void ProcessRecord()
{
    base.ProcessRecord();
    ...
    Service.StartReading(filter, list => { WriteObject(list, true); });
}
EDIT:
Any solution, workaround, or quick fix?
Thanks,
Mathias
I found a solution which solves my problem.
Create a ConcurrentQueue:
ConcurrentQueue<LogEntryInfoBase> logEntryQueue =
    new ConcurrentQueue<LogEntryInfoBase>();
Start a background thread that enqueues items into the ConcurrentQueue:
Task.Factory.StartNew(() => Service.StartReading(
    filter, EnqueueLogEntryInfoBases));
Meanwhile, try to dequeue from this queue on the main thread:
for ( ; ; )
{
    LogEntryInfoBase logEntry = null;
    logEntryQueue.TryDequeue(out logEntry);
    if (logEntry != null)
    {
        WriteObject(logEntry);
    }
    Thread.Sleep(100);
}
From my point of view this solution/fix is ugly, but it works for my current issue.
I was stuck on pretty much the same issue. In my opinion we can improve the solution by adding a sleep in the infinite loop. Of course, we will need to keep a global reference to our main thread, and the background thread will need to call interrupt as soon as an item is added to the queue.

iPhone - how do I make a thread run faster

I have two methods that I need to run; let's call them metA and metB.
When I started coding this app, I called both methods without using threads, but the app started freezing, so I decided to go with threads.
metA and metB are called by touch events, so they can occur at any time and in any order. They don't depend on each other.
My problem is the time it takes for either thread to start running. There's a lag between the time the thread is created with
[NSThread detachNewThreadSelector:@selector(.... bla bla
and the time the thread actually starts running.
I suppose this time is related to the amount of time iOS needs to create the thread itself. How can I speed this up? If I pre-create both threads, how do I make them do their work only when needed and never terminate? I mean, a kind of sleeping thread that is always alive, works when asked, and sleeps after that?
Thanks.
If you want to avoid the expensive startup time of creating new threads, create both threads at startup as you suggested. To have them only run when needed, you can have them wait on a condition variable. Since you're using the NSThread class for threading, I'd recommend using the NSCondition class for condition variables (an alternative would be to use the POSIX threading (pthread) condition variables, pthread_cond_t).
One thing you'll have to be careful of is getting another touch event while the thread is still running. In that case, I'd recommend using a queue to keep track of work items; the touch event handler can just add the work item to the queue, and the worker thread can process items as long as the queue is not empty.
Here's one way to do this:
typedef struct WorkItem
{
    // information about the work item
    ...
    struct WorkItem *next;  // linked list of work items
} WorkItem;

WorkItem *workQueue = NULL;          // head of linked list of work items
WorkItem *workQueueTail = NULL;      // tail of linked list of work items
NSCondition *workCondition = NULL;   // condition variable for the queue
...

-(id) init
{
    if((self = [super init]))
    {
        // Make sure this gets initialized before the worker thread starts
        // running
        workCondition = [[NSCondition alloc] init];

        // Start the worker thread
        [NSThread detachNewThreadSelector:@selector(threadProc:)
                                 toTarget:self withObject:nil];
    }
    return self;
}

// Suppose this function gets called whenever we receive an appropriate touch
// event
-(void) onTouch
{
    // Construct a new work item.  Note that this must be allocated on the
    // heap (*not* the stack) so that it doesn't get destroyed before the
    // worker thread has a chance to work on it.
    WorkItem *workItem = (WorkItem *)malloc(sizeof(WorkItem));

    // fill out the relevant info about the work that needs to get done here
    ...
    workItem->next = NULL;

    // Lock the mutex & add the work item to the tail of the queue (we
    // maintain that the following invariant is always true:
    // (workQueueTail == NULL || workQueueTail->next == NULL))
    [workCondition lock];
    if(workQueueTail != NULL)
        workQueueTail->next = workItem;
    else
        workQueue = workItem;
    workQueueTail = workItem;
    [workCondition unlock];

    // Finally, signal the condition variable to wake up the worker thread
    [workCondition signal];
}

-(void) threadProc:(id)arg
{
    // Loop & wait for work to arrive.  Note that the condition variable must
    // be locked before it can be waited on.  You may also want to add
    // another variable that gets checked every iteration so this thread can
    // exit gracefully if need be.
    while(1)
    {
        [workCondition lock];
        while(workQueue == NULL)
        {
            [workCondition wait];

            // The work queue should have something in it, but there are rare
            // edge cases that can cause spurious signals.  So double-check
            // that it's not empty.
        }

        // Dequeue the work item & unlock the mutex so we don't block the
        // main thread more than we have to
        WorkItem *workItem = workQueue;
        workQueue = workQueue->next;
        if(workQueue == NULL)
            workQueueTail = NULL;
        [workCondition unlock];

        // Process the work item here
        ...
        free(workItem);  // don't leak memory
    }
}
If you can target iOS 4 and higher, consider using blocks with a Grand Central Dispatch async queue, which runs your work on background threads that the queue manages for you... or, for backwards compatibility, as mentioned, use NSOperations inside an NSOperationQueue to have bits of work performed for you in the background. You can specify exactly how many background threads you want to support with an NSOperationQueue if both operations have to run at the same time.
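As a rough sketch of the GCD route (shown in Swift for brevity; the dispatch_async pattern is the same in Objective-C), assuming metA and metB are the two pieces of work: a single long-lived serial background queue plays the same role as the hand-rolled worker thread and linked-list queue above, with no per-touch thread-creation cost.

import Dispatch

// One long-lived serial queue; the label is just an illustrative identifier.
let workQueue = DispatchQueue(label: "com.example.touch-work")

func touchHandlerA() {        // hypothetical touch handler for metA's work
    workQueue.async {
        // ... do the work metA used to do, off the main thread ...
    }
}

func touchHandlerB() {        // hypothetical touch handler for metB's work
    workQueue.async {
        // ... do the work metB used to do, off the main thread ...
    }
}

If metA and metB really should run at the same time, a concurrent queue (or an NSOperationQueue with its maximum concurrent operation count set to 2) does the same job, as noted above.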

How does -performSelector:withObject:afterDelay: work?

I am currently working under the assumption that -performSelector:withObject:afterDelay: does not utilize threading, but schedules an event to fire at a later date on the current thread. Is this correct?
More specifically:
- (void)methodCalledByButtonClick {
    for (id obj in array) {
        [self doSomethingWithObj:obj];
    }
}

static BOOL isBad = NO;

- (void)doSomethingWithObj:(id)obj {
    if (isBad) {
        return;
    }
    if ([obj isBad]) {
        isBad = YES;
        [self performSelector:@selector(resetIsBad) withObject:nil afterDelay:0.1];
        return;
    }
    // Do something with obj
}

- (void)resetIsBad {
    isBad = NO;
}
Is it guaranteed that -resetIsBad will not be called until after -methodCalledByButtonClick returns, assuming we are running on the main thread, even if -methodCalledByButtonClick takes an arbitrarily long time to complete?
From the docs:
Invokes a method of the receiver on the current thread using the default mode after a delay.
The discussion goes further:
This method sets up a timer to perform the aSelector message on the current thread’s run loop. The timer is configured to run in the default mode (NSDefaultRunLoopMode). When the timer fires, the thread attempts to dequeue the message from the run loop and perform the selector. It succeeds if the run loop is running and in the default mode; otherwise, the timer waits until the run loop is in the default mode.
From this we can answer your second question: yes, it's guaranteed, even with a shorter delay, since the current thread is busy executing when performSelector is called. Only when the thread returns to the run loop does it dequeue the selector, and by then you'll already have returned from your methodCalledByButtonClick.
performSelector:withObject:afterDelay: schedules a timer on the same thread to call the selector after the given delay. If you sign up for the default run mode (i.e. don't use performSelector:withObject:afterDelay:inModes:), I believe it is guaranteed to wait until the next pass through the run loop, so everything on the stack will complete first.
Even if you call it with a delay of 0, it will wait until the next pass through the run loop and so behave as you want here. For more info, refer to the docs.
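A small self-contained sketch of that ordering guarantee (in Swift, with hypothetical names, assuming it runs on the main thread of an app whose run loop is spinning): even with a delay of 0 the selector cannot fire until the current method has returned and control is back in the run loop.

import Foundation

class OrderingDemo: NSObject {
    @objc func later() {
        print("3: later fired")  // runs on a later pass of the run loop
    }

    func methodCalledByButtonClick() {
        print("1: scheduling")
        // Queued on the current thread's run loop; it cannot preempt this method.
        perform(#selector(later), with: nil, afterDelay: 0)
        Thread.sleep(forTimeInterval: 1.0)  // stand-in for arbitrarily long work
        print("2: still inside methodCalledByButtonClick")
    }
}

The prints always appear in the order 1, 2, 3, no matter how long the simulated work takes, because the deferred call only runs once the run loop gets control back.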