Trap SIGINT in a Cocoa macOS application - Swift

I am attempting to trap a SIGINT for a UI application made for macOS. In the app delegate class, I see the following method:
func applicationWillTerminate(_ aNotification: Notification) {
}
However, a Ctrl+C (SIGINT) never gets caught here. Reading around the internet has shown that this function is not guaranteed to execute, especially if the app goes into the background.
What can I do in the app delegate to catch a SIGINT? Or is there an alternative place to catch the interrupt so that I can close resources appropriately?

Charles's answer is correct, but his caveat ("Be sure only to call reentrant functions from within the handler") is an extreme limitation. It's possible to redirect handling of a signal to a safer environment using kqueue and CFFileDescriptor.
Technical Note TN2050: Observing Process Lifetimes Without Polling is on a different subject but illustrates the technique. There, Apple describes Charles's caveat this way:
Listening for a signal can be tricky because of the wacky execution environment associated with signal handlers. Specifically, if you install a signal handler (using signal or sigaction), you must be very careful about what you do in that handler. Very few functions are safe to call from a signal handler. For example, it is not safe to allocate memory using malloc!
The functions that are safe from a signal handler (the async-signal safe functions) are listed on the sigaction man page.
In most cases you must take extra steps to redirect incoming signals to a more sensible environment.
I've taken the code illustration from there and modified it for handling SIGINT. Sorry, it's Objective-C. Here's the one-time setup code:
// Ignore SIGINT so it doesn't terminate the process.
signal(SIGINT, SIG_IGN);
// Create the kqueue and set it up to watch for SIGINT. Use the
// EV_RECEIPT flag to ensure that we get what we expect.
int kq = kqueue();
struct kevent changes;
EV_SET(&changes, SIGINT, EVFILT_SIGNAL, EV_ADD | EV_RECEIPT, 0, 0, NULL); // fflags is unused for EVFILT_SIGNAL (NOTE_EXIT belongs to EVFILT_PROC)
(void) kevent(kq, &changes, 1, &changes, 1, NULL);
// Wrap the kqueue in a CFFileDescriptor. Then create a run-loop source
// from the CFFileDescriptor and add that to the runloop.
CFFileDescriptorContext context = { 0, self, NULL, NULL, NULL };
CFFileDescriptorRef kqRef = CFFileDescriptorCreate(NULL, kq, true, sigint_handler, &context);
CFRunLoopSourceRef rls = CFFileDescriptorCreateRunLoopSource(NULL, kqRef, 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), rls, kCFRunLoopDefaultMode);
CFRelease(rls);
CFFileDescriptorEnableCallBacks(kqRef, kCFFileDescriptorReadCallBack);
CFRelease(kqRef);
And here's how you would implement the sigint_handler callback referenced above:
static void sigint_handler(CFFileDescriptorRef f, CFOptionFlags callBackTypes, void *info)
{
    // Drain the pending event from the kqueue.
    struct kevent event;
    (void) kevent(CFFileDescriptorGetNativeDescriptor(f), NULL, 0, &event, 1, NULL);
    // Callbacks are one-shot; re-enable to keep receiving SIGINT notifications.
    CFFileDescriptorEnableCallBacks(f, kCFFileDescriptorReadCallBack);
    // You've been notified!
}
Note that this technique requires that you run the setup code on a thread which will live as long as you're interested in handling SIGINT (perhaps the app's lifetime) and service/run its run loop. Threads created by the system for its own purposes, such as those which service Grand Central Dispatch queues, are not suitable for this purpose.
The main thread of the app will work and you can use it. However, if the main thread locks up or becomes unresponsive, then it's not servicing its run loop and the SIGINT handler won't be called. Since SIGINT is often used to interrupt exactly such a stuck process, the main thread may not be suitable.
So, you may want to spawn a thread of your own just to monitor this signal. It should do nothing else, because anything else might cause it to get stuck, too. Even there, though, there are issues. Your handler function will be called on this background thread of yours and the main thread may still be locked up. There's a lot of stuff that is main-thread-only in the system libraries and you won't be able to do any of that. But you'll have much greater flexibility than in a POSIX-style signal handler.
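For illustration, here is a minimal Swift sketch of such a dedicated monitor thread. It assumes a hypothetical installSIGINTSource() function that performs the kqueue/CFFileDescriptor setup above and adds the resulting source to the current thread's run loop:
import Foundation
// Hypothetical helper: wraps the one-time setup code shown earlier and
// adds the run-loop source to the current thread's run loop.
func installSIGINTSource() { /* ... setup code from above ... */ }
let monitor = Thread {
    installSIGINTSource() // with a source installed, run(mode:before:) blocks instead of returning
    while true {
        _ = RunLoop.current.run(mode: .default, before: .distantFuture)
    }
}
monitor.name = "SIGINT monitor"
monitor.start()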
I should add that GCD's dispatch sources can also monitor UNIX signals and are easier to work with, especially from Swift. However, they won't have a dedicated thread pre-created to run the handler; the handler will be submitted to a queue. You may designate a high-priority/high-QoS queue, but I'm not entirely certain the handler will get run if the process already has numerous runaway threads. That is, a task that's actually running on a high-priority queue will take precedence over lower-priority threads or queues, but starting a new task might not.
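For completeness, here is a minimal sketch of that dispatch-source alternative (my illustration, not from the original answer). Note that the source must be stored somewhere long-lived so it isn't deallocated:
import Foundation
// Ignore SIGINT so the default handler doesn't terminate the process.
signal(SIGINT, SIG_IGN)
let sigintSource = DispatchSource.makeSignalSource(signal: SIGINT, queue: .main)
sigintSource.setEventHandler {
    // Runs on the designated queue, not in a signal-handler context,
    // so ordinary (non-async-signal-safe) code is fine here.
    print("Got SIGINT; cleaning up")
}
sigintSource.resume()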

You can do this via the sigaction() function:
import Foundation
let handler: @convention(c) (Int32) -> () = { sig in
    // handle the signal somehow
}
var action = sigaction(__sigaction_u: unsafeBitCast(handler, to: __sigaction_u.self),
                       sa_mask: 0,
                       sa_flags: 0)
sigaction(SIGINT, &action, nil)
Be sure only to call reentrant functions from within the handler.

Although I won't dare to compete with the great answers above, I think there is something important to say that may simplify the solution a lot.
I think your question mixes two very different levels of macOS program life-cycle management.
True, you can run simple POSIX-style executables on a Mac, but a Cocoa app IS NOT a POSIX app with an "attached UI". A Mac application relies on a very verbose, complete, and feature-rich life-cycle mechanism that allows much more than any POSIX process can do: state preservation and restoration (when killed and re-launched), being auto-launched in response to opening a file, energy management, handling computer sleep, background and foreground transitions, handing off to the same app on a nearby device, modes, etc.
Furthermore, I can't see how one could ever expect a Ctrl-C from the keyboard to reach a Cocoa ("UI") application: keyboard shortcuts mean something entirely different in this context, and you don't normally run a UI app from a terminal/shell synchronously.
Now, the right way to 'interrupt' a Cocoa app that is "stuck" or takes too long to perform something is the conventional Command-Period key combination, still widely implemented by many apps (Photoshop, video editors, the Finder, etc.). I'm sorry I could not locate this definition in Apple's Human Interface Guidelines; maybe it is no longer part of the standard. However, Ctrl-C certainly isn't!
You can implement Cmd-Period (⌘.) in your app, i.e. register for this high-level "interrupt" action and handle it gracefully.
There is a good reason why the NSApplication object wrapping the Cocoa app ignores SIGINT: it is completely out of context.
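If you go that route, one possible way (an untested sketch, not from any Apple sample) to catch ⌘. is to override performKeyEquivalent(with:) in a view subclass:
import AppKit
class InterruptibleView: NSView {
    override func performKeyEquivalent(with event: NSEvent) -> Bool {
        if event.modifierFlags.contains(.command),
           event.charactersIgnoringModifiers == "." {
            // Cancel the long-running work gracefully here.
            return true
        }
        return super.performKeyEquivalent(with: event)
    }
}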

Related

Vala, GTK: How do I perform UI operations from a different thread?

I have a server listening for messages on a port, running in a different thread. Once it receives a message, I need it to be displayed in a textbox.
Is there a method like runOnUiThread() (which exists in Android) or an equivalent in Vala/GTK?
Or else what are the alternatives?
Use GLib.Idle.add to schedule something in the event dispatch thread:
Idle.add(() => {
    textbox.Entry = "foo";
    return Source.REMOVE;
});
In contrast to many other operating systems, you can apparently perform UI operations from a non-UI thread. I could successfully change the text of an Entry from the server thread. Not sure if this is recommended, though.

Why does the order of a dispatched print() change when run in a Playground?

In a Udacity's GCD course there was a small quiz:
let q = DispatchQueue.global(qos: .userInteractive)
q.async { () -> Void in
    print("tic")
}
print("tac")
Which will be printed first?
The correct answer is: tac, then tic. Seems logical.
But, why is it so only when I create an Xcode project? In a playground it prints tic then tac. What am I missing?
In GCD,
DispatchQueue.global(qos: .userInteractive).async {}
is below
DispatchQueue.main.async {}
Even though it has a quality of service (QoS) matching the UI thread, that does not mean it is the main thread. So there may be a difference in behavior between a playground and a UI project.
Please check Apple's documentation on GCD as well.
The key to your answer is in the question: what are you really asking the "system" to do? And by system, that means whatever the code is running on: the playground, your computer/phone, or the simulator. You are executing an asynchronous "block" of code, print("tic"), with some priority, .userInteractive. Whether the system handles the asynchronous block first or continues with "normal" execution depends on priority and available resources. With asynchronous calls there is no real way to guarantee that the block executes before or after the code that follows; that is the nature of being asynchronous, i.e. execute the block as soon as the system allows, without blocking your current work. So the difference you are seeing between the playground and a project/simulator is that the project/phone/simulator must keep the UI/primary thread responsive, so it continues with print("tac"), while the playground favors the thread executing print("tic"). The bottom line is that it comes down to the priority of execution, the available resources, and how dispatch is implemented on the system you're running the code on.
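To put that point in code: the only way to guarantee an ordering is to synchronize explicitly, for example with a DispatchGroup (a sketch, not part of the original quiz):
import Foundation
let queue = DispatchQueue.global(qos: .userInteractive)
let group = DispatchGroup()
queue.async(group: group) {
    print("tic")
}
group.wait() // block until the async block has finished
print("tac") // "tic" is now guaranteed to come first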

Using libevent together with GCD (libdispatch) in Swift

I'm creating a server-side app in Swift 3. I've chosen libevent for implementing the networking code because it's cross-platform and doesn't suffer from the C10k problem. Libevent implements its own event loop, but I want to keep CFRunLoop and GCD (DispatchQueue.main.after etc.) functional as well, so I need to glue them together somehow.
This is what I've come up with:
var terminated = false
DispatchQueue.main.after(when: DispatchTime.now() + 3) {
    print("Dispatch works!")
    terminated = true
}
while !terminated {
    switch event_base_loop(eventBase, EVLOOP_NONBLOCK) { // libevent
    case 1:
        break // No events were processed
    case 0:
        print("DEBUG: Libevent processed one or more events")
    default: // -1
        print("Unhandled error in network backend")
        exit(1)
    }
    RunLoop.current().run(mode: RunLoopMode.defaultRunLoopMode,
                          before: Date(timeIntervalSinceNow: 0.01))
}
This works, but introduces a latency of 0.01 sec. While RunLoop is sleeping, libevent won't be able to process events. Lowering this timeout increases CPU usage significantly when the app is idle.
I was also considering using only libevent, but third party libs in the project can use dispatch_async internally, so this can be problematic.
Running libevent's loop in a different thread makes synchronization more complex, is this the only way of solving this latency issue?
LINUX UPDATE. The above code does not work on Linux (2016-07-25-a Swift snapshot): RunLoop.current().run exits with an error. Below is a working Linux version reimplemented with a timer and dispatch_main. It suffers from the same latency issue:
let queue = dispatch_get_main_queue()
let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue)
let interval = 0.01
let block: () -> () = {
    guard !terminated else {
        print("Quitting")
        exit(0)
    }
    switch server.loop() {
    case 1: break // Just idling
    case 0: break // print("Libevent: processed event(s)")
    default: // -1
        print("Unhandled error in network backend")
        exit(1)
    }
}
block()
let fireTime = dispatch_time(DISPATCH_TIME_NOW, Int64(interval * Double(NSEC_PER_SEC)))
dispatch_source_set_timer(timer, fireTime, UInt64(interval * Double(NSEC_PER_SEC)), UInt64(NSEC_PER_SEC) / 10)
dispatch_source_set_event_handler(timer, block)
dispatch_resume(timer)
dispatch_main()
A quick search of the Open Source Swift Foundation libraries on GitHub reveals that the support in CFRunLoop is (perhaps obviously) implemented differently on different platforms. This means, in essence, that RunLoop and libevent, with respect to cross-platform-ness, are just different ways to achieve the same thing. I can see the thinking behind the thought that libevent is probably better suited to server implementations, since CFRunLoop didn't grow up with that specific goal, but as far as being cross-platform goes, they're both barking up the same tree.
That said, the underlying synchronization primitives used by RunLoop and libevent are inherently private implementation details and, perhaps more importantly, different between platforms. From the source, it looks like RunLoop uses epoll on Linux, as does libevent, but on macOS/iOS/etc, RunLoop is going to use Mach ports as its fundamental primitive, but libevent looks like it's going to use kqueue. You might, with enough effort, be able to make a hybrid RunLoopSource that ties to a libevent source for a given platform, but this would likely be very fragile, and generally ill-advised, for a couple of reasons: Firstly, it would be based on private implementation details of RunLoop that are not part of the public API, and therefore subject to change at any time without notice. Second, assuming you didn't go through and do this for every platform supported by both Swift and libevent, you would have broken the cross-platform-ness of it, which was one of your stated reasons for going with libevent in the first place.
One additional option you might not have considered would be to use GCD by itself, without RunLoops. Look at the docs for dispatch_main. In a server application, there's (typically) nothing special about a "main thread," so dispatching to the "main queue", should be good enough (if needed at all). You can use dispatch "sources" to manage your connections, etc. I can't personally speak to how dispatch sources scale up to the C10K/C100K/etc. level, but they've seemed pretty lightweight and low-overhead in my experience. I also suspect that using GCD like this would likely be the most idiomatic way to write a server application in Swift. I've written up a small example of a GCD-based TCP echo server as part of another answer here.
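For a rough idea of what a dispatch-source-based connection handler can look like, here is an illustrative sketch (fd is assumed to be an already-connected, non-blocking socket; this is not the echo server mentioned above):
import Foundation
func watch(fd: Int32, queue: DispatchQueue) -> DispatchSourceRead {
    let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)
    source.setEventHandler {
        var buffer = [UInt8](repeating: 0, count: 4096)
        let n = read(fd, &buffer, buffer.count)
        guard n > 0 else { // EOF or error: stop watching
            source.cancel()
            return
        }
        // ... handle n bytes from buffer ...
    }
    source.setCancelHandler {
        close(fd)
    }
    source.resume()
    return source
}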
If you were bound and determined to use both RunLoop and libevent in the same application, it would, as you guessed, be best to give libevent its own separate thread, but I don't think it's as complex as you might think. You should be able to dispatch_async from libevent callbacks freely, and similarly marshal replies from GCD-managed threads back to libevent using libevent's multi-threading mechanisms fairly easily (i.e. either by running with locking on, or by marshaling your calls into libevent as events themselves). Similarly, third-party libraries using GCD should not be an issue even if you chose to use libevent's loop structure. GCD manages its own thread pools and would have no way of stepping on libevent's main loop, etc.
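For instance, a libevent read callback bridged into Swift could simply hop onto a GCD queue. A sketch, assuming libevent is exposed to Swift through a C interop module (the names here are illustrative):
import Foundation
// Shape matches libevent's event_callback_fn: void (*)(evutil_socket_t, short, void *).
let onReadable: @convention(c) (Int32, Int16, UnsafeMutableRawPointer?) -> Void = { fd, _, _ in
    // Hop off libevent's thread onto a GCD queue for application-level work.
    DispatchQueue.main.async {
        print("socket \(fd) is readable")
    }
}
// Registered with something like: event_new(base, fd, EV_READ | EV_PERSIST, onReadable, nil)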
You might also consider architecting your application such that it didn't matter what concurrency and connection handling library you used. Then you could swap out libevent, GCD, CFStreams, etc. (or mix and match) depending on what worked best for a given situation or deployment. Choosing a concurrency approach is important, but ideally you wouldn't couple yourself to it so tightly that you couldn't switch if circumstances called for it.
When you have such an architecture, I'm generally a fan of the approach of using the highest level abstraction that gets the job done, and only driving down to lower level abstractions when specific circumstances require it. In this case, that would probably mean using CFStreams and RunLoops to start, and switching out to "bare" GCD or libevent later, if you hit a wall and also determined (through empirical measurement) that it was the transport layer and not the application layer that was the limiting factor. Very few non-trivial applications actually get to the C10K problem in the transport layer; things tend to have to scale "out" at the application layer first, at least for apps more complicated than basic message passing.

NSManagedObjectContext performBlockAndWait: doesn't execute on background thread?

I have an NSManagedObjectContext declared like so:
- (NSManagedObjectContext *)backgroundMOC {
    if (backgroundMOC != nil) {
        return backgroundMOC;
    }
    backgroundMOC = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    return backgroundMOC;
}
Notice that it is declared with a private queue concurrency type, so its tasks should be run on a background thread. I have the following code:
- (void)testThreading
{
    /* ok */
    [self.backgroundMOC performBlock:^{
        assert(![NSThread isMainThread]);
    }];
    /* CRASH */
    [self.backgroundMOC performBlockAndWait:^{
        assert(![NSThread isMainThread]);
    }];
}
Why does calling performBlockAndWait execute the task on the main thread rather than background thread?
Tossing in another answer, to try and explain why performBlockAndWait will always run in the calling thread.
performBlock is completely asynchronous. It will always enqueue the block onto the queue of the receiving MOC, and then return immediately. Thus,
[moc performBlock:^{
    // Foo
}];
[moc performBlock:^{
    // Bar
}];
will place two blocks on the queue for moc. They will always execute asynchronously. Some unknown thread will pull blocks off of the queue and execute them. In addition, those blocks are wrapped within their own autorelease pool, and also they will represent a complete Core Data user event (processPendingChanges).
performBlockAndWait does NOT use the internal queue. It is a synchronous operation that executes in the context of the calling thread. Of course, it will wait until the current operations on the queue have been executed, and then that block will execute in the calling thread. This is documented (and reasserted in several WWDC presentations).
Furthermore, performBlockAndWait is re-entrant, so nested calls all happen right in that calling thread.
The Core Data engineers have been very clear that the actual thread in which a queue-based MOC operation runs is not important. It's the synchronization by using the performBlock* API that's key.
So, consider 'performBlock' as "This block is being placed on a queue, to be executed at some undetermined time, in some undetermined thread. The function will return to the caller as soon as it has been enqueued"
performBlockAndWait is "This block will be executed at some undetermined time, in this exact same thread. The function will return after this code has completely executed (which will occur after the current queue associated with this MOC has drained)."
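In Swift, where the same API surfaces as perform and performAndWait, a sketch of the difference:
import CoreData
let moc = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
moc.perform {
    // Enqueued and run later, on some anonymous background thread.
    print("perform on main thread? \(Thread.isMainThread)") // usually false
}
moc.performAndWait {
    // Runs synchronously, in the thread that made this call.
    print("performAndWait on main thread? \(Thread.isMainThread)") // true if called from the main thread
}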
EDIT
Are you sure of "performBlockAndWait does NOT use the internal queue"? I think it does. The only difference is that performBlockAndWait will wait until the block's completion. And what do you mean by calling thread? In my understanding, [moc performBlockAndWait] and [moc performBlock] both run on its private queue (background or main). The important concept here is moc owns the queue, not the other way around. Please correct me if I am wrong. – Philip007
It is unfortunate that I phrased the answer as I did, because, taken by itself, it is incorrect. However, in the context of the original question it is correct. Specifically, when calling performBlockAndWait on a private queue, the block will execute on the thread that called the function - it will not be put on the queue and executed on the "private thread."
Now, before I even get into the details, I want to stress that depending on internal workings of libraries is very dangerous. All you should really care about is that you can never expect a specific thread to execute a block, except anything tied to the main thread. Thus, expecting a performBlockAndWait to not execute on the main thread is not advised because it will execute on the thread that called it.
performBlockAndWait uses GCD, but it also has its own layer (e.g., to prevent deadlocks). If you look at the GCD code (which is open source), you can see how synchronous calls work - and in general they synchronize with the queue and invoke the block on the thread that called the function - unless the queue is the main queue or a global queue. Also, in the WWDC talks, the Core Data engineers stress the point that performBlockAndWait will run in the calling thread.
So, when I say it does not use the internal queue, that does not mean it does not use the data structures at all. It must synchronize the call with the blocks already on the queue, and those submitted in other threads and other asynchronous calls. However, when calling performBlockAndWait it does not put the block on the queue... instead it synchronizes access and runs the submitted block on the thread that called the function.
Now, SO is not a good forum for this, because it's a bit more complex than that, especially w.r.t the main queue, and GCD global queues - but the latter is not important for Core Data.
The main point is that when you call any performBlock* or GCD function, you should not expect it to run on any particular thread (except something tied to the main thread) because queues are not threads, and only the main queue will run blocks on a specific thread.
When calling the core data performBlockAndWait the block will execute in the calling thread (but will be appropriately synchronized with everything submitted to the queue).
I hope that makes sense, though it probably just caused more confusion.
EDIT
Furthermore, you can see the unspoken implications of this, in that the way in which performBlockAndWait provides re-entrant support breaks the FIFO ordering of blocks. As an example...
[context performBlockAndWait:^{
    NSLog(@"One");
    [context performBlock:^{
        NSLog(@"Two");
    }];
    [context performBlockAndWait:^{
        NSLog(@"Three");
    }];
}];
Note that strict adherence to the FIFO guarantee of the queue would mean that the nested performBlockAndWait ("Three") would run after the asynchronous block ("Two") since it was submitted after the async block was submitted. However, that is not what happens, as it would be impossible... for the same reason a deadlock ensues with nested dispatch_sync calls. Just something to be aware of if using the synchronous version.
In general, avoid sync versions whenever possible because dispatch_sync can cause a deadlock, and any re-entrant version, like performBlockAndWait will have to make some "bad" decision to support it... like having sync versions "jump" the queue.
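For reference, the classic dispatch_sync hazard looks like this in Swift (a sketch; never do this):
import Foundation
let serialQueue = DispatchQueue(label: "com.example.serial") // label is illustrative
serialQueue.sync {
    serialQueue.sync {
        // Never reached: the outer block still occupies the serial queue,
        // so the inner sync waits on it forever. Deadlock.
        print("unreachable")
    }
}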
Why not? Grand Central Dispatch's block concurrency paradigm (which I assume the MOC uses internally) is designed so that only the runtime and operating system need to worry about threads, not the developer (because the OS can do it better than you can, since it has more detailed information). Too many people assume that queues are the same as threads. They are not.
Queued blocks are not required to run on any given thread (the exception being blocks in the main queue must execute on the main thread). So, in fact, sometimes sync (i.e. performBlockAndWait) queued blocks will run on the main thread if the runtime feels it would be more efficient than creating a thread for it. Since you are waiting for the result anyway, it wouldn't change the way your program functioned if the main thread were to hang for the duration of the operation.
I'm not sure I remember this last part correctly, but in the WWDC 2011 videos about GCD, I believe it was mentioned that the runtime will make an effort to run sync operations on the main thread, if possible, because it is more efficient. In the end, though, I suppose the answer to "why" can only be given by the people who designed the system.
I don't think that the MOC is obligated to use a background thread; it's just obligated to ensure that your code will not run into concurrency issues with the MOC if you use performBlock: or performBlockAndWait:. Since performBlockAndWait: is supposed to block the current thread, it seems reasonable to run that block on that thread.
The performBlockAndWait: call only makes sure that you execute the code in such a way that you don't introduce concurrency (i.e. performBlockAndWait: calls on two threads will not run at the same time; they will block each other).
The long and the short of it is that you can't depend on which thread a MOC operation runs on, well basically ever. I've learned the hard way that if you use GCD or just straight up threads, you always have to create local MOCs for each operation and then merge them to the master MOC.
There is a great library (MagicalRecord) that makes that process very simple.

What is the difference between GCD Dispatch Sources and select()?

I've been writing some code that replaces some existing:
while (runEventLoop) {
    if (select(openSockets, readFDS, writeFDS, errFDS, timeout) > 0) {
        // check file descriptors for activity and dispatch events based on same
    }
}
socket reading code. I'd like to change this to use a GCD queue, so that I can pop events onto the queue using dispatch_async instead of maintaining a "must be called on next iteration" array. I'm also already using a GCD queue to /contain/ this particular action, hence wanting to devolve it to a more natural GCD dispatch form (not a while() loop monopolizing a serial queue).
However, when I tried to refactor this into a form that relied on dispatch sources fired from event handlers tied to DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE on the socket descriptors, the library code that depended on this scheduling stopped working. My first assumption is that I'm misunderstanding the use of DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE - I had assumed that they would yield roughly the same behavior as calling select() with those socket descriptors.
Do I misunderstand GCD dispatch sources? Or, regarding the refactor, am I using it in a situation where it is not best suited?
The short answer to your question is: none. There are no differences, both GCD dispatch sources and select() do the same thing: they notify the user that a specific kernel event happened or that a particular condition holds true.
Note that on a Mac or iOS device you should not use select(), but rather the more advanced kqueue() and kevent() (or kevent64()).
You may certainly convert the code to use GCD dispatch sources, but you need to be careful not to break other code relying on this. So this needs a complete inspection of the whole code handling signals, file descriptors, sockets, and all of the other low-level kernel events.
Maybe a simpler solution would be to keep the original code and simply add GCD code in the part that reacts to events. There, you dispatch events onto different queues depending on the particular type of event.
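As a sketch of that suggestion, the existing select() loop stays in place and only the event handling is pushed onto GCD queues (queue labels and the helper function here are illustrative, in Swift):
import Foundation
let readQueue = DispatchQueue(label: "sockets.read")
let writeQueue = DispatchQueue(label: "sockets.write")
// Called from the existing select() loop once descriptors are known to be ready.
func dispatchReadyDescriptors(readable: [Int32], writable: [Int32]) {
    for fd in readable {
        readQueue.async {
            // consume bytes from fd
        }
    }
    for fd in writable {
        writeQueue.async {
            // flush pending output to fd
        }
    }
}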