What is the difference between GCD Dispatch Sources and select()?

I've been writing some code that replaces some existing:
while(runEventLoop){
    if(select(openSockets, readFDS, writeFDS, errFDS, timeout) > 0){
        // check file descriptors for activity and dispatch events based on same
    }
}
socket reading code. I'd like to change this to use a GCD queue, so that I can pop events onto the queue using dispatch_async instead of maintaining a "must be called on next iteration" array. I'm also already using a GCD queue to /contain/ this particular action, hence wanting to devolve it to a more natural GCD dispatch form (not a while() loop monopolizing a serial queue).
However, when I tried to refactor this into a form that relied on dispatch sources fired from event handlers tied to DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE on the socket descriptors, the library code that depended on this scheduling stopped working. My first assumption is that I'm misunderstanding the use of DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE - I had assumed that they would yield roughly the same behavior as calling select() with those socket descriptors.
Do I misunderstand GCD dispatch sources? Or, regarding the refactor, am I using it in a situation where it is not best suited?

The short answer to your question is: none. There are no differences, both GCD dispatch sources and select() do the same thing: they notify the user that a specific kernel event happened or that a particular condition holds true.
Note that on a Mac or an iOS device you should not use select(), but rather the more advanced kqueue() and kevent() (or kevent64()).
You may certainly convert the code to use GCD dispatch sources, but you need to be careful not to break other code relying on this. So this requires a complete inspection of all the code handling signals, file descriptors, sockets, and the other low-level kernel events.
Maybe a simpler solution would be to keep the original code and simply add GCD code to the part that reacts to events: there, you dispatch events onto different queues depending on the particular type of event.
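If it helps to see the shape such a conversion could take, here is a minimal Swift sketch. The function name, the queue, and the assumption that fd is one of the already-open socket descriptors are all illustrative placeholders, not anything taken from the code in question:

import Darwin
import Dispatch

// Wrap one of the descriptors you previously passed to select() in a read source.
func makeSocketReadSource(fd: Int32, queue: DispatchQueue) -> DispatchSourceRead {
    let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)
    source.setEventHandler {
        // Fires under roughly the conditions under which select() would report
        // fd as readable: read from fd here and dispatch events based on it.
    }
    source.setCancelHandler {
        close(fd)   // close the descriptor only after the source is cancelled
    }
    source.resume()   // sources are created suspended; nothing fires until resumed
    return source
}

// Usage (keep the returned source alive for as long as you want callbacks):
// let source = makeSocketReadSource(fd: someSocket, queue: DispatchQueue(label: "socket.events"))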

Related

How does the computer implement callbacks?

I already know the general usage of callbacks: first, I register a "callback function", and when some event occurs, that function is triggered (executed).
What confuses me is how I know that the event has occurred. The only solution I can think of is polling. Is there a better way to check whether the event has occurred in less than O(n) time?
All right, maybe the above question is too abstract. A more concrete version: does epoll_wait avoid spending O(n) time checking for ready file descriptors?
If so, how does it do it?
Is there a callback mechanism that is essentially different from polling?
Usually, but not exclusively, callbacks get called after some peripheral I/O device signals an operation's completion by raising a hardware interrupt. A long chain of machinery (driver interrupt handlers, semaphores, protection-ring changes, thread and process context switches, message assembly/enqueueing/dequeueing/handling/dispatching, and so on) then causes your callback to be called, maybe by some system thread, or from a message-handling or signal-handling thread of your own that has to conform to a specific structure or constraint.
So no, polling is generally unnecessary, and unwanted.
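To make the epoll_wait part of the question concrete, here is a hedged Swift sketch using the Darwin equivalent, kqueue/kevent; the pipe exists only to make the example self-contained. The kernel maintains the interest list, so the caller gets back only descriptors that are actually ready and never scans all n of them itself:

import Darwin

// A pipe gives us something self-contained to watch; fds[0] is the read end.
var fds = [Int32](repeating: 0, count: 2)
pipe(&fds)

let kq = kqueue()
var change = kevent(ident: UInt(fds[0]), filter: Int16(EVFILT_READ),
                    flags: UInt16(EV_ADD), fflags: 0, data: 0, udata: nil)
kevent(kq, &change, 1, nil, 0, nil)          // register interest once (cf. epoll_ctl)

var byte: UInt8 = 42
write(fds[1], &byte, 1)                      // make the read end ready

var ready = kevent()
let n = kevent(kq, nil, 0, &ready, 1, nil)   // blocks until an event is ready; no polling
if n > 0 {
    // ready.ident is a descriptor that actually has data; the caller did no O(n) scan
    print("descriptor \(ready.ident) is readable")
}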

Trap SIGINT in Cocoa macos application

I am attempting to trap a SIGINT for a UI application made for macOS. In the app delegate class, I see the following method:
func applicationWillTerminate(_ aNotification: Notification) {
}
However, Ctrl+C (SIGINT) never gets caught here. Reading around the internet has shown that this function is not guaranteed to execute, especially if the app goes into the background.
What can I do in the app delegate to catch a SIGINT? Or is there an alternate place where I should catch the interrupt so that I can close resources appropriately?
Charles's answer is correct, but his caveat ("Be sure only to call reentrant functions from within the handler") is an extreme limitation. It's possible to redirect handling of a signal to a safer environment using kqueue and CFFileDescriptor.
Technical Note TN2050: Observing Process Lifetimes Without Polling is on a different subject but illustrates the technique. There, Apple describes Charles's caveat this way:
Listening for a signal can be tricky because of the wacky execution environment associated with signal handlers. Specifically, if you install a signal handler (using signal or sigaction), you must be very careful about what you do in that handler. Very few functions are safe to call from a signal handler. For example, it is not safe to allocate memory using malloc!
The functions that are safe from a signal handler (the async-signal safe functions) are listed on the sigaction man page.
In most cases you must take extra steps to redirect incoming signals to a more sensible environment.
I've taken the code illustration from there and modified it for handling SIGINT. Sorry, it's Objective-C. Here's the one-time setup code:
// Ignore SIGINT so it doesn't terminate the process.
signal(SIGINT, SIG_IGN);
// Create the kqueue and set it up to watch for SIGINT. Use the
// EV_RECEIPT flag to ensure that we get what we expect.
int kq = kqueue();
struct kevent changes;
EV_SET(&changes, SIGINT, EVFILT_SIGNAL, EV_ADD | EV_RECEIPT, 0, 0, NULL);
(void) kevent(kq, &changes, 1, &changes, 1, NULL);
// Wrap the kqueue in a CFFileDescriptor. Then create a run-loop source
// from the CFFileDescriptor and add that to the runloop.
CFFileDescriptorContext context = { 0, self, NULL, NULL, NULL };
CFFileDescriptorRef kqRef = CFFileDescriptorCreate(NULL, kq, true, sigint_handler, &context);
CFRunLoopSourceRef rls = CFFileDescriptorCreateRunLoopSource(NULL, kqRef, 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), rls, kCFRunLoopDefaultMode);
CFRelease(rls);
CFFileDescriptorEnableCallBacks(kqRef, kCFFileDescriptorReadCallBack);
CFRelease(kqRef);
And here's how you would implement the sigint_handler callback referenced above:
static void sigint_handler(CFFileDescriptorRef f, CFOptionFlags callBackTypes, void *info)
{
    struct kevent event;
    (void) kevent(CFFileDescriptorGetNativeDescriptor(f), NULL, 0, &event, 1, NULL);
    CFFileDescriptorEnableCallBacks(f, kCFFileDescriptorReadCallBack);
    // You've been notified!
}
Note that this technique requires that you run the setup code on a thread which will live as long as you're interested in handling SIGINT (perhaps the app's lifetime) and service/run its run loop. Threads created by the system for its own purposes, such as those which service Grand Central Dispatch queues, are not suitable for this purpose.
The main thread of the app will work and you can use it. However, if the main thread locks up or becomes unresponsive, then it's not servicing its run loop and the SIGINT handler won't be called. Since SIGINT is often used to interrupt exactly such a stuck process, the main thread may not be suitable.
So, you may want to spawn a thread of your own just to monitor this signal. It should do nothing else, because anything else might cause it to get stuck, too. Even there, though, there are issues. Your handler function will be called on this background thread of yours and the main thread may still be locked up. There's a lot of stuff that is main-thread-only in the system libraries and you won't be able to do any of that. But you'll have much greater flexibility than in a POSIX-style signal handler.
I should add that GCD's dispatch sources can also monitor UNIX signals and are easier to work with, especially from Swift. However, they won't have a dedicated thread pre-created to run the handler. The handler will be submitted to a queue. Now, you may designate a high-priority/high-QOS queue, but I'm not entirely certain that the handler will get run if the process has numerous runaway threads already running. That is, a task that's actually running on a high-priority queue will take precedence over lower-priority threads or queues, but starting a new task might not. I'm not sure.
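For completeness, here is a hedged Swift sketch of that dispatch-source variant. The queue choice is up to you, and the source must be stored somewhere (a property, a global) for as long as you want it to fire, or it is deallocated and the handler never runs:

import Darwin
import Dispatch

signal(SIGINT, SIG_IGN)   // keep the default handler from terminating the process

let sigintSource = DispatchSource.makeSignalSource(signal: SIGINT, queue: DispatchQueue.main)
sigintSource.setEventHandler {
    // Runs as an ordinary block on the chosen queue, not in the restrictive
    // async-signal-handler environment, so normal code is safe here.
    print("Caught SIGINT; closing resources")
    // ...clean up, then exit(0) if you want the usual termination behaviour...
}
sigintSource.resume()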
You can do this via the sigaction() function:
import Foundation
let handler: @convention(c) (Int32) -> () = { sig in
    // handle the signal somehow
}
var action = sigaction(__sigaction_u: unsafeBitCast(handler, to: __sigaction_u.self),
                       sa_mask: 0,
                       sa_flags: 0)
sigaction(SIGINT, &action, nil)
Be sure only to call reentrant functions from within the handler.
Although I won't dare to compete with the great answers above, I think there is something important to say that may simplify the solution a lot.
I think your question mixes two very different levels of macOS program life-cycle management.
True, you can run simple POSIX-style executables on a Mac, but a Mac Cocoa app IS NOT a POSIX app with an "attached UI". A Mac application relies on a very verbose, complete, and feature-rich life-cycle mechanism that allows much more than any POSIX process can: state preservation and restoration (when killed and re-launched), auto-launching in response to opening a file, energy management, handling computer sleep, background and foreground transitions, hand-off to the same app on a nearby device, modes, etc.
Furthermore, I can't see how a Ctrl-C from the keyboard could ever be expected to reach a Cocoa ("UI") application: keyboard shortcuts mean something entirely different in this context, and you can't run a "UI" app from a terminal/shell synchronously.
Now, the right way to 'interrupt' a Cocoa app that is "stuck" or takes too long to perform something is the conventional Command-Period key combination, still widely implemented by many apps (Photoshop, video editors, the Finder, etc.). I'm sorry I could not locate this convention in Apple's Human Interface Guidelines; maybe it is no longer part of the standard. However, Ctrl-C certainly isn't!
You can implement Cmd-Period (⌘.) in your app, i.e. register for this high-level "interrupt" action and handle it gracefully, as sketched below.
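A hedged sketch of one way to wire that up with AppKit; the TaskController class and its action are invented for illustration and are not part of any framework:

import AppKit

final class TaskController: NSObject {
    @objc func cancelCurrentTask(_ sender: Any?) {
        // stop or unwind whatever long-running work is in progress
    }
}

let controller = TaskController()
let stopItem = NSMenuItem(title: "Stop",
                          action: #selector(TaskController.cancelCurrentTask(_:)),
                          keyEquivalent: ".")
stopItem.keyEquivalentModifierMask = [.command]   // this is ⌘.
stopItem.target = controller
// Add stopItem to one of the app's menus; pressing Cmd-Period then calls the action.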
There is a good reason why the NSApplication object wrapping the Cocoa app ignores SIGINT: it is completely out of context.

swift stop loop and continue it with a button

Is it possible to stop a for-in loop and continue it at the stopped position with a button?
I mean something like this:
for x in 0..<5 {
    if x == 3 {
        // show a button
        // this button has to be pressed
        // it does some tasks
        // when all tasks are finished,
        // continue this loop at the stopped point
    }
}
Is this possible? If yes, how?
Short answer: use a semaphore
Longer answer:
Your situation is an example of the more general problem of how to pause some computation, saving its current state (local variables, call stack, etc.), and later resume it from the same point with all that state restored.
Some languages/systems provide coroutines to support this, others the more esoteric call-with-current-continuation; neither is (currently) available to you in Swift...
What you do have is Grand Central Dispatch (GCD), provided in Swift through Dispatch, which provides support for executing concurrent asynchronous tasks and synchronisation between them. Other concurrency mechanisms such as pthread are also available, but GCD tends to be recommended.
Using GCD, an outline of a solution is:
Execute your loop as an asynchronous concurrent task. It must not execute on the main thread or you will deadlock...
When you wish to pause:
Create a semaphore
Start another async task to display the button, run the other jobs etc. This new task must signal the semaphore when it is finished. The new task may call the main thread to perform UI operations.
Your loop task waits on the semaphore, this will block the loop task until the button task signals it.
This may sound complicated, but with Swift's closure syntax and Dispatch it is quite simple; a sketch follows below. However, you do need to read up on GCD first!
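A minimal sketch of the outline above. The loop body and the button wiring are placeholders; the point is which queue blocks and which one signals:

import Dispatch

let semaphore = DispatchSemaphore(value: 0)

DispatchQueue.global(qos: .userInitiated).async {
    for x in 0..<5 {
        if x == 3 {
            DispatchQueue.main.async {
                // show the button here; UI work must happen on the main thread
            }
            semaphore.wait()    // the loop task blocks here, off the main thread
        }
        // ... the work performed for each x ...
    }
}

// In the button's action handler (on the main thread), once its tasks are done:
// semaphore.signal()           // the loop resumes right where it stopped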
Alternatively you can ask whether you can restructure your solution into multiple parts so saving/restoring the current state is not required. You might find designs such as continuation passing style useful, which again is quite easy using Swift's blocks.
HTH

Clarification about Scala Future that never complete and its effect on other callbacks

While re-reading scala.lan.org's page detailing Future here, I have stumbled up on the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future, may or may not execute the callbacks. But, because one callback is not playing footsie, the rest should not be penalized, I think.
One possibility I can think of is the way the ExecutionContext is configured. If it is configured with one thread, then this may happen, but that is a specific configuration, not the generally expected behaviour.
Am I missing something obvious here?
Callbacks are called within an ExecutionContext that has an ultimately limited number of threads: if the limit is not imposed by the specific context implementation, then it is imposed by the underlying operating system and/or hardware itself.
Let's say your system's limit is OS_LIMIT threads. You create OS_LIMIT + 1 callbacks. From those, OS_LIMIT callbacks immediately get a thread each - and none ever terminate.
How can you guarantee that the remaining 1 callback ever gets a thread?
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly - it enables the developer to "bake in" the necessary information about handler/task characteristics directly into his/her program, rather than relying on some obscure piece of language functionality (which may change from version to version).

Why does MCNearbyServiceAdvertiser use a dispatch queue internally?

While I was browsing through the iOS 7 runtime headers, something caught my eye. In the MCNearbyServiceAdvertiser class, part of the Multipeer Connectivity framework, a property called syncQueue and multiple methods prefixed with sync are defined. Some methods exist in both a prefixed and a non-prefixed version, such as startAdvertisingPeer and syncStartAdvertisingPeer.
My question is, what would be the purpose of both this property and these prefixed methods, and how are they combined?
(edit: removed the remark that the queue is serial as pointed out by CouchDeveloper, since we cannot know this)
As you know, the implementation is private.
Having a dispatch queue whose name is syncQueue does not necessarily mean that this queue is a serial queue. It might just as well be a concurrent queue.
We can only guess at what startAdvertisingPeer and the "prefixed" version syncStartAdvertisingPeer might do.
For example, in order to fulfill internal prerequisites, startAdvertisingPeer might assume that it is always invoked from an execution context other than the syncQueue. That way, it can synchronously dispatch to the syncQueue, invoking syncStartAdvertisingPeer, without ending up in a deadlock. On the other hand, syncStartAdvertisingPeer will always assume that it is executing on the syncQueue, which is what makes its access to shared state safe.
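To make that guess concrete, here is a hedged Swift sketch of the pattern being described; the class, its state, and the queue label are invented, and none of this is the real (private) implementation:

import Dispatch

final class Advertiser {
    private let syncQueue = DispatchQueue(label: "advertiser.syncQueue")
    private var state = "stopped"               // only ever touched on syncQueue

    // Public entry point: assumed to be called from outside syncQueue,
    // so hopping onto it synchronously cannot deadlock.
    func startAdvertisingPeer() {
        syncQueue.sync { self.syncStartAdvertisingPeer() }
    }

    // "sync"-prefixed variant: assumes it is already running on syncQueue,
    // so it can touch shared state without further synchronization.
    private func syncStartAdvertisingPeer() {
        state = "advertising"
        // ... the actual advertising work would go here ...
    }
}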
But, as stated, we don't know the actual details; it's just a rough guess. Usually, you should read the documentation, not private header details, to form a picture in your mind of how a class is likely to work.