While I was browsing through the iOS 7 runtime headers, something caught my eye. In the MCNearbyServiceAdvertiser class, part of the Multipeer Connectivity framework, a property called syncQueue and multiple methods prefixed with sync are defined. Some methods exist in both a prefixed and a non-prefixed version, such as startAdvertisingPeer and syncStartAdvertisingPeer.
My question is: what is the purpose of this property and these prefixed methods, and how are they used together?
(edit: removed the remark that the queue is serial as pointed out by CouchDeveloper, since we cannot know this)
As you know, the implementation is private.
Having a dispatch queue whose name is syncQueue does not necessarily mean that this queue is a serial queue. It might just as well be a concurrent queue.
We can only guess what startAdvertisingPeer and the "prefixed" version syncStartAdvertisingPeer might mean.
For example, in order to fulfill internal prerequisites, startAdvertisingPeer might assume that it is always invoked from an execution context other than the syncQueue. That way, it can dispatch synchronously to the syncQueue, invoking syncStartAdvertisingPeer, without ending up in a deadlock. Conversely, syncStartAdvertisingPeer would always assume that it executes on the syncQueue, thereby guaranteeing that access to the internal state is properly synchronized.
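Purely to illustrate that guess - this is not Apple's actual implementation, and only the names are borrowed from the header - the pattern could look roughly like this in Swift:

import Foundation

class Advertiser {
    // Hypothetical private queue guarding internal state.
    private let syncQueue = DispatchQueue(label: "com.example.advertiser.syncQueue")
    private var advertising = false

    // Public entry point; assumed to be called from outside syncQueue.
    func startAdvertisingPeer() {
        syncQueue.sync {
            self.syncStartAdvertisingPeer()
        }
    }

    // Assumed to always run on syncQueue, so it may touch internal state directly.
    private func syncStartAdvertisingPeer() {
        guard !advertising else { return }
        advertising = true
        // ... actually start advertising to nearby peers here ...
    }
}

In such a scheme the non-prefixed method is the public, thread-safe entry point, and the sync-prefixed method is the worker that is only ever run on the private queue.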
But, as stated, we don't know the actual details - it's just a rough guess. Usually you should read the documentation - not some private header details - to form a picture in your mind of how this class likely works.
I'm using Hunchentoot session values to make my server code re-entrant. The problem is that session values are, by definition, retained during the session, i.e., from one call from the same browser to the next, whereas what I'm really looking for is what amounts to thread-specific re-entrancy, so that all the values disappear between calls - I want to treat each click as a separate "from scratch" event, even if the clicks come from the same session. It's easy enough to have the driver either set my session values to nil or delete them, but I'm wondering if there's a "correct" way to do this? I don't see any thread-based analog to hunchentoot:session-value in the documentation.
Thanks in advance for any guidance you can offer.
If you want a value to be "thread specific" and at the same time "from scratch" on every request, that requires that every request be dispatched in a brand new thread. This is not the case according to the Hunchentoot documentation, which says that two models are supported: a single-threaded taskmaster and a thread-per-connection taskmaster.
If your configuration is multi-threaded, a thread-specific variable bound during request handling can therefore be expected to be per-connection rather than per-request. In a single-threaded Hunchentoot setup, it will effectively be global, tied to the single request-servicing thread.
A thread-based analog to hunchentoot:session-value probably doesn't exist because it would only introduce behaviors into the web app that change surprisingly if the threading model is reconfigured, or if the request pattern from the browser changes. A browser can make multiple requests over the same connection, or close the connection between requests.
To extend the request objects with custom per-request state, I would look into, perhaps, subclassing the acceptor (how to do this is described in the docs). My custom acceptor would have its own method on the process-connection generic function which would create extended/subclassed request objects carrying the extra state I wanted to attach to a request.
Another way would be to have some global weak hash table that maps request objects, as keys, to the additional information.
I've been given the task of cleaning up some existing Swift code on our project, which has just been converted to Swift 3. However, I keep seeing this, which looks suspect to me:
OperationQueue().addOperation(someOperation)
Here are the concerns/issues I have...
The queue instance is created and used right there. No reference to it is stored for use elsewhere.
Because of the above, there will only ever be one operation in the queue, so why use the queue at all?
Since no one is holding a reference to the queue, under ARC, shouldn't it be instantly deallocated, and if so, what happens to the now-executing operation itself? Does it get interrupted, aborted or does it still complete?
Anyway, I'm wondering if I'm missing something or am unaware of a 'feature' of NSOperationQueue and NSOperations that makes this code make sense. Can anyone shed light on this, or do you agree this is bad practice?
I've seen this pattern too. I think it works like NSURLConnection: the NSOperationQueue "knows" it has a pending operation and doesn't allow itself to go out of existence immediately. Also keep in mind that an NSOperationQueue isn't really a "thing"; it's a kind of front for an underlying dispatch queue.
It makes a certain sense to use this pattern in situations where there is no reasonable place to store a reference to the queue. And you can use it to powerful effect, as in this example where the operation has dependencies and thus is not executed until all of its dependencies have finished.
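As a hedged sketch of that idea (the operation names here are invented, not taken from any real project), one operation can depend on another even though neither queue is kept around:

import Foundation

let prepare = BlockOperation {
    // fetch or prepare some data
}
let upload = BlockOperation {
    // use the data prepared above
}
upload.addDependency(prepare)   // upload won't start until prepare has finished

// Neither queue is stored anywhere; each stays alive until its operation completes.
OperationQueue().addOperation(prepare)
OperationQueue().addOperation(upload)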
Personally, however, if I'm not taking advantage of NSOperation features of that sort, I'd be more inclined to use GCD directly.
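That GCD-direct alternative would be roughly the following (just a sketch; the QoS class is only an example):

import Foundation

DispatchQueue.global(qos: .utility).async {
    // the same work someOperation performs, submitted straight to a global queue
}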
(As to your middle point, it would not make sense to execute on the main thread, because what if the operation is lengthy? You'd be blocking the main thread. However, do note that if all you're trying to say is "do this after everything else", Swift gives you defer.)
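For completeness, a minimal illustration of that defer remark (the function name is made up):

func handleRequest() {
    defer {
        // runs when handleRequest() returns, whichever path it takes
        print("cleanup - 'do this after everything else'")
    }
    // ... the actual work ...
}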
While re-reading scala-lang.org's page detailing Future here, I stumbled upon the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future may or may not execute the callbacks. But just because one callback is misbehaving, the rest should not be penalized, I think.
One possibility I can think of is the way the ExecutionContext is configured. If it is configured with one thread, then this may happen, but that is a specific behaviour and not the generally expected one.
Am I missing something obvious here?
Callbacks are called within an ExecutionContext that ultimately has a limited number of threads - if not limited by the specific context implementation, then by the underlying operating system and/or hardware itself.
Let's say your system's limit is OS_LIMIT threads. You create OS_LIMIT + 1 callbacks. From those, OS_LIMIT callbacks immediately get a thread each - and none ever terminate.
How can you guarantee that the one remaining callback ever gets a thread?
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly - it enables the developer to "bake in" the necessary information about handler/task characteristics directly into his/her program, rather than relying on some obscure piece of language functionality (which may change from version to version).
I want to know what the difference is between performSelectorInBackground and detachNewThread.
They are identical, as you can see in the documentation:
performSelectorInBackground:withObject: The effect of calling this method is the same as if you called the detachNewThreadSelector:toTarget:withObject: method of NSThread with the current object, selector, and parameter object as parameters.
performSelectorInBackground:withObject: is an easier way than using NSThread directly.
However, with NSThread you can control the thread's priority, stack size, etc. If you'd like to customize that behavior, I recommend NSThread instead of performSelectorInBackground:withObject:.
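To make the equivalence concrete, here is a rough Swift sketch (the Worker class and doSomething selector are invented for illustration); the first two calls behave the same, while the third shows the kind of per-thread customization only NSThread offers:

import Foundation

class Worker: NSObject {
    @objc func doSomething(_ argument: Any?) {
        // background work goes here
    }
}

let worker = Worker()

// 1. The NSObject convenience method...
worker.performSelector(inBackground: #selector(Worker.doSomething(_:)), with: nil)

// 2. ...is documented to behave like detaching a new NSThread:
Thread.detachNewThreadSelector(#selector(Worker.doSomething(_:)), toTarget: worker, with: nil)

// 3. Creating the thread yourself lets you configure it before it starts:
let thread = Thread(target: worker, selector: #selector(Worker.doSomething(_:)), object: nil)
thread.stackSize = 1_048_576          // e.g. a 1 MB stack (must be a multiple of 4 KB)
thread.qualityOfService = .utility    // priority hint
thread.start()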
I would look at it from a semantic point of view. There is no technical reason to use one or the other.
Use NSThread if you actually "think" of having a thread that "does something"; in particular, it will probably be the most appropriate way of creating a thread if your thread runs some form of event or messaging loop. In such a case, the "thread object" is really just that; in many cases it's not an "application realm" object with actual application data, as those will be handed over to the thread in some way.
Use the NSObject-based methods if your thread is merely meant to run a single operation in the background. You don't really care about this being a "thread", and the object that you run this on is likely to be the "application realm" object with the data; there's no event or message loop to feed it commands from other threads.
Thus, I would base the decision on abstract factors, as in "what looks better in the given context". Having an NSThread "feels" like a more detached entity that is willing to offer services to multiple clients, whereas the NSObject method feels like it's closely attached to the data object that it runs with, and doesn't really deal with anything else unless it's vital to the cause.
I've been writing some code that replaces some existing:
while (runEventLoop) {
    if (select(openSockets, &readFDS, &writeFDS, &errFDS, &timeout) > 0) {
        // check file descriptors for activity and dispatch events based on same
    }
}
socket reading code. I'd like to change this to use a GCD queue, so that I can push events onto the queue using dispatch_async instead of maintaining a "must be called on next iteration" array. I'm also already using a GCD queue to /contain/ this particular action, hence wanting to devolve it to a more natural GCD dispatch form (not a while() loop monopolizing a serial queue).
However, when I tried to refactor this into a form that relied on dispatch sources fired from event handlers tied to DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE on the socket descriptors, the library code that depended on this scheduling stopped working. My first assumption is that I'm misunderstanding the use of DISPATCH_SOURCE_TYPE_READ and DISPATCH_SOURCE_TYPE_WRITE - I had assumed that they would yield roughly the same behavior as calling select() with those socket descriptors.
Do I misunderstand GCD dispatch sources? Or, regarding the refactor, am I using it in a situation where it is not best suited?
The short answer to your question is: no, you don't misunderstand them. There are no differences in this respect; both GCD dispatch sources and select() do the same thing: they notify the user that a specific kernel event happened or that a particular condition holds true.
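As a rough Swift sketch of the read side (not your library's code; the function and queue label are invented, and socketFD is assumed to be an already-open, non-blocking socket descriptor), a read dispatch source plays the same role as select() reporting that descriptor in readFDS:

import Foundation

func monitorSocket(_ socketFD: Int32) -> DispatchSourceRead {
    let queue = DispatchQueue(label: "com.example.socket-events")
    let source = DispatchSource.makeReadSource(fileDescriptor: socketFD, queue: queue)

    source.setEventHandler {
        // Fires when the descriptor becomes readable.
        var buffer = [UInt8](repeating: 0, count: 4096)
        let bytesRead = read(socketFD, &buffer, buffer.count)
        if bytesRead > 0 {
            // hand the data / event off, possibly to another queue, via async
        } else {
            source.cancel()   // 0 means EOF, -1 means error: stop monitoring
        }
    }
    source.setCancelHandler {
        close(socketFD)
    }
    source.resume()   // sources start out suspended; resume() begins delivery
    return source     // the caller must keep a reference, or the source goes away
}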
Note that on a Mac or iOS device you should not use select(), but rather the more advanced kqueue() and kevent() (or kevent64()).
You may certainly convert the code to use GCD dispatch sources, but you need to be careful not to break other code relying on this. So, this needs a complete inspection of the whole code handling signals, file descriptors, socket and all of the other low level kernel events.
Maybe a simpler solution would be to keep the original code and simply add GCD code in the part that reacts to events. There, you dispatch events onto different queues depending on the particular type of event.