I want to know what the difference is between performSelectorInBackground:withObject: and NSThread's detachNewThreadSelector:toTarget:withObject:.
They are identical, as the documentation for performSelectorInBackground:withObject: states:
The effect of calling this method is the same as if you called the detachNewThreadSelector:toTarget:withObject: method of NSThread with the current object, selector, and parameter object as parameters.
performSelectorInBackground:withObject: is the easier way compared to using NSThread directly.
However, with NSThread you can control the thread's priority, stack size, and so on. If you'd like to customize that behavior, I recommend NSThread over performSelectorInBackground:withObject:.
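A minimal sketch of both approaches in Swift, assuming a hypothetical Worker class and arbitrary configuration values:

    import Foundation

    class Worker: NSObject {
        @objc func doWork(_ input: Any?) {
            // runs on the newly created background thread
        }
    }

    let worker = Worker()

    // One-liner: same effect as detaching a new thread with this target/selector.
    worker.performSelector(inBackground: #selector(Worker.doWork(_:)), with: "payload")

    // Explicit NSThread: same effect, but the thread itself can be configured first.
    let thread = Thread(target: worker, selector: #selector(Worker.doWork(_:)), object: "payload")
    thread.stackSize = 1_024 * 1_024        // e.g. a larger stack (must be set before start)
    thread.qualityOfService = .utility      // priority / quality of service
    thread.start()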
I would look at it from a semantic point of view. There is no technical reason to use one or the other.
Use NSThread if you actually "think" of having a thread that "does something"; in particular, it will probably be the most appropriate way of creating a thread if your thread runs some form of event or messaging loop. In such a case, the "thread object" is really just that; in many cases it's not an "application realm" object with actual application data, as these will be handed over to the thread in some way.
Use the NSObject-based methods if your thread is merely meant to run some single operation in the background. You don't really care about this being a "thread", and the object that you run this on is likely to be the "application realm" object with the data; there's no event or message loop to feed it commands from other threads.
Thus, I would base the decision on abstract factors, as in "what looks better in the given context". Having an NSThread "feels" like a more detached entity that is willing to offer services to multiple clients, whereas the NSObject method feels like it's closely attached to the data object that it runs with, and doesn't really deal with anything else unless it's vital to the cause.
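To make the first, service-like style concrete, here is a minimal Swift sketch of a long-lived thread that runs its own run loop; the class name and loop details are illustrative, not a prescribed implementation:

    import Foundation

    // A long-lived worker thread with its own run loop, in the spirit of the
    // "detached entity offering services" described above.
    final class WorkerThread: Thread {
        override func main() {
            // Attach a port so the run loop has at least one source and doesn't exit immediately.
            RunLoop.current.add(NSMachPort(), forMode: .default)
            while !isCancelled {
                _ = RunLoop.current.run(mode: .default, before: .distantFuture)
            }
        }
    }

    // Other threads can now feed it work, e.g. via perform(_:on:with:waitUntilDone:).
    let worker = WorkerThread()
    worker.start()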
I don't understand the GTask functionality. Why do I need it?
In my mind it is like a callback: you attach a callback to a source in some context, and that callback is then called when the event happens.
In general, I'm a bit confused about what a Context and a Task are in GLib and why we need them.
In my understanding there is a main loop (only one?) that can run several contexts (what is a context?), and each context is related to several sources, which in turn have callbacks that act like handlers.
So can someone please make some sense of it all for me?
I don't understand the GTask functionality. Why do I need it? In my mind it is like a callback: you attach a callback to a source in some context, and that callback is then called when the event happens.
The main functionality GTask exposes is easily and safely running a task in a thread and returning the result back to the main thread.
In general, I'm a bit confused about what a Context and a Task are in GLib and why we need them. In my understanding there is a main loop (only one?) that can run several contexts (what is a context?), and each context is related to several sources, which in turn have callbacks that act like handlers.
For simplicity I think it's safe to consider contexts and loops the same thing, and there can be multiple of them. So, in order to be thread-safe, the task must know which context the result should be returned to.
I've been given the task of cleaning up some existing Swift code on our project, which has just been converted to Swift 3. However, I keep seeing this, which looks suspect to me:
OperationQueue().addOperation(someOperation)
Here are the concerns/issues I have...
The queue instance is created and used right there. No reference to it is stored for use elsewhere.
Because of the above, there will only ever be one operation in the queue, so why use the queue at all?
Since no one is holding a reference to the queue, under ARC, shouldn't it be instantly deallocated, and if so, what happens to the now-executing operation itself? Does it get interrupted, aborted or does it still complete?
Anyway, I'm wondering if I'm missing something or am unaware of a 'feature' of NSOperationQueue and NSOperations that makes this code make sense. Can anyone shed light on this, or do you agree this is bad practice?
I've seen this pattern too. I think it works like NSURLConnection: the NSOperationQueue "knows" it has a pending operation and doesn't allow itself to go out of existence immediately. Also keep in mind that an NSOperationQueue isn't really a "thing"; it's a kind of front for an underlying dispatch queue.
It makes a certain sense to use this pattern in situations where there is no reasonable place to store a reference to the queue. And you can use it to powerful effect, for example when the operation has dependencies and thus is not executed until all of the dependencies have finished.
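A hedged sketch of that dependency pattern with a throwaway queue (the operation names are illustrative):

    import Foundation

    // The queue keeps itself (and its operations) alive until everything has
    // finished, so no stored reference is required.
    let download = BlockOperation {
        // fetch data
    }
    let parse = BlockOperation {
        // parse the fetched data
    }
    parse.addDependency(download)   // parse won't start until download finishes

    OperationQueue().addOperations([download, parse], waitUntilFinished: false)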
Personally, however, if I'm not taking advantage of NSOperation features of that sort, I'd be more inclined to use GCD directly.
(As to your middle point, it would not make sense to execute on the main thread, because what if the operation is lengthy? You'd be blocking the main thread. However, do note that if all you're trying to say is "do this after everything else", Swift gives you defer.)
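For completeness, the defer mentioned above looks like this in its simplest form:

    func doWork() {
        defer { print("runs last, on the way out of this scope") }
        print("everything else runs first")
    }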
While re-reading the scala-lang.org page detailing Future, I stumbled upon the following sentence:
In the event that some of the callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the blocking construct (see below).
Why may the other callbacks not be executed at all? I may install a number of callbacks for a given Future. The thread that completes the Future may or may not execute the callbacks. But because one callback is not playing ball, the rest should not be penalized, I think.
One possibility I can think of is the way the ExecutionContext is configured. If it is configured with one thread, then this may happen, but that is a specific behaviour and not a generally expected one.
Am I missing something obvious here?
Callbacks are called within an ExecutionContext that has an ultimately limited number of threads: if not limited by the specific context implementation, then by the underlying operating system and/or hardware itself.
Let's say your system's limit is OS_LIMIT threads. You create OS_LIMIT + 1 callbacks. From those, OS_LIMIT callbacks immediately get a thread each - and none ever terminate.
How can you guarantee that the remaining 1 callback ever gets a thread?
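To make the starvation concrete, here is an analogy sketched in Swift rather than Scala, with a single-threaded dispatch queue standing in for a one-thread ExecutionContext (purely illustrative):

    import Dispatch

    // An "execution context" with exactly one worker thread.
    let singleThread = DispatchQueue(label: "one-thread-context")

    singleThread.async {
        while true { }   // a callback that never terminates
    }
    singleThread.async {
        print("this callback may never be executed at all")
    }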
Sure, there could be some detection mechanisms built into the Scala library, but it's not possible in the general case to make an optimal implementation: maybe you want the callback to run for a month.
Instead (and this seems to be the approach in the Scala library), you could provide facilities for handling situations that you, the developer, know are risky. This removes the element of surprise from the system.
Perhaps most importantly - it enables the developer to "bake in" the necessary information about handler/task characteristics directly into his/her program, rather than relying on some obscure piece of language functionality (which may change from version to version).
While I was browsing through the iOS 7 runtime headers, something caught my eye. In the MCNearbyServiceAdvertiser class, part of the Multipeer Connectivity framework, a property called syncQueue and multiple methods prefixed with sync are defined. Some of the methods exist in both a prefixed and a non-prefixed version, such as startAdvertisingPeer and syncStartAdvertisingPeer.
My question is, what would be the purpose of both this property and these prefixed methods, and how are they combined?
(edit: removed the remark that the queue is serial as pointed out by CouchDeveloper, since we cannot know this)
As you know, the implementation is private.
Having a dispatch queue whose name is syncQueue may not mean that this queue is a serial queue. It might be a concurrent queue as well.
We can only guess at what startAdvertisingPeer and the "prefixed" version syncStartAdvertisingPeer might mean.
For example, in order to fulfill internal prerequisites, startAdvertisingPeer might assume that it is always invoked from an execution context other than the syncQueue. That way, it can dispatch synchronously to the syncQueue, invoking syncStartAdvertisingPeer, without ending up in a deadlock. On the other hand, syncStartAdvertisingPeer would always assume that it executes on the syncQueue, thereby guaranteeing synchronized access to the shared state.
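A small Swift sketch of how that guessed pattern might look (all names here are stand-ins, not the actual private implementation):

    import Dispatch

    final class Advertiser {
        private let syncQueue = DispatchQueue(label: "advertiser.syncQueue")

        // Public entry point: assumed to be called from outside syncQueue,
        // so a synchronous dispatch cannot deadlock.
        func startAdvertisingPeer() {
            syncQueue.sync { self.syncStartAdvertisingPeer() }
        }

        // "Prefixed" variant: assumed to already be running on syncQueue,
        // so it can touch shared state without further synchronization.
        private func syncStartAdvertisingPeer() {
            // ... mutate internal state here ...
        }
    }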
But, as stated, we don't know the actual details; it's just a rough guess. Usually you should read the documentation, not private header details, to form a picture in your mind of how a class likely works.
Is this:
[self showInWindow:window];
what gets called after the delay by this code:
[self performSelector:@selector(showInWindow:)
           withObject:window
           afterDelay:delay];
or am I misunderstanding the method?
Edit: the problem I'm having is that the method showInWindow: gets called after the delay but behaves like [self showInWindow:nil]. Any suggestions?
Yes, that's what gets called. (After the delay, of course.)
The documentation doesn't really explain what it means to "perform the selector", but what it means is exactly what you suspect.
There is one small difference between using performSelector:withObject: type methods and sending the message directly: they only work if the argument is actually an object (that is, an id, a pointer to an Objective-C object). But window obviously is an object.
(Strictly speaking, this isn't quite true. If you pass something that's the same size as an id or smaller, it will often work. In some cases it won't. In some cases it will work, but is illegal. In some cases, it will work and is legal, but Apple strongly recommends against it. There are no cases where it's a good idea, so instead of learning the specific rules, just assume it never works. The only reason to bring this up is that this used to be common practice in Objective-C back in the NeXT days, so you may occasionally still see it in other people's code today.)
For more information about the performSelector: family, see the NSObject Protocol Reference, and the SO question Using -performSelector: vs. just calling the method. (For information specifically about the afterDelay: variants, see the documentation linked above.)
From the later edit to the question:
the problem I'm having is that the method showInWindow: gets called after the delay but behaves like [self showInWindow:nil]. Any suggestions?
First, in what way does it "behave like" the parameter is nil? Is the parameter actually nil? (Just log it in the showInWindow: implementation; if you haven't overridden the base implementation, just add an override that logs and calls the base.)
Second, if it actually is nil, was it nil at the time you sent performSelector:withObject:afterDelay:? If so, obviously it'll still be nil when the selector is sent. Also, make sure window really is an id rather than some other type. (Note that if you've got members, properties, globals, and/or locals sharing the name window, it can be confusing which one you're referring to. This is a common source of problems.)
If it's actually not nil when you schedule it, but is nil when it arrives, there are a few ways that could happen, but they're all less likely, and trickier to debug, than these two cases, so let's rule them out first.
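If it helps, here is a minimal Swift sketch of the kind of logging check suggested above; Popup and show(inWindow:) are hypothetical stand-ins for your class and your showInWindow: method:

    import UIKit

    class Popup: NSObject {
        @objc func show(inWindow window: UIWindow?) {
            // Log the parameter the moment the delayed selector fires,
            // so you can see whether it arrived as nil.
            print("show(inWindow:) received: \(String(describing: window))")
            // ... present the view in `window` ...
        }

        func showLater(inWindow window: UIWindow, afterDelay delay: TimeInterval) {
            // Schedules the selector on the current run loop; `window` is
            // retained until the selector is actually performed.
            perform(#selector(show(inWindow:)), with: window, afterDelay: delay)
        }
    }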
Yes, that's what it does... although keep in mind that it may take longer than the delay to execute. This method basically sets up an NSTimer in the current thread's run loop, so if your thread gets busy doing heavy-duty work and the run loop takes longer than your delay to come back around, your method will get executed later.
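As an illustration of that run-loop dependence, here is a small Swift sketch (names are illustrative): on a thread whose run loop is never run, the delayed selector simply never fires.

    import Foundation

    final class Pinger: NSObject {
        @objc func ping() { print("ping") }
    }

    let pinger = Pinger()

    Thread.detachNewThread {
        pinger.perform(#selector(Pinger.ping), with: nil, afterDelay: 0.5)
        // Comment out the next line and "ping" is never printed, because the
        // timer that backs afterDelay: lives on this thread's run loop.
        RunLoop.current.run(until: Date(timeIntervalSinceNow: 1))
    }

    Thread.sleep(forTimeInterval: 2)   // keep the process alive long enough to see the output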