dispatch_apply takes a dispatch queue as a parameter, which allows you to choose which queue to execute the block on.
My understanding is that DispatchQueue.concurrentPerform in Swift is meant to replace dispatch_apply. But this function does not take a dispatch queue as a parameter. After googling around, I found this GCD tutorial which has this code:
let _ = DispatchQueue.global(qos: .userInitiated)
DispatchQueue.concurrentPerform(iterations: addresses.count) { index in
    // do work here
}
And explains:
This implementation includes a curious line of code: let _ = DispatchQueue.global(qos: .userInitiated). Calling this tells GCD to use a queue with a .userInitiated quality of service for the concurrent calls.
My question is, does this actually work to specify the QoS? If so, how?
It would kind of make sense to me that there is no way to specify the queue for this, because a serial queue makes no sense in this context and only the highest QoS really makes sense given that this is a synchronous blocking function. But I can't find any documentation as to why it is possible to specify a queue with dispatch_apply but impossible(?) with DispatchQueue.concurrentPerform.
The author’s attempt to specify the queue’s quality of service (QoS) is incorrect. concurrentPerform uses the current queue’s QoS if it can. You can confirm this by tracing through the source code:
concurrentPerform calls _swift_dispatch_apply_current.
_swift_dispatch_apply_current calls dispatch_apply with 0, i.e., DISPATCH_APPLY_AUTO, which is defined as a ...
... Constant to pass to dispatch_apply() or dispatch_apply_f() to request that the system automatically use worker threads that match the configuration of the current thread as closely as possible.
When submitting a block for parallel invocation, passing this constant as the queue argument will automatically use the global concurrent queue that matches the Quality of Service of the caller most closely.
This can also be confirmed by following dispatch_apply, which calls dispatch_apply_f, in which DISPATCH_APPLY_AUTO results in a call to _dispatch_apply_root_queue. If you keep tumbling down the rabbit hole of swift-corelibs-libdispatch, you’ll see that this really does use a global queue with the same QoS as your current thread.
Bottom line, the correct way to specify the QoS is to dispatch the call to concurrentPerform to the desired queue, e.g.:
DispatchQueue.global(qos: .userInitiated).async {
    DispatchQueue.concurrentPerform(iterations: 3) { (i) in
        ...
    }
}
This is easily verified empirically by adding a breakpoint and looking at the queue in the Xcode debugger.
Needless to say, the suggestion of adding the let _ = ... is incorrect. Consider the following:
DispatchQueue.global(qos: .utility).async {
    let _ = DispatchQueue.global(qos: .userInitiated)
    DispatchQueue.concurrentPerform(iterations: 3) { (i) in
        ...
    }
}
This will run with “utility” QoS, not “user initiated”.
Again, this is easily verified empirically.
See WWDC 2017 video Modernizing Grand Central Dispatch for a discussion about DISPATCH_APPLY_AUTO and concurrentPerform.
Related
Could anyone help me to understand this code I created:
let cq = DispatchQueue(label: "downloadQueue", attributes: .concurrent)
cq.sync {
    for i in 0..<10 {
        sleep(2)
        print(i)
    }
}
print("all finished!")
And the output is in serial order, 1->10, with 2 seconds of waiting in between. At the end it prints “all finished!”.
I understand that last part.
However my question is:
Shouldn't concurrent queue start multiple tasks at the same time?
So my original thought was: the 1-10 printing should be done concurrently, not necessarily in the serial order.
Could anyone explain the purpose of sync call on concurrent queue and give me an example why and when we need it?
If you want to demonstrate them running concurrently, you should dispatch the 10 tasks individually:
let cq = DispatchQueue(label: "downloadQueue", attributes: .concurrent)
for i in 0..<10 {
    cq.async {
        sleep(2)
        print(i)
    }
}
print("all finished queuing them!")
Note:
There are 10 dispatches to the concurrent queue, not one.
Each dispatched task runs concurrently with respect to other tasks dispatched to that queue (which is why we need multiple dispatches to illustrate the concurrency).
Also note that we dispatch asynchronously because we do not want the calling queue to wait for each dispatched task before dispatching the next.
You ask:
So my original thought was: the 1-10 printing should be done concurrently, not necessarily in the serial order.
Because they are inside a single dispatch, they will run as a single task, running in order. You need to put them in separate dispatches to see them run concurrently.
You go on to ask:
Could anyone explain the purpose of sync call on concurrent queue and give me an example why and when we need it?
The sync has nothing to do with whether the destination queue is serial or concurrent. The sync only dictates the behavior of the calling thread, namely, should the caller wait for the dispatched task to finish or not. In this case, you really do not want to wait, so you should use async.
As a general rule, you should avoid calling sync unless (a) you absolutely have to; and (b) you are willing to have the calling thread blocked until the sync task runs. So, with very few exceptions, one should use async. And, perhaps needless to say, we never block the main thread for more than a few milliseconds.
While using sync on a concurrent dispatch queue is generally avoided, one example you might encounter is the “reader-writer” synchronization pattern. In this case, “reads” happen synchronously (because you need to wait for the result), but “writes” happen asynchronously with a barrier (because you do not need to wait, but you do not want the write to happen concurrently with anything else on that queue). A detailed discussion of using GCD for synchronization (especially the reader-writer pattern) is probably beyond the scope of this question, but search the web or Stack Overflow for “GCD reader-writer” and you will find discussions on the topic.
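To make that concrete, here is a minimal sketch of the reader-writer pattern, using a hypothetical SynchronizedDictionary wrapper (the type name and queue label are illustrative, not a standard API):

```swift
import Dispatch

// A hypothetical thread-safe wrapper; names are illustrative only.
final class SynchronizedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let queue = DispatchQueue(label: "com.example.rw", attributes: .concurrent)

    subscript(key: Key) -> Value? {
        // Read synchronously: we need the result, and concurrent reads are safe.
        get { queue.sync { storage[key] } }
        // Write asynchronously with a barrier: no waiting, but the write runs
        // exclusively with respect to everything else on this queue.
        set { queue.async(flags: .barrier) { self.storage[key] = newValue } }
    }
}

let dictionary = SynchronizedDictionary<String, Int>()
DispatchQueue.concurrentPerform(iterations: 100) { i in
    dictionary["key\(i % 10)"] = i      // many concurrent writers
}
// A subsequent read is enqueued behind any pending barrier writes, so it
// sees one of the values written for that key (the last barrier write wins).
let value = dictionary["key0"]
```

The barrier flag is what makes the concurrent queue safe for mutation: reads interleave freely, but each write briefly has the queue to itself.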
Let us graphically illustrate my revamped rendition of your code, using OSLog to create intervals in Instruments’ “Points of Interest” tool:
import os.log
import QuartzCore   // for CACurrentMediaTime

private let log = OSLog(subsystem: "Foo", category: .pointsOfInterest)

class Foo {
    func demonstration() {
        let queue = DispatchQueue(label: "downloadQueue", attributes: .concurrent)
        for i in 0..<10 {
            queue.async { [self] in
                let id = OSSignpostID(log: log)
                os_signpost(.begin, log: log, name: "async", signpostID: id, "%d", i)
                spin(for: 2)
                os_signpost(.end, log: log, name: "async", signpostID: id)
            }
        }
        print("all finished queuing them!")
    }

    func spin(for seconds: TimeInterval) {
        let start = CACurrentMediaTime()
        while CACurrentMediaTime() - start < seconds { }
    }
}
When I profile this in Instruments (e.g. “Product” » “Profile”), choosing “Time Profiler” template (which includes “Points of Interest” tool), I see a graphical timeline of what is happening:
So let me draw your attention to two interesting aspects of the above:
The concurrent queue runs tasks concurrently, but because my iPhone’s CPU only has six cores, only six of them can actually run at the same time. The next four will have to wait until a core is available for that particular worker thread.
Note, this demonstration only works because I am not just calling sleep, but rather I am spinning for the desired time interval, to more accurately simulate some slow, blocking task. Spinning is a better proxy for slow synchronous task than sleep is.
This illustrates, as you noted, concurrent tasks may not appear in the precise order that they were submitted. This is because (a) they all were queued so quickly in succession; and (b) they run concurrently: There is a “race” as to which concurrently running thread gets to the logging statements (or “Points of Interest” intervals) first.
Bottom line, for those tasks running concurrently, because of races, they may not appear to run in order. That is how concurrent execution works.
No, you have to add multiple tasks to the dispatch queue; it will then run those tasks in parallel.
I have some code that looks like this:
DispatchQueue.global(qos: .userInitiated).async {
    self.fetchProjects()
    DispatchQueue.main.async {
        self.constructMenu()
    }
}
My question is: are the blocks within the global block executed serially? When I add print statements, they always execute in the same order, but I'm not sure if I'm just getting lucky, as the documentation says:
Tasks submitted to the returned queue are scheduled concurrently with respect to one another.
I wonder if anyone can shed any light on this?
EDIT:
Apologies, I don't think I made the question clear. I would like for the method constructMenu to only be called once fetchProjects has completed. From what I can tell (by logging print statements) this is the case.
But I'm not really sure why that's the case if what Apple's documentation above says (where each task is scheduled concurrently) is true.
Is code within an async block always executed serially, or is the fact that the code seems to execute serially a result of using DispatchQueue.main or is it just 'luck' and at some point constructMenu will actually return before fetchProjects?
I would like for the method constructMenu to only be called once fetchProjects has completed. From what I can tell (by logging print statements) this is the case.
Yes, this is the case.
But I'm not really sure why that's the case if what Apple's documentation above says (where each task is scheduled concurrently) is true.
Apple’s documentation is saying that two separate dispatches may run concurrently with respect to each other.
Consider:
DispatchQueue.global(qos: .userInitiated).async {
    foo()
}

DispatchQueue.global(qos: .userInitiated).async {
    bar()
}
In this case, foo and bar may end up running at the same time. This is what Apple means by “Tasks submitted to the returned queue are scheduled concurrently.”
But consider:
DispatchQueue.global(qos: .userInitiated).async {
    foo()
    bar()
}
In this case, bar will not run until we return from foo.
Is code within an async block always executed serially, or is the fact that the code seems to execute serially a result of using DispatchQueue.main or is it just ‘luck’ and at some point constructMenu will actually return before fetchProjects?
No luck involved. It will never reach the DispatchQueue.main.async line until you return from fetchProjects.
There is one fairly major caveat, though. This assumes that fetchProjects won’t return until the fetch is done. That means that fetchProjects better not be initiating any asynchronous processes of its own (i.e. no network requests). If it does, you probably want to supply it with a completion handler, and put the call to constructMenu in that completion handler.
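For example, here is a hedged sketch of that completion-handler approach. fetchProjects(completion:) and the menu construction are hypothetical stand-ins for the asker's methods, with the network request simulated by asyncAfter:

```swift
import Foundation

// Hypothetical stand-in for an asynchronous fetch; the "network request"
// is simulated with asyncAfter.
func fetchProjects(completion: @escaping ([String]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: .now() + 0.1) {
        completion(["Project A", "Project B"])   // pretend fetch result
    }
}

var menuItems: [String] = []
let group = DispatchGroup()
group.enter()
fetchProjects { projects in
    // constructMenu's job: runs only once the fetch has actually finished
    menuItems = projects
    group.leave()
}
group.wait()   // demo only: block until the completion handler has run
```

In an app you would not wait like this; you would call constructMenu (dispatched to the main queue) inside the completion handler itself.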
Yes, blocks of code are submitted as blocks, and those blocks run sequentially. So for your example:
DispatchQueue.global(qos: .userInitiated).async {
    self.fetchProjects()
    DispatchQueue.main.async {
        self.constructMenu()
    }
}
fetchProjects() must complete before constructMenu is enqueued. There is no magic here. The block between {...} is submitted as a block. At some point in the future it will be executed. But the pieces of the block will not be considered in any granular way. They will start at the top, fetchProjects, and then the next line of code will be executed, DispatchQueue.main.async, which accepts as a parameter another block. The compiler doesn't know anything about these blocks. It just passes them to functions, and those functions put them on queues.
DispatchQueue.global is a concurrent queue, which means tasks submitted to it can run at the same time. If you need tasks to run serially, create a custom serial queue:
let serial = DispatchQueue(label: "com.queueName")
serial.sync {
    // ...
}
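A quick sketch of what that serial queue buys you: even when tasks are dispatched asynchronously, they run one at a time, in submission order (the label and task bodies are illustrative):

```swift
import Dispatch

// Illustrative label; any private serial queue behaves the same way.
let serialQueue = DispatchQueue(label: "com.example.serial")
var order: [Int] = []
let group = DispatchGroup()

for i in 0..<5 {
    serialQueue.async(group: group) {
        order.append(i)   // tasks run one at a time, in submission order
    }
}

group.wait()   // demo only: wait for all five tasks to finish
// order is now [0, 1, 2, 3, 4]
```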
Considering this trivial piece of code
DispatchQueue.global().async {
    print("2")
}
print("1")
we can say that output will be as following:
1
2
Are there any circumstances under which the order of execution will be different (disregarding the kind of queue used)? Can a different order be forced manually?
You said:
we can say that output will be as following ...
No, at best you can only say that the output will often/frequently be in that order, but is not guaranteed to be so.
The code snippet is dispatching code asynchronously to a global, concurrent queue, which will run it on a separate worker thread, and, in the absence of any synchronization, you have a classic race condition between the current thread and that worker thread. You have no guarantee of the sequence of these print statements, though, in practice, you will frequently see 1 before 2.
this is one of the common questions at tech interviews, and for some reason interviewers expect the conclusion that the order of execution is constant here. So (as it was not aligned with my understanding) I decided to clarify.
Your understanding is correct, that the sequence of these two print statements is definitely not guaranteed.
Are there any circumstances under which order of execution will be different
A couple of thoughts:
By adjusting queue priorities, for example, you can change the likelihood that 1 will appear before 2. But, again, not guaranteed.
There are a variety of mechanisms to guarantee the order.
You can use a serial queue ... I gather you didn’t want to consider using another/different queue, but it is generally the right solution, so any discussion on this topic would be incomplete without that scenario;
You can use dispatch group ... you can notify on the global queue when the current queue satisfies the group;
You can use dispatch semaphores ... semaphores are a classic answer to this question, but IMHO semaphores should be used sparingly, as it’s so easy to make mistakes ... plus blocking threads is never a good idea;
For the sake of completeness, we should mention that you really can use any synchronization mechanism, such as locks.
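For instance, a sketch of the dispatch-group approach from the list above. The notify block cannot run until the group's work is done, so “1” is guaranteed to be recorded before “2”; the semaphore exists only so this demonstration can inspect the result:

```swift
import Dispatch

var output: [String] = []
let group = DispatchGroup()
let done = DispatchSemaphore(value: 0)

group.enter()
group.notify(queue: .global()) {
    output.append("2")    // guaranteed to run only after group.leave()
    done.signal()
}
output.append("1")        // the current thread's work
group.leave()

done.wait()               // demo only: let this thread observe the result
```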
I did a quick test too. Normally I do a UI update after code execution on the global queue is complete, and I would usually put it at the end of the code block (step 2).
But today I found that even if I put that main-queue code at the beginning of the global-queue block (step 1), it is still executed after all of the global-queue code has completed.
DispatchQueue.global().async {
    // step 1
    DispatchQueue.main.async {
        print("1. on the main thread")
    }

    // code on the global queue
    print("1. off the main thread")
    print("2. off the main thread")

    // step 2
    DispatchQueue.main.async {
        print("2. on the main thread")
    }
}
Here is the output:
1. off the main thread
2. off the main thread
1. on the main thread
2. on the main thread
I'm watching the concurrent programming talk from WWDC, found here, and I'm a little confused about the meaning of async. In the JavaScript world, we use "async" to describe processes that happen "out of order", or practically, processes that are not guaranteed to return before the next line of code is executed. However, in this talk it seems as though what the presenter is demonstrating is in fact a series of tasks being executed one after the other (i.e. synchronous behavior).
Then in this code example
let queue = DispatchQueue(label: "com.example.imagetransform")

queue.async {
    let smallImage = image.resize(to: rect)
    DispatchQueue.main.async {
        imageView.image = smallImage
    }
}
At first glance, this DispatchQueue doesn't seem to behave like the FIFO data structure it's supposed to be. I see that they are passing a closure into the queue.async method, but I'm not sure how to add more blocks, or tasks, for it to execute.
I'm guessing the async nature of this is similar to Javascript callbacks in the sense that all the variables being assigned in the closure are only scoped to that closure and we can only act on that data within the closure.
DispatchQueues are FIFO with regard to the start of tasks. If it is a concurrent queue, or if you're submitting async tasks, then there's no FIFO guarantee on when they finish.
Concurrent queues allow multiple tasks to run at the same time. Tasks are guaranteed to start in the order they were added. Tasks can finish in any order and you have no knowledge of the time it will take for the next task to start, nor the number of tasks that are running at any given time.
- https://www.raywenderlich.com/148513/grand-central-dispatch-tutorial-swift-3-part-1
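The quoted behavior can be sketched in a few lines: two tasks start in FIFO order on a concurrent queue, but the one submitted first does more work, so it finishes second (the label and durations are arbitrary):

```swift
import Foundation

// Arbitrary label and sleep durations, purely for illustration.
let queue = DispatchQueue(label: "com.example.demo", attributes: .concurrent)
let group = DispatchGroup()
let lock = NSLock()
var finishOrder: [Int] = []

queue.async(group: group) {              // submitted (and started) first
    Thread.sleep(forTimeInterval: 0.3)
    lock.lock(); finishOrder.append(0); lock.unlock()
}
queue.async(group: group) {              // submitted second, but finishes first
    Thread.sleep(forTimeInterval: 0.05)
    lock.lock(); finishOrder.append(1); lock.unlock()
}

group.wait()
// finishOrder is [1, 0]: FIFO start order, but not FIFO finish order
```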
I have read the tutorial about GCD and Dispatch Queue in Swift 3
But I'm really confused about the order of synchronous execution and asynchronous execution and main queue and background queue.
I know that if we use sync then we execute tasks one after the previous one; if we use async then we can use QoS to set their priority. But how about this case?
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")
    let queue2 = DispatchQueue(label: "com.appcoda.myqueue2")

    for i in 1000..<1005 {
        print(i)
    }

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }

    queue2.sync {
        for i in 100..<105 {
            print(i)
        }
    }
}
The outcome shows that we ignore the asynchronous execution. I know queue2 should complete before queue1 since it's synchronous execution, but why do we ignore the asynchronous execution, and what is the actual difference between async, sync, and the so-called main queue?
You say:
The outcome shows that we ignore the asynchronous execution. ...
No, it just means that you didn't give the asynchronously dispatched code enough time to get started.
I know queue2 should be completed before queue1 since it's synchronous execution ...
First, queue2 might not complete before queue1. It just happens to. Make queue2 do something much slower (e.g. loop through a few thousand iterations rather than just five) and you'll see that queue1 can actually run concurrently with respect to what's on queue2. It just takes a few milliseconds to get going and the stuff on your very simple queue2 is finishing before the stuff on queue1 gets a chance to start.
Second, this behavior is not technically because it's synchronous execution. It's just that async takes a few milliseconds to get its work running on some worker thread, whereas the synchronous call, because of optimizations that I won't bore you with, gets started more quickly.
but why we ignore the asynchronous execution ...
We don't "ignore" it. It just takes a few milliseconds to get started.
and what is the actual difference between async, sync and so-called main queue?
"Async" merely means that the current thread may carry on and not wait for the dispatched code to run on some other thread. "Sync" means that the current thread should wait for the dispatched code to finish.
The "main thread" is a different topic and simply refers to the primary thread that is created for driving your UI. In practice, the main thread is where most of your code runs, basically running everything except that which you manually dispatch to some background queue (or code that is dispatched there for you, e.g. URLSession completion handlers).
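Those two definitions can be seen in a few lines (a minimal sketch; the queue label is arbitrary):

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.worker")

// sync: the caller waits for the dispatched code to finish.
var syncDone = false
queue.sync { syncDone = true }
// sync has returned, so the work is guaranteed to be done here.
print(syncDone)   // true

// async: the caller carries on immediately, without waiting.
var asyncDone = false
let group = DispatchGroup()
queue.async(group: group) { asyncDone = true }
// asyncDone may or may not be true at this point; the caller did not wait.
group.wait()      // demo only: wait explicitly before checking
print(asyncDone)  // true
```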
sync and async are about how the calling thread behaves toward the same queue. To see the difference, run this code:
func queuesWithQoS() {
    let queue1 = DispatchQueue(label: "com.appcoda.myqueue1")

    queue1.async {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished")

    queue1.sync {
        for i in 0..<5 {
            print(i)
        }
    }
    print("finished")
}
The main queue runs on the main thread, where the entire user interface (UI) runs.
First of all, I prefer the term "delayed" rather than "ignored", because all of the code in your question is executed.
QoS is an enum; the first case is the highest priority and the last is the lowest. When you don't specify a priority, you get a queue with the default priority, which is in the middle:
userInteractive
userInitiated
default
utility
background
unspecified
That said, you have two synchronous for-in loops and one asynchronous one. The order of execution depends on the position of the loops and the kind of dispatch (sync/async), because there are three different queues here. (Following your link, queuesWithQoS() could be launched in viewDidAppear, so we can assume it runs on the main queue.)
The code shows the creation of two queues with default priority, so the sequence of execution will be:
the for-in loop with 1000..<1005 in the main queue
the synchronous queue2 with default priority
the asynchronous queue1 (not ignored, simply delayed) with default priority
The main queue, where all the UI instructions are executed, always has the highest priority.