Semaphore is not waiting - Swift

I'm trying to do 3 async requests and control the load with semaphores to know when all have loaded.
I initialize the semaphore this way:
let sem = dispatch_semaphore_create(2);
Then send to background the waiting for semaphore code:
let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
dispatch_async(backgroundQueue) { [unowned self] () -> Void in
    println("Waiting for filters load")
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
    println("Loaded")
}
Then I signal it 3 times (on each request onSuccess and onFailure):
dispatch_semaphore_signal(sem)
But by the time the signal code runs, execution has already passed the semaphore wait; the wait never blocks to decrement the semaphore count.
Why?

You've created the semaphore with dispatch_semaphore_create(2) (which is like calling dispatch_semaphore_signal twice), and then you signal it three more times (for a total of five), but you appear to have only one wait (which won't wait at all, because you started your semaphore with a count of 2).
That's obviously not going to work. Even if you fixed that (e.g. created the semaphore with a count of zero and then issued three waits), this whole approach is inadvisable, because you're unnecessarily tying up a thread waiting for the other requests to finish.
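For completeness, here is roughly what the semaphore-based fix would look like with the modern API (a sketch only; your request functions are assumed, and the group-based approach described next is still preferable):

let sem = DispatchSemaphore(value: 0)   // start the count at zero
DispatchQueue.global(qos: .background).async {
    // One wait per expected signal: blocks until all three requests have signaled.
    for _ in 0..<3 { sem.wait() }
    print("Loaded")
}
// ...then in each request's onSuccess and onFailure handler:
// sem.signal()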
This is a textbook candidate for dispatch groups. So you would generally use the following:
Create a dispatch_group_t:
let group = dispatch_group_create()
Then call dispatch_group_enter three times, once before each request.
In each of the three onSuccess/onFailure block pairs, call dispatch_group_leave in both blocks.
Create a dispatch_group_notify block that will be performed when all of the requests are done.
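In modern Swift, the whole pattern looks roughly like the sketch below, where loadFilter(index:onSuccess:onFailure:) is a hypothetical stand-in for your three requests:

let group = DispatchGroup()
for i in 0..<3 {
    group.enter()   // once before each request
    // loadFilter is a hypothetical request function standing in for yours
    loadFilter(index: i,
               onSuccess: { _ in group.leave() },   // leave on success...
               onFailure: { _ in group.leave() })   // ...and on failure
}
group.notify(queue: .main) {
    print("All three requests finished")
}

Because every enter() is balanced by exactly one leave(), the notify block runs exactly once, after the last request completes, without blocking any thread.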

Related

Does DispatchQueue run in its own thread?

I'm trying to understand how DispatchQueues really work, and I'd like to know whether it is safe to assume that DispatchQueues have their own managing thread. For example, let's take a serial queue. In my understanding, since it is serial, a new task can be started only after the end of the current one, so "someone" has to dequeue the new task from the queue and submit it for execution. So it seems that the queue has to live on its own thread, which dispatches the tasks that are stored in the queue. Is this correct, or have I misunderstood something?
No, you should not assume that DispatchQueues have their own managed threads, and a queue doesn't have to execute all of its tasks on the same thread. It only guarantees that the next task is picked up after the previous one completes:
Work submitted to dispatch queues executes on a pool of threads managed by the system. Except for the dispatch queue representing your app's main thread, the system makes no guarantees about which thread it uses to execute a task.
(source)
In practice it is quite possible that the same thread will run several or all sequential tasks from the same serial queue, provided they run close to each other in time. I would speculate that this is not pure coincidence but an optimization (it avoids context switches). But it's not guaranteed.
In fact you can do this little experiment:
let serialQueue = DispatchQueue(label: "my.serialqueue")
var incr: Int = 0
DispatchQueue.concurrentPerform(iterations: 5) { iteration in
    // Randomize time of access to the serial queue
    sleep(UInt32.random(in: 1...30))
    // Schedule execution on the serial queue
    serialQueue.async {
        incr += 1
        print("\(iteration) \(Date().timeIntervalSince1970) incremented \(incr) on \(Thread.current)")
    }
}
You will see something like this:
3 1612651601.6909518 incremented 1 on <NSThread: 0x600000fa0d40>{number = 7, name = (null)}
4 1612651611.689259 incremented 2 on <NSThread: 0x600000faf280>{number = 9, name = (null)}
0 1612651612.68934 incremented 3 on <NSThread: 0x600000fb4bc0>{number = 3, name = (null)}
2 1612651617.690246 incremented 4 on <NSThread: 0x600000fb4bc0>{number = 3, name = (null)}
1 1612651622.690335 incremented 5 on <NSThread: 0x600000faf280>{number = 9, name = (null)}
Iterations start concurrently, but we make them sleep for a random time, so they access the serial queue at different times. The result is that it's unlikely that the same thread picks up every task, although task execution is perfectly sequential.
Now if you remove the sleep at the top, so that all iterations request access to the serial queue at (almost) the same time, you will most likely see all tasks run on the same thread, which I think is an optimization, not coincidence:
4 1612651665.3658218 incremented 1 on <NSThread: 0x600003c94880>{number = 6, name = (null)}
3 1612651665.366118 incremented 2 on <NSThread: 0x600003c94880>{number = 6, name = (null)}
2 1612651665.366222 incremented 3 on <NSThread: 0x600003c94880>{number = 6, name = (null)}
0 1612651665.384039 incremented 4 on <NSThread: 0x600003c94880>{number = 6, name = (null)}
1 1612651665.3841062 incremented 5 on <NSThread: 0x600003c94880>{number = 6, name = (null)}
Here's an excellent read on this topic: iOS Concurrency - "Underlying Truth".
It's true that GCD on macOS includes some direct support from the kernel. All the code is open source and can be viewed at: https://opensource.apple.com/source/libdispatch/
However, a Linux implementation of the same Dispatch APIs is available as part of the Swift open source project (swift-corelibs-libdispatch). This implementation does not use any special kernel support, but is just implemented using pthreads. From that project's readme:
libdispatch on Darwin [macOS] is a combination of logic in the xnu kernel alongside the user-space Library. The kernel has the most information available to balance workload across the entire system. As a first step, however, we believe it is useful to bring up the basic functionality of the library using user-space pthread primitives on Linux. Eventually, a Linux kernel module could be developed to support more informed thread scheduling.
To address your question specifically — it's not correct that each queue has a managing thread. A queue is more like a data structure that assists in the management of threads — an abstraction that makes it so you as the developer don't have to think about thread details.
How threads are created and used is up to the system and can vary depending on what you do with your queues. For example, using .sync() on a queue often just acquires a lock and executes the block on the calling thread, even if the queue is a concurrent queue. You can see this by setting a breakpoint and observing which thread you're running on:
let syncQueue = DispatchQueue(label: "syncQueue", attributes: .concurrent)
print("Before sync")
syncQueue.sync {
    print("On queue")
}
print("After sync")
On the other hand, multiple async tasks can run at once on a concurrent queue, backed by multiple threads. In practice, the global queues seem to use up to 64 threads at once (this code prints "Used 64 threads"):
var threads: Set<Thread> = []
let threadQueue = DispatchQueue(label: "threads set")
let group = DispatchGroup()
for _ in 0..<100 {
    group.enter()
    DispatchQueue.global(qos: .default).async {
        sleep(2)
        let thisThread = Thread.current
        threadQueue.sync { _ = threads.insert(thisThread) }
        group.leave()
    }
}
group.wait() // wait for all async() blocks to finish
print("Used \(threads.count) threads")
But without the sleep(), the tasks finish quickly, and the system doesn't need to use so many threads (the program prints "Used 20 threads", or 30, or some lower number).
The main queue is another serial queue, which runs as part of your application lifecycle, or can be run manually with dispatchMain().
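For example, in a command-line tool, where there is no application run loop, a minimal sketch of using dispatchMain() looks like this:

import Foundation

// Schedule work on the main queue before parking the main thread.
DispatchQueue.main.async {
    print("Running on the main queue")
    exit(0)   // dispatchMain() never returns, so exit explicitly when done
}
// Parks the main thread and begins draining the main queue.
dispatchMain()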

Skip new tasks if the queue is not empty. Swift

I have a piece of code that fires long-running tasks 5-6 times per second. Each task takes some time to finish. I want to ignore all other tasks while one is being executed. After it finishes, a fresh one should take its place.
There are a bunch of tools for concurrency in Swift 4.2. What would work best?
For solving this problem you can use GCD or Operation. For the case you describe I would use Operation. This approach gives you somewhat more user-friendly control over the operations that are executing (stopping, cancelling, ...).
Small example:
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1
queue.addOperation { print("🤠") }
queue.addOperation { print("🤓") }
queue.addOperation { print("👺") }
In this case operations are executed one by one.
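To actually skip new tasks while one is running, as the question asks, you can check operationCount before enqueueing. A minimal sketch, where submitIfIdle is a hypothetical helper (note that the check-then-add is itself racy if called from multiple threads, so call it from a single thread or queue):

import Foundation

let taskQueue = OperationQueue()
taskQueue.maxConcurrentOperationCount = 1

// Hypothetical helper: enqueue work only if nothing is running or waiting.
func submitIfIdle(_ work: @escaping () -> Void) {
    guard taskQueue.operationCount == 0 else { return }   // skip: a task is in flight
    taskQueue.addOperation(work)
}

// Only the first of these rapid-fire submissions is accepted.
for i in 0..<5 {
    submitIfIdle {
        print("running task \(i)")
        Thread.sleep(forTimeInterval: 1)
    }
}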

Variable Multithread Access - Corruption

In a nutshell:
I have one counter variable that is accessed from many threads. Although I've implemented multi-thread read/write protections, the variable still, in an inconsistent way, seems to get written to simultaneously, leading to incorrect results from the counter.
Getting into the weeds:
I'm using a for loop that triggers roughly 100 URL requests in the background, each via DispatchQueue.global(qos: .userInitiated).async.
These requests are async; once each finishes, it updates a counter variable. This variable is supposed to be multi-thread protected, meaning it's always accessed from one queue and accessed synchronously. However, something is wrong: from time to time the variable is accessed simultaneously by two threads, leading to the counter not updating correctly. Here's an example; let's imagine we have 5 URLs to fetch:
We start with the Counter variable at 5.
1 URL Request Finishes -> Counter = 4
2 URL Request Finishes -> Counter = 3
3 URL Request Finishes -> Counter = 2
4 URL Request Finishes (and, I assume, the variable is accessed at the same time) -> Counter = 2
5 URL Request Finishes -> Counter = 1
As you can see, this leads to the counter being 1 instead of 0, which then affects other parts of the code. This error happens inconsistently.
Here is the multi-thread protection I use for the counter variable:
Dedicated Global Queue
// Background queue to synchronize data access
fileprivate let globalBackgroundSyncronizeDataQueue = DispatchQueue(label: "globalBackgroundSyncronizeSharedData")
Variable is always accessed via accessor:
var numberOfFeedsToFetch_Value: Int = 0
var numberOfFeedsToFetch: Int {
    set (newValue) {
        globalBackgroundSyncronizeDataQueue.sync() {
            self.numberOfFeedsToFetch_Value = newValue
        }
    }
    get {
        return globalBackgroundSyncronizeDataQueue.sync {
            numberOfFeedsToFetch_Value
        }
    }
}
I assume I may be missing something, but I've done profiling and all seems good; I've also checked the documentation, and I seem to be doing what it recommends. I'd really appreciate your help.
Thanks!!
Answer from the Apple Developer Forums (https://forums.developer.apple.com/message/322332#322332):
The individual accessors are thread safe, but an increment operation isn't atomic given how you've written the code. That is, while one thread is getting or setting the value, no other threads can also be getting or setting the value. However, there's nothing preventing thread A from reading the current value (say, 2), thread B reading the same current value (2), each thread adding one to this value in its private temporary, and then each thread writing its incremented value (3 for both threads) to the property. So, two threads incremented, but the property did not go from 2 to 4; it only went from 2 to 3. You need to do the whole increment operation (get, increment the private value, set) in an atomic way, such that no other thread can read or write the property while it's in progress.
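Applied to the code above, the fix is to put the whole read-modify-write inside one sync block. A minimal sketch, where decrementNumberOfFeedsToFetch is a hypothetical helper name:

// Hypothetical helper: the get, the decrement, and the set all happen as one
// block on the serializing queue, so no other thread can interleave.
func decrementNumberOfFeedsToFetch() {
    globalBackgroundSyncronizeDataQueue.sync {
        self.numberOfFeedsToFetch_Value -= 1
    }
}

Callers must use this helper instead of writing numberOfFeedsToFetch -= 1, which compiles into a separate (synchronized) get followed by a separate (synchronized) set, with a gap in between.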

Generic Grand Central Dispatch

I have code set up like below. It is my understanding that queue1 should finish all operations, THEN move to queue2. However, as soon as my async operation starts, queue2 begins. This defeats the purpose of GCD... what am I doing wrong? This outputs:
did this finish
queue2
then some time later, prints success from image download
I want to make it clear that if I put other code in queue1, such as print("test") or a loop printing 0..<10, all those operations complete before moving to queue2. It seems the async download is messing with it; how can I fix this? There is no documentation anywhere; I used this very helpful guide from AppCoda: http://www.appcoda.com/grand-central-dispatch/
let queue1 = DispatchQueue(label: "com.matt.myqueue1")
let queue2 = DispatchQueue(label: "com.matt.myqueue2")
let group1 = DispatchGroup()
let group2 = DispatchGroup()
let item = DispatchWorkItem {
    // async stuff happening like downloading an image
    // print success if image downloads
}
queue1.sync(execute: item)
item.notify(queue: queue1, execute: {
    print("did this finish?")
})
queue2.sync {
    print("queue2")
}
let item = DispatchWorkItem {
    // async stuff happening like downloading an image
    // print success if image downloads
}
OK, defines it, but nothing runs yet.
queue1.sync(execute: item)
Execute item and kick off its async events, then return immediately. Nothing here says "wait for those unrelated asynchronous events to complete." The system doesn't even have a way to know that there are additional async calls inside the functions you call. How would it know whether object.doit() includes async calls or not (and whether those are async calls you meant to wait for)? It just knows that when item returns, it should continue.
This is what group1 is supposed to be used for (you don't seem to use it for anything). Somewhere down inside these "async stuff happening" you're supposed to tell the system that it finished by leaving the group. (I have no idea what group2 is for. It's never used either.)
item.notify(queue: queue1, execute: {
    print("did this finish?")
})
item already finished. We know it has to have finished already, because it was run with sync, and that doesn't return until its item has. So this block will be immediately scheduled on queue1.
queue2.sync {
    print("queue2")
}
Completely unrelated, and free to run before or after the "did this finish" code, we schedule a block on queue2.
What you probably meant was:
let queue1 = DispatchQueue(label: "com.matt.myqueue1")
let group1 = DispatchGroup()
group1.enter()
// Kick off async stuff.
// These usually return quickly, so there's no need for your own queue.
// At some point, when you want to say this is "done", often in some
// completion handler, you call group1.leave(), for example:
... completionHandler: { group1.leave() }
// When all that finishes, print
group1.notify(queue: queue1) { print("did this finish?") }
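Putting that together, a minimal runnable sketch, using URLSession as a hypothetical stand-in for the asynchronous image download:

import Foundation

let queue1 = DispatchQueue(label: "com.matt.myqueue1")
let group1 = DispatchGroup()

group1.enter()   // balanced by the leave() in the completion handler

// The completion handler fires later, on some background thread.
let url = URL(string: "https://example.com/image.png")!
URLSession.shared.dataTask(with: url) { data, _, _ in
    print(data != nil ? "success" : "failure")
    group1.leave()   // mark the async work as finished
}.resume()

// Runs only after every enter() has been balanced by a leave().
group1.notify(queue: queue1) {
    print("did this finish?")
}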
EVERYTHING is initially queued from the main queue; however, at some point you switch from the main queue to a background queue, and you should NOT expect a synchronous queue to wait for what it has enqueued on another queue. They are unrelated. If that were the case, then always, no matter what, everything would have to wait for whatever it asked to run.*
So here's what I see happening.
queue1 happily finished; it did everything it was supposed to by kicking off item's async work on another queue <-- that's all it was supposed to do. Since the 'async stuff' is async, it won't wait for it to finish. <-- In fact, if you set breakpoints inside the async block, you would see execution jump to the closing }, meaning it doesn't wait for the background work; it just runs to the end of the block, since that work is no longer on the calling thread.
Then, since it was a sync dispatch, it will wait until item itself is done. Once done, it goes through its notify... and now here's where it gets tricky: depending on how fast the async work is, 'print success' will get called either before or after "queue2", though here obviously queue2 is returning/finishing sooner.
similar to what you see in this answer.
*: A mother (the main queue) tells child1 to go to its room and bring book1 back, then tells child2 to go to its room and bring book2 back, then tells child3 to go to its room and bring book3 back. Each child is run from its own queue (not the mother's queue).
The mother doesn't wait for child1 to come back before she can tell child2 to go. The mother only says: child1 go... then child2 go... then child3 go.
However, child2 is not told to go before child1, nor child3 before child2 <-- this is because of the serial nature of the main queue. They get dispatched serially, but their completion order depends on how fast each child/queue finishes.

Code with a potential deadlock

// down = acquire the resource
// up = release the resource
typedef int semaphore;
semaphore resource_1;
semaphore resource_2;
void process_A(void) {
    down(&resource_1);
    down(&resource_2);
    use_both_resources();
    up(&resource_2);
    up(&resource_1);
}
void process_B(void) {
    down(&resource_2);
    down(&resource_1);
    use_both_resources();
    up(&resource_1);
    up(&resource_2);
}
Why does this code cause deadlock?
If we change the code of process_B where the both processes ask for the resources in the same order as:
void process_B(void) {
    down(&resource_1);
    down(&resource_2);
    use_both_resources();
    up(&resource_2);
    up(&resource_1);
}
Then there is no deadlock.
Why so?
Imagine that process A is running and tries to get resource_1, and gets it.
Now process B takes control and tries to get resource_2. And gets it. Now process B tries to get resource_1 and does not get it, because it belongs to process A. Then process B goes to sleep.
Process A gets control again and tries to get resource_2, but it belongs to process B. Now it goes to sleep too.
At this point, process A is waiting for resource_2 and process B is waiting for resource_1.
If you change the order, process B will never lock resource_2 unless it got resource_1 first, and the same holds for process A.
They can never be deadlocked.
A necessary condition for a deadlock is a cycle of resource acquisitions. The first example constructs such a cycle: process A holds resource_1 and waits for resource_2, while process B holds resource_2 and waits for resource_1. The second example acquires the resources in a fixed global order, which makes a cycle, and hence a deadlock, impossible.
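The same ordering rule carries over to GCD semaphores in Swift. A minimal sketch, assuming two binary semaphores; because both tasks acquire in the same fixed order, no cycle, and hence no deadlock, can form:

import Foundation

// Binary semaphores guarding two resources, mirroring the C example.
let resource1 = DispatchSemaphore(value: 1)
let resource2 = DispatchSemaphore(value: 1)
let group = DispatchGroup()

// Both tasks acquire in the same fixed order (1 then 2), so no cycle can form.
func useBothResources(taskName: String) {
    resource1.wait()     // down(&resource_1)
    resource2.wait()     // down(&resource_2)
    print("\(taskName) using both resources")
    resource2.signal()   // up(&resource_2)
    resource1.signal()   // up(&resource_1)
}

for name in ["A", "B"] {
    DispatchQueue.global().async(group: group) {
        for _ in 0..<1000 { useBothResources(taskName: name) }
    }
}

group.wait()   // if one task acquired in the reverse order, this could hang forever
print("finished without deadlock")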