I receive a lot of data and want to process it off the main thread. Most of the things can be asynchronous in nature, however sometimes there is critical data coming in with each update invalidating the previous iteration. At the moment I set up everything like this:
func onReceivedData(d: Data) { // is on main thread unfortunately
    // some work 1
    DispatchQueue.global(qos: .userInitiated).async {
        // some work 2
        if saveToFile {
            DispatchQueue.global(qos: .utility).async {
                // UserDefaults write
            }
        }
    }
}
Now I am not sure if I have understood DispatchQueue correctly. If data A comes in before data B, is it possible that data B reaches the UserDefaults write earlier than data A? Do dispatch queues execute submitted work items serially, or can they also run in parallel, potentially writing data B before data A and leaving UserDefaults in an invalid state?
From the Apple Developer documentation on DispatchQueue.global():
Tasks submitted to the returned queue are scheduled concurrently with respect to one another.
This means that the global() queues are concurrent dispatch queues.
If you want to ensure that the tasks execute in the order that you schedule them, you should create your own serial DispatchQueue.
let queue = DispatchQueue(label: "my-queue")

queue.async {
    // Work 1...
}

queue.async {
    // Work 2...
}
"Work 1" will always run before "Work 2" in this serial queue (which may not be the case with a concurrent queue).
Alternatively, you could look into using a Swift actor which can more efficiently control access to shared mutable state in a memory-safe way.
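To make the actor suggestion concrete, here is a minimal sketch; the `Storage` actor and its method names are hypothetical, chosen only to mirror the question's "save to UserDefaults" scenario. An actor serializes access to its mutable state, so concurrent callers cannot race each other.

```swift
import Foundation

// A minimal sketch: an actor protects its state, so writes are
// data-race-free without a manually managed queue or lock.
// `Storage`, `save(_:)`, and `latest()` are illustrative names.
actor Storage {
    private var lastValue: Data?

    func save(_ data: Data) {
        lastValue = data
        // e.g. persist to UserDefaults / disk here
    }

    func latest() -> Data? {
        lastValue
    }
}

// Callers hop into the actor with `await`:
// Task {
//     await storage.save(incomingData)
// }
```

Note that actor isolation guarantees mutual exclusion, not strict FIFO ordering of awaiting callers, so if you need "A always processed before B", a serial queue (or a single task consuming an `AsyncStream`) may still be the better fit.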
Related
Could anyone provide advice on how to lock between threads in Swift? Specifically, I have code that separates the model from the view. The model processes adds, updates, and deletes separately before committing for the view to access. I have code that runs on the background thread to keep the main thread nice and snappy. The sample code below is similar to what I've implemented, and it works. However, I worry that it's overly complicated with the DispatchQueue and locks. I don't know of a better way to lock between threads, and this seems to work, but I'm sure someone smarter than I can show a more elegant solution? Any advice is greatly appreciated.
class MyClass {
    private let model = MyDataModel()
    private let syncQueue = DispatchQueue(label: "MyClass")
    private let lock = NSLock()

    /**
     This function can be called from several different places on several different threads.
     */
    func processAdds() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        syncQueue.sync {
            self.lock.lock() // Is this overkill?
            // Modify the model.
            self.model.calculatePendingAdds()
            // Commit the model.
            self.model.commit()
            // Do some long running stuff with the committed data.
            self.model.doStuff()
            DispatchQueue.main.async {
                self.updateTheUI() // Must be done on the main thread but we don't want another background thread sneaking in and modifying the model.
                // Only release the lock when this main thread async block is finished.
                self.lock.unlock() // I think this is not overkill because I can't exit the syncQueue before the main thread is finished updating the UI.
            }
        }
    }

    /**
     This function can be called from several different places on several different threads.
     */
    func processDeletes() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        syncQueue.sync {
            self.lock.lock() // Is this overkill?
            // Modify the model.
            self.model.calculatePendingDeletes()
            // Commit the model.
            self.model.commit()
            // Do some long running stuff with the committed data.
            self.model.doStuff()
            DispatchQueue.main.async {
                self.updateTheUI() // Must be done on the main thread but we don't want another background thread sneaking in and modifying the model.
                // Only release the lock when this main thread async block is finished.
                self.lock.unlock() // I think this is not overkill because I can't exit the syncQueue before the main thread is finished updating the UI.
            }
        }
    }
}
After digging around and experimenting, I realized that using a custom dispatch queue together with a lock was indeed overkill. The dispatch queue already acts like a lock, so pairing it with one was redundant. I really just needed a lock that could be unlocked on a different thread, so I figured a mutex was better. I got rid of the custom queue and lock and replaced them with this:
class MyClass {
    private let model = MyDataModel()
    private var mutex = pthread_mutex_t()

    init() {
        // A pthread mutex must be initialized before first use.
        pthread_mutex_init(&mutex, nil)
    }

    deinit {
        pthread_mutex_destroy(&mutex)
    }

    /**
     This function can be called from several different places on several different threads.
     */
    func processAdds() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        pthread_mutex_lock(&self.mutex)
        // Modify the model.
        self.model.calculatePendingAdds()
        // Commit the model.
        self.model.commit()
        // Do some long running stuff with the committed data.
        self.model.doStuff()
        DispatchQueue.main.async {
            self.updateTheUI() // Must be done on the main thread but we don't want another background thread sneaking in and modifying the model.
            // Only release the lock when this main thread async block is finished.
            pthread_mutex_unlock(&self.mutex)
        }
    }

    /**
     This function can be called from several different places on several different threads.
     */
    func processDeletes() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        pthread_mutex_lock(&self.mutex)
        // Modify the model.
        self.model.calculatePendingDeletes()
        // Commit the model.
        self.model.commit()
        // Do some long running stuff with the committed data.
        self.model.doStuff()
        DispatchQueue.main.async {
            self.updateTheUI() // Must be done on the main thread but we don't want another background thread sneaking in and modifying the model.
            // Only release the lock when this main thread async block is finished.
            pthread_mutex_unlock(&self.mutex)
        }
    }
}
Seems simple and works.
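One caveat worth noting: per the POSIX specification, unlocking a default pthread mutex from a thread other than the one that locked it is undefined behavior, so the pattern above is technically unsafe. A binary `DispatchSemaphore` may legally be signaled from any thread, which makes it a better fit for "lock here, release after the main-thread UI update". Below is a hedged sketch; `ModelGate` and its method names are hypothetical, not the poster's code.

```swift
import Foundation

// Sketch: a binary DispatchSemaphore used as a cross-thread lock.
// Unlike a pthread mutex, it may be released from any thread.
final class ModelGate {
    private let gate = DispatchSemaphore(value: 1) // binary: one holder at a time
    private let completionQueue: DispatchQueue

    init(completionQueue: DispatchQueue = .main) {
        self.completionQueue = completionQueue
    }

    func withExclusiveAccess(_ work: () -> Void, completion: @escaping () -> Void) {
        gate.wait()                  // acquire on the calling thread
        work()                       // mutate the model exclusively
        completionQueue.async {
            completion()             // e.g. update the UI on the main queue
            self.gate.signal()       // release from a *different* thread: allowed
        }
    }
}
```

The completion queue defaults to `.main` to mirror the original UI-update pattern, but is injectable so the type can be exercised off the main run loop.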
How can I lock a variable and prevent different threads from changing it at the same time, which leads to errors?
I tried using
func lock(obj: AnyObject, blk: () -> ()) {
    objc_sync_enter(obj)
    blk()
    objc_sync_exit(obj)
}
but I still have multithreading issues.
Shared Value
If you have a shared value that you want to access in a thread safe way like this
var list: [Int] = []
DispatchQueue
You can create your own serial DispatchQueue.
let serialQueue = DispatchQueue(label: "SerialQueue")
Dispatch Synch
Now different threads can safely access list, you just need to write the code into a closure dispatched to your serial queue.
serialQueue.sync {
    // update list <---
}
// This line will always run AFTER the closure above.
Since the serial queue executes the closures one at a time, the access to list will be safe.
Please note that the previous code will block the current thread until the closure is executed.
Dispatch Asynch
If you don't want to block the current thread until the closure is processed by the serial queue, you can dispatch the closure asynchronously
serialQueue.async {
    // update list <---
}
// This line can run BEFORE the closure above.
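Putting the two snippets above together, here is a self-contained sketch of the pattern; the `SafeList` type is illustrative, not from the original answer. Writes are dispatched asynchronously (callers never block), while reads use `sync` so the caller gets a consistent value back.

```swift
import Foundation

// Minimal sketch: every read and write of `list` goes through one
// serial queue, so accesses can never overlap.
final class SafeList {
    private var list: [Int] = []
    private let serialQueue = DispatchQueue(label: "SerialQueue")

    func append(_ value: Int) {
        serialQueue.async { self.list.append(value) } // non-blocking write
    }

    var snapshot: [Int] {
        serialQueue.sync { list }                     // blocking, consistent read
    }
}
```

Because a serial queue is FIFO, a `snapshot` requested after a series of `append` calls from the same thread will observe all of them.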
Swift's concurrency support isn't there yet. It sounds like it might be developed a bit in Swift 5. An excellent article is Matt Gallagher's Mutexes and Closure Capture in Swift, which looks at various solutions but recommends pthread_mutex_t. The choice of approach depends on other aspects of what you're writing - there's much to consider with threading.
Could you provide a specific simple example that's failing you?
I would like to perform some code synchronously in the background, I really thought this is the way to go:
let queue = DispatchQueue.global(qos: .default)

queue.sync {
    print("\(Thread.isMainThread)")
}
but this prints true unless I use queue.async. Async isn't possible, as the code would then be executed in parallel. How can I achieve running multiple blocks synchronously in the background?
What I would like to achieve: synchronize events in my app with the devices calendar, which happens in the background. The method which does this can be called from different places multiple times so I would like to keep this in order and in the background.
Async execution isn't your problem, since you only care about the order of execution of your code blocks relative to each other but not relative to the main thread. You shouldn't block the main thread, which is in fact DispatchQueue.main and not DispatchQueue.global.
What you should do is execute your code on a serial queue asynchronously, so you don't block the main thread, but you still ensure that your code blocks execute sequentially.
You can achieve this using the following piece of code:
let serialQueue = DispatchQueue(label: "serialQueue")

serialQueue.async { // call this whenever you need to add a new work item to your queue
    // call function here
}
A DispatchQueue is not the same as a thread. Think of it as an abstraction over a thread pool.
That being said, the main queue is indeed "fixed" to the main thread. That is why, when you synchronously dispatch a work item from the main queue, you are still on the main thread.
To actually execute sync code in the background, you have to already be in the background:
DispatchQueue.global().async {
    DispatchQueue.global().sync {
        print("\(Thread.isMainThread)")
    }
}
This will print false.
Also, as rmaddy correctly pointed out in the comments, doing any expensive task synchronously from the main queue can make your program unresponsive, since the main thread is responsible for UI updates.
I've been using NSLocks to synchronize touchy parts of code, but have been running into issues due to the fact that they must be unlocked from the same thread that they were locked from. Then I found that GCD's DispatchSemaphores seem to do the same thing, with the added convenience that they can be signaled from any thread. I was wondering, though, if this convenience comes at the price of thread-safety. Is it advisable to replace
let lock = NSLock()
lock.lock()
// do things...
lock.unlock()
with
let semaphore = DispatchSemaphore(value: 1)
semaphore.wait()
// do things...
semaphore.signal()
or will I run into issues regarding thread-safety anyway?
Yes, they serve the same purpose; both deal with the producer-consumer problem.
A semaphore can allow more than one thread to access a shared resource if it is configured accordingly. You can execute blocks like this on the same concurrent DispatchQueue:
{
    semaphore.wait()
    // do things...
    semaphore.signal()
}
The same applies to a lock if you only want one thread to touch the resource at a time in a concurrent context.
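To illustrate the "more than one thread" configuration, here is a sketch of a counting semaphore created with value 2, so at most two workers are inside the critical section at once. The `active`/`peak` counters are illustrative instrumentation (protected by their own serial queue), not part of any API.

```swift
import Foundation

// Sketch: DispatchSemaphore(value: 2) admits at most two threads at a time.
let semaphore = DispatchSemaphore(value: 2)
let group = DispatchGroup()
let counterQueue = DispatchQueue(label: "counter") // protects the counters below
var active = 0
var peak = 0

for _ in 0..<10 {
    DispatchQueue.global().async(group: group) {
        semaphore.wait()            // blocks while two workers are inside
        counterQueue.sync {
            active += 1
            peak = max(peak, active)
        }
        // ... touch the shared resource ...
        counterQueue.sync { active -= 1 }
        semaphore.signal()
    }
}

group.wait()
// peak is now at most 2, even though 10 tasks were submitted concurrently
```

With `value: 1` this degenerates to a binary semaphore, i.e. the mutual-exclusion behavior discussed above.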
I found this to be helpful: https://priteshrnandgaonkar.github.io/concurrency-with-swift-3/
Since asking this, I have mostly switched over to another way of locking blocks of code: serial dispatch queues. I use it like this:
let queue = DispatchQueue(label: "<your label here>")

queue.async {
    // do things...
}
The queue is serial by default, meaning it acts as a lock that releases when the block is exited. Therefore, it's not appropriate if you need to lock on an asynchronous operation, but it works a charm in most cases.
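When the caller needs a result back, dispatching with `sync` to the same serial queue acts like a scoped lock that returns a value. A small sketch, with the hypothetical helper `locked(_:)` standing in for a `synchronized`-style block:

```swift
import Foundation

// Sketch: a serial queue as a scoped, value-returning lock.
// `Counter` and `locked(_:)` are illustrative names.
final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "counter.lock")

    // Runs `body` with exclusive access and hands its result back.
    private func locked<T>(_ body: () -> T) -> T {
        queue.sync(execute: body)
    }

    func increment() -> Int {
        locked {
            value += 1
            return value
        }
    }
}
```

Because `queue.sync` waits for the closure to finish, the read-modify-write inside `increment()` is atomic with respect to other callers.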
I am using swift 3.0 running under iOS 10.0 and I want to craft some code that fires when a batch condition is met.
for i in 0 ..< rex {
    // async code, disappears and does its stuff
}
Imagine the async code is a collection of URL requests that basically go to the background as soon as I loop through them. Now how can I fire off more code when all "rex" requests have completed?
I thought of setting up a timer to watch and check every second, but surely that's not a good solution.
I thought of kicking off another thread to simply watch the data being collected and fire when its quota is full, but that's really worse than the timer.
I am thinking of including a test at the end of each URL request to see if it was the last to complete and then using NotificationCenter, but is this the optimal solution?
While OperationQueue (aka NSOperationQueue) is a good choice in many cases, it's not suitable for your use case. The problem is that URL requests are called asynchronously. Your NSOperation will finish before you get a response from the webservice.
Use DispatchGroup instead
let group = DispatchGroup()

// We need to dispatch to a background queue because we have
// to wait for the response from the webservice
DispatchQueue.global(qos: .utility).async {
    for i in 0 ..< rex {
        group.enter() // signal that you are starting a new task
        URLSession.shared.dataTask(with: urls[i]) { data, response, error in
            // handle your response
            // ...
            group.leave() // signal that you are done with the task
        }.resume()
    }
    group.wait() // don't ever call wait() on the main queue
    // Now all requests are complete
}
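If blocking a thread with `group.wait()` is undesirable, `group.notify(queue:)` schedules a completion block once every `enter()` has been balanced by a `leave()`, without blocking anything. In this sketch the URL requests are replaced by stand-in background tasks so the example is self-contained; in the answer's scenario you would put `group.leave()` in each data-task completion handler exactly as above.

```swift
import Foundation

// Sketch: DispatchGroup.notify runs a completion once all entered
// tasks have left, without blocking any thread (unlike group.wait()).
let group = DispatchGroup()
let resultQueue = DispatchQueue(label: "results") // protects `results`
var results: [Int] = []

for i in 0..<5 {
    group.enter() // starting a new task
    DispatchQueue.global(qos: .utility).async {
        // stand-in for a URL request completing later
        resultQueue.sync { results.append(i) }
        group.leave() // done with the task
    }
}

group.notify(queue: resultQueue) {
    // All five "requests" are complete here; nothing was blocked waiting.
    print("all done, got \(results.count) results")
}
```

`notify` is usually the better fit when the follow-up work is a UI update, since you can pass `.main` as the queue.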
So I'm pretty sure what you want can be found here. Basically you want to use GCD and have a completion closure. It's one line of code, which always makes me giggle. A longer post on the topic is here.
What you're looking for is NSOperationQueue (or OperationQueue in Swift 3). Here's a Swift tutorial (might be a bit out of date). Here's Apple's documentation on it -- in Swift 3 they drop all the NS prefixes, so it's OperationQueue / Operation.
Basically you should add each of your URL tasks as an Operation to an OperationQueue, and have a "done" Operation with each of your URL tasks as a dependency, and add it to the queue. Then as soon as all your URL tasks are done, it will call your done operation, which you can set up to do whatever you want.
You will probably need to subclass Operation so you can update the isExecuting and isFinished properties properly. This question may be of some help here.
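A minimal sketch of such an Operation subclass follows. The name `AsyncOperation` and the simple boolean storage are illustrative; the essential part is the KVO notifications around `isExecuting`/`isFinished`, which OperationQueue relies on to know when an asynchronous operation is really done.

```swift
import Foundation

// Sketch: an Operation whose work completes asynchronously.
// Subclasses override main() and call finish() from their
// completion handler when the async work is done.
class AsyncOperation: Operation {
    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { true }
    override var isExecuting: Bool { _executing }
    override var isFinished: Bool { _finished }

    override func start() {
        guard !isCancelled else { finish(); return }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")
        main() // kick off the async work; do NOT call super.start()
    }

    // Call this when the asynchronous work has completed.
    func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}
```

A URL-request subclass would call `resume()` on its data task in `main()` and `finish()` at the end of the task's completion handler; the "done" operation with dependencies on all request operations then fires only after every handler has run. (A production version would also guard the state booleans with a lock.)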