Could anyone provide advice on how to lock between threads in Swift? Specifically, I have code that separates the model from the view. The model processes adds, updates, and deletes separately before committing for the view to access, and the work runs on a background thread to keep the main thread nice and snappy. The sample code below is similar to what I’ve implemented, and it works. However, I worry that the combination of a DispatchQueue and a lock is overly complicated. I don’t know of a better way to lock between threads, and this seems to work, but I’m sure someone smarter than I am can show a more elegant solution? Any advice is greatly appreciated.
class MyClass {
    private let model = MyDataModel()
    private let syncQueue = DispatchQueue(label: "MyClass")
    private let lock = NSLock()

    /**
     This function can be called from several different places on several different threads.
     */
    func processAdds() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        syncQueue.sync {
            self.lock.lock() // Is this overkill?
            // Modify the model.
            self.model.calculatePendingAdds()
            // Commit the model.
            self.model.commit()
            // Do some long-running stuff with the committed data.
            self.model.doStuff()
            DispatchQueue.main.async {
                self.updateTheUI() // Must be done on the main thread, but we don't want another background thread sneaking in and modifying the model.
                // Only release the lock when this main-thread async block is finished.
                self.lock.unlock() // I think this is not overkill because I can't exit the syncQueue before the main thread is finished updating the UI.
            }
        }
    }

    /**
     This function can be called from several different places on several different threads.
     */
    func processDeletes() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        syncQueue.sync {
            self.lock.lock() // Is this overkill?
            // Modify the model.
            self.model.calculatePendingDeletes()
            // Commit the model.
            self.model.commit()
            // Do some long-running stuff with the committed data.
            self.model.doStuff()
            DispatchQueue.main.async {
                self.updateTheUI() // Must be done on the main thread, but we don't want another background thread sneaking in and modifying the model.
                // Only release the lock when this main-thread async block is finished.
                self.lock.unlock() // I think this is not overkill because I can't exit the syncQueue before the main thread is finished updating the UI.
            }
        }
    }
}
After digging around and experimenting, I realized that using a custom dispatch queue together with a lock was indeed overkill. A serial dispatch queue already acts like a lock, so pairing it with a lock was redundant. What I really needed was a lock that could be unlocked from a different thread, so I figured a mutex was the better fit. I got rid of the custom queue and the NSLock and replaced them with this:
class MyClass {
    private let model = MyDataModel()
    private var mutex = pthread_mutex_t() // must be `var` so it can be passed inout

    init() {
        pthread_mutex_init(&mutex, nil) // a pthread mutex must be explicitly initialized
    }

    deinit {
        pthread_mutex_destroy(&mutex)
    }

    /**
     This function can be called from several different places on several different threads.
     */
    func processAdds() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        pthread_mutex_lock(&mutex)
        // Modify the model.
        self.model.calculatePendingAdds()
        // Commit the model.
        self.model.commit()
        // Do some long-running stuff with the committed data.
        self.model.doStuff()
        DispatchQueue.main.async {
            self.updateTheUI() // Must be done on the main thread, but we don't want another background thread sneaking in and modifying the model.
            // Only release the mutex when this main-thread async block is finished.
            pthread_mutex_unlock(&self.mutex)
        }
    }

    /**
     This function can be called from several different places on several different threads.
     */
    func processDeletes() {
        assert(!Thread.isMainThread)
        // Ensure no other thread sneaks in and modifies the model while we're working.
        pthread_mutex_lock(&mutex)
        // Modify the model.
        self.model.calculatePendingDeletes()
        // Commit the model.
        self.model.commit()
        // Do some long-running stuff with the committed data.
        self.model.doStuff()
        DispatchQueue.main.async {
            self.updateTheUI() // Must be done on the main thread, but we don't want another background thread sneaking in and modifying the model.
            // Only release the mutex when this main-thread async block is finished.
            pthread_mutex_unlock(&self.mutex)
        }
    }
}
Seems simple and works.
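One caveat worth noting: POSIX leaves unlocking a default mutex from a thread other than the one that locked it undefined, so a binary DispatchSemaphore is the tool that explicitly permits cross-thread release. Below is a minimal sketch of the same "acquire here, release on another thread" pattern using a semaphore; the ModelGuard type, job names, and logging are illustrative inventions, not part of the code above:

```swift
import Foundation
import Dispatch

final class ModelGuard {
    private let semaphore = DispatchSemaphore(value: 1) // value 1 = acts as a lock
    private(set) var log: [String] = []

    func process(_ name: String, completion: @escaping () -> Void) {
        semaphore.wait() // acquire; blocks until no other update is in flight
        log.append("\(name) start")
        // Hand the tail of the work to another queue, releasing only when it finishes.
        DispatchQueue.global().async {
            self.log.append("\(name) end")
            self.semaphore.signal() // release from a *different* thread: fine for semaphores
            completion()
        }
    }
}

let guardObject = ModelGuard()
let finished = DispatchSemaphore(value: 0)
let workers = DispatchQueue(label: "workers", attributes: .concurrent)
for i in 0..<3 {
    workers.async {
        guardObject.process("job\(i)") { finished.signal() }
    }
}
for _ in 0..<3 { finished.wait() }
// Because the semaphore is held from "start" through "end", entries pair up:
print(guardObject.log.count) // 6
```

All mutations of `log` happen while the semaphore is held, so each job's "start" entry is immediately followed by its own "end" entry, just as the mutex version intends.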
Once upon a time, before async/await arrived, we used to make a simple request to the server with a URLSession data task. The callback was not automatically called on the main thread, and we had to dispatch to the main thread manually in order to perform UI work. Example:
DispatchQueue.main.async {
    // UI work
}
Omitting this would cause the app to crash, since we would be trying to update the UI on a queue other than the main one.
Now with async/await things have gotten easier. We still have to dispatch to the main queue, using MainActor:
await MainActor.run {
    // UI work
}
The weird thing is that even when I don't use MainActor, the code inside my Task seems to run on the main thread, and updating the UI seems to be safe.
Task {
    let api = API(apiConfig: apiConfig)
    do {
        let posts = try await api.getPosts() // Checked this; the code of getPosts runs on another thread.
        self.posts = posts
        self.tableView.reloadData()
        print(Thread.current.description)
    } catch {
        // Handle error
    }
}
I was expecting my code to crash, since I am theoretically updating the table view from somewhere other than the main thread, but the log says I am on the main thread. The print logs the following:
<_NSMainThread: 0x600003bb02c0>{number = 1, name = main}
Does this mean there is no need to check which queue we are in before performing UI stuff?
Regarding Task {…}, that will “create an unstructured task that runs on the current actor” (see Swift Concurrency: Unstructured Concurrency). That is a great way to launch an asynchronous task from a synchronous context. And, if called from the main actor, this Task will also be on the main actor.
In your case, I would move the model update and UI refresh to a function that is marked as running on the main actor:
@MainActor
func update(with posts: [Post]) {
    self.posts = posts
    tableView.reloadData()
}
Then you can do:
Task {
    let api = API(apiConfig: apiConfig)
    do {
        let posts = try await api.getPosts() // Checked this; the code of getPosts runs on another thread.
        self.update(with: posts)
    } catch {
        // Handle error
    }
}
And the beauty of it is that if you’re not already on the main actor, the compiler will tell you that you have to await the update method; either way, the compiler tells you whether an await is needed.
If you haven’t seen it, I might suggest watching the WWDC 2021 video Swift concurrency: Update a sample app. It offers lots of practical tips about converting code to Swift concurrency. Specifically, at 24:16 they walk through the evolution from DispatchQueue.main.async {…} to Swift concurrency: they initially suggest the intuitive MainActor.run {…} step, but over the next few minutes show why even that is unnecessary, and also discuss the rare scenarios where you might still want that function.
As an aside, in Swift concurrency, inspecting Thread.current is not reliable. For this reason, the practice is likely to be prohibited in a future compiler release.
If you watch WWDC 2021 Swift concurrency: Behind the scenes, you will get a glimpse of the sorts of mechanisms underpinning Swift concurrency and you will better understand why looking at Thread.current might lead to all sorts of incorrect conclusions.
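To make the "Task runs on the current actor" point concrete, here is a small sketch (the ViewModel type and its posts property are made up for illustration): a Task created from main-actor code inherits that isolation and can touch main-actor state with no await, whereas Task.detached does not inherit it and must hop back explicitly.

```swift
import Foundation

@MainActor
final class ViewModel {
    var posts: [String] = []

    func refresh() async {
        // A Task created on the main actor inherits it, so the closure can
        // mutate `posts` directly, with no await:
        let inherited = Task { self.posts = ["a", "b"] }
        await inherited.value

        // Task.detached does NOT inherit the actor; it must hop back explicitly:
        let detached = Task.detached {
            await MainActor.run { self.posts.append("c") }
        }
        await detached.value
    }
}

@main
struct Demo {
    static func main() async {
        let viewModel = await ViewModel()
        await viewModel.refresh()
        print(await viewModel.posts) // ["a", "b", "c"]
    }
}
```

Because each task is awaited in sequence, the final order of the array is deterministic here; in real code the two tasks would typically race.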
I receive a lot of data and want to process it off the main thread. Most of the work can be asynchronous in nature; however, sometimes critical data comes in, with each update invalidating the previous iteration. At the moment I have set everything up like this:
func onReceivedData(d: Data) { // is on the main thread, unfortunately
    // some work 1
    DispatchQueue.global(qos: .userInitiated).async {
        // some work 2
        if saveToFile {
            DispatchQueue.global(qos: .utility).async {
                // UserDefaults write
            }
        }
    }
}
Now I am not sure whether I have understood DispatchQueue correctly. Is it possible, when data A comes in before data B, that data B may for some reason reach the UserDefaults write earlier than data A? Do dispatch queues execute operations serially, or can they also operate in parallel, potentially writing data B before data A and leaving UserDefaults in an invalid state?
From the Apple Developer documentation on DispatchQueue.global():
Tasks submitted to the returned queue are scheduled concurrently with respect to one another.
This means that the global() queues are concurrent dispatch queues.
If you want to ensure that the tasks execute in the order that you schedule them, you should create your own serial DispatchQueue.
let queue = DispatchQueue(label: "my-queue")

queue.async {
    // Work 1...
}
queue.async {
    // Work 2...
}
"Work 1" will always run before "Work 2" in this serial queue (which may not be the case with a concurrent queue).
Alternatively, you could look into using a Swift actor which can more efficiently control access to shared mutable state in a memory-safe way.
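As a sketch of that actor alternative (assuming Swift 5.5+; the EventStore name and its API are made up for illustration): an actor serializes access to its mutable state, so calls awaited in sequence are applied in order without any explicit queue.

```swift
import Foundation

// An actor serializes access to its mutable state; no manual queue is needed.
actor EventStore {
    private(set) var events: [String] = []

    func record(_ event: String) {
        events.append(event)
    }
}

@main
struct Demo {
    static func main() async {
        let store = EventStore()
        // Awaiting each call in turn guarantees "A" is recorded before "B".
        await store.record("A")
        await store.record("B")
        print(await store.events) // ["A", "B"]
    }
}
```

Note that ordering is guaranteed here because a single task awaits each call before making the next one; calls arriving from independent tasks are serialized by the actor but may interleave in any order.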
I've just started learning about GCD, and I'm running into trouble because my code still runs on the main thread even though I created a background queue. This is my code:
import UIKit

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "internalqueue", qos: .background)

    override func viewDidLoad() {
        super.viewDidLoad()

        dispatchFun {
            assert(Thread.isMainThread)
            let x = UIView()
        }
    }

    func dispatchFun(handler: @escaping () -> ()) {
        queue.sync {
            handler()
        }
    }
}
Surprisingly enough (for me), this code doesn't trip the assertion! I would have expected the assertion to fail, because I would expect the code not to run on the main thread. In the debugger I see that, when constructing the x instance, I am in my queue on thread 1 (judging by the label). Strange, because normally I see the main-thread label on thread 1. Is my queue scheduled on the main thread (thread 1)?
When I change sync to async, the assertion fails. That is what I would expect to happen with sync as well. Below is an attached image of the threads when the assertion failed. I would expect to see the exact same debug information when I use sync instead of async.
When reading the sync description in the Swift source, I read the following:
/// As an optimization, `sync(execute:)` invokes the work item on the thread which
/// submitted it, except when the queue is the main queue or
/// a queue targetting it.
Again: except when the queue is the main queue
Why does the sync method on a background dispatch queue cause the code to run on the main thread, while async doesn't? I can clearly read that the sync method on a queue shouldn't be run on the main thread, so why does my code ignore that scenario?
I believe you’re misreading that comment in the header. It’s not a question of whether you’re dispatching from the main queue, but rather if you’re dispatching to the main queue.
So, here is the well known sync optimization where the dispatched block will run on the current thread:
let backgroundQueue = DispatchQueue(label: "internalqueue", attributes: .concurrent)

// We'll dispatch from the main thread _to_ the background queue

func dispatchingToBackgroundQueue() {
    backgroundQueue.sync {
        print(#function, "this sync will run on the current thread, namely the main thread; isMainThread =", Thread.isMainThread)
    }

    backgroundQueue.async {
        print(#function, "but this async will run on the background queue's thread; isMainThread =", Thread.isMainThread)
    }
}
When you use sync, you’re telling GCD “hey, have this thread wait until the other thread runs this block of code”. So, GCD is smart enough to figure out “well, if this thread is going to not do anything while I’m waiting for the block of code to run, I might as well run it here if I can, and save the costly context switch to another thread.”
But in the following scenario, we’re doing something on some background queue and want to dispatch it back to the main queue. In this case, GCD will not do the aforementioned optimization, but rather will always run the task dispatched to the main queue on the main queue:
// but this time, we'll dispatch from the background queue _to_ the main queue

func dispatchingToTheMainQueue() {
    backgroundQueue.async {
        DispatchQueue.main.sync {
            print(#function, "even though it’s sync, this will still run on the main thread; isMainThread =", Thread.isMainThread)
        }

        DispatchQueue.main.async {
            print(#function, "needless to say, this async will run on the main thread; isMainThread =", Thread.isMainThread)
        }
    }
}
It does this because there are certain things that must run on the main queue (such as UI updates), and if you’re dispatching it to the main queue, it will always honor that request, and not try to do any optimization to avoid context switches.
Let’s consider a more practical example of the latter scenario.
func performRequest(_ url: URL) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        DispatchQueue.main.sync {
            // we're guaranteed that this actually will run on the main thread,
            // even though we used `sync`
        }
    }.resume()
}
Now, generally we’d use async when dispatching back to the main queue, but the comment in the sync header documentation is just letting us know that this task dispatched back to the main queue using sync will actually run on the main queue, not on URLSession’s background queue as you might otherwise fear.
Let's consider:
/// As an optimization, `sync(execute:)` invokes the work item on the thread which
/// submitted it, except when the queue is the main queue or
/// a queue targetting it.
You're invoking sync() on your own queue. Is that queue the main queue or targeting the main queue? No, it's not. So, the exception isn't relevant and only this part is:
sync(execute:) invokes the work item on the thread which submitted it
So, the fact that your queue is a background queue doesn't matter. The block is executed by the thread where sync() was called, which is the main thread (which called viewDidLoad(), which called dispatchFun()).
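The behavior both answers describe can be observed directly. This sketch (queue label and variable names are arbitrary) calls sync on a private background queue from a thread we create ourselves, and compares pthread IDs to see whether the block ran on that same submitting thread:

```swift
import Foundation

let queue = DispatchQueue(label: "internalqueue", qos: .background)
let done = DispatchSemaphore(value: 0)
var callerId: pthread_t?
var blockId: pthread_t?

let caller = Thread {
    callerId = pthread_self()
    queue.sync {
        // Per the header comment, sync runs the work item on the submitting
        // thread itself rather than hopping to a queue-owned thread.
        blockId = pthread_self()
    }
    done.signal()
}
caller.start()
done.wait()
print(pthread_equal(callerId!, blockId!) != 0) // true: the documented optimization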
I would like to perform some code synchronously in the background. I really thought this was the way to go:
let queue = DispatchQueue.global(qos: .default)

queue.sync {
    print("\(Thread.isMainThread)")
}
but this prints true unless I use queue.async. async isn't possible, as then the code would be executed in parallel. How can I achieve running multiple blocks synchronously in the background?
What I would like to achieve: synchronizing events in my app with the device's calendar, which happens in the background. The method that does this can be called from different places multiple times, so I would like to keep it in order and in the background.
Asynchronous execution isn't your problem, since you only care about the order in which your code blocks execute relative to each other, not relative to the main thread. What you shouldn't do is block the main thread, which is in fact DispatchQueue.main and not DispatchQueue.global.
What you should do is execute your code asynchronously on a serial queue: that way you don't block the main thread, but you still ensure that your code blocks execute sequentially.
You can achieve this using the following piece of code:
let serialQueue = DispatchQueue(label: "serialQueue")

serialQueue.async { // call this whenever you need to add a new work item to your queue
    // call function here
}
A DispatchQueue is not the same thing as a thread. Think of it as a kind of abstraction over a thread pool.
That being said, main queue is indeed "fixed" on the main thread. And that is why, when you synchronously dispatch a work item from the main queue, you are still on the main thread.
To actually execute sync code in the background, you have to already be in the background:
DispatchQueue.global().async {
    DispatchQueue.global().sync {
        print("\(Thread.isMainThread)")
    }
}
This will print false.
Also, as user @rmaddy correctly pointed out in the comments, doing any expensive task synchronously from the main queue may leave your program unresponsive, since the main thread is responsible for UI updates.
I've been using NSLocks to synchronize touchy parts of code, but I've been running into issues due to the fact that they must be unlocked from the same thread that locked them. Then I found that GCD's DispatchSemaphore seems to do the same thing, with the added convenience that it can be signaled from any thread. I was wondering, though, whether this convenience comes at the price of thread safety. Is it advisable to replace
let lock = NSLock()
lock.lock()
// do things...
lock.unlock()
with
let semaphore = DispatchSemaphore(value: 1)
semaphore.wait()
// do things...
semaphore.signal()
or will I run into issues regarding thread-safety anyway?
Yes, they serve the same purpose; both can be used to deal with the producer–consumer problem.
A semaphore allows more than one thread to access a shared resource if it is configured accordingly, and you can use it to coordinate blocks executing on the same concurrent dispatch queue:
{
    semaphore.wait()
    // do things...
    semaphore.signal()
}
Actually, the same applies to a lock, if you only want one thread at a time to touch the resource in a concurrent context.
I found this to be helpful: https://priteshrnandgaonkar.github.io/concurrency-with-swift-3/
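To illustrate the "more than one thread" point above, here is a sketch (the counts and sleep duration are arbitrary choices for the demo): a semaphore created with a value of 2 admits up to two threads into the critical section at once, which a plain lock cannot do.

```swift
import Foundation

let pool = DispatchSemaphore(value: 2)          // admit at most two workers at a time
let stateQueue = DispatchQueue(label: "state")  // protects the counters below
var active = 0
var peak = 0
let group = DispatchGroup()

for _ in 0..<6 {
    DispatchQueue.global().async(group: group) {
        pool.wait()
        stateQueue.sync { active += 1; peak = max(peak, active) }
        usleep(50_000) // simulate work inside the critical section
        stateQueue.sync { active -= 1 }
        pool.signal()
    }
}
group.wait()
print(peak <= 2) // true: the semaphore never admits more than two at once
```

A DispatchSemaphore(value: 1), by contrast, behaves like the mutual-exclusion lock discussed in the question.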
Since asking this, I have mostly switched over to another way of locking blocks of code: serial dispatch queues. I use it like this:
let queue = DispatchQueue(label: "<your label here>")

queue.async {
    // do things...
}
The queue is serial by default, meaning it acts as a lock that releases when the block exits. Therefore it's not appropriate if you need to hold the lock across an asynchronous operation, but it works like a charm in most cases.