I've been working on some code recently where I want to ensure that certain tasks that run sequentially always execute on the same thread.
As an experiment, to get a feel for how this would work, I created my own thread using this example: Thread Example.
My code works fine in that I call a method on the global queue and then enqueue an operation on my Custom Thread.
func currentQueueName() -> String? {
    let name = __dispatch_queue_get_label(nil)
    return String(cString: name, encoding: .utf8)
}
DispatchQueue.global().async {
    print(self.currentQueueName())
    myThread.enqueue {
        print(self.currentQueueName())
        // prints "com.apple.root.default-qos.overcommit"
        for i in 1...100 {
            print(i)
        }
    }
}
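For reference, here is a minimal sketch of the kind of custom thread with an enqueue method assumed above (the names and implementation are illustrative, not the linked example verbatim):

import Foundation

// A minimal sketch (an assumption, not the linked example verbatim) of a
// thread that owns a queue of blocks and runs every enqueued block itself.
final class WorkerThread: Thread {
    private let condition = NSCondition()
    private var pendingBlocks: [() -> Void] = []

    override func main() {
        while !isCancelled {
            condition.lock()
            while pendingBlocks.isEmpty && !isCancelled {
                condition.wait()
            }
            let blocks = pendingBlocks
            pendingBlocks.removeAll()
            condition.unlock()

            // Every enqueued block executes here, on this one thread.
            blocks.forEach { $0() }
        }
    }

    func enqueue(_ block: @escaping () -> Void) {
        condition.lock()
        pendingBlocks.append(block)
        condition.signal()
        condition.unlock()
    }
}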
Whenever I print out the current queue label during my Custom Thread's execution, it says:
("com.apple.root.default-qos.overcommit")
I don't get any errors or crashes.
1) What exactly does this over commit mean?
2) How have I caused it by using my own thread?
3) Is it dangerous to be seeing this message in production code?
4) If it is dangerous, how can I use my Custom Thread safely?
Update
After reading a post on the Swift forums, I'm beginning to think that the overcommit queue refers to any thread that isn't from DispatchQueue.global().
I'm still not 100% certain though.
Related
Once upon a time, before async/await arrived, we used to make a simple request to the server with URLSession's dataTask. The callback was not automatically called on the main thread, and we had to dispatch manually to the main thread in order to perform UI work. Example:
DispatchQueue.main.async {
    // UI work
}
Omitting this would lead the app to crash, since we would be trying to update the UI on a queue other than the main one.
Now, with async/await, things got easier. We still have to dispatch to the main queue, using MainActor:
await MainActor.run {
    // UI work
}
The weird thing is that even when I don't use the MainActor the code inside my Task seems to run on the main thread and updating the UI seems to be safe.
Task {
    let api = API(apiConfig: apiConfig)
    do {
        let posts = try await api.getPosts() // Checked this and the code of getPosts is running on another thread.
        self.posts = posts
        self.tableView.reloadData()
        print(Thread.current.description)
    } catch {
        // Handle error
    }
}
I was expecting my code to crash, since I am, theoretically, trying to update the table view from off the main thread, but the log says I am on the main thread. The print logs the following:
<_NSMainThread: 0x600003bb02c0>{number = 1, name = main}
Does this mean there is no need to check which queue we are in before performing UI stuff?
Regarding Task {…}, that will “create an unstructured task that runs on the current actor” (see Swift Concurrency: Unstructured Concurrency). That is a great way to launch an asynchronous task from a synchronous context. And, if called from the main actor, this Task will also be on the main actor.
In your case, I would move the model update and UI refresh to a function that is marked as running on the main actor:
@MainActor
func update(with posts: [Post]) {
    self.posts = posts
    tableView.reloadData()
}
Then you can do:
Task {
    let api = API(apiConfig: apiConfig)
    do {
        let posts = try await api.getPosts() // Checked this and the code of getPosts is running on another thread.
        self.update(with: posts)
    } catch {
        // Handle error
    }
}
And the beauty of it is that the compiler will tell you whether you need to await the update method: if you're not already on the main actor, it will insist on the await.
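For instance, here is a sketch of the same call from a detached task (an illustration, not part of the original code): because a detached task is not on the main actor, the compiler now requires the await:

Task.detached {
    let api = API(apiConfig: apiConfig)
    do {
        let posts = try await api.getPosts()
        // A detached task does not run on the main actor, so hopping to the
        // main-actor-isolated update method requires an explicit await here:
        await self.update(with: posts)
    } catch {
        // Handle error
    }
}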
If you haven’t seen it, I might suggest watching the WWDC 2021 video Swift concurrency: Update a sample app. It offers lots of practical tips about converting code to Swift concurrency. In particular, at 24:16 they walk through the evolution from DispatchQueue.main.async {…} to Swift concurrency: they initially suggest the intuitive MainActor.run {…} step, but over the next few minutes show why even that is unnecessary, while also discussing the rare scenario where you might still want that function.
As an aside, in Swift concurrency, looking at Thread.current is not reliable, and the practice is likely going to be prohibited in a future compiler release.
If you watch WWDC 2021 Swift concurrency: Behind the scenes, you will get a glimpse of the sorts of mechanisms underpinning Swift concurrency and you will better understand why looking at Thread.current might lead to all sorts of incorrect conclusions.
I am trying to learn Swift concurrency, but it brings in a lot of confusion. I understood that a Task {} is an asynchronous unit and will allow us to bridge the async function call from a synchronous context. And it is similar to DispatchQueue.global() which in turn will execute the block on some arbitrary thread.
override func viewDidLoad() {
    super.viewDidLoad()

    Task {
        do {
            let data = try await asychronousApiCall()
            print(data)
        } catch {
            print("Request failed with error: \(error)")
        }
    }

    for i in 1...30000 {
        print("Thread \(Thread.current)")
    }
}
My asychronousApiCall function is below:
func asychronousApiCall() async throws -> Data {
    print("starting with asychronousApiCall")
    print("Thread \(Thread.current)")
    let url = URL(string: "https://www.stackoverflow.com")!
    // Use the async variant of URLSession to fetch data
    // Code might suspend here
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}
When I try this implementation, I always see that "starting with asychronousApiCall" is printed after the for loop is done, and the thread is the main thread, like this:
Thread <_NSMainThread: 0x600000f10500>{number = 1, name = main}
You said:
I understood that a Task {} is an asynchronous unit and will allow us to bridge the async function call from a synchronous context.
Yes.
You continue:
And it is similar to DispatchQueue.global() which in turn will execute the block on some arbitrary thread.
No, if you call it from the main actor, it is more akin to DispatchQueue.main.async { … }. As the documentation says, it “[r]uns the given nonthrowing operation asynchronously as part of a new top-level task on behalf of the current actor” [emphasis added]. I.e., if you are currently on the main actor, the task will be run on behalf of the main actor, too.
While it is probably a mistake to dwell on direct GCD-to-concurrency mappings, Task.detached { … } is more comparable to DispatchQueue.global().async { … }.
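To make the distinction concrete, here is a sketch contrasting the two (the exact threads are chosen by the runtime, and inspecting Thread.current here is purely for illustration, per the caveat below):

override func viewDidLoad() {
    super.viewDidLoad()

    Task {
        // Runs on behalf of the current actor (the main actor here), so before
        // any suspension point this prints the main thread.
        print("Task:", Thread.current)
    }

    Task.detached {
        // Runs independently of the current actor, closer in spirit to
        // DispatchQueue.global().async { … }; typically a cooperative-pool thread.
        print("Task.detached:", Thread.current)
    }
}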
You commented:
Please scroll to figure 8 at the end of the article. It has a normal Task, and the thread that is printed is some other thread.
figure 8
In that screen snapshot, they are showing that prior to the suspension point (i.e., before the await) it was on the main thread (which makes sense, because it is running it on behalf of the same actor). But they are also highlighting that after the suspension point, it was on another thread (which might seem counterintuitive, but it is what can happen after a suspension point). This is very common behavior in Swift concurrency, though it can vary.
FWIW, in your example above, you only examine the thread before the suspension point and not after. The take-home message of figure 8 is that the thread used after the suspension point may not be the same one used before the suspension point.
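For illustration only (a sketch; see the caveat below about inspecting Thread.current), logging on both sides of the await in the function above would make that visible:

func asychronousApiCall() async throws -> Data {
    print("before await: \(Thread.current)")  // the main thread, when launched from Task {} on the main actor
    let url = URL(string: "https://www.stackoverflow.com")!
    let (data, _) = try await URLSession.shared.data(from: url)
    print("after await: \(Thread.current)")   // may well be a different, cooperative-pool thread
    return data
}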
If you are interested in learning more about some of these implementation details, I might suggest watching WWDC 2021 video Swift concurrency: Behind the scenes.
While it is interesting to look at Thread.current, it should be noted that Apple is trying to wean us off of this practice. E.g., in Swift 5.7, if we look at Thread.current from an asynchronous context, we get a warning:
Class property 'current' is unavailable from asynchronous contexts; Thread.current cannot be used from async contexts.; this is an error in Swift 6
The whole idea of Swift concurrency is that we stop thinking in terms of threads and instead let Swift concurrency choose the appropriate thread on our behalf (which cleverly avoids costly context switches where it can, sometimes resulting in code that runs on threads other than what we might otherwise expect).
In Swift, if Thread.current.isMainThread == false, then is it safe to DispatchQueue.main.sync recursively once?
The reason I ask is that, in my company's app, we had a crash that turned out to be due to some UI method being called from off the main thread, like:
public extension UIViewController {
    func presentModally(_ viewControllerToPresent: UIViewController, animated flag: Bool, completion: (() -> Void)? = nil) {
        // some code that sets presentation style then:
        present(viewControllerToPresent, animated: flag, completion: completion)
    }
}
Since this was getting called from many places, some of which would sometimes call it from a background thread, we were getting crashes here and there.
Fixing all the call sites was not feasible due to the app being over a million lines of code, so my solution to this was simply to check if we're on the main thread, and if not, then redirect the call to the main thread, like so:
public extension UIViewController {
    func presentModally(_ viewControllerToPresent: UIViewController, animated flag: Bool, completion: (() -> Void)? = nil) {
        guard Thread.current.isMainThread else {
            DispatchQueue.main.sync {
                presentModally(viewControllerToPresent, animated: flag, completion: completion)
            }
            return
        }
        // some code that sets presentation style then:
        present(viewControllerToPresent, animated: flag, completion: completion)
    }
}
The benefits of this approach seem to be:
Preservation of execution order. If the caller is off the main thread, we'll redirect onto the main thread, then execute the same function before we return -- thus preserving the normal execution order that would have happened had the original function been called from the main thread, since functions called on the main thread (or any other thread) execute synchronously by default.
Ability to implicitly reference self without compiler warnings. In Xcode 11.4, performing this call synchronously also satisfies the compiler that it's OK to implicitly retain self, since the dispatch context will be entered then exited before the original function call returns -- so we don't get any new compiler warnings from this approach. That's nice and clean.
More focused diffs via less indentation. It avoids wrapping the entire function body in a closure (like you'd normally see done if DispatchQueue.main.async { ... } was used, where the whole body must now be indented a level deeper, incurring whitespace diffs in your PR that can lead to annoying merge conflicts and make it harder for reviewers to distinguish the salient elements in GitHub's PR diff views).
Meanwhile the alternative, DispatchQueue.main.async, would seem to have the following drawbacks (sketched in the code below):
Potentially changes expected execution order. The function would return before executing the dispatched closure, which in turn means that self could have deallocated before it runs. That means we'd have to explicitly retain self (or weakify it) to avoid a compiler warning. It also means that, in this example, present(...) would not get called before the function would return to the caller. This could cause the modal to pop-up after some other code subsequent to the call site, leading to unintended behavior.
Requirement of either weakifying or explicitly retaining self. This is not really a drawback but it's not as clean, stylistically, as being able to implicitly retain self.
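For comparison, here is a sketch of what the DispatchQueue.main.async variant would look like, illustrating both of the drawbacks above:

public extension UIViewController {
    func presentModally(_ viewControllerToPresent: UIViewController, animated flag: Bool, completion: (() -> Void)? = nil) {
        guard Thread.current.isMainThread else {
            // Returns immediately; the presentation happens later, and we have to
            // decide explicitly whether to weakify or retain self.
            DispatchQueue.main.async { [weak self] in
                self?.presentModally(viewControllerToPresent, animated: flag, completion: completion)
            }
            return
        }
        // some code that sets presentation style then:
        present(viewControllerToPresent, animated: flag, completion: completion)
    }
}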
So the question is: are these assumptions all correct, or am I missing something here?
My colleagues who reviewed the PR seemed to feel that using DispatchQueue.main.sync is somehow inherently bad and risky, and could lead to a deadlock. While I realize that using this from the main thread would indeed deadlock, here we explicitly avoid that by using a guard statement to make sure we're NOT on the main thread first.
Despite being presented with all the above rationale, and despite being unable to explain to me how a deadlock could actually happen given that the dispatch only happens if the function gets called off the main thread to begin with, my colleagues still have deep reservations about this pattern, feeling that it could lead to a deadlock or block the UI in unexpected ways.
Are those fears founded? Or is this pattern perfectly safe?
This pattern is definitely not “perfectly” safe. One can easily contrive a deadlock:
let group = DispatchGroup()

DispatchQueue.global().async(group: group) {
    self.presentModally(controller, animated: true)
}

group.wait()
Checking that isMainThread is false is insufficient, strictly speaking, to know whether it’s safe to dispatch synchronously to the main thread.
But that’s not the real issue. You obviously have some routine somewhere that thinks it’s running on the main thread, when it’s not. Personally, I’d be worried about what else that code did while operating under this misconception (e.g. unsynchronized model updates, etc.).
Your workaround, rather than fixing the root cause of the problem, is just hiding it. As a general rule, I would not suggest coding around bugs introduced elsewhere in the codebase. You really should just figure out where you’re calling this routine from a background thread and resolve that.
In terms of how to find the problem, hopefully the stack trace associated with the crash will tell you. I’d also suggest adding a breakpoint for the Main Thread Checker by clicking on that little arrow next to it in the scheme settings.
Then exercise the app and if it encounters this issue, it will pause execution at the offending line, which can be very useful in tracking down these issues. That often is much easier than reverse-engineering from the stack trace.
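Beyond the Main Thread Checker, one sketch of a complementary approach (an assumption about how you might instrument this, not a required fix) is to assert on the queue at the top of the routine, so offending call sites trap immediately while you're debugging:

import UIKit

public extension UIViewController {
    func presentModally(_ viewControllerToPresent: UIViewController, animated flag: Bool, completion: (() -> Void)? = nil) {
        // Traps if a caller reaches this from off the main queue,
        // making the offending call site easy to spot in the debugger.
        dispatchPrecondition(condition: .onQueue(.main))
        // some code that sets presentation style then:
        present(viewControllerToPresent, animated: flag, completion: completion)
    }
}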
I agree with the comments that you have some structural difficulties with your code.
But there are still times when I need code to run on the main thread and I don't know if I'm already on the main thread or not. This has occurred often enough that I wrote an ExecuteOnMain() function just for this:
dispatch_queue_t MainSequentialQueue( )
{
    static dispatch_queue_t mainQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
#if HAS_MAIN_RUNLOOP
        // If this process has a main thread run loop, queue sequential tasks to run on the main thread
        mainQueue = dispatch_get_main_queue();
#else
        // If the process doesn't execute in a run loop, create a sequential utility thread to perform these tasks
        mainQueue = dispatch_queue_create("main-sequential",DISPATCH_QUEUE_SERIAL);
#endif
    });
    return mainQueue;
}

BOOL IsMainQueue( )
{
#if HAS_MAIN_RUNLOOP
    // Return YES if this code is already executing on the main thread
    return [NSThread isMainThread];
#else
    // Return YES if this code is already executing on the sequential queue, NO otherwise
    return ( MainSequentialQueue() == dispatch_get_current_queue() );
#endif
}

void DispatchOnMain( dispatch_block_t block )
{
    // Shorthand for asynchronously dispatching a block to execute on the main thread
    dispatch_async(MainSequentialQueue(),block);
}

void ExecuteOnMain( dispatch_block_t block )
{
    // Shorthand for synchronously executing a block on the main thread before returning.
    // Unlike dispatch_sync(), this won't deadlock if executed on the main thread.
    if (IsMainQueue())
        // If this is the main thread, execute the block immediately
        block();
    else
        // If this is not the main thread, queue the block to execute on the main queue and wait for it to finish
        dispatch_sync(MainSequentialQueue(),block);
}
A bit late, but I had a need for this type of solution too. I had some common code that could be invoked from both the main thread and background threads, and updated the UI. My solution to the generic use case was:
public extension UIViewController {
    func runOnUiThread(closure: @escaping () -> ()) {
        if Thread.isMainThread {
            closure()
        } else {
            DispatchQueue.main.sync(execute: closure)
        }
    }
}
Then to call it from a UIViewController:
runOnUiThread {
    // code here
}
As others have pointed out, this is not completely safe. You might have some code on background thread that is invoked from the main thread, synchronously. If that background code then calls the code above, it will attempt to run on the main thread and will create a deadlock. The main thread is waiting for the background code to execute, and the background code will wait for the main thread to be free.
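A contrived sketch of that chain (viewController here is just some hypothetical instance): the main thread blocks waiting on background work, and that work then tries to hop back to the main thread synchronously:

let group = DispatchGroup()

// On the main thread: kick off background work and wait for it to finish.
DispatchQueue.global().async(group: group) {
    viewController.runOnUiThread {
        // Never reached: the main thread is stuck in group.wait() below,
        // so the DispatchQueue.main.sync inside runOnUiThread can never be serviced.
    }
}
group.wait()   // the main thread blocks here, completing the deadlock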
So I've been playing about with NetworkExtension to make a toy VPN implementation, and I ran into an issue with the completion handlers/asynchronously running code. I'll run you through my train of thought/experiments and would appreciate any pointers at areas where I am mistaken, and how to resolve this issue!
Here's the smallest reproducible bit of code (obviously you will need to import NetworkExtension):
let semaphore = DispatchSemaphore(value: 0)

NETunnelProviderManager.loadAllFromPreferences { managers, error in
    print("2 during")
    semaphore.signal()
}

print("1 before")
semaphore.wait()
print("3 after")
With my understanding of semaphores and asynchronous code I'd expect the printouts to occur in the order:
1 before
2 during
3 after
However the program hangs at "1 before". If I remove the semaphore.wait() line, the printout occurs as expected in the order: 1, 3, 2 (as the closure runs later).
So after a bit of digging around with the debugger, it looks like the semaphore trap loop is blocking up execution. This sparked me to read around a bit into queues, and I discovered that changing it to the following works:
// ... as before

DispatchQueue.global().async {
    semaphore.wait()
    print("3 after")
}
This makes some sense as the blocking .wait() call is now being called asynchronously in a separate thread. However, this solution is not desired for me as in my actual implementation I am actually capturing the results from the closure and returning them later, in something that looks like this:
let semaphore = DispatchSemaphore(value: 0)
var results: [NETunnelProviderManager]? = nil

NETunnelProviderManager.loadAllFromPreferences { managers, error in
    print("2 during")
    results = managers
    semaphore.signal()
}

print("1 before")
// DispatchQueue.global().async {
semaphore.wait()
print("3 after")
// }

return results
Obviously I cannot return data from the async closure, and moving the return out of it would make it defunct. Additionally, adding another semaphore to make things synchronous exhibits the same issue as before, just moving the problem along in a chain.
As a result, I decided to try putting the .loadAllFromPreferences() call and completion handler in an async closure and leave everything else as in the original code snippet:
// ...

DispatchQueue.global().async {
    NETunnelProviderManager.loadAllFromPreferences { loadedManagers, error in
        print("2 during")
        semaphore.signal()
    }
}

// ...
However this does not work and the .wait() call is never passed - as before. I assume that somehow the semaphore is still blocking the thread and not allowing anything to execute, meaning whatever in the system is managing the queue is not running the async block? However, I'm clutching at straws here, and fear my original conclusion may not have been right.
This is where I'm starting to get out of my depth, so I'd like to know what is actually going on, and what resolution would you recommend to get the results from .loadAllFromPreferences() in a synchronous manner?
Thanks!
From the documentation for NETunnelProviderManager loadAllFromPreferences:
This block will be executed on the caller’s main thread after the load operation is complete
So we know that the completion handler is on the main thread.
We also know that the call to DispatchSemaphore wait will block whatever thread it is running on. Given this evidence, you must be calling all of this code from the main thread. Since your call to wait is blocking the main thread, the completion handler can never be called because the main thread is blocked.
This is made clear by your attempt to call wait on some global background queue. That allows the completion block to be called because your use of wait is no longer blocking the main thread.
And your attempt to call loadAllFromPreferences from a global background queue doesn't change anything because its completion block is still called on the main thread and your call to wait is still on the main thread.
It's a bad idea to block the main thread at all. The proper solution is to refactor whatever method this code is in to use its own completion handler instead of trying to use a normal return value.
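A sketch of that refactor, wrapping loadAllFromPreferences in a method that takes its own completion handler instead of returning a value synchronously (names here are illustrative):

import NetworkExtension

func loadManagers(completion: @escaping ([NETunnelProviderManager]?, Error?) -> Void) {
    NETunnelProviderManager.loadAllFromPreferences { managers, error in
        // Per the documentation, this runs on the caller's main thread,
        // so the caller can safely do UI work in its completion handler.
        completion(managers, error)
    }
}

// Usage:
loadManagers { managers, error in
    print("loaded \(managers?.count ?? 0) manager(s)")
}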
I just started learning about GCD and I am running into trouble because my code is still run on the main thread even though I created a background queue. This is my code:
import UIKit

class ViewController: UIViewController {

    let queue = DispatchQueue(label: "internalqueue", qos: .background)

    override func viewDidLoad() {
        super.viewDidLoad()

        dispatchFun {
            assert(Thread.isMainThread)
            let x = UIView()
        }
    }

    func dispatchFun(handler: @escaping (() -> ())) {
        queue.sync {
            handler()
        }
    }
}
Surprisingly enough (for me), this code doesn't raise any error! I would expect the assertion to fail; I would expect the code not to run on the main thread. In the debugger I see that, when constructing the x instance, I am in my queue on thread 1 (by seeing the label). Strange, because normally I see the main thread label on thread 1. Is my queue scheduled on the main thread (thread 1)?
When I change sync to async, the assertion fails. This is what I would expect to happen with sync as well. Below is an attached image of the threads when the assertion failed. I would expect to see the exact same debug information when I use sync instead of async.
When reading the sync description in the Swift source, I read the following:
/// As an optimization, `sync(execute:)` invokes the work item on the thread which
/// submitted it, except when the queue is the main queue or
/// a queue targetting it.
Again: except when the queue is the main queue
Why does the sync method on a background dispatch queue cause the code to run on the main thread, while async doesn't? I can clearly read that the sync method on a queue shouldn't be run on the main thread, but why does my code ignore that scenario?
I believe you’re misreading that comment in the header. It’s not a question of whether you’re dispatching from the main queue, but rather if you’re dispatching to the main queue.
So, here is the well known sync optimization where the dispatched block will run on the current thread:
let backgroundQueue = DispatchQueue(label: "internalqueue", attributes: .concurrent)

// We'll dispatch from main thread _to_ background queue

func dispatchingToBackgroundQueue() {
    backgroundQueue.sync {
        print(#function, "this sync will run on the current thread, namely the main thread; isMainThread =", Thread.isMainThread)
    }

    backgroundQueue.async {
        print(#function, "but this async will run on the background queue's thread; isMainThread =", Thread.isMainThread)
    }
}
When you use sync, you’re telling GCD “hey, have this thread wait until the other thread runs this block of code”. So, GCD is smart enough to figure out “well, if this thread is going to not do anything while I’m waiting for the block of code to run, I might as well run it here if I can, and save the costly context switch to another thread.”
But in the following scenario, we’re doing something on some background queue and want to dispatch it back to the main queue. In this case, GCD will not do the aforementioned optimization, but rather will always run the task dispatched to the main queue on the main queue:
// but this time, we'll dispatch from background queue _to_ the main queue

func dispatchingToTheMainQueue() {
    backgroundQueue.async {
        DispatchQueue.main.sync {
            print(#function, "even though it’s sync, this will still run on the main thread; isMainThread =", Thread.isMainThread)
        }

        DispatchQueue.main.async {
            print(#function, "needless to say, this async will run on the main thread; isMainThread =", Thread.isMainThread)
        }
    }
}
It does this because there are certain things that must run on the main queue (such as UI updates), and if you’re dispatching it to the main queue, it will always honor that request, and not try to do any optimization to avoid context switches.
Let’s consider a more practical example of the latter scenario.
func performRequest(_ url: URL) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        DispatchQueue.main.sync {
            // we're guaranteed that this actually will run on the main thread
            // even though we used `sync`
        }
    }.resume()   // remember to resume the task so it actually runs
}
Now, generally we’d use async when dispatching back to the main queue, but the comment in the sync header documentation is just letting us know that this task dispatched back to the main queue using sync will actually run on the main queue, not on URLSession’s background queue as you might otherwise fear.
Let's consider:
/// As an optimization, `sync(execute:)` invokes the work item on the thread which
/// submitted it, except when the queue is the main queue or
/// a queue targetting it.
You're invoking sync() on your own queue. Is that queue the main queue or targeting the main queue? No, it's not. So, the exception isn't relevant and only this part is:
sync(execute:) invokes the work item on the thread which submitted it
So, the fact that your queue is a background queue doesn't matter. The block is executed by the thread where sync() was called, which is the main thread (which called viewDidLoad(), which called dispatchFun()).
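To see that thread-versus-queue distinction in the original example, here is a sketch: the block borrows the main thread, yet GCD still considers it to be running on your queue, not the main queue:

let queue = DispatchQueue(label: "internalqueue", qos: .background)

// Called from the main thread (e.g. from viewDidLoad()):
queue.sync {
    // Runs on the thread that submitted it, i.e. the main thread...
    assert(Thread.isMainThread)
    // ...but as far as GCD is concerned, it is executing on `queue`,
    // not on the main queue:
    dispatchPrecondition(condition: .onQueue(queue))
    dispatchPrecondition(condition: .notOnQueue(.main))
}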