Is it ok to use DispatchQueue inside Task? - swift

I'm converting some of my code to Swift concurrency with async/await and Task. One thing I wonder is whether it is OK to use DispatchQueue inside Task instances, like
Task {
    await someHeavyStuff()
    DispatchQueue.main.async {
        someUIThreadStuff()
    }
}
As I understand it, Task and DispatchQueue use slightly different mechanisms to handle asynchronous work, so I am worried that using both could mess up the threading system.
(I know that I can use MainActor.run {} in this case)

You can get away with DispatchQueue.main.async { … }, but you really should retire this pattern. That said, if you have a big, complicated project that you are slowly transitioning to Swift concurrency and do not have the time to clean this up quite yet, then yes, you can get away with this GCD call for now.
But the right solution is simply to mark someUIThreadStuff as @MainActor and retire the DispatchQueue.main.async { … }. It is such a trivial fix, as is MainActor.run { … }. Of everything you might be tackling in the transition to Swift concurrency, this is one of the easiest things to just do right, getting rid of the GCD API.
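A minimal sketch of that fix, using the function names from the question (the surrounding code is assumed):

```swift
// Marking the UI function @MainActor lets the compiler guarantee it
// runs on the main thread, so no manual dispatch is needed.
@MainActor
func someUIThreadStuff() {
    // update views here
}

func someHeavyStuff() async {
    // long-running asynchronous work here
}

func refresh() {
    Task {
        await someHeavyStuff()
        await someUIThreadStuff() // hops to the main actor automatically
    }
}
```

The `await` at the call site is the only visible change: the runtime handles the switch to the main actor for you.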
Where you have to be especially careful as you transition to Swift concurrency is where you use locks and semaphores or where you block the current thread. Swift concurrency cannot reason about these, and they can be sources of problems. But a lingering dispatch to the main queue is unlikely to cause problems, though you should certainly excise it at your earliest convenience. See Swift concurrency: Behind the scenes, particularly the discussions about the runtime contract to never prevent forward progress.
Looking at your code snippet, I would be more concerned about the Task { … } used to start someHeavyStuff. The name "someHeavyStuff" suggests something computationally expensive that blocks the current thread. But Task { … } is for launching asynchronous tasks on the current actor, not for running "heavy" work on a background thread. Now, if someHeavyStuff is getting its work off the current actor somehow, then ignore this caveat. But be careful not to assume that Task { … } behaves like DispatchQueue.global().async { … }, because it does not.
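One way to get such work off the caller's actor, sketched here with a hypothetical `expensiveComputation` standing in for the heavy synchronous work:

```swift
// If someHeavyStuff wraps synchronous, CPU-bound work, push it off the
// caller's actor explicitly rather than relying on Task { … }.
func someHeavyStuff() async {
    await Task.detached(priority: .userInitiated) {
        expensiveComputation() // assumed synchronous, CPU-bound
    }.value
}

func expensiveComputation() {
    // placeholder for the actual heavy work
}
```

A nonisolated async function achieves much the same thing; the point is only that the heavy work must not be isolated to the calling actor.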
I would suggest watching WWDC 2021 Swift concurrency: Update a sample app. It walks through a very practical exercise of refactoring your legacy code.

Related

How to code a slow background task made up of several slow steps using async / await

I have a background task that has several slow steps to be processed in sequence. I am trying to understand any difference between the following two approaches in Swift.
First approach:
import SwiftUI

struct ContentView: View {
    var body: some View {
        Button("Start Slow Task") {
            Task.detached(priority: .background) {
                await slowBackgroundTask()
            }
        }
    }
}

func slowBackgroundTask() async {
    slowStep1()
    slowStep2()
    slowStep3()
}

func slowStep1() {
    // takes a long time
}

func slowStep2() {
    // takes a long time
}

func slowStep3() {
    // takes a long time
}
Approach 2 is the same ContentView, but with the functions changed as follows.
func slowBackgroundTask() async {
    await slowStep1()
    await slowStep2()
    await slowStep3()
}

func slowStep1() async {
    // takes a long time
}

func slowStep2() async {
    // takes a long time
}

func slowStep3() async {
    // takes a long time
}
Is there any difference between these two patterns? I would be most grateful to understand this better.
Both versions build and run. I am trying to understand the difference.
Regarding the difference between these two patterns, they both will run the three slow steps sequentially. There are some very modest differences between the two, but likely nothing observable. I’d lean towards the first snippet, as there are fewer switches between tasks, but it almost certainly doesn’t matter too much.
FWIW, one would generally only mark a function as async if it actually is asynchronous, i.e., if it has an await suspension point somewhere inside it. As The Swift Programming Language says,
An asynchronous function or asynchronous method is a special kind of function or method that can be suspended while it’s partway through execution.
But merely adding async qualifiers to synchronous functions has no material impact on their underlying synchronous behavior.
Now, if you are looking for a real performance improvement, the question is whether you might benefit from parallel execution (which neither of the snippets in the question can achieve). It depends upon a number of considerations:
Are the subtasks actually independent of each other? Or is subtask 1 dependent upon the results of subtask 2? Etc.
Are these subtasks trying to interact with some shared resource? And is there going to be resource contention as you synchronize interaction with that shared resource?
Is there actually enough work being done in each subtask to justify the (admittedly very modest) overhead of parallel execution? If the subtasks are not sufficiently computationally intensive, introducing parallelism can actually make it slower.
But if, in answering the above questions, you conclude that you do want to attempt parallelism, that raises the question of how you might do that. You could use async let. Or you could use a "task group". Or you could just make three independent detached tasks. There are a number of ways of tackling it. We would probably want to know more about these three subtasks to advise you further.
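To illustrate, here is a sketch of the first two options, assuming the async versions of the steps from the second snippet and that the steps really are independent:

```swift
// async let starts all three child tasks immediately …
func slowBackgroundTaskWithAsyncLet() async {
    async let first: Void = slowStep1()
    async let second: Void = slowStep2()
    async let third: Void = slowStep3()
    _ = await (first, second, third) // … and this await joins them
}

// A task group expresses the same thing, and scales to a dynamic
// number of subtasks.
func slowBackgroundTaskWithGroup() async {
    await withTaskGroup(of: Void.self) { group in
        group.addTask { await slowStep1() }
        group.addTask { await slowStep2() }
        group.addTask { await slowStep3() }
    }
}
```

Either way, the parallelism only pays off if each step does enough work to outweigh the scheduling overhead, per the considerations above.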
As a final, preemptive, observation as you consider parallel execution, please note that the iOS simulators suffer from artificially constrained “cooperative thread pools”. E.g., see Maximum number of threads with async-await task groups. In short, when testing parallel execution in Swift concurrency, it is best to test on a physical device, not a simulator.

is GCD really Thread-Safe?

I have been studying GCD and thread safety.
Apple's documentation says GCD is thread-safe, which means multiple threads can access it.
And I learned that "thread-safe" means you always get the same result whenever multiple threads access some object.
I think that meaning of thread-safe and GCD's thread safety are not the same, because I tested the case below, which sums 0 to 9999.
The value of something.n is not the same each time I execute the code below.
If GCD is thread-safe, why isn't the something.n value the same?
I'm really confused. Could you help me?
I really want to master thread safety!
class Something {
    var n = 0
}

class ViewController: UIViewController {
    let something = Something()
    var concurrentQueue = DispatchQueue(label: "asdf", attributes: .concurrent)

    override func viewDidLoad() {
        super.viewDidLoad()

        let group = DispatchGroup()
        for idx in 0..<10000 {
            concurrentQueue.async(group: group) {
                self.something.n += idx
            }
        }
        group.notify(queue: .main) {
            print(self.something.n)
        }
    }
}
You said:
I have been studying GCD and thread safety. Apple's documentation says GCD is thread-safe, which means multiple threads can access it. And I learned that "thread-safe" means you always get the same result whenever multiple threads access some object.
They are saying the same thing. A block of code is thread-safe only if it is safe to invoke it from different threads at the same time (and this thread safety is achieved by making sure that the critical portion of code cannot run on one thread at the same time as another thread).
But let us be clear: Apple is not saying that if you use GCD, that your code is automatically thread-safe. Yes, the dispatch queue objects, themselves, are thread-safe (i.e. you can safely dispatch to a queue from whatever thread you want), but that doesn’t mean that your own code is necessarily thread-safe. If one’s code is accessing the same memory from multiple threads concurrently, one must provide one’s own synchronization to prevent writes simultaneous with any other access.
In the Threading Programming Guide: Synchronization, which predates GCD, Apple outlines various mechanisms for synchronizing code. You can also use a GCD serial queue for synchronization. If you use a concurrent queue, you can achieve thread safety by using a "barrier" for write operations. See the latter part of this answer for a variety of ways to achieve thread safety.
But be clear, Apple is not introducing a different definition of “thread-safe”. As they say in that aforementioned guide:
When it comes to thread safety, a good design is the best protection you have. Avoiding shared resources and minimizing the interactions between your threads makes it less likely for those threads to interfere with each other. A completely interference-free design is not always possible, however. In cases where your threads must interact, you need to use synchronization tools to ensure that when they interact, they do so safely.
And in the Concurrency Programming Guide: Migrating Away from Threads: Eliminating Lock-Based Code, which was published when GCD was introduced, Apple says:
For threaded code, locks are one of the traditional ways to synchronize access to resources that are shared between threads. ... Instead of using a lock to protect a shared resource, you can instead create a queue to serialize the tasks that access that resource.
But they are not saying that you can just use GCD concurrent queues and automatically achieve thread-safety, but rather that with careful and proper use of GCD queues, one can achieve thread-safety without using locks.
By the way, Apple provides tools to help you diagnose whether your code is thread-safe, namely the Thread Sanitizer (TSAN). See Diagnosing Memory, Thread, and Crash Issues Early.
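To make the barrier idea concrete, here is a sketch of the reader/writer pattern applied to the question's counter (the `Counter` type is hypothetical):

```swift
import Dispatch

// Concurrent reads are allowed; writes go through a barrier block,
// which runs only after all in-flight blocks finish and excludes any
// other block while it runs. That makes += safe here.
final class Counter {
    private var n = 0
    private let queue = DispatchQueue(label: "counter", attributes: .concurrent)

    func add(_ value: Int) {
        queue.async(flags: .barrier) { self.n += value }
    }

    var value: Int {
        queue.sync { n }
    }
}
```

For the question's simple sum, a plain serial queue would work just as well; the barrier pattern only pays off when reads greatly outnumber writes.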
Your current queue is concurrent:
var concurrentQueue = DispatchQueue(label: "asdf", attributes: .concurrent)
which means the dispatched blocks can run in any order, and simultaneously. Every run has a different order, and because self.something.n += idx is a non-atomic read-modify-write, increments from different threads can interleave and be lost, which is why the total differs between runs.

How can I use sleep in while without freezing all app?

I have a while loop that loops until some expression becomes true, breaking the loop. Inside this loop I have a sleep(1). Everything works fine; the only problem is that the app freezes until the end of the while loop, which is logical. I am trying to find a way to keep the app from freezing while the while loop is working. Maybe multithreaded programming? Is it possible?
var repeatLoop = true
var count: Int = 0

while repeatLoop {
    print(count)
    count += 1
    sleep(1)
    if count >= 10 {
        repeatLoop = false
    }
}
print("finished!")
PS: My goal is finding the answer to this question rather than changing the approach; I know there are many ways to get the same result without sleep or while.
With this program you can't! You need to look for options in your programming framework that can do multithreading. Your sleep() function must allow other threads to run.
One nice solution is to invert the logic and use Timers. Instead of calling sleep to block execution until the next iteration, a timer will call your routine as many times as you want.
Look at this: The ultimate guide to Timer
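A sketch of that inversion, reproducing the loop from the question with a repeating Timer (the run loop stays free between ticks, so nothing freezes):

```swift
import Foundation

var count = 0

// Fires once per second on the current run loop; each tick does one
// iteration of what the while loop used to do.
Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { timer in
    print(count)
    count += 1
    if count >= 10 {
        timer.invalidate() // stop repeating, i.e. "break" the loop
        print("finished!")
    }
}
```

Note that `scheduledTimer` requires a running run loop, which an iOS app's main thread already has.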
The main thread / queue is the interface thread / queue, so time-consuming activity there freezes the interface.
Maybe multithreading programming
Indeed. The answer to your actual question is: do the time-consuming activity on a background thread / queue.
DispatchQueue.global(qos: .background).async {
    // your code here
}
Of course, this raises the spectre of concurrency-related issues involving multiple simultaneous execution of the same code, non-thread-safe conflicts between accesses to shared data, etc. But that is the price of doing time-consuming things, and you just have to learn to cope.
The app won't freeze if you don't run this code on the main/UI thread.
There are several ways to do this on iOS. Here's one quick way:
DispatchQueue.global(qos: .background).async {
    // Your code here.
}

Why does the order of a dispatched print() change when run in a Playground?

In Udacity's GCD course there was a small quiz:
let q = DispatchQueue.global(qos: .userInteractive)

q.async { () -> Void in
    print("tic")
}
print("tac")
Which will be printed first?
The correct answer is: tac, then tic. Seems logical.
But, why is it so only when I create an Xcode project? In a playground it prints tic then tac. What am I missing?
In GCD,
DispatchQueue.global(qos: .userInteractive).async {}
is below
DispatchQueue.main.async {}
in priority. Even though it has a quality of service (qos) suited to UI work, that does not mean it is the main thread. So maybe there is a difference in performance between a playground and a UI project.
Please check Apple's documentation as well:
Apple's documentation on GCD
The key to your answer is in the question: what are you really asking the "system" to do? And by system, I mean whatever the code is running on: the playground, your computer/phone, or the simulator. You are executing an asynchronous "block" of code, print("tic"), with some priority, .userInteractive. Whether the system handles the asynchronous block first or continues with "normal" execution depends on priority and available resources. With asynchronous calls there is no real way to guarantee that a block executes before or after the code that follows; that is the nature of being asynchronous, i.e., execute the block as soon as the system allows, all without blocking your current work. So the difference you are seeing between the playground and a project/simulator is that the project/phone/simulator must keep the UI/primary thread responsive, so it continues with print("tac"), while the development playground favors the thread executing print("tic"). Bottom line: it comes down to the priority of execution, the available resources, and how this is implemented on the system you're running the code on.

Using libevent together with GCD (libdispatch) in Swift

I'm creating a server side app in Swift 3. I've chosen libevent for implementing the networking code because it's cross-platform and doesn't suffer from the C10k problem. Libevent implements its own event loop, but I want to keep CFRunLoop and GCD (DispatchQueue.main.after etc.) functional as well, so I need to glue them together somehow.
This is what I've come up with:
var terminated = false

DispatchQueue.main.after(when: DispatchTime.now() + 3) {
    print("Dispatch works!")
    terminated = true
}

while !terminated {
    switch event_base_loop(eventBase, EVLOOP_NONBLOCK) { // libevent
    case 1:
        break // No events were processed
    case 0:
        print("DEBUG: Libevent processed one or more events")
    default: // -1
        print("Unhandled error in network backend")
        exit(1)
    }
    RunLoop.current().run(mode: RunLoopMode.defaultRunLoopMode,
                          before: Date(timeIntervalSinceNow: 0.01))
}
This works, but introduces a latency of 0.01 sec. While RunLoop is sleeping, libevent won't be able to process events. Lowering this timeout increases CPU usage significantly when the app is idle.
I was also considering using only libevent, but third party libs in the project can use dispatch_async internally, so this can be problematic.
Running libevent's loop in a different thread makes synchronization more complex, is this the only way of solving this latency issue?
LINUX UPDATE. The above code does not work on Linux (2016-07-25-a Swift snapshot); RunLoop.current().run exits with an error. Below is a working Linux version reimplemented with a timer and dispatch_main. It suffers from the same latency issue:
let queue = dispatch_get_main_queue()
let timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue)
let interval = 0.01

let block: () -> () = {
    guard !terminated else {
        print("Quitting")
        exit(0)
    }
    switch server.loop() {
    case 1: break // Just idling
    case 0: break // print("Libevent: processed event(s)")
    default: // -1
        print("Unhandled error in network backend")
        exit(1)
    }
}

block()

let fireTime = dispatch_time(DISPATCH_TIME_NOW, Int64(interval * Double(NSEC_PER_SEC)))
dispatch_source_set_timer(timer, fireTime, UInt64(interval * Double(NSEC_PER_SEC)), UInt64(NSEC_PER_SEC) / 10)
dispatch_source_set_event_handler(timer, block)
dispatch_resume(timer)
dispatch_main()
A quick search of the Open Source Swift Foundation libraries on GitHub reveals that the support in CFRunLoop is (perhaps obviously) implemented differently on different platforms. This means, in essence, that RunLoop and libevent, with respect to cross-platform-ness, are just different ways to achieve the same thing. I can see the thinking behind the thought that libevent is probably better suited to server implementations, since CFRunLoop didn't grow up with that specific goal, but as far as being cross-platform goes, they're both barking up the same tree.
That said, the underlying synchronization primitives used by RunLoop and libevent are inherently private implementation details and, perhaps more importantly, different between platforms. From the source, it looks like RunLoop uses epoll on Linux, as does libevent, but on macOS/iOS/etc, RunLoop is going to use Mach ports as its fundamental primitive, but libevent looks like it's going to use kqueue. You might, with enough effort, be able to make a hybrid RunLoopSource that ties to a libevent source for a given platform, but this would likely be very fragile, and generally ill-advised, for a couple of reasons: Firstly, it would be based on private implementation details of RunLoop that are not part of the public API, and therefore subject to change at any time without notice. Second, assuming you didn't go through and do this for every platform supported by both Swift and libevent, you would have broken the cross-platform-ness of it, which was one of your stated reasons for going with libevent in the first place.
One additional option you might not have considered would be to use GCD by itself, without RunLoops. Look at the docs for dispatch_main. In a server application, there's (typically) nothing special about a "main thread," so dispatching to the "main queue", should be good enough (if needed at all). You can use dispatch "sources" to manage your connections, etc. I can't personally speak to how dispatch sources scale up to the C10K/C100K/etc. level, but they've seemed pretty lightweight and low-overhead in my experience. I also suspect that using GCD like this would likely be the most idiomatic way to write a server application in Swift. I've written up a small example of a GCD-based TCP echo server as part of another answer here.
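A sketch of that GCD-only shape, using the modern Swift Dispatch API (the descriptor here is stdin purely for illustration; a server would use its listening socket instead):

```swift
import Dispatch

let queue = DispatchQueue(label: "server.events")

// A read source delivers an event whenever the descriptor becomes
// readable, with no run loop and no polling.
let fd: Int32 = 0 // hypothetical: stdin standing in for a socket
let readSource = DispatchSource.makeReadSource(fileDescriptor: fd, queue: queue)
readSource.setEventHandler {
    // accept a connection or read available bytes here
}
readSource.resume()

// A timer source replaces the hand-rolled dispatch_source_create timer.
let timer = DispatchSource.makeTimerSource(queue: queue)
timer.schedule(deadline: .now(), repeating: .seconds(1))
timer.setEventHandler {
    // periodic housekeeping
}
timer.resume()

dispatchMain() // parks the main thread and services dispatch events
```

Because the sources fire as soon as events arrive, this design has none of the 0.01-second polling latency from the question's loop.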
If you were bound and determined to use both RunLoop and libevent in the same application, it would, as you guessed, be best to give libevent its own separate thread, but I don't think it's as complex as you might think. You should be able to dispatch_async from libevent callbacks freely, and similarly marshal replies from GCD-managed threads to libevent using libevent's multi-threading mechanisms fairly easily (i.e. either by running with locking on, or by marshaling your calls into libevent as events themselves). Similarly, third party libraries using GCD should not be an issue even if you chose to use libevent's loop structure. GCD manages its own thread pools and would have no way of stepping on libevent's main loop, etc.
You might also consider architecting your application such that it didn't matter what concurrency and connection handling library you used. Then you could swap out libevent, GCD, CFStreams, etc. (or mix and match) depending on what worked best for a given situation or deployment. Choosing a concurrency approach is important, but ideally you wouldn't couple yourself to it so tightly that you couldn't switch if circumstances called for it.
When you have such an architecture, I'm generally a fan of the approach of using the highest level abstraction that gets the job done, and only driving down to lower level abstractions when specific circumstances require it. In this case, that would probably mean using CFStreams and RunLoops to start, and switching out to "bare" GCD or libevent later, if you hit a wall and also determined (through empirical measurement) that it was the transport layer and not the application layer that was the limiting factor. Very few non-trivial applications actually get to the C10K problem in the transport layer; things tend to have to scale "out" at the application layer first, at least for apps more complicated than basic message passing.