Concurrently run async tasks with unnamed async let

With Swift concurrency, is it possible to have something almost like an 'unnamed' async let?
Here is an example. You have the following actor:
actor MyActor {
private var foo: Int = 0
private var bar: Int = 0
func setFoo(to value: Int) async {
foo = value
}
func setBar(to value: Int) async {
bar = value
}
func printResult() {
print("foo =", foo)
print("bar =", bar)
}
}
Now I want to set foo and bar using the given methods. Simple usage would look like the following:
let myActor = MyActor()
await myActor.setFoo(to: 5)
await myActor.setBar(to: 10)
await myActor.printResult()
However, this code runs sequentially. For all intents and purposes, assume setFoo(to:) and setBar(to:) may be long-running tasks. You can also assume the methods are independent (they don't share variables and won't affect each other).
To make this code concurrent, async let can be used. However, this requires binding each task to a name that must be awaited later on. In my example you'll notice I don't need the return values from these methods. All I need is that before printResult() is called, the previous tasks have completed.
I could come up with the following:
let myActor = MyActor()
async let tempFoo: Void = myActor.setFoo(to: 5)
async let tempBar: Void = myActor.setBar(to: 10)
let _ = await [tempFoo, tempBar]
await myActor.printResult()
Explicitly creating these tasks and then awaiting an array of them seems incorrect. Is this really the best way?

This can be achieved with a task group using withTaskGroup(of:returning:body:). The method calls become individual child tasks, and then we await group.waitForAll(), which returns once all tasks have completed.
Code:
await withTaskGroup(of: Void.self) { group in
let myActor = MyActor()
group.addTask {
await myActor.setFoo(to: 5)
}
group.addTask {
await myActor.setBar(to: 10)
}
await group.waitForAll()
await myActor.printResult()
}

I made your actor a class to allow concurrent execution of the two methods.
import Foundation
final class Jeep {
private var foo: Int = 0
private var bar: Int = 0
func setFoo(to value: Int) {
print("begin foo")
foo = value
sleep(1)
print("end foo \(value)")
}
func setBar(to value: Int) {
print("begin bar")
bar = value
sleep(2)
print("end bar \(bar)")
}
func printResult() {
print("printResult foo:\(foo), bar:\(bar)")
}
}
let jeep = Jeep()
let blocks = [
{ jeep.setFoo(to: 1) },
{ jeep.setBar(to: 2) },
]
// ...WORK
RunLoop.current.run(mode: RunLoop.Mode.default, before: NSDate(timeIntervalSinceNow: 5) as Date)
Replace WORK with one of these:
// no concurrency, ordered execution
for block in blocks {
block()
}
jeep.printResult()
// concurrency, unordered execution, tasks created upfront programmatically
Task {
async let foo: Void = blocks[0]()
async let bar: Void = blocks[1]()
await [foo, bar]
jeep.printResult()
}
// concurrency, unordered execution, tasks created upfront, but started by the system (I think)
Task {
await withTaskGroup(of: Void.self) { group in
for block in blocks {
group.addTask { block() }
}
}
// when the initialization closure exits, all child tasks are awaited implicitly
jeep.printResult()
}
// concurrency, unordered execution, awaited in order
Task {
let tasks = blocks.map { block in
Task { block() }
}
for task in tasks {
await task.value
}
jeep.printResult()
}
// tasks created upfront, all tasks start concurrently, produce result as soon as they finish
let stream = AsyncStream<Void> { continuation in
Task {
let tasks = blocks.map { block in
Task { block() }
}
for task in tasks {
continuation.yield(await task.value)
}
continuation.finish()
}
}
Task {
// now waiting for all values, bad use of a stream, I know
for await value in stream {
print(value as Any)
}
jeep.printResult()
}

Task @Sendable operation

I wrote some simple code:
class App {
private var value = 0
func start() async throws {
await withTaskGroup(of: Void.self) { group in
for _ in 1...100 {
group.addTask(operation: self.increment) // 1
group.addTask {
await self.increment() // 2
}
group.addTask {
self.value += 1 // 3
}
}
}
}
@Sendable private func increment() async {
self.value += 1 // 4
}
}
I got compile-time warnings at lines 2 and 3:
Capture of 'self' with non-sendable type 'App' in a @Sendable closure
However, with Thread Sanitizer enabled and lines 2 and 3 removed, I got a Thread Sanitizer runtime warning at line 4:
Swift access race in (1) suspend resume partial function for asyncTest.App.increment@Sendable () async -> () at 0x106003c80
So I have questions:
What is the difference between these three ways of calling .addTask?
What does the @Sendable attribute do?
How can I make the increment() function thread-safe (data-race free)?
To illustrate the thread-safety problem, consider:
class Counter {
private var value = 0
func incrementManyTimes() async {
await withTaskGroup(of: Void.self) { group in
for _ in 1...1_000_000 {
group.addTask {
self.increment() // no `await` as this is not `async` method
}
}
await group.waitForAll()
if value != 1_000_000 {
print("not thread-safe apparently; value =", value) // not thread-safe apparently; value = 994098
} else {
print("ok")
}
}
}
private func increment() { // note, this isn't `async` (as there is no `await` suspension point in here)
value += 1
}
}
That illustrates that it is not thread-safe, validating the warning from TSAN. (Note, I bumped the iteration count to make it easier to manifest the symptoms of non-thread-safe code.)
So, how would you make it thread-safe? Use an actor:
actor Counter {
private var value = 0
func incrementManyTimes() async {
await withTaskGroup(of: Void.self) { group in
for _ in 1...1_000_000 {
group.addTask {
await self.increment()
}
}
await group.waitForAll()
if value != 1_000_000 {
print("not thread-safe apparently; value =", value)
} else {
print("ok") // ok
}
}
}
private func increment() { // note, this still isn't `async`
value += 1
}
}
If you really want to use a class, add your own old-school synchronization. Here I am using a lock, but you could use a serial GCD queue, or whatever you want.
class Counter {
private var value = 0
let lock = NSLock()
func incrementManyTimes() async {
await withTaskGroup(of: Void.self) { group in
for _ in 1...1_000_000 {
group.addTask {
self.increment()
}
}
await group.waitForAll()
if value != 1_000_000 {
print("not thread-safe apparently; value =", value)
} else {
print("ok") // ok
}
}
}
private func increment() {
lock.synchronize {
value += 1
}
}
}
extension NSLocking {
func synchronize<T>(block: () throws -> T) rethrows -> T {
lock()
defer { unlock() }
return try block()
}
}
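Along the same lines, here is a minimal sketch of the serial-queue alternative mentioned above (the type name is just illustrative, not from the original answer); only the synchronization primitive changes:
import Dispatch

// Hypothetical sketch: same idea as the lock-based Counter, but using a
// private serial DispatchQueue for the synchronization.
final class QueueCounter {
    private var value = 0
    private let queue = DispatchQueue(label: "QueueCounter.sync") // serial by default

    func increment() {
        queue.sync { value += 1 }
    }

    var current: Int {
        queue.sync { value }
    }
}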
Or, if you want to make Counter a Sendable type, too, confident that you are properly doing the synchronization yourself, like above, you can declare it as final and declare that it is Sendable, admittedly @unchecked by the compiler:
final class Counter: @unchecked Sendable {
private var value = 0
let lock = NSLock()
func incrementManyTimes() async {
// same as above
}
private func increment() {
lock.synchronize {
value += 1
}
}
}
But because the compiler cannot possibly reason about the actual “sendability” itself, you have to designate it as @unchecked Sendable to let the compiler know that you personally have verified it is really Sendable.
But actor is the preferred mechanism for ensuring thread-safety, eliminating the need for this custom synchronization logic.
For more information, see WWDC 2021 video Protect mutable state with Swift actors or 2022’s Eliminate data races using Swift Concurrency.
See the Sendable documentation for a discussion of what the @Sendable attribute does.
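To give a rough sense of what it means (a minimal sketch with made-up names, not from the original question): @Sendable marks a closure as safe to send across concurrency domains, so the compiler restricts what it can capture:
final class NotSendable { var value = 0 }       // a mutable reference type, not Sendable

func runLater(_ work: @escaping @Sendable () -> Void) {
    Task { work() }                             // the closure may run on another thread
}

func demo() {
    let counter = NotSendable()
    runLater {
        // With strict concurrency checking enabled, this capture produces a diagnostic much
        // like the one in the question: Capture of 'counter' with non-sendable type
        // 'NotSendable' in a '@Sendable' closure.
        print(counter.value)
    }
}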
BTW, I suspect you know this, but for the sake of future readers, while increment is fine for illustrative purposes, it is not a good candidate for parallelism. This parallelized rendition is actually slower than a simple, single-threaded solution. To achieve performance gains of parallelism, you need to have enough work on each thread to justify the modest overhead that parallelism entails.
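For future readers, a hedged sketch of what “enough work per task” can look like (the function name and chunk count are illustrative, not from the original post): rather than one child task per increment, give each child task a whole chunk of the range and combine the partial sums at the end:
func parallelSum(upTo n: Int, chunks: Int = 8) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        let chunkSize = max(1, (n + chunks - 1) / chunks)
        for start in stride(from: 1, through: n, by: chunkSize) {
            let end = min(start + chunkSize - 1, n)
            group.addTask {
                var partial = 0
                for i in start...end { partial += i }   // a meaningful amount of work per child task
                return partial
            }
        }
        return await group.reduce(0, +)                 // combine the partial results
    }
}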
Also, when testing parallelism, be wary of using the simulator, which significantly/artificially constrains the cooperative thread pool used by Swift concurrency. Test on macOS target, or a physical device. Neither the simulator nor a playground is a good testbed for this sort of parallelism exercise.

Swift's "async let" error thrown depends on the order the tasks are made

I'm trying to understand async let error handling and it's not making a lot of sense in my head. It seems that if I have two parallel requests, the first one throwing an exception doesn't cancel the other request. In fact it just depends on the order in which they are made.
My testing setup:
struct Person {}
struct Animal {}
enum ApiError: Error { case person, animal }
class Requester {
init() {}
func getPeople(waitingFor waitTime: UInt64, throwError: Bool) async throws -> [Person] {
try await waitFor(waitTime)
if throwError { throw ApiError.person }
return []
}
func getAnimals(waitingFor waitTime: UInt64, throwError: Bool) async throws -> [Animal] {
try await waitFor(waitTime)
if throwError { throw ApiError.animal }
return []
}
func waitFor(_ seconds: UInt64) async throws {
do {
try await Task.sleep(nanoseconds: NSEC_PER_SEC * seconds)
} catch {
print("Error waiting", error)
throw error
}
}
}
The exercise.
class ViewController: UIViewController {
let requester = Requester()
override func viewDidLoad() {
super.viewDidLoad()
Task {
async let animals = self.requester.getAnimals(waitingFor: 1, throwError: true)
async let people = self.requester.getPeople(waitingFor: 2, throwError: true)
let start = Date()
do {
// let (_, _) = try await (people, animals)
let (_, _) = try await (animals, people)
print("No error")
} catch {
print("error: ", error)
}
print(Date().timeIntervalSince(start))
}
}
}
For simplicity, from now on I'll just paste the relevant lines of code and output.
Scenario 1:
async let animals = self.requester.getAnimals(waitingFor: 1, throwError: true)
async let people = self.requester.getPeople(waitingFor: 2, throwError: true)
let (_, _) = try await (animals, people)
Results in:
error: animal
1.103397011756897
Error waiting CancellationError()
This one works as expected. The slower request takes 2 seconds, but it was cancelled after 1 second (when the faster one throws).
Scenario 2:
async let animals = self.requester.getAnimals(waitingFor: 2, throwError: true)
async let people = self.requester.getPeople(waitingFor: 1, throwError: true)
let (_, _) = try await (animals, people)
Results in:
error: animal
2.2001450061798096
Now this one is not what I expected. The people request takes 1 second to throw an error, yet we still wait 2 seconds and the error is animal.
My expectation is that this should have been 1 second and people error.
Scenario 3:
async let animals = self.requester.getAnimals(waitingFor: 2, throwError: true)
async let people = self.requester.getPeople(waitingFor: 1, throwError: true)
let (_, _) = try await (people, animals)
Results in:
error: person
1.0017549991607666
Error waiting CancellationError()
Now this is expected. The difference from Scenario 2 is that I changed the await order to try await (people, animals).
So it doesn't matter which request throws first; we always get the error from the first request in the await tuple, and the time spent also depends on that order.
Is this behaviour expected/normal? Am I seeing anything wrong, or am I testing this wrong?
I'm surprised this isn't something people are talking about more. I only found one other question like this in the developer forums.
Please help. :)
From https://github.com/apple/swift-evolution/blob/main/proposals/0317-async-let.md
async let (l, r) = {
return await (left(), right())
// ->
// return (await left(), await right())
}()
meaning that the entire initializer of the async let is a single task, and if multiple asynchronous function calls are made inside it, they are performed one by one.
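In other words (a minimal, self-contained sketch using hypothetical left() and right() functions): the tuple initializer is one child task, whereas two separate async lets give you two concurrent child tasks:
func left() async -> Int { 1 }
func right() async -> Int { 2 }

func compare() async {
    // one child task: left() and right() run one after the other inside it
    async let both = (left(), right())
    print(await both)

    // two child tasks: left() and right() run concurrently
    async let l = left()
    async let r = right()
    print(await (l, r))
}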
Here is a more structured approach with behavior that makes sense.
struct ContentView: View {
var body: some View {
Text("Hello, world!")
.padding()
.task {
let requester = Requester()
let start = Date()
await withThrowingTaskGroup(of: Void.self) { group in
group.addTask {
_ = try await requester.getAnimals(waitingFor: 1, throwError: true)
}
group.addTask {
_ = try await requester.getPeople(waitingFor: 2, throwError: true)
}
do {
for try await _ in group {
}
group.cancelAll()
} catch ApiError.animal {
group.cancelAll()
print("animal threw")
} catch ApiError.person {
group.cancelAll()
print("person threw")
} catch {
print("someone else")
}
}
print(Date().timeIntervalSince(start))
}
}
}
The idea is to add each task to a throwing group and then loop through each task.
Cora hit the nail on the head (+1). The async let of a tuple will just await them in order. Instead, consider a task group.
But you do not need to cancel the other items in the group. See “Task Group Cancellation” discussion in the withThrowingTaskGroup(of:returning:body:) documentation:
Throwing an error in one of the tasks of a task group doesn’t immediately cancel the other tasks in that group. However, if you call next() in the task group and propagate its error, all other tasks are canceled. For example, in the code below, nothing is canceled and the group doesn’t throw an error:
withThrowingTaskGroup { group in
group.addTask { throw SomeError() }
}
In contrast, this example throws SomeError and cancels all of the tasks in the group:
withThrowingTaskGroup { group in
group.addTask { throw SomeError() }
try await group.next()
}
An individual task throws its error in the corresponding call to Group.next(), which gives you a chance to handle the individual error or to let the group rethrow the error.
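To illustrate the “handle the individual error” path, here is a hedged sketch using the requester from the question (assumed to live in a similar class); note that because the error is caught inside the loop rather than propagated out of the group body, the remaining child tasks are not cancelled:
await withThrowingTaskGroup(of: Void.self) { group in
    group.addTask { _ = try await self.requester.getAnimals(waitingFor: 1, throwError: true) }
    group.addTask { _ = try await self.requester.getPeople(waitingFor: 2, throwError: true) }

    while true {
        do {
            // next() returns nil once every child task has finished
            guard try await group.next() != nil else { break }
        } catch {
            print("one child task failed:", error)   // handled here, so the siblings keep running
        }
    }
}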
Or you can waitForAll, which will cancel the other tasks:
let start = Date()
do {
try await withThrowingTaskGroup(of: Void.self) { group in
group.addTask { let _ = try await self.requester.getAnimals(waitingFor: 1, throwError: true) }
group.addTask { let _ = try await self.requester.getPeople(waitingFor: 2, throwError: true) }
try await group.waitForAll()
}
} catch {
print("error: ", error)
}
print(Date().timeIntervalSince(start))
Bottom line, task groups do not dictate the order in which the tasks are awaited. (They also do not dictate the order in which they complete, either, so you often have to collate task group results into an order-independent structure or re-order the results.)
You asked how you would go about collecting the results. There are a few options:
You can define group tasks such that they do not “return” anything (i.e., a child task type of Void.self), but instead update an actor (Creatures, below) in the addTask calls, and then extract your tuple from that:
class ViewModel1 {
let requester = Requester()
func fetch() async throws -> ([Animal], [Person]) {
let results = Creatures()
try await withThrowingTaskGroup(of: Void.self) { group in
group.addTask { try await results.update(with: self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)) }
group.addTask { try await results.update(with: self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)) }
try await group.waitForAll()
}
return await (results.animals, results.people)
}
}
private extension ViewModel1 {
/// Creatures
///
/// A private actor used for gathering results
actor Creatures {
var animals: [Animal] = []
var people: [Person] = []
func update(with animals: [Animal]) {
self.animals = animals
}
func update(with people: [Person]) {
self.people = people
}
}
}
You can define group tasks that return an enumeration case with an associated value, and then extract the results when done:
class ViewModel2 {
let requester = Requester()
func fetch() async throws -> ([Animal], [Person]) {
try await withThrowingTaskGroup(of: Creatures.self) { group in
group.addTask { try await .animals(self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)) }
group.addTask { try await .people(self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)) }
return try await group.reduce(into: ([], [])) { previousResult, creatures in
switch creatures {
case .animals(let values): previousResult.0 = values
case .people(let values): previousResult.1 = values
}
}
}
}
}
private extension ViewModel2 {
/// Creatures
///
/// A private enumeration with associated types for the types of results
enum Creatures {
case animals([Animal])
case people([Person])
}
}
For the sake of completeness, you don't have to use a task group if you do not want to. E.g., you can manually cancel the other task if one of them throws:
class ViewModel3 {
let requester = Requester()
func fetch() async throws -> ([Animal], [Person]) {
let animalsTask = Task {
try await self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)
}
let peopleTask = Task {
do {
return try await self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)
} catch {
animalsTask.cancel()
throw error
}
}
return try await (animalsTask.value, peopleTask.value)
}
}
This is not a terribly scalable pattern, which is why task groups might be a more attractive option, as they handle the cancelation of pending tasks for you (assuming you iterate through the group as you build the results).
FWIW, there are other task group alternatives, too, but there is not enough in your question to get too specific in this regard. For example, I can imagine some protocol-as-type implementations if all of the tasks returned an array of objects that conformed to a Creature protocol.
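For instance, a hedged sketch of that protocol-as-type idea might look like the following (Creature, ViewModel4, and the Sendable refinement are my illustrative choices, reusing Requester and the animalsDuration/peopleDuration/shouldThrowError values from the earlier examples):
// Creature lets both result arrays share one child task result type.
protocol Creature: Sendable {}
extension Animal: Creature {}
extension Person: Creature {}

class ViewModel4 {
    let requester = Requester()

    func fetch() async throws -> [Creature] {
        try await withThrowingTaskGroup(of: [Creature].self) { group in
            group.addTask { try await self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError) as [Creature] }
            group.addTask { try await self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError) as [Creature] }

            // completion order is not defined, so just concatenate results as they arrive
            return try await group.reduce(into: [Creature]()) { $0 += $1 }
        }
    }
}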
But hopefully the above illustrate a few patterns for using task groups to enjoy the cancelation capabilities while still collating the results.

Understanding actor and making it thread safe

I have an actor that is processing values and is then publishing the values with a Combine Publisher.
I have problems understanding actors. I thought that when using actors in an async context, access would automatically be serialised. However, the numbers get processed in a different order than expected (see the class test for comparison).
I understand that if I wrapped a single Task around the for loop the results would come back serialised, but my understanding was that I could call a function of an actor and the calls would then automatically be serialised.
How can I make my actor thread safe so it publishes the values in the expected order even if it is called from a different thread?
import XCTest
import Combine
import CryptoKit
actor AddNumbersActor {
private let _numberPublisher: PassthroughSubject<(Int,String), Never> = .init()
nonisolated lazy var numberPublisher = _numberPublisher.eraseToAnyPublisher()
func process(_ number: Int) {
let string = SHA512.hash(data: Data(String(number).utf8))
.description
_numberPublisher.send((number, string))
}
}
class AddNumbersClass {
private let _numberPublisher: PassthroughSubject<(Int,String), Never> = .init()
lazy var numberPublisher = _numberPublisher.eraseToAnyPublisher()
func process(_ number: Int) {
let string = SHA512.hash(data: Data(String(number).utf8))
.description
_numberPublisher.send((number, string))
}
}
final class TestActorWithPublisher: XCTestCase {
var subscription: AnyCancellable?
override func tearDownWithError() throws {
subscription = nil
}
func testActor() throws {
let addNumbers = AddNumbersActor()
var numbersResults = [(int: Int, string: String)]()
let expectation = expectation(description: "numberOfExpectedResults")
let numberCount = 1000
subscription = addNumbers.numberPublisher
.sink { results in
print(results)
numbersResults.append(results)
if numberCount == numbersResults.count {
expectation.fulfill()
}
}
for number in 1...numberCount {
Task {
await addNumbers.process(number)
}
}
wait(for: [expectation], timeout: 5)
print(numbersResults.count)
XCTAssertEqual(numbersResults[10].0, 11)
XCTAssertEqual(numbersResults[100].0, 101)
XCTAssertEqual(numbersResults[500].0, 501)
}
func testClass() throws {
let addNumbers = AddNumbersClass()
var numbersResults = [(int: Int, string: String)]()
let expectation = expectation(description: "numberOfExpectedResults")
let numberCount = 1000
subscription = addNumbers.numberPublisher
.sink { results in
print(results)
numbersResults.append(results)
if numberCount == numbersResults.count {
expectation.fulfill()
}
}
for number in 1...numberCount {
addNumbers.process(number)
}
wait(for: [expectation], timeout: 5)
print(numbersResults.count)
XCTAssertEqual(numbersResults[10].0, 11)
XCTAssertEqual(numbersResults[100].0, 101)
XCTAssertEqual(numbersResults[500].0, 501)
}
}
Using an actor does indeed serialize access.
The issue you're running into is that the tests aren't testing whether calls to process() are serialized, they are testing the execution order of the calls. And the execution order of the Task calls is not guaranteed.
Try changing your AddNumbers objects so that instead of the output order reflecting the order in which the calls were made, they will succeed if calls are serialized but will fail if concurrent calls are made. You can do this by keeping a count variable, incrementing it, sleeping a bit, then publishing the count. Concurrent calls will fail, since count will be incremented multiple times before it's returned.
If you make that change, the test using an Actor will pass. The test using a class will fail if it calls process() concurrently:
DispatchQueue.global(qos: .default).async {
addNumbers.process()
}
It will also help to understand that Task's scheduling depends on a bunch of stuff. GCD will spin up tons of threads, whereas Swift concurrency will only use 1 worker thread per available core (I think!). So in some execution environments, just wrapping your work in Task { } might be enough to serialize it for you. I've been finding that iOS simulators act as if they have a single core, so task execution ends up being serialized. Also, otherwise unsafe code will work if you ensure the task runs on the main actor, since it guarantees serial execution:
Task { @MainActor in
// ...
}
Here are modified tests showing all this:
class TestActorWithPublisher: XCTestCase {
actor AddNumbersActor {
private let _numberPublisher: PassthroughSubject<Int, Never> = .init()
nonisolated lazy var numberPublisher = _numberPublisher.eraseToAnyPublisher()
var count = 0
func process() {
// Increment the count here
count += 1
// Wait a bit...
Thread.sleep(forTimeInterval: TimeInterval.random(in: 0...0.010))
// Send it back. If other calls to process() were made concurrently, count may have been incremented again before being sent:
_numberPublisher.send(count)
}
}
class AddNumbersClass {
private let _numberPublisher: PassthroughSubject<Int, Never> = .init()
lazy var numberPublisher = _numberPublisher.eraseToAnyPublisher()
var count = 0
func process() {
count += 1
Thread.sleep(forTimeInterval: TimeInterval.random(in: 0...0.010))
_numberPublisher.send(count)
}
}
var subscription: AnyCancellable?
override func tearDownWithError() throws {
subscription = nil
}
func testActor() throws {
let addNumbers = AddNumbersActor()
var numbersResults = [Int]()
let expectation = expectation(description: "numberOfExpectedResults")
let numberCount = 1000
subscription = addNumbers.numberPublisher
.sink { results in
numbersResults.append(results)
if numberCount == numbersResults.count {
expectation.fulfill()
}
}
for _ in 1...numberCount {
Task.detached(priority: .high) {
await addNumbers.process()
}
}
wait(for: [expectation], timeout: 10)
XCTAssertEqual(numbersResults, Array(1...numberCount))
}
func testClass() throws {
let addNumbers = AddNumbersClass()
var numbersResults = [Int]()
let expectation = expectation(description: "numberOfExpectedResults")
let numberCount = 1000
subscription = addNumbers.numberPublisher
.sink { results in
numbersResults.append(results)
if numberCount == numbersResults.count {
expectation.fulfill()
}
}
for _ in 1...numberCount {
DispatchQueue.global(qos: .default).async {
addNumbers.process()
}
}
wait(for: [expectation], timeout: 5)
XCTAssertEqual(numbersResults, Array(1...numberCount))
}
}

How does await in swift work with tuples?

I'm trying to make sure I understand the behavior of await. Suppose we have the following functions:
func do() async {
//code
}
func stuff() async {
//code
}
The following statements will run do and stuff sequentially:
await do()
await stuff()
But the following statement will run do and stuff in parallel, correct?
await (do(), stuff())
I'm not sure how to check in Xcode if my code runs in parallel or in sequence.
Short answer:
If you want concurrent execution, either use the async let pattern or a task group.
Long answer:
You said:
But the following statement will run do and stuff in parallel, correct?
await (do(), stuff())
No, they will not.
This is best illustrated empirically by:
Make the task take enough time that concurrency behavior can easily be manifested; and
Use the “Points of Interest” instrument (e.g., by picking the “Time Profiler” template) in Instruments to represent intervals graphically over time.
Consider this code, using the tuple approach:
import Foundation
import os.log
import QuartzCore // for CACurrentMediaTime()
private let log = OSLog(subsystem: Bundle.main.bundleIdentifier!, category: .pointsOfInterest)
And:
func example() async {
let values = await (self.doSomething(), self.doSomethingElse())
print(values)
}
func doSomething() async -> Int {
spin(#function)
return 1
}
func doSomethingElse() async -> Int {
spin(#function)
return 42
}
func spin(_ name: StaticString) {
let id = OSSignpostID(log: log)
os_signpost(.begin, log: log, name: name, signpostID: id, "begin")
let start = CACurrentMediaTime()
while CACurrentMediaTime() - start < 1 { } // spin for one second
os_signpost(.end, log: log, name: name, signpostID: id, "end")
}
That results in a graph that shows that it is not happening concurrently.
Whereas:
func example() async {
async let foo = self.doSomething()
async let bar = self.doSomethingElse()
let values = await (foo, bar)
print(values)
}
That does result in concurrent execution.
Now, in the above examples, I changed the functions so that they returned values (as that is really the only context where using tuples makes any practical sense).
But if they did not return values and you wanted them to run in parallel, you might use a task group:
func experiment() async {
await withTaskGroup(of: Void.self) { group in
group.addTask { await self.doSomething() }
group.addTask { await self.doSomethingElse() }
}
}
func doSomething() async {
spin(#function)
}
func doSomethingElse() async {
spin(#function)
}
That results in the same graph where they run in parallel.
You can also just create Task instances and then await them:
func experiment() async {
let task1 = Task { await doSomething() }
let task2 = Task { await doSomethingElse() }
_ = await task1.value
_ = await task2.value
}
But task groups offer greater flexibility when the number of tasks being created may not be known at compile-time.
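For instance, a quick sketch where the number of child tasks depends on runtime data (processAll and processOne are illustrative placeholders, not from the original answer):
func processAll(_ items: [Int]) async {
    await withTaskGroup(of: Void.self) { group in
        for item in items {                      // count known only at runtime
            group.addTask { await processOne(item) }
        }
    }                                            // implicitly waits for all children before returning
}

func processOne(_ item: Int) async {
    // placeholder for real work
    print("processed", item)
}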
It seems that in order to get concurrent execution you must have an explicit async let for each function:
actor A {
var t = 1
func tt() -> Int {
for i in 0 ... 1000000 {
t += i
}
let s = t
t = 1
return s
}
}
var a = A()
var b = A()
func go() {
Task {
var d = Date()
await (a.tt(), b.tt())
print("time=1",d.timeIntervalSinceNow)
d = Date()
await a.tt()
await b.tt()
print("time2=",d.timeIntervalSinceNow)
d = Date()
async let q = (a.tt(), b.tt())
await q
print("time3=",d.timeIntervalSinceNow)
d = Date()
async let q1 = a.tt()
async let q2 = b.tt()
await q1
await q2
print("time4=",d.timeIntervalSinceNow)
d = Date()
async let q3 = a.tt()
async let q4 = b.tt()
await (q3, q4)
print("time5=",d.timeIntervalSinceNow)
}
}
printout:
time1= -0.4335060119628906
time2= -0.435217022895813
time3= -0.4430699348449707
time4= -0.23430800437927246
time5= -0.23900198936462402

What is the correct way to await the completion of two Tasks in Swift 5.5 in a function that does not support concurrency?

I have an app that does some processing given a string; this is done in two Tasks. During this time I'm displaying an animation. When these Tasks complete I need to hide the animation. The code below works, but is not very nice to look at. I believe there is a better way to do this?
let firTask = Task {
/* Slow-running code */
}
let airportTask = Task {
/* Even more slow-running code */
}
Task {
_ = await firTask.result
_ = await airportTask.result
self.isVerifyingRoute = false
}
Isn't the real problem that this is a misuse of Task? A Task, as you've discovered, is not really of itself a thing you can await. If the goal is to run slow code in the background, use an actor. Then you can cleanly call an actor method with await.
let myActor = MyActor()
await myActor.doFirStuff()
await myActor.doAirportStuff()
self.isVerifyingRoute = false
However, we also need to make sure we're on the main thread when we talk to self — something that your code omits to do. Here's an example:
actor MyActor {
func doFirStuff() async {
print("starting", #function)
await Task.sleep(2 * 1_000_000_000)
print("finished", #function)
}
func doAirportStuff() async {
print("starting", #function)
await Task.sleep(2 * 1_000_000_000)
print("finished", #function)
}
}
func test() {
let myActor = MyActor()
Task {
await myActor.doFirStuff()
await myActor.doAirportStuff()
Task { @MainActor in
self.isVerifyingRoute = false
}
}
}
Everything happens in the right mode: the time-consuming stuff happens on background threads, and the call to self happens on the main thread. A cleaner-looking way to take care of the main thread call, in my opinion, would be to have a #MainActor method:
func test() {
let myActor = MyActor()
Task {
await myActor.doFirStuff()
await myActor.doAirportStuff()
self.finish()
}
}
@MainActor func finish() {
self.isVerifyingRoute = false
}
I regard that as elegant and clear.
I would make the tasks discardable with an extension. Perhaps something like this:
extension Task {
@discardableResult
func finish() async -> Result<Success, Failure> {
await self.result
}
}
Then you could change your loading task to:
Task {
defer { self.isVerifyingRoute = false }
await firTask.finish()
await airportTask.finish()
}