How does await in Swift work with tuples?

I'm trying to make sure I understand the behavior of await. Suppose we have the following functions:
func do() async {
    // code
}

func stuff() async {
    // code
}
The following statements will run do and stuff sequentially:
await do()
await stuff()
But the following statement will run do and stuff in parallel, correct?
await (do(), stuff())
I'm not sure how to check in Xcode if my code runs in parallel or in sequence.

Short answer:
If you want concurrent execution, either use the async let pattern or a task group.
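For instance, a minimal sketch of the two patterns, assuming two hypothetical async functions fetchA() and fetchB() that each return a value:

// async let: each binding starts a child task, and the two run concurrently
async let a = fetchA()
async let b = fetchB()
let results = await (a, b)
print(results)

// task group: also concurrent, and handy when the number of tasks varies
await withTaskGroup(of: Void.self) { group in
    group.addTask { _ = await fetchA() }
    group.addTask { _ = await fetchB() }
}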
Long answer:
You said:
But the following statement will run do and stuff in parallel, correct?
await (do(), stuff())
No, they will not.
This is best illustrated empirically by:
1. Making the tasks take enough time that the concurrency behavior can easily be observed; and
2. Using the “Points of Interest” instrument in Instruments (e.g., by picking the “Time Profiler” template) to graphically represent intervals over time.
Consider this code, using the tuple approach:
import Foundation
import os.log
import QuartzCore // for CACurrentMediaTime()
private let log = OSLog(subsystem: Bundle.main.bundleIdentifier!, category: .pointsOfInterest)
And:
func example() async {
    let values = await (self.doSomething(), self.doSomethingElse())
    print(values)
}

func doSomething() async -> Int {
    spin(#function)
    return 1
}

func doSomethingElse() async -> Int {
    spin(#function)
    return 42
}

func spin(_ name: StaticString) {
    let id = OSSignpostID(log: log)
    os_signpost(.begin, log: log, name: name, signpostID: id, "begin")
    let start = CACurrentMediaTime()
    while CACurrentMediaTime() - start < 1 { } // spin for one second
    os_signpost(.end, log: log, name: name, signpostID: id, "end")
}
That results in a graph that shows it is not happening concurrently.
Whereas:
func example() async {
    async let foo = self.doSomething()
    async let bar = self.doSomethingElse()
    let values = await (foo, bar)
    print(values)
}
That does result in concurrent execution.
Now, in the above examples, I changed the functions so that they returned values (as that is really the only context where using tuples makes any practical sense).
But if they did not return values and you wanted them to run in parallel, you might use a task group:
func experiment() async {
    await withTaskGroup(of: Void.self) { group in
        group.addTask { await self.doSomething() }
        group.addTask { await self.doSomethingElse() }
    }
}

func doSomething() async {
    spin(#function)
}

func doSomethingElse() async {
    spin(#function)
}
That results in the same graph where they run in parallel.
You can also just create Task instances and then await them:
func experiment() async {
    let task1 = Task { await doSomething() }
    let task2 = Task { await doSomethingElse() }
    _ = await task1.value
    _ = await task2.value
}
But task groups offer greater flexibility when the number of tasks being created may not be known at compile-time.
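For example, a minimal sketch of that dynamic case, using a hypothetical process(_:) function and an arbitrary array of inputs:

func process(_ url: URL) async { /* placeholder for real work */ }

func processAll(_ urls: [URL]) async {
    await withTaskGroup(of: Void.self) { group in
        for url in urls {
            group.addTask { await process(url) }  // one child task per element
        }
        // all remaining child tasks are implicitly awaited before the group returns
    }
}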

It seems that in order to get concurrent execution you must have an explicit async let for each function:
import Foundation

actor A {
    var t = 1

    func tt() -> Int {
        for i in 0 ... 1000000 {
            t += i
        }
        let s = t
        t = 1
        return s
    }
}

let a = A()
let b = A()

func go() {
    Task {
        var d = Date()
        _ = await (a.tt(), b.tt())
        print("time1=", d.timeIntervalSinceNow)

        d = Date()
        _ = await a.tt()
        _ = await b.tt()
        print("time2=", d.timeIntervalSinceNow)

        d = Date()
        async let q = (a.tt(), b.tt())
        _ = await q
        print("time3=", d.timeIntervalSinceNow)

        d = Date()
        async let q1 = a.tt()
        async let q2 = b.tt()
        _ = await q1
        _ = await q2
        print("time4=", d.timeIntervalSinceNow)

        d = Date()
        async let q3 = a.tt()
        async let q4 = b.tt()
        _ = await (q3, q4)
        print("time5=", d.timeIntervalSinceNow)
    }
}
printout:
time1= -0.4335060119628906
time2= -0.435217022895813
time3= -0.4430699348449707
time4= -0.23430800437927246
time5= -0.23900198936462402
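As an aside, the printed values are negative because timeIntervalSinceNow measures from the stored (earlier) date to now; only the magnitudes matter here, and times 1 through 3 are roughly double times 4 and 5 because only the last two patterns, with a separate async let per call, run the two actor methods concurrently. If you prefer positive elapsed times, a small helper along these lines (a hypothetical measure function, shown as a sketch) tidies up the measurement:

func measure(_ label: String, _ work: () async -> Void) async {
    let start = Date()
    await work()
    print(label, Date().timeIntervalSince(start))
}

// e.g.: await measure("tuple await") { _ = await (a.tt(), b.tt()) }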

Run function after two functions have run

So let's say I have these three functions:
func func_1() {
    Task { @MainActor in
        let state = try await api.get1State(v!)
        print("cState func_1: \(state!)")
    }
}

func func_2() {
    Task { @MainActor in
        let state = try await api.get2State(v!)
        print("cState func_2: \(state!)")
    }
}

func func_3() {
    Task { @MainActor in
        let state = try await api.get3State(v!)
        print("cState func_3: \(state!)")
    }
}
Since these functions get info from an API, it might take a few seconds.
How can I run func_3 after both func_1 and func_2 are done running?
I would advise avoiding Task (which opts out of structured concurrency, and loses all the benefits that entails) unless you absolutely have to. E.g., I generally try to limit Task to those cases where I am going from a non-asynchronous context to an asynchronous one. Where possible, I try to stay within structured concurrency.
As The Swift Programming Language: Concurrency says:
Tasks are arranged in a hierarchy. Each task in a task group has the same parent task, and each task can have child tasks. Because of the explicit relationship between tasks and task groups, this approach is called structured concurrency. Although you take on some of the responsibility for correctness, the explicit parent-child relationships between tasks lets Swift handle some behaviors like propagating cancellation for you, and lets Swift detect some errors at compile time.
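As a small illustration of that cancellation propagation (a sketch; slowFetch is a hypothetical async function), cancelling the parent task cancels the child tasks that its group created:

func slowFetch(_ id: Int) async throws -> Int {
    try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC)  // throws CancellationError if cancelled
    return id
}

let parent = Task {
    try await withThrowingTaskGroup(of: Int.self) { group in
        for id in 0 ..< 3 {
            group.addTask { try await slowFetch(id) }
        }
        return try await group.reduce(into: [Int]()) { $0.append($1) }
    }
}
parent.cancel()  // the group's child tasks inherit the cancellation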
And I would avoid creating functions (func_1, func_2, and func_3) that fetch a value and throw it away. You would presumably return the values.
If func_1 and func_2 return different types, you could use async let. E.g., if you're not running func_3 until the first two are done, perhaps it uses those values as inputs:
func runAll() async throws {
    async let foo = try await func_1()
    async let bar = try await func_2()
    let baz = try await func_3(foo: foo, bar: bar)
}

func func_1() async throws -> Foo {
    let foo = try await api.get1State(v!)
    print("cState func_1: \(foo)")
    return foo
}

func func_2() async throws -> Bar {
    let bar = try await api.get2State(v!)
    print("cState func_2: \(bar)")
    return bar
}

func func_3(foo: Foo, bar: Bar) async throws -> Baz {
    let baz = try await api.get3State(foo, bar)
    print("cState func_3: \(baz)")
    return baz
}
Representing that visually with the “Points of Interest” tool in Instruments, func_1 and func_2 run concurrently, and func_3 starts only after both have finished.
The other pattern, if func_1 and func_2 return the same type, is to use a task group:
func runAll() async throws {
    let results = try await withThrowingTaskGroup(of: Foo.self) { group in
        group.addTask { try await func_1() }
        group.addTask { try await func_2() }
        return try await group.reduce(into: [Foo]()) { $0.append($1) } // note, this will be in the order that they complete; we often use a dictionary instead
    }
    let baz = try await func_3(results)
}

func func_1() async throws -> Foo { ... }
func func_2() async throws -> Foo { ... }
func func_3(_ values: [Foo]) async throws -> Baz { ... }
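And because the comment above mentions a dictionary, here is roughly how that might look (a sketch that keys each child task's result by a hypothetical name, so completion order no longer matters):

func runAll() async throws {
    let results = try await withThrowingTaskGroup(of: (String, Foo).self) { group in
        group.addTask {
            let foo = try await func_1()
            return ("first", foo)
        }
        group.addTask {
            let foo = try await func_2()
            return ("second", foo)
        }
        return try await group.reduce(into: [String: Foo]()) { $0[$1.0] = $1.1 }
    }
    // results["first"] and results["second"] are now independent of completion order
    _ = try await func_3(Array(results.values))
}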
There are lots of permutations of the pattern, so do not get lost in the details here. The basic idea is that (a) you want to stay within structured concurrency; and (b) use async let or TaskGroup for those tasks you want to run in parallel.
I hate to mention it, but for the sake of completeness, you can use Task and unstructured concurrency. From the same document I referenced above:
Unstructured Concurrency
In addition to the structured approaches to concurrency described in the previous sections, Swift also supports unstructured concurrency. Unlike tasks that are part of a task group, an unstructured task doesn’t have a parent task. You have complete flexibility to manage unstructured tasks in whatever way your program needs, but you’re also completely responsible for their correctness.
I would avoid this because you need to handle/capture the errors manually and it is somewhat brittle, but you can return the Task objects and await their respective results:
func func_1() -> Task<(), Error> {
    Task { @MainActor [v] in
        let state = try await api.get1State(v!)
        print("cState func_1: \(state)")
    }
}

func func_2() -> Task<(), Error> {
    Task { @MainActor [v] in
        let state = try await api.get2State(v!)
        print("cState func_2: \(state)")
    }
}

func func_3() -> Task<(), Error> {
    Task { @MainActor [v] in
        let state = try await api.get3State(v!)
        print("cState func_3: \(state)")
    }
}

func runAll() async throws {
    let task1 = func_1()
    let task2 = func_2()
    let _ = await task1.result
    let _ = await task2.result
    let _ = await func_3().result
}
Note, I did not just await func_1().result directly, because you want the first two tasks to run concurrently. So launch those two tasks, save the Task objects, and then await their respective result before launching the third task.
But, again, your future self will probably thank you if you remain within the realm of structured concurrency.

Swift's "async let" error thrown depends on the order the tasks are made

I'm trying to understand async let error handling and it's not making a lot of sense in my head. It seems that if I have two parallel requests, the first one throwing an exception doesn't cancel the other request. In fact it just depends on the order in which they are made.
My testing setup:
import Foundation

struct Person {}
struct Animal {}

enum ApiError: Error { case person, animal }

class Requester {
    init() {}

    func getPeople(waitingFor waitTime: UInt64, throwError: Bool) async throws -> [Person] {
        try await waitFor(waitTime)
        if throwError { throw ApiError.person }
        return []
    }

    func getAnimals(waitingFor waitTime: UInt64, throwError: Bool) async throws -> [Animal] {
        try await waitFor(waitTime)
        if throwError { throw ApiError.animal }
        return []
    }

    func waitFor(_ seconds: UInt64) async throws {
        do {
            try await Task.sleep(nanoseconds: NSEC_PER_SEC * seconds)
        } catch {
            print("Error waiting", error)
            throw error
        }
    }
}
The exercise.
import UIKit

class ViewController: UIViewController {
    let requester = Requester()

    override func viewDidLoad() {
        super.viewDidLoad()

        Task {
            async let animals = self.requester.getAnimals(waitingFor: 1, throwError: true)
            async let people = self.requester.getPeople(waitingFor: 2, throwError: true)

            let start = Date()
            do {
                // let (_, _) = try await (people, animals)
                let (_, _) = try await (animals, people)
                print("No error")
            } catch {
                print("error: ", error)
            }
            print(Date().timeIntervalSince(start))
        }
    }
}
For simplicity, from now on I'll just paste the relevant lines of code and output.
Scenario 1:
async let animals = self.requester.getAnimals(waitingFor: 1, throwError: true)
async let people = self.requester.getPeople(waitingFor: 2, throwError: true)
let (_, _) = try await (animals, people)
Results in:
error: animal
1.103397011756897
Error waiting CancellationError()
This one works as expected. The slower request takes 2 seconds, but it was cancelled after 1 second (when the faster one threw).
Scenario 2:
async let animals = self.requester.getAnimals(waitingFor: 2, throwError: true)
async let people = self.requester.getPeople(waitingFor: 1, throwError: true)
let (_, _) = try await (animals, people)
Results in:
error: animal
2.2001450061798096
Now this one is not what I expected. The people request takes 1 second to throw an error, yet we still wait 2 seconds and the error is animal.
My expectation was that this would take 1 second and produce the people error.
Scenario 3:
async let animals = self.requester.getAnimals(waitingFor: 2, throwError: true)
async let people = self.requester.getPeople(waitingFor: 1, throwError: true)
let (_, _) = try await (people, animals)
Results in:
error: person
1.0017549991607666
Error waiting CancellationError()
Now this is expected. The only difference from Scenario 2 is that I changed the await to try await (people, animals).
It doesn't matter which method throws first: we always get the error of the first value in the awaited tuple, and the time spent depends on that order too.
Is this behaviour expected/normal? Am I missing something, or am I testing this wrong?
I'm surprised this isn't something people are talking about more. I only found one other question like this, on the developer forums.
Please help. :)
From https://github.com/apple/swift-evolution/blob/main/proposals/0317-async-let.md
async let (l, r) = {
    return await (left(), right())
    // ->
    // return (await left(), await right())
}
meaning that the entire initializer of the async let is a single task, and if multiple asynchronous function calls are made inside it, they are performed one by one.
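In other words, with two hypothetical non-throwing async functions left() and right() (the same names the proposal uses), the difference looks like this sketch:

// One child task: left() and right() run inside it, one after the other.
async let both = (left(), right())
_ = await both

// Two child tasks: left() and right() run concurrently.
async let l = left()
async let r = right()
_ = await (l, r)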
Here is a more structured approach with behavior that makes sense.
import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello, world!")
            .padding()
            .task {
                let requester = Requester()
                let start = Date()
                await withThrowingTaskGroup(of: Void.self) { group in
                    // both requests are child tasks of the group, so cancelAll() can cancel them
                    group.addTask {
                        _ = try await requester.getAnimals(waitingFor: 1, throwError: true)
                    }
                    group.addTask {
                        _ = try await requester.getPeople(waitingFor: 2, throwError: true)
                    }
                    do {
                        for try await _ in group { }
                        group.cancelAll()
                    } catch ApiError.animal {
                        group.cancelAll()
                        print("animal threw")
                    } catch ApiError.person {
                        group.cancelAll()
                        print("person threw")
                    } catch {
                        print("someone else")
                    }
                }
                print(Date().timeIntervalSince(start))
            }
    }
}
The idea is to add each task to a throwing group and then loop through each task.
Cora hit the nail on the head (+1). Awaiting a tuple of async let values will just await them in order. Instead, consider a task group.
But you do not need to cancel the other items in the group. See “Task Group Cancellation” discussion in the withThrowingTaskGroup(of:returning:body:) documentation:
Throwing an error in one of the tasks of a task group doesn’t immediately cancel the other tasks in that group. However, if you call
next() in the task group and propagate its error, all other tasks are
canceled. For example, in the code below, nothing is canceled and the
group doesn’t throw an error:
withThrowingTaskGroup { group in
    group.addTask { throw SomeError() }
}
In contrast, this example throws SomeError and cancels all of the tasks in the group:
withThrowingTaskGroup { group in
    group.addTask { throw SomeError() }
    try group.next()
}
An individual task throws its error in the corresponding call to Group.next(), which gives you a chance to handle the individual error or to let the group rethrow the error.
Or you can waitForAll, which will cancel the other tasks:
let start = Date()
do {
    try await withThrowingTaskGroup(of: Void.self) { group in
        group.addTask { let _ = try await self.requester.getAnimals(waitingFor: 1, throwError: true) }
        group.addTask { let _ = try await self.requester.getPeople(waitingFor: 2, throwError: true) }
        try await group.waitForAll()
    }
} catch {
    print("error: ", error)
}
print(Date().timeIntervalSince(start))
Bottom line, task groups do not dictate the order in which the tasks are awaited. (They do not dictate the order in which they complete, either, so you often have to collate task group results into an order-independent structure or re-order the results.)
You asked how you would go about collecting the results. There are a few options:
You can define group tasks such that they do not “return” anything (i.e., a child task type of Void.self), but instead update an actor (Creatures, below) in the addTask calls, and then extract your tuple from that:
class ViewModel1 {
    let requester = Requester()

    func fetch() async throws -> ([Animal], [Person]) {
        let results = Creatures()

        try await withThrowingTaskGroup(of: Void.self) { group in
            group.addTask { try await results.update(with: self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)) }
            group.addTask { try await results.update(with: self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)) }
            try await group.waitForAll()
        }

        return await (results.animals, results.people)
    }
}

private extension ViewModel1 {
    /// Creatures
    ///
    /// A private actor used for gathering results
    actor Creatures {
        var animals: [Animal] = []
        var people: [Person] = []

        func update(with animals: [Animal]) {
            self.animals = animals
        }

        func update(with people: [Person]) {
            self.people = people
        }
    }
}
You can define group tasks that return an enumeration case with an associated value, and then extract the results when done:
class ViewModel2 {
    let requester = Requester()

    func fetch() async throws -> ([Animal], [Person]) {
        try await withThrowingTaskGroup(of: Creatures.self) { group in
            group.addTask { try await .animals(self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)) }
            group.addTask { try await .people(self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)) }

            return try await group.reduce(into: ([], [])) { previousResult, creatures in
                switch creatures {
                case .animals(let values): previousResult.0 = values
                case .people(let values): previousResult.1 = values
                }
            }
        }
    }
}

private extension ViewModel2 {
    /// Creatures
    ///
    /// A private enumeration with associated values for the different types of results
    enum Creatures {
        case animals([Animal])
        case people([Person])
    }
}
For the sake of completeness, you don't have to use a task group if you don't want to. E.g., you can manually cancel the other task when one of them fails:
class ViewModel3 {
    let requester = Requester()

    func fetch() async throws -> ([Animal], [Person]) {
        let animalsTask = Task {
            try await self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)
        }

        let peopleTask = Task {
            do {
                return try await self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)
            } catch {
                animalsTask.cancel()
                throw error
            }
        }

        return try await (animalsTask.value, peopleTask.value)
    }
}
This is not a terribly scalable pattern, which is why task groups might be a more attractive option, as they handle the cancelation of pending tasks for you (assuming you iterate through the group as you build the results).
FWIW, there are other task group alternatives, too, but there is not enough in your question to get too specific in this regard. For example, I can imagine some protocol-as-type implementations if all of the tasks returned an array of objects that conformed to a Creature protocol.
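For instance, a rough sketch of that protocol-as-type idea (the Creature protocol, its conformances, and ViewModel4 are all hypothetical):

protocol Creature {}
extension Animal: Creature {}
extension Person: Creature {}

class ViewModel4 {
    let requester = Requester()

    func fetch() async throws -> [Creature] {
        try await withThrowingTaskGroup(of: [Creature].self) { group in
            group.addTask {
                let animals = try await self.requester.getAnimals(waitingFor: animalsDuration, throwError: shouldThrowError)
                return animals as [Creature]
            }
            group.addTask {
                let people = try await self.requester.getPeople(waitingFor: peopleDuration, throwError: shouldThrowError)
                return people as [Creature]
            }
            return try await group.reduce(into: [Creature]()) { $0.append(contentsOf: $1) }
        }
    }
}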
But hopefully the above illustrate a few patterns for using task groups to enjoy the cancelation capabilities while still collating the results.

How to make an async Swift function "@synchronized"?

I'd like to create an async function which itself uses async calls. I also want to ensure that only one call is actively processed at any moment. So I want an async @synchronized function.
How to do that? Wrapping the function's body inside dispatchQueue.sync {} does not work, as it expects synchronous code. It also seems that DispatchQueue in general is not designed to execute async code blocks/tasks.
This code communicates with hardware, so it is async in nature; that's why I want an async interface for my library. (I don't want to block the app while the stages of communication happen.) But certain operations can't be executed in parallel on the hardware, so I have to synchronise so that those operations won't happen at the same time.
You can have every Task await the prior one. And you can use an actor to make sure that you are only running one at a time. The trick is, because of actor reentrancy, you have to put that "await prior Task" logic in a synchronous method.
E.g., you can do:
actor Experiment {
    private var previousTask: Task<Void, Error>?

    func startSomethingAsynchronous() {
        previousTask = Task { [previousTask] in
            let _ = await previousTask?.result
            try await self.doSomethingAsynchronous()
        }
    }

    private func doSomethingAsynchronous() async throws {
        let id = OSSignpostID(log: log)
        os_signpost(.begin, log: log, name: "Task", signpostID: id, "Start")
        try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC)
        os_signpost(.end, log: log, name: "Task", signpostID: id, "End")
    }
}
Now I am using os_signpost so I can watch this serial behavior from Xcode Instruments. Anyway, you could start three tasks like so:
import os.log
import AppKit

private let log = OSLog(subsystem: "Experiment", category: .pointsOfInterest)

class ViewController: NSViewController {
    let experiment = Experiment()

    func startExperiment() {
        for _ in 0 ..< 3 {
            Task { await experiment.startSomethingAsynchronous() }
        }
        os_signpost(.event, log: log, name: "Done starting tasks")
    }

    ...
}
And Instruments can visually demonstrate the sequential behavior: the ⓢ signpost event marks the point where all of the tasks had been submitted, and the timeline shows the tasks themselves executing one after another.
I actually like to abstract this serial behavior into its own type:
actor SerialTasks<Success> {
    private var previousTask: Task<Success, Error>?

    func add(block: @Sendable @escaping () async throws -> Success) {
        previousTask = Task { [previousTask] in
            let _ = await previousTask?.result
            return try await block()
        }
    }
}
And then the asynchronous function for which you need this serial behavior would use the above, e.g.:
class Experiment {
    let serialTasks = SerialTasks<Void>()

    func startSomethingAsynchronous() async {
        await serialTasks.add {
            try await self.doSomethingAsynchronous()
        }
    }

    private func doSomethingAsynchronous() async throws {
        let id = OSSignpostID(log: log)
        os_signpost(.begin, log: log, name: "Task", signpostID: id, "Start")
        try await Task.sleep(nanoseconds: 2 * NSEC_PER_SEC)
        os_signpost(.end, log: log, name: "Task", signpostID: id, "End")
    }
}
The accepted answer does indeed make the tasks run serially, but it does not wait for a task to finish and does not propagate errors. I use the following alternative to support catching errors.
actor SerialTaskQueue<T> {
    private let dispatchQueue: DispatchQueue

    init(label: String) {
        dispatchQueue = DispatchQueue(label: label)
    }

    @discardableResult
    func add(block: @Sendable @escaping () async throws -> T) async throws -> T {
        try await withCheckedThrowingContinuation { continuation in
            dispatchQueue.sync {
                let semaphore = DispatchSemaphore(value: 0)
                Task {
                    defer {
                        semaphore.signal()
                    }
                    do {
                        let result = try await block()
                        continuation.resume(returning: result)
                    } catch {
                        continuation.resume(throwing: error)
                    }
                }
                semaphore.wait()
            }
        }
    }
}
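For example, a usage sketch (readRegister is a hypothetical hardware call, assumed here to be declared as func readRegister(_ address: Int) async throws -> Data):

let queue = SerialTaskQueue<Data>(label: "hardware.serial")

func readSerially() async {
    do {
        let data = try await queue.add {
            try await readRegister(0x2A)  // hypothetical hardware call
        }
        print("read", data)
    } catch {
        print("read failed:", error)
    }
}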

Concurrently run async tasks with unnamed async let

With Swift concurrency, is it possible to have something almost like an 'unnamed' async let?
Here is an example. You have the following actor:
actor MyActor {
    private var foo: Int = 0
    private var bar: Int = 0

    func setFoo(to value: Int) async {
        foo = value
    }

    func setBar(to value: Int) async {
        bar = value
    }

    func printResult() {
        print("foo =", foo)
        print("bar =", bar)
    }
}
Now I want to set foo and bar using the given methods. Simple usage would look like the following:
let myActor = MyActor()
await myActor.setFoo(to: 5)
await myActor.setBar(to: 10)
await myActor.printResult()
However, this code runs sequentially. For all intents and purposes, assume setFoo(to:) and setBar(to:) may be long-running tasks. You can also assume the methods are mutually exclusive (they don't share variables and won't affect each other).
To make this code concurrent, async let can be used. However, this just starts the tasks; they aren't awaited until later on. In my example you'll notice I don't need the return values from these methods. All I need is that, before printResult() is called, the previous tasks have completed.
I could come up with the following:
let myActor = MyActor()
async let tempFoo: Void = myActor.setFoo(to: 5)
async let tempBar: Void = myActor.setBar(to: 10)
let _ = await [tempFoo, tempBar]
await myActor.printResult()
Explicitly creating these tasks and then awaiting an array of them seems incorrect. Is this really the best way?
This can be achieved with a task group, using withTaskGroup(of:returning:body:). The method calls become individual child tasks, and then we await waitForAll(), which continues once all tasks have completed.
Code:
await withTaskGroup(of: Void.self) { group in
    let myActor = MyActor()

    group.addTask {
        await myActor.setFoo(to: 5)
    }
    group.addTask {
        await myActor.setBar(to: 10)
    }

    await group.waitForAll()
    await myActor.printResult()
}
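As a side note, the explicit waitForAll() could also be dropped, because a task group implicitly awaits any remaining child tasks when its closure body ends; you would then call printResult() after the group returns. A sketch of that variant:

let myActor = MyActor()
await withTaskGroup(of: Void.self) { group in
    group.addTask { await myActor.setFoo(to: 5) }
    group.addTask { await myActor.setBar(to: 10) }
    // remaining child tasks are implicitly awaited when the body returns
}
await myActor.printResult()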
I made your actor a class to allow concurrent execution of the two methods.
import Foundation

final class Jeep {
    private var foo: Int = 0
    private var bar: Int = 0

    func setFoo(to value: Int) {
        print("begin foo")
        foo = value
        sleep(1)
        print("end foo \(value)")
    }

    func setBar(to value: Int) {
        print("begin bar")
        bar = value
        sleep(2)
        print("end bar \(bar)")
    }

    func printResult() {
        print("printResult foo:\(foo), bar:\(bar)")
    }
}

let jeep = Jeep()
let blocks = [
    { jeep.setFoo(to: 1) },
    { jeep.setBar(to: 2) },
]

// ...WORK

RunLoop.current.run(mode: RunLoop.Mode.default, before: NSDate(timeIntervalSinceNow: 5) as Date)
Replace WORK with one of these:
// no concurrency, ordered execution
for block in blocks {
    block()
}
jeep.printResult()

// concurrency, unordered execution, tasks created upfront programmatically
Task {
    async let foo: Void = blocks[0]()
    async let bar: Void = blocks[1]()
    await [foo, bar]
    jeep.printResult()
}

// concurrency, unordered execution, tasks created upfront, but started by the system (I think)
Task {
    await withTaskGroup(of: Void.self) { group in
        for block in blocks {
            group.addTask { block() }
        }
    }
    // when the initialization closure exits, all child tasks are awaited implicitly
    jeep.printResult()
}

// concurrency, unordered execution, awaited in order
Task {
    let tasks = blocks.map { block in
        Task { block() }
    }
    for task in tasks {
        await task.value
    }
    jeep.printResult()
}

// tasks created upfront, all tasks start concurrently, produce result as soon as they finish
let stream = AsyncStream<Void> { continuation in
    Task {
        let tasks = blocks.map { block in
            Task { block() }
        }
        for task in tasks {
            continuation.yield(await task.value)
        }
        continuation.finish()
    }
}

Task {
    // now waiting for all values, bad use of a stream, I know
    for await value in stream {
        print(value as Any)
    }
    jeep.printResult()
}

What is the correct way to await the completion of two Tasks in Swift 5.5 in a function that does not support concurrency?

I have an app that does some processing given a string; this is done in two Tasks. During this time I'm displaying an animation. When these Tasks complete I need to hide the animation. The code below works, but it is not very nice to look at. I believe there is a better way to do this?
let firTask = Task {
    /* Slow-running code */
}
let airportTask = Task {
    /* Even more slow-running code */
}

Task {
    _ = await firTask.result
    _ = await airportTask.result
    self.isVerifyingRoute = false
}
Isn't the real problem that this is a misuse of Task? A Task, as you've discovered, is not really of itself a thing you can await. If the goal is to run slow code in the background, use an actor. Then you can cleanly call an actor method with await.
let myActor = MyActor()
await myActor.doFirStuff()
await myActor.doAirportStuff()
self.isVerifyingRoute = false
However, we also need to make sure we're on the main thread when we talk to self — something that your code omits to do. Here's an example:
actor MyActor {
    func doFirStuff() async {
        print("starting", #function)
        await Task.sleep(2 * 1_000_000_000)
        print("finished", #function)
    }

    func doAirportStuff() async {
        print("starting", #function)
        await Task.sleep(2 * 1_000_000_000)
        print("finished", #function)
    }
}
func test() {
    let myActor = MyActor()
    Task {
        await myActor.doFirStuff()
        await myActor.doAirportStuff()
        Task { @MainActor in
            self.isVerifyingRoute = false
        }
    }
}
Everything happens in the right mode: the time-consuming stuff happens on background threads, and the call to self happens on the main thread. A cleaner-looking way to take care of the main thread call, in my opinion, would be to have a @MainActor method:
func test() {
    let myActor = MyActor()
    Task {
        await myActor.doFirStuff()
        await myActor.doAirportStuff()
        self.finish()
    }
}

@MainActor func finish() {
    self.isVerifyingRoute = false
}
I regard that as elegant and clear.
I would make the tasks discardable with an extension. Perhaps something like this:
extension Task {
    @discardableResult
    func finish() async -> Result<Success, Failure> {
        await self.result
    }
}
Then you could change your loading task to:
Task {
    defer { self.isVerifyingRoute = false }
    await firTask.finish()
    await airportTask.finish()
}
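For completeness, if the slow work can itself be expressed as async functions rather than wrapped in Task, an async let sketch in the spirit of the earlier answers avoids the extension entirely (runFirWork() and runAirportWork() are hypothetical stand-ins for the two pieces of slow code):

func verifyRoute() async {
    async let fir: Void = runFirWork()
    async let airport: Void = runAirportWork()
    _ = await (fir, airport)
    await MainActor.run { self.isVerifyingRoute = false }
}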