Dispatch group and threading in Swift (async/await) - swift

So I was practicing threading and I have a requirement.
I want to wait for the first API call to finish and then execute the second one (just like async/await); if the first one hasn't finished, I want to stop the code there until it does.
So I wrote this. Tell me if it is correct or if there is a better way to do it:
func myFunction() {
    var a: Int?
    let group = DispatchGroup()

    group.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 4) {
        a = 1
        print("First")
        group.leave()
    }
    group.wait()

    group.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 2) {
        a = 3
        print("Second")
        group.leave()
    }
    // wait ...
    group.wait()

    group.notify(queue: .main) {
        print(a)
    }
}

Swift 5.5 and Xcode 13.0 bring async/await, so you can do something like this:
import Foundation

print("Hello, World!")

// Returns a 1 after a 3-second delay
func giveMeAOne() async -> Int {
    Thread.sleep(forTimeInterval: 3.0)
    return 1
}

// Returns a 5 after a 3-second delay
func giveMeAFive() async -> Int {
    Thread.sleep(forTimeInterval: 3.0)
    return 5
}

func myFunction() async {
    var a: Int = 0
    print(a)
    a += await giveMeAOne()
    print(a)
    a += await giveMeAFive()
    print(a)
}

Task {
    await myFunction()
}

sleep(7) // added at the end so that the process doesn't exit right away
Note that the above was built as a macOS command-line tool in Xcode, rather than with Swift Playgrounds. As of Xcode 13.0 beta 1, Swift Playgrounds fail on a lot of async/await stuff, unfortunately.
That code above runs them sequentially, and cleans up myFunction quite a bit. Note that this doesn't introduce any concurrency, however. Both giveMeAOne() and giveMeAFive() are run sequentially.
If you wanted to run 4 function calls concurrently, you could do something like this:
/// Returns a random integer from 1 through 10, after a 6-second delay
func randomSmallInt() async -> Int {
    Thread.sleep(forTimeInterval: 6)
    return Int.random(in: 1 ... 10)
}

/// Fetches 4 random integers, summing them
func myFunction() async {
    // print timestamp
    print(Date())

    async let a = randomSmallInt()
    async let b = randomSmallInt()
    async let c = randomSmallInt()
    async let d = randomSmallInt()

    // This awaits for a, b, c, and d to all complete.
    // You can only await a value once.
    let numbersToAdd = await [a, b, c, d]

    var sum = 0
    for number in numbersToAdd {
        print("adding \(number)")
        sum += number
    }
    print("Sum = \(sum)")

    // print another timestamp, so you can see that the async let statements ran
    // concurrently, rather than sequentially
    print(Date())
}

Task {
    await myFunction()
}

// make sure we have time to finish before the process exits
Thread.sleep(forTimeInterval: 10)
When you run this, you can see from the timestamps that the total execution time only took about 6 seconds -- because all of the async let ... statements were executed concurrently.
This works great if you have a fixed number of things you need to call asynchronously. But what if you wanted to make 100 asynchronous function calls, perhaps loading images or something?
You may think you could do something like this, which will work, but will run everything sequentially -- probably not what you want:
// Don't do this -- these all run sequentially
var sum = 0
for _ in 0 ..< 100 {
    async let value = randomSmallInt()
    let number = await value
    print("Adding \(number)")
    sum += number
}
If you want to do an arbitrary or unknown number of asynchronous things, the best way is to use a task group. This code is a bit more verbose than it needs to be, in order to make things clearer. Note: the Xcode 13.0 beta struggles with print statement output, missing much of it. Anyway, here it is:
/// Returns a random integer from 1 through 10, after a 6-second delay
func randomSmallInt() async -> Int {
    Thread.sleep(forTimeInterval: 6)
    return Int.random(in: 1 ... 10)
}

func myFunction_arbitrary_repetitions_asynchronously() async {
    // print timestamp
    print(Date())

    // This is pretty cool --
    // `withTaskGroup` creates a context for executing a group of async tasks.
    // In this case, `Int.self` specifies that each individual task will return
    // an Int. `[Int].self` specifies that the entire group will return an array of Ints:
    let numbers = await withTaskGroup(of: Int.self, returning: [Int].self) { group in
        // Repeat 100 times
        for _ in 0 ..< 100 {
            // Run the next thing asynchronously
            group.addTask {
                return await randomSmallInt()
            }
        }

        // Iterate through results
        var result: [Int] = []
        for await individualResult in group {
            result.append(individualResult)
        }
        return result
    }
var sum = 0
for number in numbers {
print("Adding \(number)")
sum += number
}
print("Sum = \(sum)")
// print another timestamp, so you can see that the async let statements ran
// concurrently, rather than sequentially
print(Date())
}
Task { await myFunction_arbitrary_repetitions_asynchronously() }

Thread.sleep(forTimeInterval: 10)
You could shorten this up considerably:
func myFunction_arbitrary_repetitions_asynchronously_short() async {
    // print timestamp
    print(Date())

    // In this case, the task group just returns the sum
    let sum = await withTaskGroup(of: Int.self, returning: Int.self) { group in
        // Repeat 100 times
        for _ in 0 ..< 100 {
            // Run the next thing asynchronously
            group.addTask {
                return await randomSmallInt()
            }
        }

        // Iterate through results
        var sum = 0
        for await individualResult in group {
            print("Adding \(individualResult)")
            sum += individualResult
        }
        return sum
    }
    print("Sum = \(sum)")

    // print another timestamp, so you can see that the tasks ran
    // concurrently, rather than sequentially
    print(Date())
}
Task { await myFunction_arbitrary_repetitions_asynchronously_short() }
Important: Xcode 13.0 beta 1 gets quite unstable when dealing with this code. Specifically, the debugger likes to detach when the process isn't finished yet. For example, the output of many of the print statements does not appear, and using breakpoints is tough with async code. Building a command-line tool and then running the resulting executable (found in ~/Library/Developer/Xcode/DerivedData//Build/Products/Debug) will at least show all the output of your print statements.

Don't wait.
Run the second API call in the completion handler of the first
func myFunction() {
    var a = 0
    DispatchQueue.global().asyncAfter(deadline: .now() + 4) {
        a = 1
        print("First")
        DispatchQueue.global().asyncAfter(deadline: .now() + 2) {
            a = 3
            print("Second")
            DispatchQueue.main.async {
                print(a)
            }
        }
    }
}

Just like async/await
The lack of a built-in language-level async/await mechanism is a serious issue for Swift, and will probably be remedied in a future version. In the meantime, Swift has basically no built-in language-level mechanism for threading; you have to deal with things manually, by talking either to Cocoa/Objective-C (Grand Central Dispatch, NSOperation) or to some other framework that deals with asynchronicity (Combine).
The particular problem you've focused on is what I call serializing asynchronicity. There are several ways to do this.
Grand Central Dispatch
The simple way is, as you've been told, to perform the second async call at the end of the completion handler of the first async call. This is obviously not very general for lining up numerous tasks.
You can certainly use DispatchGroup in the way you have outlined. Your statement of the pattern is almost correct. You do not need the last wait, and, more important, you must get onto a background queue right at the start, because blocking with wait on the main queue is illegal; your code, as it stands, effectively blocks the main queue while you wait, which is something you must not do.
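For illustration, here is a minimal sketch of the corrected pattern described above, reusing the names from the question; treat it as an outline rather than a definitive fix:
func myFunction() {
    // Hop onto a background queue first, so that wait() never blocks the main queue
    DispatchQueue.global().async {
        var a: Int?
        let group = DispatchGroup()

        group.enter()
        DispatchQueue.global().asyncAfter(deadline: .now() + 4) {
            a = 1
            print("First")
            group.leave()
        }
        group.wait() // safe here: we are not on the main queue

        group.enter()
        DispatchQueue.global().asyncAfter(deadline: .now() + 2) {
            a = 3
            print("Second")
            group.leave()
        }

        // No trailing wait() needed; notify fires once the group is empty
        group.notify(queue: .main) {
            print(a as Any)
        }
    }
}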
Operation Queue
You can make a serial OperationQueue and load Operation objects onto it. This is a much more general mechanism. You can make Operation objects depend upon one another. Before the arrival of the Combine framework, this was the best general solution.
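As a rough, assumed illustration of that idea (not from the original answer), a serial queue plus a dependency looks like this; note that plain BlockOperations finish as soon as their block returns, so truly asynchronous work needs an asynchronous Operation subclass:
import Foundation

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1 // serial: one operation at a time

let first = BlockOperation {
    print("First")   // stands in for the first API call's work
}
let second = BlockOperation {
    print("Second")  // stands in for the second API call's work
}
second.addDependency(first) // explicit dependency; redundant on a serial queue, but shows the mechanism

queue.addOperations([first, second], waitUntilFinished: false)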
Combine Framework
If you can confine yourself to iOS 13 and later, the Combine framework is the neatest way to serialize asynchronicity in a general way. For further discussion, see Combine framework serialize async operations.
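For example, here is a minimal Combine sketch (an assumed illustration, with delays standing in for the two API calls): chaining with flatMap guarantees the second call starts only after the first has produced its value.
import Combine
import Foundation

// Stand-ins for the two API calls; the delays simulate network latency
func firstCall() -> AnyPublisher<Int, Never> {
    Just(1)
        .delay(for: .seconds(4), scheduler: DispatchQueue.global())
        .eraseToAnyPublisher()
}

func secondCall(after value: Int) -> AnyPublisher<Int, Never> {
    Just(value + 2)
        .delay(for: .seconds(2), scheduler: DispatchQueue.global())
        .eraseToAnyPublisher()
}

var cancellables = Set<AnyCancellable>()

firstCall()
    .flatMap { secondCall(after: $0) } // second starts only after first emits
    .receive(on: DispatchQueue.main)
    .sink { print($0) }                // prints 3 after roughly 6 seconds
    .store(in: &cancellables)          // keep the subscription (and the process) alive long enough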

Related

Semaphore with async in Swift [duplicate]

This question already has an answer here:
How to constrain concurrency (like `maxConcurrentOperationCount`) with Swift Concurrency?
I want to download, say, 10 files from the internet, but only process a maximum of 3 at a time. It seems a semaphore is a common solution. My internet call is an async function. I haven't found any code examples that combine these two.
Problem to solve:
async function to download 10 files
max 3 files processing at one time
wait until all files are processed before returning from the function
Problems I'm having:
semaphore.wait() does not work in an async function
I need to wait until all 10 files have been processed before leaving the function
Goofy solution?
I'm calling Task.detached on each loop iteration to create a new thread. It seems like a better solution should exist.
Code sample:
func downloadAllFiles() async -> (String) {
    // Download 10 files, but max 3 at a time
    var urls = MArray<String>() ; for _ in 1 ... 10 { urls.add("https://google.com") }
    let semaphore = DispatchSemaphore(value: 3)

    for url in urls {
        semaphore.wait() // wait if 3 already in progress ; doesn't work in async context
        Task.detached(priority: .high) { // other priority options will generate a "lower QoS thread" warning
            let web = await MWeb.call(url: url)
            semaphore.signal() // allow next download to start
        }
    }

    await Task.WhenAll(tasks) // *** C# code: need some way to wait until all tasks are done before returning
    return ("")
}
If I run my code in a non-async world, it seems to work, but I'd like to call such a function within an async process.
Can someone help with guidance?
Thank you all for the help! Martin's link was really helpful.
Yes, this is basically a duplicate question, it seems.
The code below will start off 3; then, as one finishes, the next starts right away, so 3 are always going at once (until the end).
func downloadAllFiles() async -> (String) {
    var urls = MArray<String>() ; for _ in 1 ... 10 { urls.add("https://google.com") }

    await withTaskGroup(of: Void.self) { group in
        for i in 0 ..< urls.count {
            let url = urls[i]
            if i >= 3 { await group.next() } // max 3 at a time
            group.addTask {
                let web = await MWeb.call(url: url)
            }
        }
        await group.waitForAll()
    }
    return "result"
}

Testing parallel execution with structured concurrency

I’m testing code that uses an actor, and I’d like to test that I’m properly handling concurrent access and reentrancy. One of my usual approaches would be to use DispatchQueue.concurrentPerform to fire off a bunch of requests from different threads, and ensure my values resolve as expected. However, since the actor uses structured concurrency, I’m not sure how to actually wait for the tasks to complete.
What I’d like to do is something like:
let iterationCount = 100
let allTasksComplete = expectation(description: "allTasksComplete")
allTasksComplete.expectedFulfillmentCount = iterationCount

DispatchQueue.concurrentPerform(iterations: iterationCount) { _ in
    Task {
        // Do some async work here, and assert
        allTasksComplete.fulfill()
    }
}

wait(for: [allTasksComplete], timeout: 1.0)
However the timeout for the allTasksComplete expectation expires every time, regardless of whether the iteration count is 1 or 100, and regardless of the length of the timeout. I’m assuming this has something to do with the fact that mixing structured and DispatchQueue-style concurrency is a no-no?
How can I properly test concurrent access — specifically how can I guarantee that the actor is accessed from different threads, and wait for the test to complete until all expectations are fulfilled?
A few observations:
When testing Swift concurrency, we no longer need to rely upon expectations. We can just mark our tests as async methods. See Asynchronous Tests and Expectations. Here is an async test adapted from that example:
func testDownloadWebDataWithConcurrency() async throws {
    let url = try XCTUnwrap(URL(string: "https://apple.com"), "Expected valid URL.")
    let (_, response) = try await URLSession.shared.data(from: url)
    let httpResponse = try XCTUnwrap(response as? HTTPURLResponse, "Expected an HTTPURLResponse.")
    XCTAssertEqual(httpResponse.statusCode, 200, "Expected a 200 OK response.")
}
FWIW, while we can now use async tests when testing Swift concurrency, we still can use expectations:
func testWithExpectation() {
    let iterations = 100
    let experiment = ExperimentActor()
    let e = self.expectation(description: #function)
    e.expectedFulfillmentCount = iterations

    for i in 0 ..< iterations {
        Task.detached {
            let result = await experiment.reentrantCalculation(i)
            let success = await experiment.isAcceptable(result)
            XCTAssert(success, "Incorrect value")
            e.fulfill()
        }
    }

    wait(for: [e], timeout: 10)
}
You said:
However the timeout for the allTasksComplete expectation expires every time, regardless of whether the iteration count is 1 or 100, and regardless of the length of the timeout.
We cannot comment without seeing a reproducible example of the code replaced with the comment “Do some async work here, and assert”. We do not need to see your actual implementation; rather, construct the simplest possible example that manifests the behavior you describe. See How to create a Minimal, Reproducible Example.
I personally suspect that you have some other, unrelated deadlock. E.g., given that concurrentPerform blocks the thread from which you call it, maybe you are doing something that requires the blocked thread. Also, be careful with Task { ... }, which runs the task on the current actor, so if you are doing something slow and synchronous inside there, that could cause problems. We might use detached tasks, instead.
In short, we cannot diagnose the issue without a Minimal, Reproducible Example.
As a more general observation, one should be wary about mixing GCD (or semaphores or long-lived locks or whatever) with Swift concurrency, because the latter uses a cooperative thread pool, which relies upon assumptions about its threads being able to make forward progress. But if you have GCD APIs blocking threads, those assumptions may no longer be valid. It may not be the source of the problem here, but I mention it as a cautionary note.
As an aside, concurrentPerform (which constrains the degree of parallelism) only makes sense if the work being executed runs synchronously. Using concurrentPerform to launch a series of asynchronous tasks will not constrain the concurrency at all. (The cooperative thread pool may, but concurrentPerform will not.)
So, for example, if we wanted to test a bunch of calculations in parallel, rather than concurrentPerform, we might use a TaskGroup:
func testWithStructuredConcurrency() async {
    let iterations = 100
    let experiment = ExperimentActor()

    await withTaskGroup(of: Void.self) { group in
        for i in 0 ..< iterations {
            group.addTask {
                let result = await experiment.reentrantCalculation(i)
                let success = await experiment.isAcceptable(result)
                XCTAssert(success, "Incorrect value")
            }
        }
    }

    let count = await experiment.count
    XCTAssertEqual(count, iterations)
}
Now if you wanted to verify concurrent execution within an app, normally I would just profile the app (not unit tests) with Instruments, and either watch intervals in the “Points of Interest” tool or look at the new “Swift Tasks” tool described in WWDC 2022’s Visualize and optimize Swift concurrency video. E.g., here I have launched forty tasks and I can see that my device runs six at a time.
See Alternative to DTSendSignalFlag to identify key events in Instruments? for references about the “Points of Interest” tool.
If you really wanted to write a unit test to confirm concurrency, you could theoretically keep track of your own counters, e.g.,
final class MyAppTests: XCTestCase {
    func testWithStructuredConcurrency() async {
        let iterations = 100
        let experiment = ExperimentActor()

        await withTaskGroup(of: Void.self) { group in
            for i in 0 ..< iterations {
                group.addTask {
                    let result = await experiment.reentrantCalculation(i)
                    let success = await experiment.isAcceptable(result)
                    XCTAssert(success, "Incorrect value")
                }
            }
        }

        let count = await experiment.count
        XCTAssertEqual(count, iterations, "Correct count")

        let degreeOfConcurrency = await experiment.maxDegreeOfConcurrency
        XCTAssertGreaterThan(degreeOfConcurrency, 1, "No concurrency")
    }
}
Where:
import Foundation
import os

// Assumed logger definition; the original answer uses `logger` without showing its declaration
let logger = Logger(subsystem: "MyAppTests", category: "ExperimentActor")

actor ExperimentActor {
    var degreeOfConcurrency = 0
    var maxDegreeOfConcurrency = 0
    var count = 0

    /// Calculate pi with Leibniz series
    ///
    /// Note: I am awaiting a detached task so that I can manifest actor reentrancy.
    func reentrantCalculation(_ index: Int, decimalPlaces: Int = 8) async -> Double {
        let task = Task.detached {
            logger.log("starting \(index)") // I wouldn’t generally log in a unit test, but it’s a quick visual confirmation that I’m enjoying parallel execution
            await self.increaseConcurrencyCount()

            let threshold = pow(0.1, Double(decimalPlaces))
            var isPositive = true
            var denominator: Double = 1
            var result: Double = 0
            var increment: Double
            repeat {
                increment = 4 / denominator
                if isPositive {
                    result += increment
                } else {
                    result -= increment
                }
                isPositive.toggle()
                denominator += 2
            } while increment >= threshold

            logger.log("finished \(index)")
            await self.decreaseConcurrencyCount()
            return result
        }
        count += 1
        return await task.value
    }

    func increaseConcurrencyCount() {
        degreeOfConcurrency += 1
        if degreeOfConcurrency > maxDegreeOfConcurrency { maxDegreeOfConcurrency = degreeOfConcurrency }
    }

    func decreaseConcurrencyCount() {
        degreeOfConcurrency -= 1
    }

    func isAcceptable(_ result: Double) -> Bool {
        return abs(.pi - result) < 0.0001
    }
}
Please note that if testing/running on simulator, the cooperative thread pool is somewhat constrained, not exhibiting the same degree of concurrency as you will see on an actual device.
Also note that if you are testing whether a particular test is exhibiting parallel execution, you might want to disable the parallel execution of tests, themselves, so that other tests do not tie up your cores, preventing any given particular test from enjoying parallel execution.

Asynchronous DispatchSemaphore.wait(timeout:) Task handle alternative

I'm currently looking for a Task based alternative for my code using DispatchSemaphore:
func checkForEvents() -> Bool {
    let eventSemaphore = DispatchSemaphore(value: 0)

    eventService.onEvent = { _ in
        eventSemaphore.signal()
    }

    let result = eventSemaphore.wait(timeout: .now() + 10)
    guard result == .timedOut else {
        return true
    }
    return false
}
I'm trying to detect whether any callback events happen in a 10-second time window. The event could happen multiple times, but I only want to know whether it occurs at least once. While the code works fine, I had to convert the enclosing function to async, which now shows this warning:
Instance method 'wait' is unavailable from asynchronous contexts; Await a Task handle instead; this is an error in Swift 6
I'm pretty sure this warning is kind of wrong here since there's no suspension point in this context, but I would still like to know what the suggested approach in Swift 6 is.
I couldn't find anything about this "await a Task handle" so the closest I've got is this:
func checkForEvents() async -> Bool {
    var didEventOccur = false

    eventService.onEvent = { _ in
        didEventOccur = true
    }

    try? await Task.sleep(nanoseconds: 10 * 1_000_000_000)
    return didEventOccur
}
But I can't figure out a way to stop sleeping early if an event occurs...
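One possible approach (a sketch only, reusing the poster's assumed eventService/onEvent API): bridge the callback into an AsyncStream and race the first event against a 10-second sleep in a task group, cancelling whichever branch loses.
func checkForEvents() async -> Bool {
    // Bridge the callback into an async sequence of events
    let events = AsyncStream<Void> { continuation in
        eventService.onEvent = { _ in
            continuation.yield()
        }
    }

    return await withTaskGroup(of: Bool.self) { group in
        group.addTask {
            // Finishes as soon as the first event arrives (or when cancelled)
            for await _ in events {
                return true
            }
            return false
        }
        group.addTask {
            // Timeout branch: give up after 10 seconds (returns early if cancelled)
            try? await Task.sleep(nanoseconds: 10 * 1_000_000_000)
            return false
        }

        // Whichever child finishes first decides the result; cancel the loser
        let didEventOccur = await group.next() ?? false
        group.cancelAll()
        return didEventOccur
    }
}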

How to suspend subsequent tasks until first finishes then share its response with tasks that waited?

I have an actor which throttles requests in a way where the first one will suspend subsequent requests until finished, then share its response with them so they don't have to make the same request.
Here's what I'm trying to do:
let cache = Cache()
let operation = OperationStatus()

func execute() async {
    if await operation.isExecuting {
        await operation.waitUntilFinished()
    } else {
        await operation.set(isExecuting: true)
    }

    if let data = await cache.data {
        return data
    }

    let request = myRequest()
    let response = await myService.send(request)
    await cache.set(data: response)
    await operation.set(isExecuting: false)
}
actor Cache {
    var data: myResponse?

    func set(data: myResponse?) {
        self.data = data
    }
}

actor OperationStatus {
    @Published var isExecuting = false

    private var cancellable = Set<AnyCancellable>()

    func set(isExecuting: Bool) {
        self.isExecuting = isExecuting
    }

    func waitUntilFinished() async {
        guard isExecuting else { return }

        return await withCheckedContinuation { continuation in
            $isExecuting
                .first { !$0 } // Wait until execution toggled off
                .sink { _ in continuation.resume() }
                .store(in: &cancellable)
        }
    }
}
// Do something
DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in execute() }
This ensures one request at a time, and subsequent calls wait until it is finished. It seems this works, but I'm wondering whether there's a pure Swift concurrency way instead of mixing Combine in, and how I can test this. Here's a test I started, but I'm confused about how to test this:
final class OperationStatusTests: XCTestCase {
    private let iterations = 10_000 // 1_000_000
    private let outerIterations = 10

    actor Storage {
        var counter: Int = 0

        func increment() {
            counter += 1
        }
    }

    func testConcurrency() {
        // Given
        let storage = Storage()
        let operation = OperationStatus()
        let promise = expectation(description: "testConcurrency")
        promise.expectedFulfillmentCount = outerIterations * iterations

        @Sendable func execute() async {
            guard await !operation.isExecuting else {
                await operation.waitUntilFinished()
                promise.fulfill()
                return
            }

            await operation.set(isExecuting: true)
            try? await Task.sleep(seconds: 8)
            await storage.increment()
            await operation.set(isExecuting: false)
            promise.fulfill()
        }

        waitForExpectations(timeout: 10)

        // When
        DispatchQueue.concurrentPerform(iterations: outerIterations) { _ in
            (0..<iterations).forEach { _ in
                Task { await execute() }
            }
        }

        // Then
        // XCTAssertEqual... how to test?
    }
}
Before I tackle a more general example, let us first dispense with some natural examples of sequential execution of asynchronous tasks, passing the result of one as a parameter of the next. Consider:
func entireProcess() async throws {
    let value = try await first()
    let value2 = try await subsequent(with: value)
    let value3 = try await subsequent(with: value2)
    let value4 = try await subsequent(with: value3)

    // do something with `value4`
}
Or
func entireProcess() async throws {
    var value = try await first()

    for _ in 0 ..< 4 {
        value = try await subsequent(with: value)
    }

    // do something with `value`
}
This is the easiest way to declare a series of async functions, each of which takes the prior result as the input for the next iteration. So, let us expand the above to include some signposts for Instruments’ “Points of Interest” tool:
import os.log

private let log = OSLog(subsystem: "Test", category: .pointsOfInterest)

func entireProcess() async throws {
    let id = OSSignpostID(log: log)
    os_signpost(.begin, log: log, name: #function, signpostID: id, "start")

    var value = try await first()

    for i in 0 ..< 4 {
        os_signpost(.event, log: log, name: #function, "Scheduling: %d with input of %d", i, value)
        value = try await subsequent(with: value)
    }

    os_signpost(.end, log: log, name: #function, signpostID: id, "%d", value)
}

func first() async throws -> Int {
    let id = OSSignpostID(log: log)
    os_signpost(.begin, log: log, name: #function, signpostID: id, "start")

    try await Task.sleep(seconds: 1)
    let value = 42

    os_signpost(.end, log: log, name: #function, signpostID: id, "%d", value)
    return value
}

func subsequent(with value: Int) async throws -> Int {
    let id = OSSignpostID(log: log)
    os_signpost(.begin, log: log, name: #function, signpostID: id, "%d", value)

    try await Task.sleep(seconds: 1)
    let newValue = value + 1

    defer { os_signpost(.end, log: log, name: #function, signpostID: id, "%d", newValue) }
    return newValue
}
So, there you see a series of requests that pass their result to the subsequent request. All of that os_signpost stuff is there so we can visually see that they are running sequentially in Instruments’ “Points of Interest” tool:
You can see ⓢ event signposts as each task is scheduled, and the intervals illustrate the sequential execution of these asynchronous tasks.
This is the easiest way to have dependencies between tasks, passing values from one task to another.
Now, that begs the question of how to generalize the above, where we await the prior task before starting the next one.
One pattern is to write an actor that awaits the result of the prior one. Consider:
actor SerialTasks<Success> {
    private var previousTask: Task<Success, Error>?

    func add(block: @Sendable @escaping () async throws -> Success) {
        previousTask = Task { [previousTask] in
            let _ = await previousTask?.result
            return try await block()
        }
    }
}
Unlike the previous example, this does not require that you have a single function from which you initiate the subsequent tasks. E.g., I have used the above when some separate user interaction requires me to add a new task to the end of the list of previously submitted tasks.
There are two subtle, yet critical, aspects of the above actor:
The add method, itself, must not be an asynchronous function. We need to avoid actor reentrancy. If this were an async function (like in your example), we would lose the sequential execution of the tasks.
The Task has a [previousTask] capture list to capture a copy of the prior task. This way, each task will await the prior one, avoiding any races.
The above can be used to make a series of tasks run sequentially. But it is not passing values between tasks, itself. I confess that I have used this pattern where I simply need sequential execution of largely independent tasks (e.g., sending separate commands to some Process). But it can probably be adapted for your scenario, in which you want to “share its response with [subsequent requests]”.
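For reference, a minimal usage sketch (names assumed, not from the original answer): each submitted block runs only after the previously submitted one has finished.
let serializer = SerialTasks<Void>()

// Hypothetical call site: e.g., each user interaction appends another command
func userTappedSend(_ command: String) {
    Task {
        await serializer.add {
            try await send(command) // `send` is an assumed async function
        }
    }
}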
I would suggest that you post a separate question with an MCVE, with a practical example of precisely what you wanted to pass from one asynchronous function to another. I have, for example, done permutations of the above, passing an integer from one task to another. But in practice, that is not of great utility, as it gets more complicated when you start dealing with the reality of heterogeneous results parsing. In practice, the simple example with which I started this answer is the most practical pattern.
On the broader question of working with/around actor reentrancy, I would advise keeping an eye on SE-0306 - Future Directions, which explicitly contemplates some potential elegant forthcoming alternatives. I would not be surprised to see some refinements, either in the language itself, or in the Swift Async Algorithms library.
tl;dr
I did not want to encumber the above with discussion regarding your code snippets, but there are quite a few issues. So, if you forgive me, here are some observations:
The attempt to use OperationStatus to enforce sequential execution of async calls will not work because actors feature reentrancy. If you have an async function, every time you hit an await, that is a suspension point at which point another call to that async function is allowed to proceed. The integrity of your OperationStatus logic will be violated. You will not experience serial behavior.
If you are interested in suspension points, I might recommend watching the WWDC 2021 video Swift concurrency: Behind the scenes.
The testConcurrency is calling waitForExpectations before it actually starts any tasks that will fulfill any expectations. That will always timeout.
The testConcurrency is using GCD concurrentPerform, which, in turn, just schedules an asynchronous task and immediately returns. That defeats the entire purpose of concurrentPerform (which is a throttling mechanism for running a series of synchronous tasks in parallel, but not exceed the maximum number of cores on your CPU). Besides, Swift concurrency features its own analog to concurrentPerform, namely the constrained “cooperative thread pool” (also discussed in that video, IIRC), rendering concurrentPerform obsolete in the world of Swift concurrency.
Bottom line, it doesn't make sense to include concurrentPerform in a Swift concurrency codebase. It also does not make sense to use concurrentPerform to launch asynchronous tasks (whether Swift concurrency or GCD). It is for launching a series of synchronous tasks in parallel.
In execute in your test, you have two paths of execution, one of which awaits some state change and fulfills the expectation without ever incrementing the storage. That means that you will lose some attempts to increment the value. Your total will not match the desired resulting value. Now, if your intent was to drop requests if another was pending, that's fine. But I don't think that was your intent.
In answer to your question about how to test success at the end, you might do something like:
actor Storage {
    private var counter: Int = 0

    func increment() {
        counter += 1
    }

    var value: Int { counter }
}

func testConcurrency() async {
    let storage = Storage()
    let operation = OperationStatus()
    let promise = expectation(description: "testConcurrency")
    let finalCount = outerIterations * iterations
    promise.expectedFulfillmentCount = finalCount

    @Sendable func execute() async {
        guard await !operation.isExecuting else {
            await operation.waitUntilFinished()
            promise.fulfill()
            return
        }

        await operation.set(isExecuting: true)
        try? await Task.sleep(seconds: 1)
        await storage.increment()
        await operation.set(isExecuting: false)
        promise.fulfill()
    }

    // waitForExpectations(timeout: 10) // this is not where you want to wait; moved below, after the tasks have started

    // DispatchQueue.concurrentPerform(iterations: outerIterations) { _ in // no point in this
    for _ in 0 ..< outerIterations {
        for _ in 0 ..< iterations {
            Task { await execute() }
        }
    }

    await waitForExpectations(timeout: 10)

    // test the success to see if the stored value was correct
    let value = await storage.value // to test that you got the right count, fetch the value; note `await`, thus we need to make this an `async` test

    // Then
    XCTAssertEqual(finalCount, value, "Count")
}
Now, this test will fail for a variety of reasons, but hopefully this illustrates how you would verify the success or failure of the test. But note that this tests only that the final result was correct, not that the requests were executed sequentially. The fact that Storage is an actor will hide the fact that they were not really invoked sequentially. I.e., if you really needed the results of one request to prepare the next, that is not tested here.
If, as you go through this, you want to really confirm the behavior of your OperationStatus pattern, I would suggest using os_signpost intervals (or simple logging statements where your tasks start and finish). You will see that the separate invocations of the asynchronous execute method are not running sequentially.

How to ensure two asynchronous tasks that 'start' together are completed before running another?

I am setting up an app that utilizes PromiseKit as a way to order asynchronous tasks. I currently have a setup which ensures two async functions (referred to as promises) are done in order (let's call them 1 and 2) and that another set of functions (3 and 4) are done in order. Roughly:
import PromiseKit

override func viewDidAppear(_ animated: Bool) {
    firstly {
        self.promiseOne() // promise #1 happening first (in relation to #1 and #2)
    }.then { _ -> Promise<[String]> in
        self.promiseTwo() // promise #2 starting after #1 has completed
    }.catch { error in
        print(error)
    }

    firstly {
        self.promiseThree() // promise #3 happening first (in relation to #3 and #4)
    }.then { _ -> Promise<[String]> in
        self.promiseFour() // promise #4 starting after #3 has completed
    }.catch { error in
        print(error)
    }
}
Each firstly ensures the order of the functions within it by making sure the first one is completed before the second one can initiate. Using two separate firstlys ensures that 1 is done before 2, 3 is done before 4, and (importantly) 1 and 3 start roughly around the same time (at the onset of viewDidAppear()). This is done on purpose, because 1 and 3 are not related to each other and can start at the same time without any issues (same goes for 2 and 4). The issue is that there is a fifth promise, let's call it promiseFive, that must only be run after 2 and 4 have been completed. I could just link one firstly that ensures the order is 1, 2, 3, 4, 5, but since the order of 1/2 and 3/4 is not relevant, linking them in this fashion would waste time.
I am not sure how to set this up so that promiseFive is only run upon completion of both 2 and 4. I have thought of having boolean-checked function calls at the end of both 2 and 4, making sure the other firstly has finished before calling promiseFive, but since they begin asynchronously (1/2 and 3/4), it is possible that promiseFive would be called by both at the exact same time with this approach, which would obviously create issues. What is the best way to go about this?
You can use when or join to start something after multiple other promises have completed. The difference is in how they handle failed promises. It sounds like you want join. Here is a concrete, though simple, example.
This first block of code is an example of how to create 2 promise chains and then wait for both of them to complete before starting the next task. The actual work being done is abstracted away into some functions. Focus on this block of code as it contains all the conceptual information you need.
Snippet
let chain1 = firstly(execute: { () -> (Promise<String>, Promise<String>) in
    let secondPieceOfInformation = "otherInfo" // This static data is for demonstration only

    // Pass 2 promises; now the next `then` block will be called when both are fulfilled.
    // Promises initialized with values are already fulfilled, so the effect is identical
    // to just returning the single promise; you can do a tuple of up to 5 promises/values
    return (fetchUserData(), Promise(value: secondPieceOfInformation))
}).then { (result: String, secondResult: String) -> Promise<String> in
    self.fetchUpdatedUserImage()
}

let chain2 = firstly {
    fetchNewsFeed() // This promise returns an array
}.then { (result: [String : Any]) -> Promise<String> in
    for (key, value) in result {
        print("\(key) \(value)")
    }

    // now `result` is a collection
    return self.fetchFeedItemHeroImages()
}

join(chain1, chain2).always {
    // You can use `always` if you don't care about the earlier values
    let methodFinish = Date()
    let executionTime = methodFinish.timeIntervalSince(self.methodStart)
    print(String(format: "All promises finished %.2f seconds later", executionTime))
}
PromiseKit uses closures to provide its API. Closures have a scope just like an if statement. If you define a value within an if statement's scope, then you won't be able to access it outside of that scope.
You have several options for passing multiple pieces of data to the next then block.
Use a variable that shares a scope with all of the promises (you'll likely want to avoid this, as it works against you in managing the flow of asynchronous data propagation)
Use a custom data type to hold both (or more) values. This can be a tuple, struct, class, or enum.
Use a collection (such as a dictionary); see the example in chain2
Return a tuple of promises; see the example in chain1
You'll need to use your best judgement when choosing your method.
Complete Code
import UIKit
import PromiseKit

class ViewController: UIViewController {

    let methodStart = Date()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        <<Insert The Other Code Snippet Here To Complete The Code>>

        // I'll also mention that `join` is being deprecated in PromiseKit.
        // It provides `when(resolved:)`, which acts just like `join`, and
        // `when(fulfilled:)`, which fails as soon as any of the promises fail
        when(resolved: chain1, chain2).then { (results) -> Promise<String> in
            for case .fulfilled(let value) in results {
                // These promises succeeded, and the values will be what is returned from
                // the last promises in chain1 and chain2
                print("Promise value is: \(value)")
            }
            for case .rejected(let error) in results {
                // These promises failed
                print("Promise value is: \(error)")
            }
            return Promise(value: "finished")
        }.catch { error in
            // With the caveat that `when` never rejects
        }
    }

    func fetchUserData() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            // These dispatch queue delays are stand-ins for your long-running asynchronous tasks.
            // They might be network calls, or batch file processing, etc.
            // So, they're just here to provide a concise, illustrative, working example
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise1 %.2f seconds later", executionTime))
                fulfill("promise1")
            }
        }
        return promise
    }

    func fetchUpdatedUserImage() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise2 %.2f seconds later", executionTime))
                fulfill("promise2")
            }
        }
        return promise
    }

    func fetchNewsFeed() -> Promise<[String : Any]> {
        let promise = Promise<[String : Any]> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 1.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise3 %.2f seconds later", executionTime))
                fulfill(["key1" : Date(),
                         "array" : ["my", "array"]])
            }
        }
        return promise
    }

    func fetchFeedItemHeroImages() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise4 %.2f seconds later", executionTime))
                fulfill("promise4")
            }
        }
        return promise
    }
}
Output
promise3 1.05 seconds later
array ["my", "array"]
key1 2017-07-18 13:52:06 +0000
promise1 2.04 seconds later
promise4 3.22 seconds later
promise2 4.04 seconds later
All promises finished 4.04 seconds later
Promise value is: promise2
Promise value is: promise4
The details depend a little upon what the types of these various promises are, but you can basically return the promise of 1 followed by 2 as one promise, and the promise of 3 followed by 4 as another, and then use when to run those two sequences of promises concurrently with respect to each other, while still enjoying the consecutive behavior within each sequence. For example:
let firstTwo = promiseOne().then { something1 in
    self.promiseTwo(something1)
}

let secondTwo = promiseThree().then { something2 in
    self.promiseFour(something2)
}

when(fulfilled: [firstTwo, secondTwo]).then { results in
    os_log("all done: %@", results)
}.catch { error in
    os_log("some error: %@", error.localizedDescription)
}
This might be a situation in which your attempt to keep the question fairly generic might make it harder to see how to apply this answer in your case. So, if you are stumbling, you might want to be more specific about what these four promises are doing and what they're passing to each other (because this passing of results from one to another is one of the elegant features of promises).