I am running ThreadSanitizer on my project and I am getting some very inconsistent results. My setup is as follows:
A wrapper around a URLSession which reports back tasks and their status. I get the tasks via:
actor MyMonitor {
    func getTasks() async -> [String: URLSessionTask] {
        await mySession.allTasks.reduce(into: [String: URLSessionTask]()) { newMap, element in
            if let taskDescription = element.taskDescription {
                newMap[taskDescription] = element
            }
        }
    }
}
For my unit tests, I have a URLProtocol to mock an actual working request. However, when I call the getTasks function, I get a data race (albeit an inconsistent one).
The TSan tool states the race is in the snippet above:
Threading Issues: Data race in (2) await resume partial function for
MyProject.MyMonitor.getTasks() async -> Swift.Dictionary<Swift.String,
__C.NSURLSessionTask> at 0x7b6000021400
Is this a false positive, because I do not have control over URLSession's threading? Any help would be appreciated.
I've looked through just about every question on this topic I could find, but with little success. I need to run a function on an array of actors conforming to a specific actor protocol. Because these are actors, the calls have to be async. But I also need to run these functions in a specific order. I won't describe how I determine the order; suffice it to say that I have it. I am also using the following asyncForEach function, though I'm open to not doing this.
extension Sequence {
    func asyncForEach(
        _ operation: @escaping (Element) async -> Void
    ) async {
        // A task group automatically waits for all of its
        // sub-tasks to complete, while also performing those
        // tasks in parallel:
        await withTaskGroup(of: Void.self) { group in
            for element in self {
                group.addTask {
                    await operation(element)
                }
            }
        }
    }
}
Now I have a protocol:
protocol ProtocolConformingActors: Actor {
    func bar() async throws
}
This leads me to running my function:
func foo() async throws {
    let actorsAndOrders: [Int: ProtocolConformingActors] = [1: actor1, 2: actor2, 3: actor3]
    // Get the order
    let orders = actorsAndOrders.keys.sorted()
    // Run the function
    await orders.asyncForEach { order in
        let actor = actorsAndOrders[order]
        // The closure is non-throwing, so errors are discarded here.
        try? await actor?.bar()
    }
}
And this is where the problem occurs. As I mentioned above, these calls need to be async because bar() modifies isolated properties on each actor. To make that happen I use asyncForEach, but as I understand it, the task group sets up and runs each bar() call in parallel, whereas they need to run in order.
Is there a way I can make each thread wait until a condition is met?
I was thinking I might be able to wait on the condition orders[0] == order and, when bar() finishes, remove the first entry from the orders array to signal the next waiting task that it can wake up.
The Apple documentation indicates that NSCondition has a wait(until:) method, but I can't seem to make it work.
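For comparison, here is the fully sequential version I'm weighing against (a minimal sketch, reusing the hypothetical actor1/actor2/actor3 from above); it keeps the order, but gives up the task group entirely:
func fooSequential() async throws {
    let actorsAndOrders: [Int: ProtocolConformingActors] = [1: actor1, 2: actor2, 3: actor3]
    // Awaiting each call before starting the next one preserves the order,
    // at the cost of any parallelism.
    for order in actorsAndOrders.keys.sorted() {
        try await actorsAndOrders[order]?.bar()
    }
}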
With the PromiseKit library, it’s possible to create a promise and a resolver function together and store them on an instance of a class:
class ExampleClass {
    // Promise and resolver for the top news headline, as obtained from
    // some web service.
    private let (headlinePromise, headlineSeal) = Promise<String>.pending()
}
Like any promise, we can chain off of headlinePromise to do some work once the value is available:
headlinePromise.get { headline in
    updateUI(headline: headline)
}
// Some other stuff here
Since the promise has not been resolved yet, the contents of the get closure will be enqueued somewhere and control will immediately move to the “some other stuff here” section; updateUI will not be called unless and until the promise is resolved.
To resolve the promise, an instance method can call headlineSeal:
makeNetworkRequest("https://news.example/headline").get { headline in
    headlineSeal.fulfill(headline)
}
The promise is now resolved, and any promise chains that had been waiting for headlinePromise will continue. For the rest of the life of this ExampleClass instance, any promise chain starting like
headlinePromise.get { headline in
    // ...
}
will immediately begin executing. (“Immediately” might mean “right now, synchronously,” or it might mean “on the next run of the event loop”; the distinction isn’t important for me here.) Since promises can only be resolved once, any future calls to headlineSeal.fulfill(_:) or headlineSeal.reject(_:) will be no-ops.
Question
How can this pattern be translated idiomatically into Swift concurrency (“async/await”)? It’s not important that there be an object called a “promise” and a function called a “resolver”; what I’m looking for is a setup that has the following properties:
It’s possible for some code to declare a dependency on some bit of asynchronously-available state, and yield until that state is available.
It’s possible for that state to be “fulfilled” from potentially any instance method.
Once the state is available, any future chains of code that depend on that state are able to run right away.
Once the state is available, its value is immutable; the state cannot become unavailable again, nor can its value be changed.
I think that some of these can be accomplished by storing an instance variable
private let headlineTask: Task<String, Error>
and then waiting for the value with
let headline = try await headlineTask.value
but I’m not sure how that Task should be initialized or how it should be “fulfilled.”
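For instance, one way I imagine the Task could be set up is by bridging through an AsyncStream that an instance method can feed later (a rough sketch, assuming Swift 5.9's AsyncStream.makeStream; I don't know whether this is the idiomatic approach):
final class ExampleClass {
    private let headlineTask: Task<String, Never>
    private let headlineContinuation: AsyncStream<String>.Continuation

    init() {
        let (stream, continuation) = AsyncStream<String>.makeStream()
        headlineContinuation = continuation
        // The task suspends until the stream yields its first element.
        headlineTask = Task { await stream.first { _ in true } ?? "" }
    }

    // Can be called from any instance method; later calls are ignored
    // because the task only consumes the first element.
    func fulfillHeadline(_ headline: String) {
        headlineContinuation.yield(headline)
        headlineContinuation.finish()
    }

    func updateUILater() async {
        // Task.value can be awaited from many places and returns
        // the cached result once the task has finished.
        let headline = await headlineTask.value
        print(headline)
    }
}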
Here is a way to reproduce a Promise which can be awaited by multiple consumers and fulfilled by any synchronous code:
public final class Promise<Success: Sendable>: Sendable {
    typealias Waiter = CheckedContinuation<Success, Never>

    struct State {
        var waiters = [Waiter]()
        var result: Success? = nil
    }

    private let state = ManagedCriticalState(State())

    public init(_ elementType: Success.Type = Success.self) { }

    @discardableResult
    public func fulfill(with value: Success) -> Bool {
        return state.withCriticalRegion { state in
            if state.result == nil {
                state.result = value
                for waiter in state.waiters {
                    waiter.resume(returning: value)
                }
                state.waiters.removeAll()
                return false
            }
            return true
        }
    }

    public var value: Success {
        get async {
            await withCheckedContinuation { continuation in
                state.withCriticalRegion { state in
                    if let result = state.result {
                        continuation.resume(returning: result)
                    } else {
                        state.waiters.append(continuation)
                    }
                }
            }
        }
    }
}

extension Promise where Success == Void {
    func fulfill() -> Bool {
        return fulfill(with: ())
    }
}
The ManagedCriticalState type can be found in this file from the SwiftAsyncAlgorithms package.
I think the implementation is safe and correct, but if someone finds an error I'll update the answer. For reference, I was inspired by AsyncChannel and this blog post.
You can use it like this:
@main
enum App {
    static func main() async throws {
        let promise = Promise(String.self)

        // Delayed fulfilling.
        let fulfiller = Task.detached {
            print("Starting to wait...")
            try await Task.sleep(nanoseconds: 2_000_000_000)
            print("Promise fulfilled")
            promise.fulfill(with: "Done!")
        }

        let consumer = Task.detached {
            await (print("Promise resolved to '\(promise.value)'"))
        }

        // Launch concurrent consumer and producer
        // and wait for them to complete.
        try await fulfiller.value
        await consumer.value

        // A promise can be fulfilled only once and
        // subsequent calls to `.value` immediately return
        // with the previously resolved value.
        promise.fulfill(with: "Ooops")
        await (print("Promise still resolved to '\(promise.value)'"))
    }
}
Short explanation
In Swift Concurrency, the high-level Task type resembles a Future/Promise (it can be awaited and suspends execution until resolved), but its resolution cannot be controlled from the outside: one must compose built-in lower-level asynchronous functions such as URLSession.data() or Task.sleep().
However, Swift Concurrency provides the (Checked|Unsafe)Continuation types, which basically act as a Promise resolver. They are low-level building blocks whose purpose is to migrate regular asynchronous code (callback-based, for instance) to the Swift Concurrency world.
In the above code, continuations are created by the consumers (via the .value property) and stored in the Promise. Later, when the result is available, the stored continuations are resumed (with .resume(returning:)), which resumes the execution of the consumers. The result is also cached, so that if it is already available when .value is called it is returned directly to the caller.
When a Promise is fulfilled multiple times, the current behavior is to ignore the subsequent calls and return a Boolean value indicating whether the Promise was already fulfilled. Other APIs could be used (a trap, throwing an error, etc.).
The internal mutable state of the Promise must be protected from concurrent accesses, since multiple concurrency domains could try to read and write it at the same time. This is achieved with regular locking (I believe this could also have been achieved with an actor).
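For illustration, here is a rough sketch of what that actor-based variant might look like (untested; it assumes a Swift version where the withCheckedContinuation body runs with the caller's actor isolation, so a fulfill cannot interleave between the cached-result check and the registration of the waiter, and fulfill(with:) becomes an async call):
actor ActorPromise<Success: Sendable> {
    private var waiters: [CheckedContinuation<Success, Never>] = []
    private var result: Success?

    @discardableResult
    func fulfill(with value: Success) -> Bool {
        guard result == nil else { return true }  // already fulfilled
        result = value
        for waiter in waiters {
            waiter.resume(returning: value)
        }
        waiters.removeAll()
        return false
    }

    var value: Success {
        get async {
            if let result { return result }
            return await withCheckedContinuation { continuation in
                waiters.append(continuation)
            }
        }
    }
}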
I need to support cancellation of a function that returns an object that can be cancelled after initiation. In my case, the requester class is in a 3rd party library that I can't modify.
actor MyActor {
    ...
    func doSomething() async throws -> ResultData {
        var requestHandle: Handle?
        return try await withTaskCancellationHandler {
            requestHandle?.cancel() // COMPILE ERROR: "Reference to captured var 'requestHandle' in concurrently-executing code"
        } operation: {
            return try await withCheckedThrowingContinuation { continuation in
                requestHandle = requester.start() { result, error in
                    if let error = error {
                        continuation.resume(throwing: error)
                    } else {
                        let myResultData = ResultData(result)
                        continuation.resume(returning: myResultData)
                    }
                }
            }
        }
    }
    ...
}
I have reviewed other SO questions and this thread: https://forums.swift.org/t/how-to-use-withtaskcancellationhandler-properly/54341/4
There are cases that are very similar, but not quite the same. This code won't compile because of this error:
"Reference to captured var 'requestHandle' in concurrently-executing code"
I assume the compiler is trying to protect me from using the requestHandle before it's initialized. But I'm not sure how else to work around this problem. The other examples shown in the Swift Forum discussion thread all seem to have a pattern where the requester object can be initialized before calling its start function.
I also tried to save the requestHandle as a class variable, but I got a different compile error at the same location:
Actor-isolated property 'profileHandle' can not be referenced from a
Sendable closure
You said:
I assume the compiler is trying to protect me from using the requestHandle before it’s initialized.
Or, more accurately, it is simply protecting you against a race. You need to synchronize your interaction with your “requester” and that Handle.
But I’m not sure how else to work around this problem. The other examples shown in the Swift Forum discussion thread all seem to have a pattern where the requester object can be initialized before calling its start function.
Yes, that is precisely what you should do. Unfortunately, you haven’t shared where your requester is being initialized or how it was implemented, so it is hard for us to comment on your particular situation.
But the fundamental issue is that you need to synchronize your start and cancel. So if your requester doesn’t already do that, you should wrap it in an object that provides that thread-safe interaction. The standard way to do that in Swift concurrency is with an actor.
For example, let us imagine that you are wrapping a network request. To synchronize your access with this, you can create an actor:
actor ResponseDataRequest {
    private var handle: Handle?

    func start(completion: @Sendable @escaping (Data?, Error?) -> Void) {
        // start it and save handle for cancelation, e.g.,
        handle = requestor.start(...)
    }

    func cancel() {
        handle?.cancel()
    }
}
That wraps the starting and canceling of a network request in an actor. Then you can do things like:
func doSomething() async throws -> ResultData {
    let responseDataRequest = ResponseDataRequest()
    return try await withTaskCancellationHandler {
        Task { await responseDataRequest.cancel() }
    } operation: {
        return try await withCheckedThrowingContinuation { continuation in
            Task {
                await responseDataRequest.start { result, error in
                    if let error = error {
                        continuation.resume(throwing: error)
                    } else {
                        let resultData = ResultData(result)
                        continuation.resume(returning: resultData)
                    }
                }
            }
        }
    }
}
You obviously can shift to unsafe continuations when you have verified that everything is working with your checked continuations.
After reviewing the Swift discussion thread again, I see you can do this:
...
var requestHandle: Handle?
let onCancel = { requestHandle?.cancel() }
return try await withTaskCancellationHandler {
    onCancel()
}
...
The problem is how to wait for an async HealthKit query to return a result BEFORE allowing execution to move on. The returned data is critical for further execution.
I know this has been asked/solved many times and I have read many of the posts, however I have tried completion handlers, Dispatch sync and Dispatch Groups and have not been able to come up with an implementation that works.
Using completion handler
per Wait for completion handler to finish - Swift
This calls a method to run a HealthKit Query:
func readHK() {
    var block: Bool = false
    hk.findLastBloodGlucoseInHealthKit(completion: { (result) -> Void in
        block = true
        if !(result) {
            print("Problem with HK data")
        }
        else {
            print("Got HK data OK")
        }
    })
    while !(block) {
    }
    // now move on to the next thing ...
}
This does work. Using the "block" variable to hold execution pending the callback is conceptually not that different from a blocking semaphore, but it's really ugly and asking for trouble if the completion never returns for whatever reason. Is there a better way?
Using Dispatch Groups
If I put Dispatch Group at the calling function level:
Calling function:
func readHK() {
    var block: Bool = false
    dispatchGroup.enter()
    hk.findLastBloodGlucoseInHealthKit(dg: dispatchGroup)
    print("Back from readHK")
    dispatchGroup.notify(queue: .main) {
        print("Function complete")
        block = true
    }
    while !(block) {
    }
}
Receiving function:
func findLastBloodGlucoseInHealthKit(dg: DispatchGroup) {
    print("Read last HK glucose")
    let sortDescriptor = NSSortDescriptor(key: HKSampleSortIdentifierEndDate, ascending: false)
    let query = HKSampleQuery(sampleType: glucoseQuantity!, predicate: nil, limit: 10, sortDescriptors: [sortDescriptor]) { (query, results, error) in
        // .... other stuff
        dg.leave()
The completion executes OK, but the .notify block is never called, so the block variable is never updated; the program hangs and never exits the while loop.
Put Dispatch Group in target function but leave .notify at calling level:
func readHK() {
    var done: Bool = false
    hk.findLastBloodGlucoseInHealthKit()
    print("Back from readHK")
    hk.dispatchGroup.notify(queue: .main) {
        print("done function")
        done = true
    }
    while !(done) {
    }
}
Same issue.
Using Dispatch
Documentation and other S.O posts say: “If you want to wait for the block to complete use the sync() method instead.”
But what does “complete” mean? It seems it does not mean completing the function AND receiving the later async completion. For example, the code below does not hold execution until the completion returns:
func readHK() {
    DispatchQueue.global(qos: .background).sync {
        hk.findLastBloodGlucoseInHealthKit()
    }
    print("Back from readHK")
}
Thank you for any help.
Yes, please don't fight the async nature of things. You will almost always lose, either by making an inefficient app (timers and other delays) or by creating opportunities for hard-to-diagnose bugs by implementing your own blocking functions.
I am far from a Swift/iOS expert, but it appears that your best alternatives are to use Grand Central Dispatch, or one of the third-party libraries for managing async work. Look at PromiseKit, for example, although I haven't seen as nice a Swift Promises/Futures library as JavaScript's bluebird.
You can use DispatchGroup to keep track of the completion handler for queries. Call the "enter" method when you set up the query, and "leave" at the end of the results handler, not after the query has been set up or executed. Make sure you leave the group even if the query completes with an error. I am not sure why you are having trouble, because this works fine in my app. The trick, I think, is to make sure you always leave() the dispatch group no matter what goes wrong.
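For example, a sketch along these lines (healthStore is a hypothetical HKHealthStore, glucoseQuantity is the sample type from your question, and defer guarantees the leave() on every path):
func readHK() {
    let group = DispatchGroup()
    group.enter()

    let sortDescriptor = NSSortDescriptor(key: HKSampleSortIdentifierEndDate, ascending: false)
    let query = HKSampleQuery(sampleType: glucoseQuantity!,
                              predicate: nil,
                              limit: 10,
                              sortDescriptors: [sortDescriptor]) { _, results, error in
        defer { group.leave() }   // always leave, even on errors
        if let error = error {
            print("Problem with HK data: \(error)")
            return
        }
        // ... use `results` ...
    }
    healthStore.execute(query)

    group.notify(queue: .main) {
        // Continue with the next step here instead of spinning on a flag.
        print("Function complete")
    }
}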
If you prefer, you can set a barrier task in the DispatchQueue -- this will only execute when all of the earlier tasks in the queue have completed -- instead of using a DispatchGroup. You do this by adding the correct options to the DispatchWorkItem.
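Roughly like this (a sketch using a hypothetical custom concurrent queue; note that the barrier orders the blocks submitted to that queue, not the HealthKit completion handlers those blocks kick off):
let queue = DispatchQueue(label: "com.example.hk", attributes: .concurrent)

queue.async {
    // earlier work submitted to the queue
}

let barrier = DispatchWorkItem(flags: .barrier) {
    // runs only after all blocks submitted to `queue` before it have finished
    print("All earlier work on the queue is done")
}
queue.async(execute: barrier)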
I'm currently testing a number of classes that do network stuff like REST API calls, and a Realm database is mutated in the process. When I run all the different tests I have at once, race conditions appear (but of course, when I run them one by one, they all pass). How can I reliably make the tests pass?
I have tried to call the mentioned functions in a GCD block like this:
DispatchQueue.main.async {
    self.function.start()
}
One of my tests is still failing, so I guess the above didn't work. I have enabled Thread Sanitizer and it reports, from time to time, that race conditions appear.
I can't post code, so I'm looking for conceptual solutions.
Typically this calls for some form of dependency injection: an internally exposed var for the DispatchQueue, a default argument in the function that uses the queue, or a constructor argument. You just need some way to pass in a test queue that dispatches the work when you need it to.
DispatchQueue.main.async schedules the block asynchronously with respect to the caller on the main queue, so it isn't guaranteed to have run by the time you make an assertion.
Example (disclaimer: I'm typing from memory so it might not compile but it gives the idea):
// In test code.
struct TestQueue: DispatchQueueType {
    // make sure to implement other necessary protocol methods
    func async(block: @escaping () -> Void) {
        // you can even have some different behavior for when to execute the block.
        // also you can pass XCTestExpectations to this TestQueue to be fulfilled if necessary.
        block()
    }
}

// In source code. DispatchQueue is a class, so inject it behind a small protocol
// that both the real queue and the TestQueue conform to.
protocol DispatchQueueType {
    func async(block: @escaping () -> Void)
}

extension DispatchQueue: DispatchQueueType {
    func async(block: @escaping () -> Void) { async(execute: block) }
}

// In test, pass the TestQueue as the first argument.
func doSomething(queue: DispatchQueueType = DispatchQueue.main, completion: @escaping () -> Void) {
    queue.async(block: completion)
}
Other methods of testing async and eliminating race conditions revolve around craftily fulfilling an XCTestExpectation.
If you have access to the completion block that is eventually invoked:
// In source
class Subject {
    func doSomethingAsync(completion: @escaping () -> Void) {
        ...
    }
}
// In test
func testDoSomethingAsync() {
    let subject = Subject()
    let expect = expectation(description: "does something async")

    subject.doSomethingAsync {
        expect.fulfill()
    }

    wait(for: [expect], timeout: 1.0)
    // assert something here
    // or the wait may be good enough as it will fail if not fulfilled
}
If you don't have access to the completion block it usually means finding a way to inject or subclass a test double that you can set an XCTestExpectation on and will eventually fulfill the expectation when the async work has completed.
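For instance, here is a rough sketch with a hypothetical HeadlineFetching dependency injected into the subject; all of the names are made up for illustration:
import XCTest

// In source: the subject talks to a protocol so a test double can be injected.
protocol HeadlineFetching {
    func fetchHeadline(completion: @escaping (String) -> Void)
}

class Subject {
    private let fetcher: HeadlineFetching
    private(set) var headline: String?

    init(fetcher: HeadlineFetching) {
        self.fetcher = fetcher
    }

    func refresh() {
        fetcher.fetchHeadline { [weak self] headline in
            self?.headline = headline
        }
    }
}

// In test: the double completes synchronously and fulfills an expectation
// when the async work it stands in for has "finished".
final class FetcherDouble: HeadlineFetching {
    var expectation: XCTestExpectation?

    func fetchHeadline(completion: @escaping (String) -> Void) {
        completion("stubbed headline")
        expectation?.fulfill()
    }
}

final class SubjectTests: XCTestCase {
    func testRefresh() {
        let double = FetcherDouble()
        double.expectation = expectation(description: "fetches headline")
        let subject = Subject(fetcher: double)

        subject.refresh()

        wait(for: [double.expectation!], timeout: 1.0)
        XCTAssertEqual(subject.headline, "stubbed headline")
    }
}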