Let's say I have the following functions.
func first() async {
    print("first")
}

func second() {
    print("second")
}

func main() {
    Task {
        await first()
    }
    second()
}

main()
Marking the first function as async does not make much sense, since it does no asynchronous work, but it is still possible...
I was expecting that, even though the first function is being awaited, it would be called asynchronously.
But actually the output is
first
second
How would I call the first function asynchronously, mimicking GCD's variant of:
DispatchQueue.current.async { first() }
second()
This behavior will change depending upon the context.
If you invoke this from a non-isolated context, then first and second will run on separate threads. In this scenario, the second task is not actually waiting for the first task; rather, there is a race as to which will finish first. You can illustrate this by doing something time-consuming in the first task: you will see that the second task does not wait at all.
This introduces a race between first and second and you have no assurances as to which order they will run. (In my tests, it runs second before first most of the time, but it can still occasionally run first before second.)
However, if you invoke this from an actor-isolated context, then first will not start until the current routine (including its call to second) finishes or suspends, because the unstructured Task runs on the same actor.
So, the question is, do you really care which order these two tasks start? If so, you can eliminate the race by (obviously) putting the Task { await first() } after the call to second. Or do you simply want to ensure that second won’t wait for first to finish? In that case, this already is the behavior and no change to your code is required.
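For example, here is a minimal sketch of that reordering, which eliminates the race by ensuring second runs before the task is even created:

func main() {
    second()
    Task {
        await first() // cannot start before second, which has already returned
    }
}

main()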
You asked:
What if await first() needs to be run on the same queue as second() but asynchronously. … I am just thinking [that if it runs on background thread that it] would mean crashes due to updates of UI not from the main thread.
You can mark the routine that updates the UI with @MainActor, which will cause it to run on the main thread. But note, do not use this qualifier on the time-consuming task itself (because you do not want to block the main thread); rather, decouple the time-consuming operation from the UI update, and mark only the latter as @MainActor.
E.g., here is an example that manually calculates π asynchronously, and updates the UI when it is done:
func startCalculation() {
    Task {
        let pi = await calculatePi()
        updateWithResults(pi)
    }
    updateThatCalculationIsUnderway() // this really should go before the Task to eliminate any races, but just to illustrate that this second routine really does not wait
}
// deliberately inefficient calculation of pi

func calculatePi() async -> Double {
    await Task.detached {
        var value: Double = 0
        var denominator: Double = 1
        var sign: Double = 1
        var increment: Double = 0

        repeat {
            increment = 4 / denominator
            value += sign * 4 / denominator
            denominator += 2
            sign *= -1
        } while increment > 0.000000001

        return value
    }.value
}
func updateThatCalculationIsUnderway() {
    statusLabel.text = "Calculating π"
}

@MainActor
func updateWithResults(_ value: Double) {
    statusLabel.text = "Done"
    resultLabel.text = formatter.string(for: value)
}
Note: To ensure this slow synchronous calculation of calculatePi is not run on the current actor (presumably the main actor), we want an “unstructured task”. Specifically, we want a “detached task”, i.e., one that is not run on the current actor. As the Unstructured Concurrency section of The Swift Programming Language: Concurrency: Tasks and Task Groups says:
To create an unstructured task that runs on the current actor, call the Task.init(priority:operation:) initializer. To create an unstructured task that’s not part of the current actor, known more specifically as a detached task, call the Task.detached(priority:operation:) class method.
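To make the difference concrete, here is a minimal sketch (the function name is hypothetical), assuming it is invoked from the main actor:

@MainActor
func kickOffWork() {
    Task {
        // Task.init inherits the main actor: slow synchronous work here
        // would still tie up the main thread.
    }
    Task.detached {
        // Task.detached does not inherit the main actor: it runs on the
        // cooperative thread pool, leaving the main thread free.
    }
}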
Related
Let's assume an ObservableObject like the following:
class NavigationModel: ObservableObject {
    @MainActor @Published var name: String = ""
}
When changing the published property from async code that is not running on the main actor, I normally use the following syntax:

await MainActor.run {
    name = "foobar"
}
However, I have realised that the following syntax can also be compiled without errors:
await name = "foobar"
I am wondering if this shorter form is valid and provides the same results?
Yes, the two snippets will behave the same.
The await will cause the current asynchronous call to "hop" to the main actor and execute there.
The compiler (and the Swift runtime) know that name can only be accessed from the main thread/actor, so they will require you to either:
1. Access the property from an asynchronous block with @MainActor context.
2. Hop to the main actor from the current thread to execute there, then resume execution of the current function afterwards (however, the function could resume on a different thread than the one it was executing on prior to the hop to the main actor).
Your first snippet is essentially doing both of these steps: introducing a @MainActor scope to run your code in, then hopping to the main actor to execute it before continuing with your function call.
The second snippet is just skipping the scope creation part, running the single line of code on the main actor, then hopping back right away.
If you're running multiple things on the main actor, you will want to reduce the number of hops that are performed, because hopping back and forth to and from the main actor introduces a lot of overhead.
For example:
@MainActor func someUIOperation() async { ... }

func expensiveLotsOfHopsToMainActor() async {
    for _ in 0..<100 {
        await someUIOperation()
    }
}

func betterOnlyOneHopToMainActor() async {
    await MainActor.run {
        for _ in 0..<100 {
            someUIOperation()
        }
    }
}
See examples and more in the original pitch for @MainActor.
I'm doing the Advanced coroutines with Kotlin flow and LiveData code lab and encountered this function in CacheOnSuccess.kt.
There is a comment that says "// Note: mutex is not held in this async block". What does this mean exactly? Why wouldn't the mutex be held in the async block? And what is the significance of that?
suspend fun getOrAwait(): T {
    return supervisorScope {
        // This function is thread-safe _iff_ deferred is @Volatile and all reads and writes
        // hold the mutex.

        // only allow one coroutine to try running block at a time by using a coroutine-based
        // Mutex
        val currentDeferred = mutex.withLock {
            deferred?.let { return@withLock it }

            async {
                // Note: mutex is not held in this async block
                block()
            }.also {
                // Note: mutex is held here
                deferred = it
            }
        }

        // await the result, with our custom error handling
        currentDeferred.safeAwait()
    }
}
According to the withLock implementation, the mutex is held only for the duration of the withLock block, which means it is released once withLock returns. The code inside async may not execute in that frame (it may run on another thread, depending on the current dispatchers), so by the time the async block actually runs, the withLock call may well have returned already. The also call, on the other hand, is an inline function, so it executes in the current frame, right before withLock returns.
The mutex is held by at most one coroutine at any time. async launches a coroutine which doesn't attempt to acquire the mutex. The significance of that is the same as for any other mutex -- the code inside the async block isn't guarded by the mutex, so it must not touch the state that is required to be guarded by it.
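For what it's worth, the same idea can be sketched in Swift, where actor isolation plays the role of the mutex. This is a rough analogue under that assumption, not a translation of the code lab's class; CachedValue and compute are made-up names:

actor CachedValue<T: Sendable> {
    private var task: Task<T, Error>?
    private let compute: @Sendable () async throws -> T

    init(compute: @escaping @Sendable () async throws -> T) {
        self.compute = compute
    }

    func getOrAwait() async throws -> T {
        // Actor isolation serializes the check-and-store of `task`,
        // the role the Mutex plays in the Kotlin version.
        if let task {
            return try await task.value
        }
        let newTask = Task { try await self.compute() }
        task = newTask
        // Isolation is released at the await below, so other callers are
        // not blocked while the work runs; as in the Kotlin code, the
        // "lock" is not held while the block executes.
        do {
            return try await newTask.value
        } catch {
            task = nil // clear the cache on failure so the next caller retries
            throw error
        }
    }
}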
I have a question about JMM and Scala futures.
In the following code, I have a non-immutable Data class. I create an instance of it inside one thread (inside the Future.apply body), and then subscribe to its completion event.
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object Hello extends App {
  Future {
    new Data(1, "2")
  }.foreach { d =>
    println(d)
  }

  Thread.sleep(100000)
}

class Data(var someInt: Int, var someString: String)
Can we guarantee that:
1. The foreach body is called from the same thread where the Data instance was created?
2. If not, can we guarantee that actions inside Future.apply happen-before (in terms of the JMM) actions inside the foreach body?
Completion happens-before callback execution.
Disclaimer: I am the main contributor.
I had a sort-of similar question, and here is what I found:
1) In the doc IntelliJ so conveniently pulled up for me, it says:
Asynchronously processes the value in the future once the value becomes available...
2) On https://docs.scala-lang.org/overviews/core/futures.html it says:
The result becomes available once the future completes.
Basically, nowhere that I can find does it explicitly say that there is a memory barrier. I suspect, however, that it is a safe assumption that there is; otherwise the language simply would not work.
1.
No.
You can get a good idea of this by looking through the source code for Promise/DefaultPromise/Future, which schedules the callback for foreach on the execution context/adds it to the listeners without any special logic requiring it to run on the original thread...
But you can also verify it experimentally, by trying to set up an execution context and threads such that something else will already be queued for execution when the Future in which Data was created completes.
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

implicit val context = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))

Future {
  new Data(1, "2")
  println("Data created on: " + Thread.currentThread().getName)
  Thread.sleep(100)
}.foreach { _ =>
  println("Data completed on: " + Thread.currentThread().getName)
}

Future { // occupies the second thread
  Thread.sleep(1000)
}

Future { // queued for execution while the first future is still executing
  Thread.sleep(2000)
}
My output:
Data created on: pool-$n-thread-1
Data completed on: pool-$n-thread-2
2.
Less confident here than I'd like to be, but I'll give it a shot:
Yes.
DefaultPromise, the construct underlying Future, wraps an atomic reference, which behaves like a volatile variable. Since the write that stores the result must happen prior to the read that passes the result to the listener so it can run the callback, the JMM's volatile variable rules turn this into a happens-before relationship.
I don't think there are any guarantees that foreach is called from the same thread.
foreach will not be called until the future completes successfully. onComplete is a more idiomatic way of providing a callback to process the result of a Future.
I'm building a Swift-based iOS application that uses PromiseKit to handle promises (although I'm open to switching promise library if it makes my problem easier to solve). There's a section of code designed to handle questions about overwriting files.
I have code that looks approximately like this:
let fileList = [list, of, files, could, be, any, length, ...]

for file in fileList {
    let overwrite = Promise<Bool> { fulfill, reject in
        if fileAlreadyExists {
            let alert = UIAlertController(title: nil, message: "Overwrite the file?", preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: "Yes", style: .default) { action in
                fulfill(true)
            })
            alert.addAction(UIAlertAction(title: "No", style: .cancel) { action in
                fulfill(false)
            })
            // present the alert
        } else {
            fulfill(true)
        }
    }

    overwrite.then { result -> Promise<Void> in
        return Promise<Void> { fulfill, reject in
            if result {
                // Overwrite the file
            } else {
                // Don't overwrite the file
            }
            fulfill(())
        }
    }
}
However, this doesn't have the desired effect; the for loop "completes" as quickly as it takes to iterate over the list, which means that UIAlertController gets confused as it tries to overlay one question on another. What I want is for the promises to chain, so that only once the user has selected "Yes" or "No" (and the subsequent "overwrite" or "don't overwrite" code has executed) does the next iteration of the for loop happen. Essentially, I want the whole sequence to be sequential.
How can I chain these promises, considering the array is of indeterminate length? I feel as if I'm missing something obvious.
Edit: one of the answers below suggests recursion. That sounds reasonable, although I'm not sure about the implications for Swift's stack (this is inside an iOS app) if the list grows long. Ideal would be if there was a construct to do this more naturally by chaining onto the promise.
One approach: create a function that takes a list of the objects remaining. Use that as the callback in the then. In pseudocode:
function promptOverwrite(objects) {
    if (objects is empty)
        return
    let overwrite = [...] // same as your code
    overwrite.then {
        do positive or negative action
        // Recur on the rest of the objects
        promptOverwrite(objects[1:])
    }
}
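In Swift, that recursion might look roughly like the following sketch; it assumes the fulfill/then style of PromiseKit from the question, and File and overwritePromise(for:) are hypothetical stand-ins for your file type and the alert-presenting promise above:

func promptOverwrite(_ files: [File]) {
    // Base case: nothing left to ask about, so the chain ends here
    guard let file = files.first else { return }

    overwritePromise(for: file).then { shouldOverwrite -> Void in
        if shouldOverwrite {
            // Overwrite the file
        }
        // Recur on the remaining files only after this one has resolved
        promptOverwrite(Array(files.dropFirst()))
    }
}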
Now, we might also be interested in doing this without recursion, just to avoid blowing the call stack if we have tens of thousands of promises. (Suppose that the promises don't require user interaction, and that they all resolve on the order of a few milliseconds, so that the scenario is realistic).
Note first that the callback (in the then) happens in the context of a closure, so it can't interact with any of the outer control flow, as expected. If we don't want to use recursion, we'll likely have to take advantage of some other native features.
The reason you're using promises in the first place, presumably, is that you (wisely) don't want to block the main thread. Consider, then, spinning off a second thread whose sole purpose is to orchestrate these promises. If your library allows you to explicitly wait for a promise, just do something like:
function promptOverwrite(objects) {
    spawn an NSThread with target _promptOverwriteInternal(objects)
}

function _promptOverwriteInternal(objects) {
    for obj in objects {
        let overwrite = [...] // same as your code
        overwrite.then(...) // same as your code
        overwrite.awaitCompletion()
    }
}
If your promises library doesn't let you do this, you could work around it by using a lock:
function _promptOverwriteInternal(objects) {
    semaphore = createSemaphore(0)
    for obj in objects {
        let overwrite = [...] // same as your code
        overwrite.then(...) // same as your code
        overwrite.always {
            semaphore.release(1)
        }
        semaphore.acquire(1) // wait for completion
    }
}
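In Swift, this semaphore variant could look roughly like the sketch below, using DispatchSemaphore and the same hypothetical overwritePromise(for:) as above; the loop runs on a background queue, because the promise's handlers (and the alert) run on the main thread:

func promptOverwriteInternal(_ files: [File]) {
    DispatchQueue.global().async {
        let semaphore = DispatchSemaphore(value: 0)
        for file in files {
            overwritePromise(for: file).always {
                semaphore.signal() // fires whether the promise fulfilled or rejected
            }
            semaphore.wait() // block this background thread until the promise settles
        }
    }
}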
BrightFutures is a nice implementation of "future" in the Swift language.
https://github.com/Thomvis/BrightFutures
I'd like to control the parallelism of a multicore CPU with it. Does someone know a way to control the number of CPU cores/physical threads to be used?
All closures passed to BrightFutures are executed according to BF's default threading model. It seems like you want to diverge from the default model. This is possible by passing a custom execution context.
An execution context that limits the number of parallel tasks it executes, could be created with the following function:
func executionContextWithControlledParallelism(p: Int) -> ExecutionContext {
    let s = Semaphore(value: p)
    let q = Queue.global.context
    return { task in
        s.wait()
        q {
            task()
            s.signal()
        }
    }
}
I tested this briefly using the following code:
let context = executionContextWithControlledParallelism(5)

for _ in 0..<100 {
    future(context: context) { () -> Int in
        return fibonacci(Int(arc4random_uniform(15)))
    }
}
You'll have to pass context to every map, flatMap, etc. that you want to limit the parallelism of. I'll admit that seems cumbersome. A better way (that is currently not supported by BrightFutures) would be to set the default threading model, like this:
let context = executionContextWithControlledParallelism(5)

// this is not supported right now:
BrightFutures.setDefaultThreadingModel(model: {
    return context
})
If you like this, please consider filing an issue to request this or (even better) create a pull request.