I'm doing the Advanced Coroutines with Kotlin Flow and LiveData codelab and encountered this function in CacheOnSuccess.kt.
There is a comment that says "// Note: mutex is not held in this async block". What does this mean exactly? Why wouldn't the mutex be held in the async block? And what is the significance of that?
suspend fun getOrAwait(): T {
    return supervisorScope {
        // This function is thread-safe _iff_ deferred is @Volatile and all reads and writes
        // hold the mutex.

        // only allow one coroutine to try running block at a time by using a coroutine-based
        // Mutex
        val currentDeferred = mutex.withLock {
            deferred?.let { return@withLock it }

            async {
                // Note: mutex is not held in this async block
                block()
            }.also {
                // Note: mutex is held here
                deferred = it
            }
        }

        // await the result, with our custom error handling
        currentDeferred.safeAwait()
    }
}
According to the withLock implementation, the mutex is only held for that stack frame, which means the mutex is released once withLock returns. But the code inside async may not execute in that frame (it may run on another thread, depending on the current dispatcher), so by the time the async block actually executes, the withLock call may already have returned. The also call, on the other hand, is marked inline, so it is executed in the current frame, right before withLock returns.
The mutex is held by at most one coroutine at any time. async launches a coroutine which doesn't attempt to acquire the mutex. The significance of that is the same as for any other mutex -- the code inside the async block isn't guarded by the mutex, so it must not touch the state that is required to be guarded by it.
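A minimal Kotlin sketch of the same point (the names are illustrative, not from the codelab): the lambda passed to async is merely scheduled while the lock is held; it actually runs after withLock has returned and released the mutex, so it must not touch mutex-guarded state. The also block, by contrast, is inlined and still runs under the lock.

import kotlinx.coroutines.async
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

val mutex = Mutex()
var guarded = 0 // convention: only read or written while the mutex is held

fun main() = runBlocking {
    val result = mutex.withLock {
        guarded++                 // OK: the mutex is held on this line
        async {
            // NOT guarded: by the time this body runs, withLock has already
            // returned and released the mutex (possibly on another thread)
            "done"
        }.also {
            guarded++             // OK: .also {} is inlined, still inside withLock
        }
    }
    println(result.await())       // prints "done"
    println(guarded)              // prints 2
}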
Let's assume an ObservableObject like the following:
class NavigationModel: ObservableObject {
    @MainActor @Published var name: String = ""
}
When changing the published property from async code that is not running on the main queue, I normally use the following syntax:
await MainActor.run {
    name = "foobar"
}
However, I have realised that the following syntax can also be compiled without errors:
await name = "foobar"
I am wondering if this shorter form is valid and provides the same results?
Yes, the two snippets will behave the same.
The await will cause the current asynchronous call to "hop" to the main actor and execute there.
The compiler (and Swift runtime) know that name can only be accessed from the main thread/actor, so they will require you to either:
Access the property from an asynchronous block with @MainActor context.
Hop to the main actor from the current thread to execute there, then resume execution of the current function afterwards (note that the function could resume on a different thread than the one it was executing on before the hop to the main actor).
Your first snippet is essentially doing both of these steps: introducing a @MainActor scope to run your code in, then hopping to the main actor to execute it before continuing with your function call.
The second snippet just skips the scope-creation part, running the single line of code on the main actor, then hopping back right away.
If you're running multiple things on the main actor, you will want to reduce the number of hops that are performed, because hopping back and forth to and from the main actor introduces a lot of overhead.
For example:
@MainActor func someUIOperation() { ... } // synchronous, but isolated to the main actor

func expensiveLotsOfHopsToMainActor() async {
    for _ in 0..<100 {
        await someUIOperation() // hops to the main actor on every iteration
    }
}

func betterOnlyOneHopToMainActor() async {
    await MainActor.run {
        for _ in 0..<100 {
            someUIOperation() // already on the main actor; no per-iteration hop
        }
    }
}
See examples and more in the original pitch for @MainActor.
Let's say I have the following functions.
func first() async {
    print("first")
}

func second() {
    print("second")
}

func main() {
    Task {
        await first()
    }
    second()
}

main()
Marking the first function as async does not make much sense, since it does no asynchronous work, but it is still possible...
I was expecting that, because the first function is awaited inside a Task, it would be called asynchronously.
But actually the output is
first
second
How would I call the first function asynchronously, mimicking the GCD variant of:
DispatchQueue.current.async { first() }
second()
This behavior will change depending upon the context.
If you invoke this from a non-isolated context, then first and second will run on separate threads. In this scenario, the second task is not actually waiting for the first task, but rather there is a race as to which will finish first. This can be illustrated if you do something time-consuming in the first task and you will see the second task is not waiting at all.
This introduces a race between first and second and you have no assurances as which order they will run. (In my tests, it runs second before first most of the time, but it can still occasionally run first before second.)
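For example, a minimal sketch (the two-second sleep is just a stand-in for time-consuming work): called from a non-isolated context, "second" prints immediately, roughly two seconds before "first".

func first() async {
    try? await Task.sleep(nanoseconds: 2_000_000_000) // stand-in for slow work
    print("first")
}

func second() {
    print("second")
}

func main() {
    Task {
        await first()   // runs concurrently with the rest of main()
    }
    second()            // prints immediately; it never waits for first()
}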
However, if you invoke this from an actor-isolated context, the Task inherits that actor, so first will not start until the current routine yields (i.e., suspends or returns), and second will therefore have run first.
So, the question is, do you really care which order these two tasks start? If so, you can eliminate the race by (obviously) putting the Task { await first() } after the call to second. Or do you simply want to ensure that second won’t wait for first to finish? In that case, this already is the behavior and no change to your code is required.
You asked:
What if await first() needs to be run on the same queue as second() but asynchronously. … I am just thinking [that if it runs on background thread that it] would mean crashes due to updates of UI not from the main thread.
You can mark the routine that updates the UI with @MainActor, which will cause it to run on the main thread. But note, do not use this qualifier on the time-consuming task itself (because you do not want to block the main thread); rather, decouple the time-consuming operation from the UI update, and mark only the latter as @MainActor.
E.g., here is an example that manually calculates π asynchronously, and updates the UI when it is done:
func startCalculation() {
    Task {
        let pi = await calculatePi()
        updateWithResults(pi)
    }
    updateThatCalculationIsUnderway() // this really should go before the Task to eliminate any races, but just to illustrate that this second routine really does not wait
}

// deliberately inefficient calculation of pi
func calculatePi() async -> Double {
    await Task.detached {
        var value: Double = 0
        var denominator: Double = 1
        var sign: Double = 1
        var increment: Double = 0

        repeat {
            increment = 4 / denominator
            value += sign * 4 / denominator
            denominator += 2
            sign *= -1
        } while increment > 0.000000001

        return value
    }.value
}

func updateThatCalculationIsUnderway() {
    statusLabel.text = "Calculating π"
}

@MainActor
func updateWithResults(_ value: Double) {
    statusLabel.text = "Done"
    resultLabel.text = formatter.string(for: value)
}
Note: To ensure that the slow, synchronous calculation inside calculatePi is not run on the current actor (presumably the main actor), we want an “unstructured task”. Specifically, we want a “detached task”, i.e., one that is not run on the current actor. As the Unstructured Concurrency section of The Swift Programming Language: Concurrency: Tasks and Task Groups says:
To create an unstructured task that runs on the current actor, call the Task.init(priority:operation:) initializer. To create an unstructured task that’s not part of the current actor, known more specifically as a detached task, call the Task.detached(priority:operation:) class method.
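A small sketch of that distinction (slowSynchronousWork is a hypothetical placeholder, not from the example above): from a main-actor caller, Task {} inherits the main actor, while Task.detached {} does not.

@MainActor
func kickOffWork() {
    Task {
        // Inherits the current actor: this body runs on the main actor
        // and would block the main thread.
        slowSynchronousWork()
    }

    Task.detached {
        // Not part of the current actor: the slow work runs off the main
        // actor, so the main thread stays responsive.
        slowSynchronousWork()
    }
}

func slowSynchronousWork() { /* placeholder for slow, synchronous work */ }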
I have a question about JMM and Scala futures.
In the following code, I have a mutable Data class. I create an instance of it inside one thread (inside the Future.apply body), and then subscribe to the completion event.
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object Hello extends App {
  Future {
    new Data(1, "2")
  }.foreach { d =>
    println(d)
  }

  Thread.sleep(100000)
}

class Data(var someInt: Int, var someString: String)
Can we guarantee that:
1. the foreach body is called from the same thread where the Data instance was created?
2. If not, can we guarantee that actions inside Future.apply happen-before (in terms of the JMM) actions inside the foreach body?
Completion happens-before callback execution.
Disclaimer: I am the main contributor.
I had a sort-of similar question, and what I found is:
1) In the doc IntelliJ so conveniently pulled up for me, it says
Asynchronously processes the value in the future once the value becomes available...
2) on https://docs.scala-lang.org/overviews/core/futures.html it says
The result becomes available once the future completes.
Basically, nowhere that I can find does it say explicitly that there is a memory barrier. I suspect, however, that it is a safe assumption that there is. Otherwise the language would simply not work.
No.
You can get a good idea of this by looking through the source code for Promise/DefaultPromise/Future, which schedules the callback for foreach on the execution context/adds it to the listeners without any special logic requiring it to run on the original thread...
But you can also verify it experimentally, by trying to set up an execution context and threads such that something else will already be queued for execution when the Future in which Data was created completes.
import java.util.concurrent.Executors

import scala.concurrent.{ExecutionContext, Future}

implicit val context = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))

Future {
  new Data(1, "2")
  println("Data created on: " + Thread.currentThread().getName)
  Thread.sleep(100)
}.foreach { _ =>
  println("Data completed on: " + Thread.currentThread().getName)
}

Future { // occupies the second thread
  Thread.sleep(1000)
}

Future { // queued for execution while the first future is still executing
  Thread.sleep(2000)
}
My output:
Data created on: pool-$n-thread-1
Data completed on: pool-$n-thread-2
2.
Less confident here than I'd like to be, but I'll give it a shot:
Yes.
DefaultPromise, the construct underlying Future, wraps an atomic reference, which behaves like a volatile variable. Since the write that stores the result must happen before the read that passes the result to the listener so it can run the callback, the JMM's volatile-variable rules turn this into a happens-before relationship.
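In practical terms (a sketch reusing the question's Data class, not taken from the linked sources): any write made to the instance before the Future completes is visible inside the callback, even when the callback runs on another thread.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object HappensBeforeDemo extends App {
  class Data(var someInt: Int, var someString: String)

  Future {
    val d = new Data(1, "2")
    d.someInt = 42      // plain write, no synchronization on Data itself
    d                   // completing the Future publishes this write
  }.foreach { d =>
    // Guaranteed to observe someInt == 42, even if this callback runs on a
    // different thread: completion happens-before callback execution.
    println(d.someInt)
  }

  Thread.sleep(1000)    // keep the JVM alive long enough for the callback
}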
I don't think there are any guarantees that foreach is called from the same thread
foreach will not be called until the future completes successfully. onComplete is a more idiomatic way of providing a callback to process the result of a Future.
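A sketch of that difference (reusing the Data class from the question): foreach only ever sees the success case, whereas onComplete also receives failures.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.util.{Failure, Success}

Future { new Data(1, "2") }.onComplete {
  case Success(d)  => println(s"completed with $d")
  case Failure(ex) => println(s"failed with $ex") // foreach would silently skip this case
}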
In Scala and other programming languages one can use Futures and Await.
(In real code one would use e.g. zip+map instead of Await)
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration

def b1() = Future { 1 }
def b2() = Future { 2 }
def a() = Future {
  Await.result(b1(), Duration.Inf) + Await.result(b2(), Duration.Inf)
}
What is the difference compared to async/await in JavaScript/Scala?
async function b1() { return 1 }
async function b2() { return 3 }
async function a() {
  return await b1() + await b2()
}
The "Await.result" function in Scala is "blocking", which means that the calling thread will be paused until the awaited Future is completed, at which point it will resume with the returned value.
Pausing a thread can be expensive in a system under high load, as the thread context has to be saved in memory, and it can cause cache misses, etc. Blocking threads is considered poor practice in concurrent programming for that reason.
The async / await syntax in JavaScript is non-blocking. When an async function invokes "await", the remainder of the function is suspended and placed into the execution queue. When the awaited future is complete, the calling function is marked as ready for execution and it will be resumed at some later point. The important difference is that no threads need to be paused in this model.
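For contrast, here is the non-blocking Scala formulation the question alludes to with "zip+map" (a sketch): the sum is computed in a callback once both futures have completed, so no thread is parked while waiting.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

def b1() = Future { 1 }
def b2() = Future { 2 }

def a(): Future[Int] = b1().zip(b2()).map { case (x, y) => x + y }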
There are a number of libraries which implement the async / await syntax in Scala, including https://github.com/scala/scala-async
Further reading
The Futures and Promises book by Prasad, Patil, and Miller has a good introduction to blocking and non-blocking operations.
Await is blocking while async is non-blocking. Await waits in the current thread for the task to finish, while async does not block the current thread and runs in the background.
I have no idea about JavaScript. In Scala, Await from the scala.concurrent package is used to block the calling thread. There is another library, scala-async, that lets you use await inside an async block.
If you are using scala-async, then you need to call await inside async.
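For example, a sketch of the question's a() written with scala-async (assuming the library linked above is on the classpath):

import scala.async.Async.{async, await}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

def b1() = Future { 1 }
def b2() = Future { 2 }

// The awaits are rewritten into non-blocking callbacks at compile time,
// so no thread is blocked, unlike Await.result.
def a(): Future[Int] = async { await(b1()) + await(b2()) }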
Consider the following code
int Var;

void Function1(void) {
    /* CS_Start */
    Var++;
    /* CS_End */
}

void Function2(void) {
    /* CS_Start */
    Var += 2;
    /* CS_End */
}

void ISR(void) {
    /* CS_Start */
    Var--;
    /* CS_End */
}
How do we protect Var in a multitasking environment? One design I am aware of is to declare Var as volatile so that it is safe under a multiprocessor scheduling scheme. Additionally, a spinlock (in place of a mutex) can be used to protect the critical sections.
What happens if the spinlock is acquired by Function1 and the ISR occurs (with higher priority than the scheduler timer)? The ISR will keep polling and Function1 never gets a chance to release the lock. Any solution to this problem?
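For reference, a sketch of the design described above, using a hypothetical C11 test-and-set spinlock (the lock implementation is illustrative, not from any particular RTOS):

#include <stdatomic.h>

volatile int Var;                              /* shared between tasks and the ISR */
static atomic_flag lock = ATOMIC_FLAG_INIT;    /* simple test-and-set spinlock     */

static void spin_lock(void)   { while (atomic_flag_test_and_set(&lock)) { /* spin */ } }
static void spin_unlock(void) { atomic_flag_clear(&lock); }

void Function1(void) {
    spin_lock();               /* CS_Start */
    Var++;
    spin_unlock();             /* CS_End   */
}

void ISR(void) {
    /* If this ISR preempts Function1 while Function1 holds the lock, the ISR
       spins forever: Function1 never runs again to release it, which is
       exactly the problem described above. */
    spin_lock();
    Var--;
    spin_unlock();
}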