Variable sharing between ISR and function call - operating-system

Consider the following code:
int Var;

Function1() {
    [CS_Start]
    Var++;
    [CS_End]
}

Function2() {
    [CS_Start]
    Var += 2;
    [CS_End]
}

ISR() {
    [CS_Start]
    Var--;
    [CS_End]
}
How can Var be protected in a multitasking environment? One design I understand is to declare Var as volatile so that it is safe under a multiprocessor scheduling scheme. Additionally, a spinlock (in place of a mutex) can be implemented to protect the critical section.
What happens if the spinlock is acquired by Function1 and the ISR occurs (with higher priority than the scheduler timer)? The ISR will keep polling and Function1 never gets a chance to release the lock. Is there any solution to this problem?
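One common mitigation (a sketch under stated assumptions, not taken from the original thread): in thread context, disable local interrupts before taking the spinlock, which is the pattern behind the Linux kernel's spin_lock_irqsave(). The ISR then can never preempt a lock holder on its own CPU, so it only ever spins against a holder on another CPU, which will eventually release the lock. The irq_save()/irq_restore() hooks below are hypothetical stand-ins for platform interrupt control; the lock itself uses C11's real atomic_flag.
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static volatile int Var;

/* Hypothetical platform hooks: on real hardware these would disable and
 * restore local interrupts (e.g. cpsid/cpsie on ARM). Stubbed here so the
 * sketch compiles. */
static unsigned irq_save(void) { return 0; }
static void irq_restore(unsigned flags) { (void)flags; }

void Function1(void)
{
    unsigned flags = irq_save();                /* ISR cannot preempt us now */
    while (atomic_flag_test_and_set(&lock)) { } /* spin only against other CPUs */
    Var++;                                      /* critical section */
    atomic_flag_clear(&lock);
    irq_restore(flags);
}

void ISR(void)
{
    /* Safe: a lock holder on this CPU is impossible, so the spin always ends. */
    while (atomic_flag_test_and_set(&lock)) { }
    Var--;
    atomic_flag_clear(&lock);
}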

Related

How to call an async function asynchronously without awaiting the result

Let's say I have the following functions.
func first() async {
    print("first")
}

func second() {
    print("second")
}

func main() {
    Task {
        await first()
    }
    second()
}

main()
Marking the first function as async does not make much sense, since it does no asynchronous work, but it is still possible. I was expecting that, even though the first function is awaited, it would be called asynchronously.
But actually the output is
first
second
How would I call the first function asynchronously, mimicking GCD's variant of:
DispatchQueue.current.async { first() }
second()
This behavior will change depending upon the context.
If you invoke this from a non-isolated context, then first and second will run on separate threads. In this scenario, the second task is not actually waiting for the first task; rather, there is a race as to which will finish first. This can be illustrated by doing something time-consuming in the first task: you will see that the second task does not wait at all.
This introduces a race between first and second, and you have no assurances as to which order they will run. (In my tests, it runs second before first most of the time, but it can still occasionally run first before second.)
However, if you invoke this from an actor-isolated context, then first will not run until the code that calls second yields or returns.
So, the question is, do you really care which order these two tasks start? If so, you can eliminate the race by (obviously) putting the Task { await first() } after the call to second. Or do you simply want to ensure that second won’t wait for first to finish? In that case, this already is the behavior and no change to your code is required.
You asked:
What if await first() needs to be run on the same queue as second() but asynchronously. … I am just thinking [that if it runs on background thread that it] would mean crashes due to updates of UI not from the main thread.
You can mark the routine that updates the UI with @MainActor, which will cause it to run on the main thread. But note, do not use this qualifier on the time-consuming task itself (because you do not want to block the main thread); rather, decouple the time-consuming operation from the UI update, and mark only the latter as @MainActor.
E.g., here is an example that manually calculates π asynchronously, and updates the UI when it is done:
func startCalculation() {
    Task {
        let pi = await calculatePi()
        updateWithResults(pi)
    }
    updateThatCalculationIsUnderway() // this really should go before the Task to eliminate any races, but just to illustrate that this second routine really does not wait
}

// deliberately inefficient calculation of pi
func calculatePi() async -> Double {
    await Task.detached {
        var value: Double = 0
        var denominator: Double = 1
        var sign: Double = 1
        var increment: Double = 0

        repeat {
            increment = 4 / denominator
            value += sign * 4 / denominator
            denominator += 2
            sign *= -1
        } while increment > 0.000000001

        return value
    }.value
}

func updateThatCalculationIsUnderway() {
    statusLabel.text = "Calculating π"
}

@MainActor
func updateWithResults(_ value: Double) {
    statusLabel.text = "Done"
    resultLabel.text = formatter.string(for: value)
}
Note: To ensure this slow synchronous calculation of calculatePi is not run on the current actor (presumably the main actor), we want an “unstructured task”. Specifically, we want a “detached task”, i.e., one that is not run on the current actor. As the Unstructured Concurrency section of The Swift Programming Language: Concurrency: Tasks and Task Groups says:
To create an unstructured task that runs on the current actor, call the Task.init(priority:operation:) initializer. To create an unstructured task that’s not part of the current actor, known more specifically as a detached task, call the Task.detached(priority:operation:) class method.

Mutex is not held in this async block

I'm doing the Advanced coroutines with Kotlin flow and LiveData codelab and encountered this function in CacheOnSuccess.kt.
There is a comment that says "// Note: mutex is not held in this async block". What does this mean exactly? Why wouldn't the mutex be held in the async block? And what is the significance of that?
suspend fun getOrAwait(): T {
    return supervisorScope {
        // This function is thread-safe _iff_ deferred is @Volatile and all reads
        // and writes hold the mutex.

        // only allow one coroutine to try running block at a time by using a
        // coroutine-based Mutex
        val currentDeferred = mutex.withLock {
            deferred?.let { return@withLock it }

            async {
                // Note: mutex is not held in this async block
                block()
            }.also {
                // Note: mutex is held here
                deferred = it
            }
        }

        // await the result, with our custom error handling
        currentDeferred.safeAwait()
    }
}
According to the withLock implementation, the mutex is held only for the duration of the calling stack frame, which means it is released as soon as withLock returns. The code inside async, however, may not execute in that frame (it may run on another thread, depending on the current dispatcher), so by the time the async block actually runs, the withLock call may well have returned already. The also call, by contrast, is inline, so it executes in the current frame, right before withLock returns.
The mutex is held by at most one coroutine at any time. async launches a coroutine which doesn't attempt to acquire the mutex. The significance of that is the same as for any other mutex -- the code inside the async block isn't guarded by the mutex, so it must not touch the state that is required to be guarded by it.
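The same shape in plain C with POSIX threads may make the significance clearer (a sketch, not from the codelab; all names are illustrative). The mutex guards only the publication of the worker handle, never the execution of block() itself:
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_t worker; /* plays the role of `deferred` */
static int started = 0;

static void *block(void *arg)
{
    (void)arg;
    /* Note: the mutex is NOT held here, just as in the async block. */
    printf("running block() without the lock\n");
    return NULL;
}

static void get_or_start(void)
{
    pthread_mutex_lock(&lock);
    if (!started) {                                 /* like deferred?.let { ... } */
        pthread_create(&worker, NULL, block, NULL); /* like async { block() } */
        started = 1;                                /* like `deferred = it`, lock held */
    }
    pthread_mutex_unlock(&lock);
    pthread_join(worker, NULL); /* like safeAwait(), after the lock is released
                                   (unlike a Deferred, a pthread can only be
                                   joined once, so call this only once) */
}

int main(void)
{
    get_or_start();
    return 0;
}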

Why is CAS (Compare and Swap) atomic?

I know CAS is a well-known atomic operation. But I struggle to see why it must be atomic. Take the sample code below as an example. After if (*accum == *dest), if another thread jumps in and succeeds in modifying *dest first, then execution switches back to the previous thread, which proceeds to *dest = newval;. Wouldn't that lead to a failure?
Is there something I am missing? Is there some mechanism that would prevent the above scenario from happening?
Any discussions would be greatly appreciated!
bool compare_and_swap(int *accum, int *dest, int newval)
{
    if (*accum == *dest) {
        *dest = newval;
        return true;
    } else {
        *accum = *dest;
        return false;
    }
}
Often people use example code that is not atomic to describe what a CPU does atomically with a single instruction, because it's easier to see how it would work (and because a single cmpxchg instruction doesn't tell you much about how it works).
The code you've shown is like that (not atomic, to help understand how it works).
I had this question, too. This kind of thing can't happen: the function you wrote is an abstraction of what the CPU does, and the real implementation is atomic. You can search for the keyword "cmpxchg" and you will find the answer.
Yes, viewed from the outside, this code could lead to the pitfalls you mentioned. However, if we look at how it is compiled, it results in a cmpxchg instruction, which the CPU executes atomically.
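For what it's worth, here is the same operation written with C11's real <stdatomic.h> API, where the compiler emits that single atomic instruction (lock cmpxchg on x86) instead of the non-atomic sequence from the question; a minimal sketch:
#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
    atomic_int dest = 5;
    int expected = 5;

    /* One atomic step: if dest == expected, store 7 and return true;
     * otherwise copy the current dest into expected and return false.
     * No other thread can slip in between the compare and the swap. */
    if (atomic_compare_exchange_strong(&dest, &expected, 7))
        printf("swapped, dest = %d\n", atomic_load(&dest));
    else
        printf("lost a race, dest was %d\n", expected);
    return 0;
}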
As a computer science concept, compare-and-swap HAS to be implemented atomically because of what it is designed to do as a consensus object: https://stackoverflow.com/a/56383038/526864
if another thread jumps in and succeed to modify the *dest first
I think that this premise is flawed, because *dest cannot be allowed to change. The pseudocode should look more like:
bool compare_and_swap(int *p, int oldval, int newval)
{
    if (*p == oldval) {
        *p = newval;
        return true;
    } else {
        return false;
    }
}
The example that you provided was for a specific implementation that returns the winning process's PID to the losers and only allows a single modification to *dest:
an election protocol can be implemented such that every process checks the result of compare_and_swap against its own PID (= newval)
So compare-and-swap is either implemented with an atomic function/library or uses cmpxchg, as you surmised.
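To illustrate that election protocol, here is a sketch (hypothetical names; threads stand in for processes and small integers for PIDs) in which exactly one CAS can win:
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int leader = 0; /* 0 means "no leader elected yet" */

static void *elect(void *arg)
{
    int pid = *(int *)arg;
    int expected = 0;

    /* Only one CAS can move leader from 0 to a PID; every other
     * process fails and sees the winner's PID in `expected`. */
    if (atomic_compare_exchange_strong(&leader, &expected, pid))
        printf("process %d won the election\n", pid);
    else
        printf("process %d lost to %d\n", pid, expected);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    int pids[4] = {1, 2, 3, 4};
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, elect, &pids[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}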
Do you think that these methods are special methods that directly utilize the hardware to perform atomic operations?

Writing a semaphore using a monitor

I have this exercise, which I don't know how to solve:
Build semaphores using monitors: define the variables val (the
semaphore value) and Qu (of type condition), on which a process can be
suspended if, on calling qWait(), it finds val = 0. Implement qWait()
and qSignal(), defining the code that initializes the semaphore as well.
I came up with this:
monitor Semaphore {
    integer val;
    condition Qu; // value > 0

    procedure qWait() {
        val--;
        if (val < 0)
            Qu.wait();
    }

    procedure qSignal() {
        val++;
        Qu.signal();
    }

    Semaphore(int init) {
        val = init;
    }
}
Do you think it's the right solution?
I am not familiar with the programming language you are writing this in. If I can assume that only one thread of execution can be actively executing within the monitor at a time, then I think your implementation is sufficient. If that is not the case, then you need some extra code to handle this.
Using a simple language, like C, can help to clarify the requirements. To implement a semaphore (or a monitor) would require a mutex and a condition variable, like so:
struct Sem {
    int value;
    mutex lock;
    condvar cond;
};

int qsignal(struct Sem *s) {
    Lock(&s->lock);
    if (++(s->value) <= 0) {
        Signal(&s->cond);
    }
    Unlock(&s->lock);
    return 0;
}

int qwait(struct Sem *s) {
    Lock(&s->lock);
    if (s->value-- <= 0) {
        Wait(&s->cond, &s->lock);
    }
    Unlock(&s->lock);
    return 0;
}
Notice here that both functions are bracketed in a lock, which prevents the Sem structure from changing while the code is executing. The Wait() function atomically releases the lock and blocks the caller until the cond is signaled. Once signaled, it re-acquires the lock (waiting if necessary) and continues.
You can see the structural similarity to your code; but the lock transitions are more explicit. I hope this helps!
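If it helps to see it run, here is the same structure mapped onto real POSIX primitives, plus a tiny usage example (a sketch; the if-based wait mirrors the answer above and would need a while loop to be robust against spurious wakeups):
#include <pthread.h>
#include <stdio.h>

struct Sem {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t cond;
};

void sem_init_(struct Sem *s, int init)
{
    s->value = init;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void qsignal(struct Sem *s)
{
    pthread_mutex_lock(&s->lock);
    if (++(s->value) <= 0)
        pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}

void qwait(struct Sem *s)
{
    pthread_mutex_lock(&s->lock);
    if (s->value-- <= 0)
        pthread_cond_wait(&s->cond, &s->lock);
    pthread_mutex_unlock(&s->lock);
}

static struct Sem sem;

static void *waiter(void *arg)
{
    (void)arg;
    qwait(&sem); /* blocks: value starts at 0 */
    printf("waiter released\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init_(&sem, 0);
    pthread_create(&t, NULL, waiter, NULL);
    qsignal(&sem); /* wakes the waiter */
    pthread_join(t, NULL);
    return 0;
}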

Zookeeper barrier implementation

I am trying to implement a barrier in Zookeeper. My implementation works all of the time when there is a small number of nodes that need to join to pass the barrier. However, when I test my implementation with 100+ nodes needing to join the barrier, around 1% of the time one of the nodes seems to miss the last watcher event and never checks whether the number of children of the barrier node has changed.
I even synchronized the process method on the watcher, but that did not change anything. Below is the code for my process method, and the logic that checks whether it needs to move forward.
Watcher process:
public BarrierWatcher(FastBarrier fastBarrier) {
    this.ofb = fastBarrier;
}

@Override
public synchronized void process(WatchedEvent event) {
    synchronized (ofb) {
        ofb.notify();
    }
}
Logic to control barrier mechanism:
BarrierWatcher bw = new BarrierWatcher(this);
List<String> memberList = zk.getChildren(barrierPath, bw);
synchronized (this) {
    while (memberList.size() < numOfMembers) {
        this.wait(1000);
        memberList = zk.getChildren(barrierPath, bw);
    }
}
Instead of just calling this.wait(), I had to add this.wait(1000) for the rare failure occurrence. With the 1000 in place it always passes the barrier once all nodes have joined. I was sure that synchronizing the process method would fix this, but it hasn't. Does anyone have experience with this, or any ideas what I might be doing wrong?
You can compare your implementation with netflix-curator, where a distributed barrier is already implemented.