I have this exercise, which I don't know how to solve:
Build semaphores using monitors: define a variable val (the semaphore value) and Qu (of type condition), on which a process can be suspended if it calls qWait() and finds val = 0. Implement qWait() and qSignal(), and define the code that initializes the semaphore as well.
I came up with this:
monitor Semaphore {
    integer val;
    condition Qu; // value > 0

    procedure qWait() {
        val--;
        if (val < 0)
            Qu.wait();
    }

    procedure qSignal() {
        val++;
        Qu.signal();
    }

    Semaphore(int init) {
        val = init;
    }
}
Do you think it's the right solution?
I am not familiar with the programming language you are writing this in. If I can assume that only one thread of execution can be actively executing within the monitor at a time, then I think your implementation is sufficient. If that is not the case, then you need some extra code to handle this.
Using a simple language like C can help to clarify the requirements. Implementing a semaphore (or a monitor) requires a mutex and a condition variable, like so:
struct Sem {
    int val;
    mutex lock;
    condvar cond;
};

int qsignal(struct Sem *s) {
    Lock(&s->lock);
    if (++(s->val) <= 0) {   /* a non-positive result means somebody is waiting */
        Signal(&s->cond);
    }
    Unlock(&s->lock);
    return 0;
}

int qwait(struct Sem *s) {
    Lock(&s->lock);
    if (s->val-- <= 0) {     /* no permits left: block until signaled */
        Wait(&s->cond, &s->lock);
    }
    Unlock(&s->lock);
    return 0;
}
Notice here that both functions are bracketed by the lock, which prevents the Sem structure from changing while the code is executing. The Wait() function atomically releases the lock and blocks the caller until the cond is signaled. Once signaled, it re-acquires the lock (waiting if necessary) and continues.
You can see the structural similarity to your code; but the lock transitions are more explicit. I hope this helps!
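If it helps, here is a minimal sketch of the same semaphore on top of POSIX threads (sem_init_val is a made-up initializer name, not part of any standard API). It keeps val non-negative and re-checks it in a loop, because pthread_cond_wait is allowed to wake spuriously:

#include <pthread.h>

struct Sem {
    int val;
    pthread_mutex_t lock;
    pthread_cond_t cond;
};

void sem_init_val(struct Sem *s, int init) {
    s->val = init;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void qwait(struct Sem *s) {
    pthread_mutex_lock(&s->lock);
    while (s->val <= 0) {    /* loop: pthread_cond_wait can wake spuriously */
        pthread_cond_wait(&s->cond, &s->lock);
    }
    s->val--;                /* consume a permit */
    pthread_mutex_unlock(&s->lock);
}

void qsignal(struct Sem *s) {
    pthread_mutex_lock(&s->lock);
    s->val++;                /* release a permit */
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}

The difference from the version above is only bookkeeping: there val goes negative to count waiters, here the waiter re-checks the predicate before consuming a permit.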
Let's say I have the following functions.
func first() async {
    print("first")
}

func second() {
    print("second")
}

func main() {
    Task {
        await first()
    }
    second()
}

main()
Even though marking the first function as async makes little sense, since it does no asynchronous work, it is still possible...
I was expecting that, even though the first function is being awaited, it would be called asynchronously.
But actually the output is
first
second
How would I call the first function asynchronously, mimicking GCD's variant of:
DispatchQueue.current.async { first() }
second()
This behavior will change depending upon the context.
If you invoke this from a non-isolated context, then first and second will run on separate threads. In this scenario, the second task is not actually waiting for the first task; rather, there is a race as to which will finish first. This can be illustrated by doing something time-consuming in the first task: you will see that the second task does not wait at all.
This introduces a race between first and second, and you have no assurances as to the order in which they will run. (In my tests, it runs second before first most of the time, but it can still occasionally run first before second.)
However, if you invoke this from an actor-isolated context, then first will wait for second to yield before running.
So, the question is: do you really care in which order these two tasks start? If so, you can eliminate the race by (obviously) putting the Task { await first() } after the call to second. Or do you simply want to ensure that second won't wait for first to finish? In that case, this already is the behavior and no change to your code is required.
You asked:
What if await first() needs to be run on the same queue as second() but asynchronously. … I am just thinking [that if it runs on background thread that it] would mean crashes due to updates of UI not from the main thread.
You can mark the routine that updates the UI with @MainActor, which will cause it to run on the main thread. But note, do not use this qualifier on the time-consuming task itself (because you do not want to block the main thread); rather, decouple the time-consuming operation from the UI update, and mark only the latter as @MainActor.
E.g., here is an example that manually calculates π asynchronously, and updates the UI when it is done:
func startCalculation() {
    Task {
        let pi = await calculatePi()
        updateWithResults(pi)
    }
    updateThatCalculationIsUnderway() // this really should go before the Task to eliminate any races, but just to illustrate that this second routine really does not wait
}

// deliberately inefficient calculation of pi

func calculatePi() async -> Double {
    await Task.detached {
        var value: Double = 0
        var denominator: Double = 1
        var sign: Double = 1
        var increment: Double = 0

        repeat {
            increment = 4 / denominator
            value += sign * 4 / denominator
            denominator += 2
            sign *= -1
        } while increment > 0.000000001

        return value
    }.value
}

func updateThatCalculationIsUnderway() {
    statusLabel.text = "Calculating π"
}

@MainActor
func updateWithResults(_ value: Double) {
    statusLabel.text = "Done"
    resultLabel.text = formatter.string(for: value)
}
Note: To ensure this slow synchronous calculation of calculatePi is not run on the current actor (presumably the main actor), we want an “unstructured task”. Specifically, we want a “detached task”, i.e., one that is not run on the current actor. As the Unstructured Concurrency section of The Swift Programming Language: Concurrency: Tasks and Task Groups says:
To create an unstructured task that runs on the current actor, call the Task.init(priority:operation:) initializer. To create an unstructured task that’s not part of the current actor, known more specifically as a detached task, call the Task.detached(priority:operation:) class method.
I know CAS is a well-known atomic operation, but I struggle to see why it must be atomic. Take the sample code below as an example. After if (*accum == *dest), if another thread jumps in and succeeds in modifying *dest first, and then execution switches back to the previous thread, which proceeds to *dest = newval; — wouldn't that lead to a failure?
Is there something I am missing? Is there some mechanism that would prevent the above scenario from happening?
Any discussion would be greatly appreciated!
bool compare_and_swap(int *accum, int *dest, int newval)
{
    if (*accum == *dest) {
        *dest = newval;
        return true;
    } else {
        *accum = *dest;
        return false;
    }
}
Often people use example code that is not atomic to describe what a CPU does atomically with a single instruction, because it's easier to see how it would work (and because a single cmpxchg instruction doesn't tell you much about how it works).
The code you've shown is like that: not atomic, written to help you understand how the operation works.
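For contrast, here is a sketch of what the genuinely atomic operation looks like using C11's stdatomic.h; on x86 the compare-exchange below compiles down to a lock cmpxchg instruction:

#include <stdatomic.h>
#include <stdbool.h>

/* The compare and the swap happen as one indivisible step; no other
 * thread can modify *dest between the comparison and the store. */
bool cas(atomic_int *dest, int *expected, int newval)
{
    /* On failure, *expected is updated with the value actually found,
     * mirroring the *accum = *dest; branch in the question's code. */
    return atomic_compare_exchange_strong(dest, expected, newval);
}

/* Typical usage: a lock-free increment that retries until its CAS wins. */
void increment(atomic_int *p)
{
    int old = atomic_load(p);
    while (!atomic_compare_exchange_strong(p, &old, old + 1)) {
        /* old was refreshed with the current value; just retry */
    }
}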
I had this question too. This kind of thing can't happen: the function you wrote is an abstraction of what the CPU does, and the real implementation is atomic. Google the keyword "cmpxchg" and you will find the answer you're looking for.
Yes, from the outside this code looks like it can lead to the pitfalls you mention. However, if we look at how it is compiled, it results in a cmpxchg instruction, which the CPU executes atomically.
As a computer science concept, compare-and-swap HAS to be implemented atomically because of what it is designed to do as a consensus object: https://stackoverflow.com/a/56383038/526864
if another thread jumps in and succeed to modify the *dest first
I think that this premise is flawed, because dest cannot be allowed to change. The pseudocode should look more like:
bool compare_and_swap(int *p, int oldval, int newval)
{
    if (*p == oldval) {
        *p = newval;
        return true;
    } else {
        return false;
    }
}
The example that you provided is from a specific implementation that returns the winning process's PID to the losers and only allows a single modification to *dest:
an election protocol can be implemented such that every process checks the result of compare_and_swap against its own PID (= newval)
So compare-and-swap is either implemented with an atomic function/library or uses cmpxchg, as you surmised.
Do you think that these methods are special methods that directly utilize the hardware to perform atomic operations?
I'm working on a CDCL SAT solver. I don't know how to implement non-chronological backtracking. Is this even possible with recursion, or is it only possible with an iterative approach?
What I have done so far is implement a DPLL solver, which works with recursion. The great difference between DPLL and CDCL is that the backtracking in the tree is not chronological. Is it even possible to implement something like this with recursion? In my opinion, I have two choices in a node of the binary decision tree if one of the two paths leads to a conflict:
I try the other path, but then it would be the same as DPLL, i.e., chronological backtracking.
I return, but then I will never come back to this node.
So am I missing something here? Could it be that the only option is to implement it iteratively?
Non-chronological backtracking (or backjumping as it is usually called) can be implemented in solvers that use recursion for the variable assignments. In languages that support non-local gotos, you would typically use that method. For example in the C language you would use setjmp() to record a point in the stack and longjmp() to backjump to that point. C# has try-catch blocks, Lispy languages might have catch-throw, and so on.
If the language doesn't support non-local goto, then you can implement a substitute in your code. Instead of dpll() returning FALSE, have it return a tuple containing FALSE and the number of levels that need to be backtracked. Upstream callers decrement the counter in the tuple and return it until zero is returned.
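As a minimal, runnable C illustration of the longjmp mechanism (the conflict at level 5 and the backjump target of level 1 are hard-coded stand-ins for real conflict analysis):

#include <setjmp.h>
#include <stdio.h>

#define MAX_LEVELS 16
static jmp_buf level_buf[MAX_LEVELS];  /* one jump target per decision level */

static void descend(int level)
{
    if (setjmp(level_buf[level]) != 0) {
        /* Arrived here via longjmp from a deeper level: a "conflict" was
         * analyzed and this level was chosen as the backjump target. A real
         * solver would undo the skipped levels' assignments and re-branch. */
        printf("backjumped to level %d\n", level);
        return;
    }
    printf("deciding at level %d\n", level);
    if (level == 5) {
        /* pretend conflict analysis chose level 1 as the target */
        longjmp(level_buf[1], 1);
    }
    descend(level + 1);
}

int main(void)
{
    descend(1);
    return 0;
}

The longjmp unwinds every frame between the conflict and the target in one step, which is exactly the non-chronological part; the counter-returning scheme described above achieves the same effect portably, one frame at a time.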
You can modify this to get backjumping.
private Assignment recursiveBackJumpingSearch(CSP csp, Assignment assignment) {
    Assignment result = null;
    if (assignment.isComplete(csp.getVariables())) {
        result = assignment;
    } else {
        Variable var = selectUnassignedVariable(assignment, csp);
        for (Object value : orderDomainValues(var, assignment, csp)) {
            assignment.setAssignment(var, value);
            fireStateChanged(assignment, csp);
            if (assignment.isConsistent(csp.getConstraints(var))) {
                result = recursiveBackJumpingSearch(csp, assignment);
                if (result != null) {
                    break;
                }
                numberOfBacktrack++;  // result is necessarily null here
            }
            assignment.removeAssignment(var);
        }
    }
    return result;
}
Consider the following code
int Var;

Function1() {
    [CS_Start]
    Var++;
    [CS_End]
}

Function2() {
    [CS_Start]
    Var += 2;
    [CS_End]
}

ISR() {
    [CS_Start]
    Var--;
    [CS_End]
}
How do I protect Var in a multitasking environment? One design I am aware of is to declare Var as volatile so that it is safe under a multiprocessor scheduling scheme. Additionally, a spinlock (in place of a mutex) can be implemented to protect the critical section, as in the sketch below.
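For concreteness, here is a minimal sketch of the kind of spinlock described above, using C11 atomics (spin_lock and spin_unlock are illustrative names, not a library API):

#include <stdatomic.h>

volatile int Var;
static atomic_flag var_lock = ATOMIC_FLAG_INIT;

static void spin_lock(atomic_flag *l) {
    while (atomic_flag_test_and_set(l)) {
        /* busy-wait until the holder clears the flag */
    }
}

static void spin_unlock(atomic_flag *l) {
    atomic_flag_clear(l);
}

void Function1(void) {
    spin_lock(&var_lock);    /* [CS_Start] */
    Var++;
    spin_unlock(&var_lock);  /* [CS_End] */
}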
What happens if the spinlock is acquired by Function1 and the ISR occurs (with higher priority than the scheduler timer)? The ISR will keep polling and Function1 will never get a chance to release the lock. Any solution to this problem?
I'd like to know how a composable event-driven design with callbacks can be used in Rust. From my existing experimentation, I have come to suspect that the ownership system in Rust is more suited to top-down procedural code and has problems with references to parent objects that are needed for callbacks in event-driven design.
Essentially, I would like to see the Rust equivalent for the following C++ code. The code implements an EventLoop which dispatches Timer events using a busy-loop, a Timer class with a timer_expired callback, and a User class that schedules a timer in intervals of 500ms.
#include <stdio.h>
#include <assert.h>

#include <list>
#include <chrono>
#include <algorithm>

using namespace std::chrono;

// Wrapping code in System class so we can implement all functions within class declarations...
template <typename Dummy = void>
struct System {
    class Timer;

    class EventLoop {
        friend class Timer;

    private:
        std::list<Timer *> m_running_timers;
        bool m_iterating_timers;
        typename std::list<Timer *>::iterator m_current_timer;

        void unlink_timer (Timer *timer)
        {
            auto it = std::find(m_running_timers.begin(), m_running_timers.end(), timer);
            assert(it != m_running_timers.end());
            if (m_iterating_timers && it == m_current_timer) {
                ++m_current_timer;
            }
            m_running_timers.erase(it);
        }

    public:
        EventLoop()
        : m_iterating_timers(false)
        {
        }

        milliseconds get_time()
        {
            return duration_cast<milliseconds>(system_clock::now().time_since_epoch());
        }

        void run()
        {
            while (true) {
                milliseconds now = get_time();
                m_iterating_timers = true;
                m_current_timer = m_running_timers.begin();
                while (m_current_timer != m_running_timers.end()) {
                    Timer *timer = *m_current_timer;
                    assert(timer->m_running);
                    if (now >= timer->m_expire_time) {
                        m_current_timer = m_running_timers.erase(m_current_timer);
                        timer->m_running = false;
                        timer->m_callback->timer_expired();
                    } else {
                        ++m_current_timer;
                    }
                }
                m_iterating_timers = false;
            }
        }
    };

    struct TimerCallback {
        virtual void timer_expired() = 0;
    };

    class Timer {
        friend class EventLoop;

    private:
        EventLoop *m_loop;
        TimerCallback *m_callback;
        bool m_running;
        milliseconds m_expire_time;

    public:
        Timer(EventLoop *loop, TimerCallback *callback)
        : m_loop(loop), m_callback(callback), m_running(false)
        {
        }

        ~Timer()
        {
            if (m_running) {
                m_loop->unlink_timer(this);
            }
        }

        void start (milliseconds delay)
        {
            stop();
            m_running = true;
            m_expire_time = m_loop->get_time() + delay;
            m_loop->m_running_timers.push_back(this);
        }

        void stop ()
        {
            if (m_running) {
                m_loop->unlink_timer(this);
                m_running = false;
            }
        }
    };

    class TimerUser : private TimerCallback {
    private:
        Timer m_timer;

    public:
        TimerUser(EventLoop *loop)
        : m_timer(loop, this)
        {
            m_timer.start(milliseconds(500));
        }

    private:
        void timer_expired() override
        {
            printf("Timer has expired!\n");
            m_timer.start(milliseconds(500));
        }
    };
};

int main ()
{
    System<>::EventLoop loop;
    System<>::TimerUser user(&loop);
    loop.run();
    return 0;
}
The code works as standard C++14 and I believe it is correct. Note that in a serious implementation I would make running_timers an intrusive linked list rather than a std::list, for performance reasons.
Here are some properties of this solution which I need to see in a Rust equivalent:
Timers can be added and removed without restriction; there are no limitations on how or where a Timer is allocated. For example, one can dynamically manage a list of classes, each using its own timer.
In a timer_callback, the class being called back has full freedom to access itself, and the timer it is being called from, e.g. to restart it.
In a timer_callback, the class being called also has freedom to delete itself and the timer. The EventLoop understands this possibility.
I can show some things I've tried, but I don't think it would be useful. The major pain point I'm having is satisfying the borrowing rules with all the references to parent objects involved in the callback traits.
I suspect a RefCell or something similar might be part of a solution, possibly one or more special classes with internal unsafe parts that allow gluing stuff together. Maybe some parts of the reference safety typically provided by Rust could only be guaranteed at runtime, by panicking.
Update
I have created a prototype implementation of this in Rust, but it is not safe. Specifically:
Timers and the EventLoop must not be moved. If they are accidentally moved, undefined behavior occurs due to the use of raw pointers to them. It is not even possible to detect this within Rust.
The callback implementation is a hack but should work. Note that it allows the same object to receive callbacks from two or more Timers, something that would not be possible if traits were used for callbacks instead.
The callbacks are unsafe due to use of pointers that point to the object that is to receive the callback.
It is theoretically possible for an object to delete itself, but this actually seems unsafe in Rust, because if a Timer callback ends up deleting its own object, there would be a &self reference at some point that is no longer valid. The source of this unsafety is the use of raw pointers for callbacks.