State of the condition variable in pthread_cond_ - rtos

While writing a threaded program I came across a small issue.
The issue is: how do I know the state of a condition variable?
What I mean is: if a pthread is already waiting in pthread_cond_wait and I try to wait on the same condition variable again, it leads to a deadlock. In order to avoid this, could you please suggest how I can know the state of a condvar before waiting on it?
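For what it's worth, the usual POSIX answer is to never query the condition variable itself: guard a boolean predicate with the mutex and loop on the predicate, so any number of threads can wait on the same condvar without deadlock. A minimal sketch, with illustrative names:

```c
#include <pthread.h>
#include <stdbool.h>

/* Guard the predicate `ready` with the same mutex the condition
   variable uses, and loop on the predicate, never on the condvar. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;

void wait_until_ready(void) {
    pthread_mutex_lock(&lock);
    while (!ready)                        /* re-check after every wakeup */
        pthread_cond_wait(&cond, &lock);  /* atomically unlocks + blocks */
    pthread_mutex_unlock(&lock);
}

void announce_ready(void) {
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_broadcast(&cond);        /* wake every waiter */
    pthread_mutex_unlock(&lock);
}
```

Because every waiter re-checks `ready` under the mutex, it never matters how many threads are already waiting, or whether the signal happened before a given thread got there.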


What can condition variables do that unlock+yield cannot?

In POSIX, there's the requirement that when a wait is called on a condition variable and a mutex, the two operations (unlocking the mutex and blocking the thread) be performed atomically, in such a way that any broadcast/signal takes effect as if it happened after the blocking. I suppose there are equivalent requirements on C11 and C++ condition variables as well, and I won't do a verbose enumeration.
However, on some systems (such as many people's nostalgia, WinXP), there was no condition variable mechanism. Instead, one had to perform an unlock+yield to achieve a similar (the same?) effect. And this works because even if the broadcast/signal occurred in between the unlock and the yield, when the thread is re-scheduled, its observable behavior is the same as if the wake had occurred after the block. WinXP supported mutexes, and it had a SleepEx function that can work like a yield.
So it begs the question: what can condition variables do that unlock+yield cannot?
In response to the comment: I use WinXP as an example because it happens to be a system that supported mutexes but not condvars, and because it's within one generation's memory. Of course, we assume correctness and reasonable performance, and the question isn't specifically about Windows; it asks about any implementation in general.
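For concreteness, the two styles can be sketched with POSIX threads (illustrative names). Both are correct only because they re-check a predicate in a loop; what the condition variable adds is that unlock-and-block is one atomic step, so the waiter sleeps until signalled instead of burning a scheduler slot per re-check:

```c
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
static bool done = false;

/* Style 1: unlock + yield.  A wakeup delivered in the marked gap is
   simply missed; the only reason this terminates is that it loops and
   re-reads `done`, polling once per scheduling slot. */
void wait_with_yield(void) {
    pthread_mutex_lock(&m);
    while (!done) {
        pthread_mutex_unlock(&m);
        /* gap: a signal arriving here has no one to wake */
        sched_yield();
        pthread_mutex_lock(&m);
    }
    pthread_mutex_unlock(&m);
}

/* Style 2: condition variable.  Unlock and block happen atomically,
   so a signal sent while we hold the mutex cannot slip into a gap,
   and the thread sleeps instead of polling. */
void wait_with_condvar(void) {
    pthread_mutex_lock(&m);
    while (!done)
        pthread_cond_wait(&cv, &m);
    pthread_mutex_unlock(&m);
}

void mark_done(void) {
    pthread_mutex_lock(&m);
    done = true;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
}
```

So observable behavior can be made equivalent, but only by polling; the condvar's atomicity is what lets the waiter block indefinitely without a timeout and still never lose a wakeup.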

SystemVerilog assertion semantics & when to stop simulation [closed]

The vendor of the simulation tool I'm using (Cadence) has said that they must stop failing assertions at the instant the assertion fails, not during the assertion's action block. I would prefer them to stop only when a $error or $fatal happens, which would be in the action block. My reasoning is that the simulator stops before I have a chance to print my message.
Their reasoning relates to the state of the simulation during the assertion (observed region) vs the state during the action block (reactive region). I am trying to figure out if this explanation makes sense.
Can you weigh in on this? Here's their explanation:
SVA concurrent assertions execute in the observed region while their
action blocks execute in the reactive region. Currently, we stop in the
observed region right at the instant the assertion fails. There could
be other processes that run between the assertion failure and the
execution of the action block. This would mean the state of the
simulator at the stop point is not consistent with the assertion
failure. There could be delays in the action block itself. Stopping
after the action block may mean stopping after those delays.
This explanation makes me think that SVA is designed badly. If there's no easy way to print a message on an SVA assertion failure without the state changing in a significant way, perhaps SVA needs an update. On the other hand, if this has never come up for others before, I question whether the vendor's analysis is correct.
At its core I want to know:
Can state really change between the assertion failure and the action block?
Would $sampled in the action block be sufficient to mitigate the concern of the changes in state?
Wouldn't the standard have specified the action block to happen while the state still persists, at least somewhat? And if the state does change, I would think there would still be a way to view what caused the failure. Perhaps that is $sampled; this is what I'm trying to figure out.
You need to take this up with your tool vendor. There is nothing in the LRM that says the simulator has to stop on an assertion failure; that is a tool option. It does say there is an implicit call to $error if there is no action block. And the LRM does not require the tool to stop on $error either; that is also a tool option.
If your action block has a $error in it, that could be the trigger to stop your simulation, and it should print your message along with it.

multiple triggered subsystem + algebraic loop, initialisation problem

I have a Simulink diagram that contains multiple triggered subsystems with different sample times. The model also has a feedback loop, which induces an algebraic loop. The signal on that loop therefore must be initialised; to do that, I used a Memory block.
The problem is that on the feedback loop, the value of the signal does not appear to be initialised.
I believe the origin of the problem is that the signal is indeed initialised by the Memory block for the first time step; however, the trigger on the next subsystem has not yet occurred. By default, that subsystem sets its output signal value to 0, so the loop is broken there.
Has anyone already encountered this situation? Any tips?
Thank you for your time.
You could add initialization blocks for your trigger values. I don't know what SubSystem0 looks like inside, but its output could use an initialization block as well; this way you guarantee that you have an input to Subsystem.

In Scala, does Futures.awaitAll terminate the thread on timeout?

So I'm writing a mini timeout library in Scala; it looks very similar to the code here: How do I get hold of exceptions thrown in a Scala Future?
The function I execute will either complete successfully or block forever, so I need to make sure that on a timeout the executing thread is cancelled.
Thus my question is: on a timeout, does awaitAll terminate the underlying actor, or just let it keep running forever?
One alternative I'm considering is to use the Java Future library, as it has an explicit cancel() method one can call.
[Disclaimer - I'm new to Scala actors myself]
As I read it, scala.actors.Futures.awaitAll waits until the futures in the list are all resolved OR until the timeout. It will not Future.cancel, Thread.interrupt, or otherwise attempt to terminate a Future; you get to come back later and wait some more.
Future.cancel may be suitable; however, be aware that your code may need to participate in effecting the cancel operation - it doesn't necessarily come for free. Future.cancel cancels a task that is scheduled but not yet started. For a task that is already running, it interrupts the thread (setting a flag that can be checked), and the thread may or may not acknowledge the interrupt. Review Thread.interrupt and Thread.isInterrupted(). Your long-running task would normally check whether it is being interrupted (your code) and self-terminate. Various methods (e.g. Thread.sleep, Object.wait and others) respond to the interrupt by throwing InterruptedException. You need to review and understand that mechanism to ensure your code will meet your needs within those constraints. See this.
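The cooperative part of that mechanism can be sketched in C with an atomic flag (illustrative names; Thread.interrupt in Java/Scala sets exactly such a flag, which the running code must notice and act on, with the extra nicety that blocking calls like Thread.sleep notice it for you by throwing InterruptedException):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool cancelled;    /* zero-initialized: not cancelled */

/* The long-running task periodically checks the flag and
   self-terminates; if it never checks, cancellation has no effect,
   which is exactly the caveat with Future.cancel on a running task. */
int run_task(int iterations) {
    int completed = 0;
    for (int i = 0; i < iterations; i++) {
        if (atomic_load(&cancelled))
            break;               /* acknowledge the cancel request */
        completed++;             /* one unit of work */
    }
    return completed;
}

void request_cancel(void) {
    atomic_store(&cancelled, true);
}
```

A task that blocks forever without ever reaching a check point cannot be cancelled this way, which is why the question's "block forever" case needs the blocking call itself to be interruptible.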

Is blocking on mutex equivalent to empty while cycle?

I'm writing a concurrency application for the iPhone.
I wonder if this code:
while(!conditionBoolean)
{
// do nothing
// until another thread makes this variable true.
}
makeWork();
Is equivalent to the following:
[lock lock]; // this lock is locked by another thread
// causing the current to block until it's unlocked
[lock unlock];
makeWork();
If it's not, what's the difference?
Thank you.
You should prefer the second: the first produces a tight loop that will delay, or maybe even prevent, the variable being set in the way you want/expect. At the very least you would have to introduce a delay into that loop, a sleep of some kind.
Better still would be to wait on a signalling primitive for the work to complete, which then gets signalled by the other thread; the design is then deterministic, rather than depending on a mutex or state variable that some other thread might lock or modify before you get your chance. In general, it's better for a multi-threaded design to be event-driven (push model) than to check shared state opportunistically (pull model).
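The signalling-primitive idea can be sketched with a POSIX semaphore (illustrative names; on iOS you would more likely reach for dispatch_semaphore, but the shape is the same): the consumer sleeps in the kernel instead of spinning, and a post that happens before the wait is remembered, never lost.

```c
#include <semaphore.h>

static sem_t work_ready;

/* call once, before either thread starts; 0 = no work pending yet */
int init_work_signal(void) {
    return sem_init(&work_ready, 0, 0);
}

/* producer: record that one work item is ready; this is never lost,
   even if the consumer hasn't reached its wait yet */
void signal_work(void) {
    sem_post(&work_ready);
}

/* consumer: block without spinning until a post arrives,
   then go do makeWork() */
void await_work(void) {
    sem_wait(&work_ready);
}
```

Unlike the lock/unlock trick, this doesn't depend on which thread reaches the primitive first, and it consumes no CPU while waiting.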
My understanding of mutexes is that the lock can happen in fewer cycles. For example, while you are polling conditionBoolean, another thread could set it to true and yet another set it back to false before you read it again; that is a race condition, which mutex locking would help avoid. It could also cause your code not to be "next in line" if you have numerous functions with a similar while loop.