I must solve this problem:
Two processes, A and B, each need three records, 1, 2, and 3, in a
database. If A asks for them in the order 1, 2, 3, and B asks for them
in the same order, deadlock is not possible. However, if B asks for
them in the order 3, 2, 1, then deadlock is possible. With three
resources, there are 3! or six possible combinations in which each
process can request them. What fraction of all the combinations is
guaranteed to be deadlock free?
And I've seen the solution to this problem in a book:
123 deadlock free
132 deadlock free
213 possible deadlock
231 possible deadlock
312 possible deadlock
321 possible deadlock
Since four of the six may lead to deadlock, there is a 1/3 chance of
avoiding a deadlock and a 2/3 chance of getting one.
But I can't figure out the logic behind this solution.
Would someone please explain why this solution is correct?
I've searched a lot but didn't find anything; all of the answers to this problem that I found came without a clear explanation.
Deadlock occurs when both threads must wait to acquire a lock that the other thread already acquired (causing both threads to wait forever).
If both threads try to acquire the same lock first, then only one thread can succeed and the other must wait; the waiting thread waits while holding no locks, so deadlock can't happen: the thread that acquired lock 1 can acquire every other lock it wants and will release lock 1 when it's finished (which allows the waiting thread to continue).
E.g. if A and B try to acquire lock 1 first and A wins, then B waits while not holding any lock and A can acquire any other locks in any order it wants because B isn't holding any locks (and then A will release lock 1 and B can stop waiting).
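To see why only two of the six orders are safe, here is a small brute-force sketch (my own illustration, not from the textbook; the function and variable names are invented). It fixes A's order at 1, 2, 3 and, for each of B's six possible orders, searches every interleaving for a state in which each process is blocked on a record the other holds:

#include <stdio.h>
#include <string.h>

static int a_order[3] = {0, 1, 2};   /* A always asks for records 1, 2, 3 */
static int b_order[3];               /* B's order is varied in main()      */

/* owner[r]: -1 = free, 0 = held by A, 1 = held by B.
 * ai/bi: index of the next record each process will request (3 = finished,
 * at which point it has released everything). Returns 1 if some interleaving
 * from this state can reach a deadlock. */
static int can_deadlock(int ai, int bi, const int owner[3])
{
    int a_blocked = ai < 3 && owner[a_order[ai]] == 1;
    int b_blocked = bi < 3 && owner[b_order[bi]] == 0;

    if (a_blocked && b_blocked)
        return 1;                    /* each waits on the other: deadlock */

    int found = 0, next[3];
    if (ai < 3 && !a_blocked) {      /* A grabs its next record */
        memcpy(next, owner, sizeof next);
        next[a_order[ai]] = 0;
        if (ai == 2)                 /* A now holds all three: done, release */
            next[0] = next[1] = next[2] = -1;
        found |= can_deadlock(ai + 1, bi, next);
    }
    if (bi < 3 && !b_blocked) {      /* B grabs its next record */
        memcpy(next, owner, sizeof next);
        next[b_order[bi]] = 1;
        if (bi == 2)
            next[0] = next[1] = next[2] = -1;
        found |= can_deadlock(ai, bi + 1, next);
    }
    return found;
}

int main(void)
{
    int perms[6][3] = {{0,1,2},{0,2,1},{1,0,2},{1,2,0},{2,0,1},{2,1,0}};
    int free_state[3] = {-1, -1, -1};

    for (int p = 0; p < 6; p++) {
        memcpy(b_order, perms[p], sizeof b_order);
        printf("B asks %d%d%d: %s\n",
               perms[p][0] + 1, perms[p][1] + 1, perms[p][2] + 1,
               can_deadlock(0, 0, free_state) ? "possible deadlock"
                                              : "deadlock free");
    }
    return 0;
}

Its output matches the book's table: only the two orders in which B asks for record 1 first (123 and 132) are deadlock free, i.e. 2/6 = 1/3 of the combinations.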
I'm working on an STM32F103RB Nucleo board. I want to know how received CAN messages are segregated into the FIFOs, and what happens once a FIFO is full (more than 3 messages).
When you configure a filter bank, you also specify the receive FIFO (mailbox); you have 2 of them. Messages accepted by a filter bank go into the associated FIFO.
FIFO (mailbox) overrun can trigger an interrupt if enabled. The behavior of the FIFO and the fate of the incoming messages are determined by the RFLM bit of the CAN->MCR register.
RFLM = 0 -> The last (3rd) message is overwritten (destroyed) by newly arriving messages. The first (oldest) 2 messages are preserved until you read them.
RFLM = 1 -> The FIFO is locked. Newly arriving messages are discarded. The oldest 3 messages are preserved.
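As a rough illustration, here is how the locked mode could be selected at register level. This is only a sketch: it assumes the CMSIS device header for the STM32F103 (with the CAN_MCR_RFLM, CAN_MCR_INRQ and CAN_MSR_INAK bit definitions) and that, as described in the reference manual, the MCR configuration bits are written while the peripheral is in initialization mode.

#include "stm32f1xx.h"   /* assumed CMSIS device header providing CAN1, CAN_MCR_* */

/* Sketch: choose what happens on receive-FIFO overrun.
 * locked != 0 -> RFLM = 1: FIFO locked, new messages discarded, oldest 3 kept.
 * locked == 0 -> RFLM = 0: last message overwritten by new arrivals. */
static void can_set_rx_locked_mode(int locked)
{
    /* MCR configuration bits are writable in initialization mode only,
     * so request it and wait for the acknowledge flag. */
    CAN1->MCR |= CAN_MCR_INRQ;
    while (!(CAN1->MSR & CAN_MSR_INAK)) { /* spin */ }

    if (locked)
        CAN1->MCR |= CAN_MCR_RFLM;
    else
        CAN1->MCR &= ~CAN_MCR_RFLM;

    /* Leave initialization mode again. */
    CAN1->MCR &= ~CAN_MCR_INRQ;
    while (CAN1->MSR & CAN_MSR_INAK) { /* spin */ }
}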
And what happens after FIFO is full(more than 3 messages)?
Then you are basically done for: you'll lose data on Rx FIFO overflow, which is often unacceptable in real-time CAN systems. So if your MCU is too busy to always meet the 3-message deadline, you have to implement some ugly scheme with interrupts + ring buffers.
This is one reason why CAN controllers from somewhere around the late 90s/early 2000s started to use rx buffers of some 5 to 8 messages. bxCAN is apparently ancient, since it is worse than those 20+ year old controllers.
Hopefully you can DMA the messages, which is much prettier than the mentioned interrupt/ring buffer complexity. If that's not an option, then you should perhaps go for a modern CAN controller instead. Basically any other CAN controller on the market has a larger rx FIFO than this one.
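If you do end up with the interrupt + ring buffer approach, a minimal sketch could look like the following. It assumes a recent STM32CubeF1 HAL with the CAN module enabled (HAL_CAN_ActivateNotification, HAL_CAN_GetRxMessage and the HAL_CAN_RxFifo0MsgPendingCallback weak callback) and an already initialised handle named hcan; the ring buffer names and size are my own choices.

#include "stm32f1xx_hal.h"   /* assumed HAL header with the CAN module enabled */

#define RX_RING_SIZE 16u     /* sized to survive bursts between main-loop runs */

typedef struct {
    CAN_RxHeaderTypeDef header;
    uint8_t             data[8];
} can_frame_t;

extern CAN_HandleTypeDef hcan;              /* assumed to be defined elsewhere */

static can_frame_t       rx_ring[RX_RING_SIZE];
static volatile uint32_t rx_head, rx_tail;

/* Enable the FIFO0 message-pending interrupt once, after HAL_CAN_Start(). */
void can_rx_irq_enable(void)
{
    HAL_CAN_ActivateNotification(&hcan, CAN_IT_RX_FIFO0_MSG_PENDING);
}

/* HAL weak callback: drain FIFO0 into the ring buffer as fast as possible. */
void HAL_CAN_RxFifo0MsgPendingCallback(CAN_HandleTypeDef *h)
{
    can_frame_t frame;
    while (HAL_CAN_GetRxMessage(h, CAN_RX_FIFO0, &frame.header, frame.data) == HAL_OK) {
        uint32_t next = (rx_head + 1u) % RX_RING_SIZE;
        if (next == rx_tail)
            break;                           /* ring full: frame is dropped */
        rx_ring[rx_head] = frame;
        rx_head = next;
    }
}

/* Called from the main loop; returns 1 if a frame was copied out. */
int can_rx_pop(can_frame_t *out)
{
    if (rx_tail == rx_head)
        return 0;                            /* empty */
    *out = rx_ring[rx_tail];
    rx_tail = (rx_tail + 1u) % RX_RING_SIZE;
    return 1;
}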
I would like to know if my solution to this problem is correct:
ThreadA and ThreadB both need both resources, P and Q, in order to do some work. Each Thread can acquire a resource with lockQ() or lockP(). The resources cannot be shared. If a Task tries to acquire a resource that is already in use, it blocks. When a Task no longer needs a resource it owns it can release it with unlockP() or unlockQ().
Each task runs its own code and uses the functions lockP(), lockQ(), doWork(), unlockP() and unlockQ(). Each thread releases its resources after it invokes doWork(); the resources are released in the opposite order they were obtained. Show a sequence of operations for each thread so that deadlock is impossible.
Solution:
Task A
lockP();
lockQ();
doWork();
unlockQ();
unlockP();
Task B
lockP();
lockQ();
doWork();
unlockQ();
unlockP();
The reason I had both tasks lock P first and then Q was that if the processor executed Task A first, Task B would become blocked. This would allow Task A to finish execution and then move on to Task B, which would be unblocked.
Yes, your solution is correct.
An alternative is for Task B to lockQ before lockP while Task A does lockP then lockQ, but this has the potential for deadlock if B gets Q and A gets P. In that case both tasks are blocked.
As you mention, having both tasks acquire the resources in the same order ensures that the task which loses the race for the first resource blocks while holding nothing, so it cannot make the other task block on the second resource.
Coffman, Elphick and Shoshani (1971) identified four conditions for the occurrence of a deadlock [1]:
Shared resources which are used under mutual exclusion
Incremental acquisition
No preemption (once acquired, a task will not give up a resource until it has completed its work)
Wait-for cycle
You must nullify one of these conditions to guarantee that no deadlock can occur. By ensuring both tasks acquire the resources in the same order, you have made sure condition 4 does not hold, i.e. A and B will never both be holding a resource while waiting for a second resource at the same time.
[1]: Magee, Kramer, Concurrency State Models & Java Programs, page 107
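A runnable sketch of that ordering, using POSIX mutexes in place of the lockP()/lockQ() primitives (the names mutex_p, mutex_q and task are mine, not from the problem statement):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex_p = PTHREAD_MUTEX_INITIALIZER;  /* resource P */
static pthread_mutex_t mutex_q = PTHREAD_MUTEX_INITIALIZER;  /* resource Q */

static void do_work(const char *who)
{
    printf("%s holds P and Q, doing work\n", who);
}

/* Both tasks acquire in the same order (P, then Q) and release in the
 * opposite order, so at most one of them can ever be holding a resource
 * while waiting for the other resource. */
static void *task(void *arg)
{
    const char *who = arg;
    pthread_mutex_lock(&mutex_p);      /* lockP()   */
    pthread_mutex_lock(&mutex_q);      /* lockQ()   */
    do_work(who);                      /* doWork()  */
    pthread_mutex_unlock(&mutex_q);    /* unlockQ() */
    pthread_mutex_unlock(&mutex_p);    /* unlockP() */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, task, "Task A");
    pthread_create(&b, NULL, task, "Task B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

If you change one of the tasks to lock Q before P, you recreate the deadlock-prone alternative described above.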
I have a doubt about the paradigm of distributed systems.
Consider the condition variables that the signal operation unblocks. If the waiting processes are signaled in Last In, First Out (LIFO) order, what advantages and disadvantages does that bring?
Advantages and disadvantages relative to what? Assuming the comparison is with having no defined order, I would say a disadvantage is starvation: if processes are constantly being put to wait on that condition, only the most recently queued processes will wake up, making it impossible for the earliest ones to ever wake unless processes stop being put to wait.
As for advantages, I'm not so certain, but at least there is some order and the signal won't just wake a random process, which we may be able to use to our benefit.
There may be other advantages or disadvantages that I didn't think of, so it may be best to wait for other answers.
Deadlock-
A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
For deadlock to happen, all four of these conditions must hold simultaneously:
Mutual exclusion
Hold and wait
No preemption
Circular wait
We apply a deadlock detection algorithm to check whether the system is in deadlock or not. But if any of the above criteria fails (for example, no preemption fails, so some resource is being released), the system is deadlock free. So what I think is: if the deadlock detection algorithm finds the state to be unsafe and all four of the above criteria hold true simultaneously, then we can say the system is in deadlock.
An unsafe state may or may not lead to deadlock.
But an unsafe state with all 4 of these conditions holding simultaneously must result in deadlock.
Am I thinking right?
I have another question in my mind: how can we say deadlock has definitely occurred, when the next moment some process may release its resources and get rid of the deadlock?
Am I thinking right?
Yes, you are correct.
See this link for why an unsafe state may not lead to deadlock.
I have another question in my mind: how can we say deadlock has definitely occurred, when the next moment some process may release its resources and get rid of the deadlock?
Say a deadlock has occurred. All processes causing the deadlock are waiting for some resource to be acquired. Because of "no preemption", no such process will be preempted and therefore forced to release its resources. And because of "hold and wait", a process that needs more resources to continue will not give up or release whatever it is holding now; it will wait until its required resources are available. Once there is deadlock, nothing can happen (there cannot be any progress) until you break one of the above conditions. Breaking a condition lets some other process meet its requirements and ensures progress and completion.
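To make the "circular wait" condition concrete, here is a small sketch (my own illustration, not taken from any particular textbook) that detects a cycle in a wait-for graph for single-instance resources. waits_for[i] is the process that process i is currently blocked on, or -1 if it is not waiting; with one outgoing edge per process, a cycle in this graph is exactly a deadlocked set.

#include <stdio.h>

#define NPROC 4

/* Returns 1 if the wait-for graph contains a cycle, i.e. a set of processes
 * each waiting on the next one around a loop. */
static int has_wait_cycle(const int waits_for[NPROC])
{
    for (int start = 0; start < NPROC; start++) {
        int slow = start, fast = start;
        /* Floyd cycle detection along the single outgoing edge per node. */
        while (fast != -1 && waits_for[fast] != -1) {
            slow = waits_for[slow];
            fast = waits_for[waits_for[fast]];
            if (slow == fast)
                return 1;
        }
    }
    return 0;
}

int main(void)
{
    /* P0 waits for P1 and P1 waits for P0: circular wait -> deadlock. */
    int deadlocked[NPROC]  = { 1, 0, -1, -1 };
    /* P0 waits for P1, but P1 is running: no cycle, progress is possible. */
    int progressing[NPROC] = { 1, -1, -1, -1 };

    printf("example 1: %s\n", has_wait_cycle(deadlocked)  ? "deadlock" : "no deadlock");
    printf("example 2: %s\n", has_wait_cycle(progressing) ? "deadlock" : "no deadlock");
    return 0;
}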
I am trying to understand and solve the following problem.
The following program attempts to use a pair of semaphores t and s for mutual exclusion.
Initially: s = 1, t = 0.
Thread 1 Thread 2
-------- --------
1. P(s); P(s);
2. V(s); V(s);
3. P(t); P(t);
4. V(t); V(t);
Please remember that:
The P operation wastes time or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed.
The V operation is the inverse: it makes a resource available again after the process has finished using it.
1. Why does this program cause a deadlock?
2. What changes can be made to initial semaphore values to remove deadlock potential?
UPDATE
Based on the comments, I am now able to better understand and illustrate the deadlock, so please let me know if I understood correctly.
Here is how the deadlock happens if Thread 1 gets CPU time before Thread 2: both threads get past P(s) and V(s), then both block forever at P(t) because t starts at 0.
How to fix this?
Set the initial value of t to 1.
The deadlock is at line 3. At this line both threads wait forever to acquire the semaphore t. The initial value of t is 0, which means it is already in a "locked" state, so if, for example, Thread 1 reaches line 3 first, it waits until the value of t becomes positive; similarly, some time later, Thread 2 will wait at the same line for t to become positive. In this way both threads wait on the resource forever, creating a deadlock.
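Here is a minimal runnable sketch of that fix using POSIX semaphores on Linux/pthreads (the worker function and thread names are my own; sem_wait/sem_post stand in for P and V). With t initialised to 1 instead of 0, P(t) no longer blocks forever:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s, t;

static void *worker(void *arg)
{
    const char *name = arg;
    sem_wait(&s);                 /* 1. P(s) */
    printf("%s passed P(s)\n", name);
    sem_post(&s);                 /* 2. V(s) */
    sem_wait(&t);                 /* 3. P(t) -- blocks forever if t starts at 0 */
    printf("%s passed P(t)\n", name);
    sem_post(&t);                 /* 4. V(t) */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);
    sem_init(&t, 0, 1);           /* change 1 -> 0 to reproduce the deadlock */
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}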
Hint: You can avoid deadlocks by lock ordering. For example, all code must lock s before it locks t. The question demonstrates what can happen if that is not the case. You can "make changes to the initial semaphore values" to conform with lock ordering.