Synchronization (Operating Systems)

This is a problem I faced in my operating systems exam. I could not figure out the right answer for it. Can someone help? Given is a piece of synchronization code where many threads are trying to access a global counter g using a lock:
if (lock == 1)
    wait();    // sleep this thread until some other thread wakes it up
else
    lock = 1;  // enter the protected area
// access global counter g
lock = 0;
// wake up some other thread that is waiting for the lock to be released
What is the problem in the above synchronization? Choose any one of the options given below.
1. The synchronization is fine and will run correctly.
2. Will only run on uni-processor systems but not on multiprocessor systems.
3. Will not run on any system.
4. Can't say. Need more data.

The answer is 3. This code fails at both safety and liveness as long as threads can be preempted. For safety, consider the following interleaving of operations with two threads t1 and t2:
t1 checks lock, skips to the else statement
OS preempts t1 and schedules t2
t2 checks lock, skips to the else statement
And we have two threads in the critical section. This is why you need some sort of atomic test-and-set operation, or the ability to disable preemption, to do it properly.
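A minimal sketch of that fix in C11 (my own illustration, not part of the original question), using atomic_flag, whose test-and-set is a single indivisible operation:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;  /* clear = unlocked */
static int g = 0;                            /* the global counter */

void increment_g(void)
{
    /* The check and the set happen as one atomic instruction, so the
       check-then-set window exploited by the interleaving above is gone. */
    while (atomic_flag_test_and_set(&lock))
        ;                        /* spin until we win the flag */
    g++;                         /* critical section */
    atomic_flag_clear(&lock);    /* lock = 0 */
}

Because this version spins instead of sleeping, it also avoids the wait()/wake-up pairing that causes the liveness problem discussed next.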
For liveness, consider the following interleaving of operations with two threads t1 and t2:
t1 checks lock, skips to the else statement
t1 sets lock to 1
OS preempts t1 and schedules t2
t2 checks lock, finds 1
OS preempts t2 and schedules t1
t1 sets lock to 0
t1 finds no thread waiting and does nothing else
OS schedules t2 again
t2 starts waiting...
And thus t2 is (potentially) waiting forever. The solution is for the synchronization primitive to keep track of wake-ups (e.g., a semaphore) or require that testing the condition and waiting is done atomically (e.g., mutexes and condition variables).
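For the wake-up-tracking variant the answer names, here is a sketch with a POSIX semaphore used as a binary lock (again my own illustration): a sem_post issued while no thread is waiting is remembered in the semaphore's count, so the lost-wake-up interleaving above cannot strand t2.

#include <semaphore.h>

static sem_t lock;    /* initialize once at startup: sem_init(&lock, 0, 1) */
static int g = 0;

void increment_g(void)
{
    sem_wait(&lock);  /* atomically: wait until the count is positive, then decrement */
    g++;              /* critical section */
    sem_post(&lock);  /* increment the count; never lost, even with no waiter */
}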

Related

Postgres ISOLATION LEVEL

I want to ask for help with my problem. I have a program that triggers asynchronous computations in parallel and waits in a loop until they are finished.
I am using Postgres as the database, where I have created a table computation_status that contains the following data when the computations are triggered:
computation      finished
COMPUTATION_A    null
COMPUTATION_B    null
COMPUTATION_C    null
Then I wait in a loop until all computations are finished. The loop accepts a notification for each computation that finishes and triggers a SQL transaction to update its status and check whether any other computation is still running. For example:
T1:
BEGIN_TRANSACTION
update computation_status set finished = NOW() where computation = 'COMPUTATION_A'
select exists (select 1 from computation_status where finished is null)
COMMIT
T2:
BEGIN_TRANSACTION
update computation_status set finished = NOW() where computation = 'COMPUTATION_B'
select exists (select 1 from computation_status where finished is null)
COMMIT
T3:
BEGIN_TRANSACTION
update computation_status set finished = NOW() where computation = 'COMPUTATION_C'
select exists (select 1 from computation_status where finished is null)
COMMIT
And when the last computation is finished the program exits the waiting loop.
What isolation level should I use to avoid these problems? I know I should use at least the READ COMMITTED isolation level to prevent non-repeatable reads, but is that enough? Or is it also possible that phantom reads will occur, so I should use REPEATABLE READ? (I'm not sure whether an UPDATE counts as a READ too.)
I want to avoid the problem where, for example, computations A and B finish at the same time as the last ones. Then T1 sets A = finished and reads that B is not finished, while T2 sets B = finished and reads that A is not finished, and this causes a problem in my application because it ends up in an infinite loop.
To avoid race conditions here, you have to effectively serialize the transactions.
The only isolation level where that works reliably is SERIALIZABLE. However, it incurs a performance penalty, and you have to be ready to repeat transactions in case a serialization error is thrown. If more than one of these transactions runs concurrently, a serialization error will be thrown.
The alternative would be to use locks, but that is not very appealing: using row locks would lead to deadlocks, and using table locks would block autovacuum, which would eventually bring your system down.
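As a sketch (assuming the same table as above), each transaction would then look like this, and the application has to catch serialization failures (SQLSTATE 40001) and re-run the transaction from the top:

BEGIN ISOLATION LEVEL SERIALIZABLE;
update computation_status set finished = NOW() where computation = 'COMPUTATION_A';
select exists (select 1 from computation_status where finished is null);
COMMIT;
-- if the COMMIT (or any statement) fails with SQLSTATE 40001,
-- roll back and retry the whole transaction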

Interruption of process in critical section

In priority-based scheduling, I came across the problem of priority inversion, in which a higher-priority process is forced to wait for a lower-priority task.
One possible scenario: consider three processes L, M, H with priority order L < M < H.
L is running in the CS; H also needs to run in the CS; H waits for L to come out of the CS; M interrupts L and starts running; M runs to completion and relinquishes control; L resumes and runs to the end of the CS; H enters the CS and starts running.
Here, my question is, regarding the statement M interrupts L and starts running i.e., can a process executing in Critical section be interrupted or pre-empted.
It depends on how the critical section is implemented.
In operating system code you will frequently find critical sections implemented where interrupts are blocked. In this kind of implementation, a process will always execute the entire critical section without interruption.
In user code that uses critical sections implemented through system services, the process invariably can be interrupted. If that were not the case, a process could take over the system by putting all its code in a critical section.
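A small user-space sketch of that second case (my own illustration): the holder of a pthread mutex can be preempted at any point inside the critical section, yet correctness is preserved, because contenders block instead of entering.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

static void *worker(void *arg)
{
    pthread_mutex_lock(&m);
    /* The OS may preempt us anywhere in here; any other thread reaching
       pthread_mutex_lock simply blocks until we unlock. */
    shared++;
    usleep(1000);    /* widen the window so preemption while holding the lock is likely */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);   /* always 4: exclusion survives preemption */
    return 0;
}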
You are describing one of the reasons process priorities should be consistent. Unless you are doing real time processing or background batch processing, all processes should generally have the same base priority.
The old DECUS tapes used to be filled with "fair share" applications that would lower the priority of processes with high CPU usage, and that would wreak havoc with system scheduling.
The answer is simply yes.
If some other process with a higher priority in a preemptive system doesn't need to run in the critical section, i.e., doesn't need to acquire a lock held by the lower-priority process, then it can preempt the lower-priority process regardless of what it is executing.
Even if M needs the CS, it will preempt L, run, get blocked and switched out for L to continue execution.

Can a single-processor environment prevent race conditions?

When multiple processors are working, processes run concurrently. A race condition happens when multiple threads access some common data area and one may overwrite another's value.
So, in a single-processor, single-core environment, can race conditions be prevented?
Help me clarify this confusion. Thank you.
A race condition can happen in a single-processor environment. Per Wikipedia, a race condition occurs when the output is dependent on the sequence or timing of other uncontrollable events.
A single-processor environment can support multiple threads, of the same process or of different processes, that might be waiting for another thread to yield a resource. Deadlocks can happen in single-processor environments too.
Scenario:
T1: Wants to add an employee record to file "employee.txt"
T2: Wants to compute average salary for "legal dept"
T3: Wants to remove an employee who left
T4: Wants to list number of employees working in each dept
If all the above threads are waiting at time = 0 and are submitted to a single processor, it decides which thread goes first, second, and so on. The order in which the threads are prioritized and yielded differs across platforms, scenarios, etc. Thus T2 and T4 might not give consistent results.
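A short C demonstration (my own sketch, not from the question) that a race needs only preemption, not a second processor: counter++ compiles to a load, an add, and a store, and a context switch between the load and the store loses an update.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;    /* shared, deliberately unprotected */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;          /* load, add, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but usually less, even when pinned to one core,
       because the scheduler can switch threads between the load and the store. */
    printf("counter = %ld\n", counter);
    return 0;
}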

Differences (if any) between livelock and starvation in operating systems

What are the differences (if any) between starvation and livelock, or are they just synonyms? If there is a difference, please provide an example.
Note: I have seen the Wikipedia articles, but I am still confused.
Thanks
Livelock is a special case of resource starvation where two processes follow an algorithm for resolving a deadlock that results in a cycle of different locked states because each process is attempting the same strategy to avoid the lock.
Starvation itself can occur for one process without another process being cyclically blocked; in this case no livelock exists, just a single unfortunate process that gets no resources allocated by the scheduler.
The Java documentation's Starvation and Livelock tutorial states:
Starvation and Livelock
Starvation and livelock are much less common a problem than deadlock, but are still problems that every designer of concurrent software is likely to encounter.
Starvation
Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. This happens when shared resources are made unavailable for long periods by "greedy" threads. For example, suppose an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked.
Livelock
A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked; they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They're still blocking each other, so...
Livelock
Livelock is a form of deadlock. In a deadlocked computation there is no possible execution sequence that succeeds, but in a livelocked computation there are successful computations, as well as one or more execution sequences in which no process enters its critical section.
Example scenario:
process P1
    c1 = 1
    c2 = 1
    while (true) {
        nonCriticalSection;
        c1 = 0;               // P1 announces that it wants to enter
        while (c2 != 1) {     // P2 is also interested
            c1 = 1;           // back off ...
            c1 = 0;           // ... and announce again
        }
        criticalSection1;
        c1 = 1;               // P1 leaves
    }

process P2
    c1 = 1
    c2 = 1
    while (true) {
        nonCriticalSection;
        c2 = 0;               // P2 announces that it wants to enter
        while (c1 != 1) {     // P1 is also interested
            c2 = 1;           // back off ...
            c2 = 0;           // ... and announce again
        }
        criticalSection2;
        c2 = 1;               // P2 leaves
    }
In this scenario, how can starvation happen?
For example:
P1 sets c1 to 0
P2 sets c2 to 0
P2 checks c1 and resets c2 to 1.
P1 completes a full cycle;
checks c2
enters critical section
resets c1
enters non-critical section
sets c1 to 0
P2 sets c2 to 0
The same thing can happen again and again: P1 may again get a chance to execute while P2 stays stuck in its while loop. Nothing in the algorithm forces P2 to be given a chance; P1 may run a million times before the OS schedules P2, since we don't enforce anything. So there exist sequences in which P2 starves. Because P1 makes progress while P2 never does, we call such sequences starvation.
In livelock, both threads are stuck in their while loops without accomplishing anything. Livelock resembles deadlock, but in deadlock nothing executes at all, whereas in livelock some instructions keep executing; those instructions are just not enough to admit a process into its critical section.
In the above pseudocode, livelock shows up with the following sequence of executions:
P1 sets c1 to 0.
P2 sets c2 to 0.
P1 checks c2 and remains in the loop.
P2 checks c1 and remains in the loop.
P1 resets c1 to 1.
P2 resets c2 to 1.
P1 resets c1 to 0.
P2 resets c2 to 0.
P1 and P2 stay in their while loops, executing instructions but making no progress.
Difference between deadlock and livelock
When deadlock happens, no execution happens at all. In livelock, some executions happen, but they are not enough to enter the critical section.
Difference between livelock and starvation
In starvation, some processes enter the critical section while others are kept out of it for some reason (OS scheduling, priority); in livelock, the critical section stays empty while the processes busily compete to enter it.
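The pseudocode above translates almost line-for-line into C11 atomics; a runnable sketch (my own, for illustration) follows. Mutual exclusion does hold for this protocol; the danger is exactly the livelock and starvation schedules described above, where the threads keep toggling their flags in the inner loops.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

/* 1 = not interested, 0 = wants to enter (as in the pseudocode) */
static atomic_int c1 = 1, c2 = 1;
static long visits = 0;             /* incremented only inside the critical section */

struct flags { atomic_int *mine, *other; };

static void *process(void *arg)
{
    struct flags *f = arg;
    for (int i = 0; i < 100000; i++) {
        atomic_store(f->mine, 0);               /* announce interest */
        while (atomic_load(f->other) != 1) {    /* other side also interested? */
            atomic_store(f->mine, 1);           /* back off ...      */
            atomic_store(f->mine, 0);           /* ... and try again */
        }
        visits++;                               /* critical section */
        atomic_store(f->mine, 1);               /* leave */
    }
    return NULL;
}

int main(void)
{
    struct flags f1 = { &c1, &c2 }, f2 = { &c2, &c1 };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process, &f1);
    pthread_create(&t2, NULL, process, &f2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("visits = %ld\n", visits);   /* 200000 if neither thread starved forever */
    return 0;
}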

What is priority inversion?

I've heard the phrase 'priority inversion' in reference to development of operating systems.
What exactly is priority inversion?
What is the problem it's meant to solve, and how does it solve it?
Imagine three (3) tasks of different priority: tLow, tMed and tHigh. tLow and tHigh access the same critical resource at different times; tMed does its own thing.
tLow is running, tMed and tHigh are presently blocked (but not in critical section).
tLow comes along and enters the critical section.
tHigh unblocks and since it is the highest priority task in the system, it runs.
tHigh then attempts to enter the critical resource but blocks as tLow is in there.
tMed unblocks and since it is now the highest priority task in the system, it runs.
tHigh cannot run until tLow gives up the resource. tLow cannot run until tMed blocks or ends. The priority of the tasks has been inverted; tHigh, though it has the highest priority, is now at the bottom of the execution chain.
To "solve" priority inversion, the priority of tLow must be bumped up to be at least as high as tHigh. Some may bump its priority to the highest possible priority level. Just as important as bumping up the priority level of tLow, is dropping the priority level of tLow at the appropriate time(s). Different systems will take different approaches.
When to drop the priority of tLow:
1. No other tasks are blocked on any of the resources that tLow holds. This may be due to timeouts or the releasing of resources.
2. No other tasks contributing to raising the priority level of tLow are blocked on the resources that tLow holds. This may be due to timeouts or the releasing of resources.
3. When there is a change in which tasks are waiting for the resource(s), drop the priority of tLow to match the priority of the highest-priority task blocked on its resource(s).
Method #2 is an improvement over method #1 in that it shortens the length of time that tLow has its priority level bumped. Note that its priority level stays bumped to tHigh's priority level during this period.
Method #3 allows the priority level of tLow to step down in increments if necessary instead of in one all-or-nothing step.
Different systems will implement different methods depending upon what factors they consider important.
memory footprint
complexity
real time responsiveness
developer knowledge
Hope this helps.
Priority inversion is a problem, not a solution. The typical example is a low priority process acquiring a resource that a high priority process needs, and then being preempted by a medium priority process, so the high priority process is blocked on the resource while the medium priority one finishes (effectively being executed with a lower priority).
A rather famous example was the problem experienced by the Mars Pathfinder mission: http://www.cs.duke.edu/~carla/mars.html. It's a pretty interesting read.
Suppose an application has three threads:
Thread 1 has high priority.
Thread 2 has medium priority.
Thread 3 has low priority.
Let's assume that Thread 1 and Thread 3 share the same critical section code.
Thread 1 and thread 2 are sleeping or blocked at the beginning of the example. Thread 3 runs and enters a critical section.
At that moment, thread 2 starts running, preempting thread 3 because thread 2 has a higher priority. So thread 3 still owns the critical section.
Later, thread 1 starts running, preempting thread 2. Thread 1 tries to enter the critical section that thread 3 owns, but because it is owned by another thread, thread 1 blocks, waiting for the critical section.
At that point, thread 2 starts running because it has a higher priority than thread 3 and thread 1 is not running. Thread 3 never releases the critical section that thread 1 is waiting for because thread 2 continues to run.
Therefore, the highest-priority thread in the system, thread 1, becomes blocked waiting for lower-priority threads to run.
It is the problem rather than the solution.
It describes a situation where, when low-priority threads obtain locks during their work, high-priority threads have to wait for them to finish (which might take especially long, since they are low-priority). The inversion here is that the high-priority thread cannot continue until the low-priority thread does, so in effect it too has low priority now.
A common solution is to have the low-priority threads temporarily inherit the high priority of everyone who is waiting on locks they hold.
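POSIX exposes exactly this kind of inheritance as a mutex attribute on systems that support the _POSIX_THREAD_PRIO_INHERIT option; a minimal sketch:

#include <pthread.h>

static pthread_mutex_t lock;

int init_pi_lock(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* PTHREAD_PRIO_INHERIT: whoever holds this mutex runs at the highest
       priority of any thread currently blocked on it. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int rc = pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}

/* Usage: a low-priority thread that locks `lock` is temporarily boosted
   while a high-priority thread is blocked on it, so a medium-priority
   thread can no longer starve the lock holder. */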
[ Assume, Low process = LP, Medium Process = MP, High process = HP ]
LP is executing a critical section. While entering the critical section, LP must have acquired a lock on some object, say OBJ.
LP is now inside the critical section.
Meanwhile, HP is created. Because of higher priority, CPU does a context switch, and HP is now executing (not the same critical section, but some other code). At some point during HP's execution, it needs a lock on the same OBJ (may or may not be on the same critical section), but the lock on OBJ is still held by LP, since it was pre-empted while executing the critical section. LP cannot relinquish now because the process is in READY state, not RUNNING. Now HP is moved to BLOCKED / WAITING state.
Now, MP comes in, and executes its own code. MP does not need a lock on OBJ, so it keeps executing normally. HP waits for LP to release lock, and LP waits for MP to finish executing so that LP can come back to RUNNING state (.. and execute and release lock). Only after LP has released lock can HP come back to READY (and then go to RUNNING by pre-empting the low priority tasks.)
So, effectively it means that until MP finishes, LP cannot execute and hence HP cannot execute. So, it seems like HP is waiting for MP, even though they are not directly related through any OBJ locks. -> Priority Inversion.
A solution to priority inversion is priority inheritance: increase the priority of a process (A) to the maximum priority of any other process waiting for any resource on which A holds a lock.
Let me make it very simple and clear. (This answer is based on the answers above, but presented in a crisp way.)
Say there is a resource R and three processes L, M, H, where p(L) < p(M) < p(H) (p(X) is the priority of X).
Say:
L starts executing first and gets hold of R (exclusive access to R).
H comes later and also wants exclusive access to R; since L is holding it, H has to wait.
M comes after H and doesn't need R. Since M has everything it needs to execute, it forces L off the CPU, as it has higher priority than L. But H cannot do this, because it needs the resource locked by L in order to execute.
Now, to make the problem clearer: M should effectively wait for H to complete, since p(H) > p(M), but that is not what happens, and this itself is the problem. If many processes such as M come along and never allow L to execute and release the lock, H will never execute, which can be hazardous in time-critical applications.
For solutions, refer to the answers above. :)
Priority inversion is where a lower-priority process gets hold of a resource that a higher-priority process needs, preventing the higher-priority process from proceeding until the resource is freed.
e.g.:
FileA needs to be accessed by Proc1 and Proc2.
Proc1 has a higher priority than Proc2, but Proc2 manages to open FileA first.
Normally Proc1 would run maybe 10 times as often as Proc2, but won't be able to do anything because Proc2 is holding the file.
So what ends up happening is that Proc1 blocks until Proc2 finishes with FileA, essentially their priorities are 'inverted' while Proc2 holds FileA's handle.
As far as 'Solving a problem' goes, priority inversion is a problem in itself if it keeps happening.
The worst case (most operating systems won't let this happen, though) is if Proc2 weren't allowed to run until Proc1 had finished. This would cause the system to lock up, as Proc1 would keep getting assigned CPU time while Proc2 never got any, so the file would never be released.
Priority inversion occurs as such:
Given processes H, M and L where the names stand for high, medium and low priorities,
only H and L share a common resource.
Say, L acquires the resource first and starts running. Since H also needs that resource, it enters the waiting queue.
M doesn't share the resource and can start to run, hence it does. When L is interrupted for any reason, M takes the running state, since it has the higher priority among the processes ready at the instant the interruption happens.
Although H has a higher priority than M, since it is in the waiting queue it cannot acquire the resource, so it effectively has a lower priority than even M.
After M finishes, L again takes over the CPU, causing H to wait the whole time.
Priority Inversion can be avoided if the blocked high priority thread transfers its high priority to the low priority thread that is holding onto the resource.
A scheduling challenge arises when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process, or a chain of lower-priority processes. Since kernel data are typically protected with a lock, the higher-priority process will have to wait for a lower-priority one to finish with the resource. The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a higher priority. As an example, assume we have three processes, L, M, and H, whose priorities follow the order L < M < H. Assume that process H requires resource R, which is currently being accessed by process L. Ordinarily, process H would wait for L to finish using resource R. However, now suppose that process M becomes runnable, thereby preempting process L. Indirectly, a process with a lower priority, process M, has affected how long process H must wait for L to relinquish resource R. This problem is known as priority inversion. It occurs only in systems with more than two priorities, so one solution is to have only two priorities. That is insufficient for most general-purpose operating systems, however.

Typically these systems solve the problem by implementing a priority-inheritance protocol. According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values. In the example above, a priority-inheritance protocol would allow process L to temporarily inherit the priority of process H, thereby preventing process M from preempting its execution. When process L had finished using resource R, it would relinquish its inherited priority from H and assume its original priority. Because resource R would now be available, process H, not M, would run next.
Reference: Abraham Silberschatz, Operating System Concepts.
Consider a system with two processes: H with high priority and L with low priority. The scheduling rules are such that H runs whenever it is in the ready state, because of its high priority. At a certain moment, with L in its critical region, H becomes ready to run (e.g., an I/O operation completes). H now begins busy waiting, but since L is never scheduled while H is running, L never gets the chance to leave the critical section, so H loops forever.
This situation is called priority inversion, because the higher-priority process is waiting on the lower-priority process.