Definition: Bounded waiting requires that, once a process P_i has shown interest in entering its critical section (CS), there is a bound on the number of times other processes P_j can enter the CS before P_i does. It is violated when P_i waits forever while the P_j keep entering the CS.
Now I understand why the lock-variable mechanism does not satisfy bounded waiting: while a process is in its non-critical section, another process can come along and take the CS, so a process might starve.
Algorithm:
NCS (Non-critical Section)
DISABLE INTERRUPTS
CS
ENABLE INTERRUPTS
NCS
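For concreteness, here is a minimal C sketch of the same pattern. It assumes kernel mode on a single-CPU system; disable_interrupts()/enable_interrupts() are hypothetical wrappers (on x86 they would correspond to the cli/sti instructions):

    /* Sketch only: assumes kernel mode on a uniprocessor.
       disable_interrupts()/enable_interrupts() are hypothetical
       wrappers (e.g., cli/sti on x86). */
    void enter_cs(void)
    {
        disable_interrupts();   /* the timer cannot fire -> no preemption */
    }

    void leave_cs(void)
    {
        enable_interrupts();    /* pending interrupts are delivered now */
    }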
Edit: no more details are given about schedulers, etc. The question is just to get a sense of whether this satisfies bounded waiting or not.
Question: can you please explain why the disabled-interrupts synchronization mechanism satisfies bounded waiting (i.e., why a process cannot starve waiting to enter the CS, as it can under the lock-variable mechanism)?
Your question has two contexts to consider:
Are interrupt handlers entering the Critical Section?
Is there an asynchronous scheduler involved?
No, No:
As soon as P_i releases the CS, the release mechanism can accept P_j, thus starvation is averted.
No, Yes:
Despite P_j's desire, once interrupts are re-enabled the scheduler could be invoked and decide that P_j is not to be executed next; so, in at least a pathological case, P_j could spend forever trying to enter the CS while others are selected.
Yes, No:
As soon as P_i re-enables interrupts, any pending interrupts will execute immediately. If they are to enter the CS, they will do so first (otherwise the system has halted [*]), so with the right timing a stream of interrupts could keep P_j starved forever.
Yes, Yes:
Here, the starvation could happen for either of the reasons No,Yes or Yes,No.
[*] Interrupts have no a-priori way of deferring work, so any resources required to complete the interrupt handler must be available when the handler runs. The handler's context is effectively nested, and a nested context cannot wait for the completion of its superior context.
Priority inversion seems to be a case in which TSL is not deadlock-free, so how can one say that TSL is deadlock-free?
Let's first look at the definition of deadlock.
Definition: A set of processes is said to be in deadlock if each of them waits for an event that can be caused only by another process in the same set. All the processes in the set are in the waiting/blocked state.
Now let's understand priority inversion. Let process A be a lower-priority process running inside a critical section, and let process B be a higher-priority process.
When process B becomes ready, it gets scheduled (goes to the running state) and process A is preempted by the scheduler because of its lower priority. Note that A moves to the ready state, not the blocked state, yet it still holds the lock on the critical section. Process B, now running, busy-waits because the critical section is locked by process A. This can cause unbounded waiting, and the case is what we call a spinlock: process B is in the running state (spinning) the whole time, whereas by the definition above every process in a deadlocked set must be in the waiting/blocked state.
This is why TSL is said to be deadlock-free, but it is NOT free from spinning (and the starvation that can come with it).
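For reference, here is a minimal sketch of a TSL-style spinlock using the standard C11 atomic_flag; atomic_flag_test_and_set is the C-level equivalent of the hardware test-and-set instruction:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void)
    {
        /* Test-and-set: atomically sets the flag and returns its old
           value. While another process holds the lock, we spin: the
           caller stays in the RUNNING state, it never blocks. */
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy wait */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);
    }

The spinning caller never enters the blocked state, which is exactly why this can starve forever without ever meeting the definition of deadlock.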
I hope this will be helpful :)
Consider this: when one task/process is running on a single-processor system, another task has to wait for its turn until the first task is suspended or terminates (depending on the scheduling algorithm).
The kernel also consists of various tasks that use the same CPU to do OS-related work, like scheduling, memory management, responding to system calls, etc.
So when the kernel schedules a particular task/process to give it CPU time, does it relinquish its control over the CPU? I.e., does it momentarily stop? If not, how does it keep running to do all the OS-related tasks while the other process is running on the CPU? Does the scheduler move aside to give the next task in line the CPU, and if so, what brings the scheduler back to go on with further scheduling activities? This question is similar, but it does not contain enough details:
How can kernel run all the time?
I am confused about this part and I can't understand how this would work. Can somebody please explain this in detail? It would be helpful if you could explain it with an example.
Yeah.. you should stop thinking of the OS kernel as a process and think of it instead as just code and data: a state machine that processes/threads call into at one end in order to obtain specific services (e.g., I/O requests), and that drivers call into at the other end to provide service solutions (e.g., I/O completion).
The kernel does not need any threads of execution in itself. It only runs when entered from syscalls, (interrupt-like calls from running user threads/processes), or drivers, (hardware interrupts from disk/NIC/KB/mouse etc hardware). Sometimes, such calls will change the set of threads running on the available cores, (eg. if a thread waiting for a network buffer becomes ready because the NIC driver has completed the action, the OS will probably try to assign it to a core 'immediately', preempting some other thread if required).
If there are no syscalls, and no hardware interrupts, the kernel does nothing because it is not entered - there is nothing for it to do.
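A toy sketch of that "called in at one end" idea; every name here (SYS_READ, do_read, and so on) is invented for illustration:

    #include <errno.h>
    #include <stddef.h>

    enum { SYS_READ, SYS_WRITE };                     /* invented numbers */
    long do_read(int fd, void *buf, size_t n);        /* invented helpers */
    long do_write(int fd, const void *buf, size_t n);

    /* The CPU traps here from user mode; the kernel runs briefly on
       behalf of the calling thread and then returns. There is no
       separate, always-running kernel process. */
    long syscall_entry(long nr, long a0, long a1, long a2)
    {
        switch (nr) {
        case SYS_READ:  return do_read((int)a0, (void *)a1, (size_t)a2);
        case SYS_WRITE: return do_write((int)a0, (const void *)a1, (size_t)a2);
        default:        return -ENOSYS;               /* nothing to do */
        }
    }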
What you are missing is that few operating systems these days have a monitor process as you are describing.
At the risk of gross oversimplification, operating systems run through exceptions and interrupts.
Assume you have two processes, P and Q. P is the running process and Q is the next to run. One way to switch processes is the system timer goes off triggering an interrupt. P switches to kernel mode and handles that interrupt. P runs the interrupt code handling the timer and determines that Q should run. P then saves its context and loads Q. At that moment, Q is the running process. The interrupt handler exits and picks up where Q was before.
In other words, process P becomes the kernel scheduler while the interrupt is being processed. Each process becomes the scheduler that loads the next process.
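A hedged pseudo-C sketch of that timer path; all the names (current, save_context, pick_next, load_context) are invented for illustration:

    /* Runs in kernel mode, on the interrupted process's kernel stack,
       whenever the hardware timer fires. All names are hypothetical. */
    void timer_interrupt_handler(void)
    {
        acknowledge_timer();              /* quiet the hardware        */
        if (--current->time_slice <= 0) {
            save_context(current);        /* registers, PC, stack ptr  */
            current = pick_next();        /* scheduler picks, say, Q   */
            load_context(current);        /* returning now resumes Q   */
        }
    }                                     /* the return lands in Q, not P */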
As another example, let us say that Q has queued a read operation to a disk. That operation completes and triggers an interrupt. P, the running process, enters kernel mode to handle the interrupt and, in doing so, processes the completion of Q's disk read.
I understand that CPU scheduling algorithms are classified into
Interactive - Round Robin, Priority scheduling
Batch Scheduling - FCFS, SJF
But I can't understand the reasoning behind the names Interactive and Batch Scheduling.
Why are algorithms like RR called interactive and those like FCFS called batch scheduling?
Thanks in advance...
The idea of Batch Scheduling is that there will be no change in the schedule during runtime: a process is scheduled to do an operation on data, and it runs until the process is finished. In 'interactive' scheduling, a new process could be launched while another process is running, and so time would be allocated for that process as well as the other. In batch scheduling the schedule is determined at the beginning of the operation.
Example of priority (interactive) scheduling:
Process A has a high priority, and process B has a low priority. Process A runs until it requires some input from the user. While A is waiting, the CPU gives some time to process B. Once the input for A has been gathered, process B is swapped out and process A is given the CPU, due to its higher priority.
Example of batch (FCFS) scheduling:
Process A and process B are processes to be scheduled. Process A is given to the CPU first, so B will not receive any time until A finishes running. Even if A pauses for user input, B will not run (and the CPU time while waiting for input is effectively wasted).
Of course, as with everything this low-level, it's not entirely that simple: to gain the illusion of multi-tasking, time is generally divided up between processes even when nothing is waiting for I/O. In priority scheduling, this may mean that more time slices are given to A than B while both are running so that A executes quicker. Both interactive and batch scheduling have their pros and cons: while interactive scheduling gives a quicker response time to the user and divides time up more 'fairly', an overhead is incurred due to how long a 'context switch' takes, which is the time taken for the processor to switch from working on process A to process B.
Interactive scheduling policies assign a time slice to each process. Once the time slice is over, the process is swapped out even if it has not yet terminated. Schedulers of this kind are preemptive.
Batch scheduling policies, instead, are non-preemptive: once a process is in the running state, it does not change state until it terminates.
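A toy C simulation of the difference, with two CPU-bound jobs and invented burst lengths (real schedulers are far more involved):

    #include <stdio.h>

    int main(void)
    {
        /* FCFS (batch, non-preemptive): job 0 runs to completion first. */
        int fcfs[2] = {3, 2};
        printf("FCFS: ");
        for (int j = 0; j < 2; j++)
            while (fcfs[j]-- > 0)
                printf("J%d ", j);

        /* Round robin (interactive, preemptive, quantum = 1 unit):
           the CPU alternates, so both jobs make steady progress. */
        int rr[2] = {3, 2};
        printf("\nRR:   ");
        while (rr[0] > 0 || rr[1] > 0)
            for (int j = 0; j < 2; j++)
                if (rr[j] > 0) {
                    printf("J%d ", j);
                    rr[j]--;
                }
        printf("\n");     /* prints: FCFS: J0 J0 J0 J1 J1
                                     RR:   J0 J1 J0 J1 J0 */
        return 0;
    }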
I've heard the phrase 'priority inversion' in reference to development of operating systems.
What exactly is priority inversion?
What is the problem it's meant to solve, and how does it solve it?
Imagine three (3) tasks of different priority: tLow, tMed and tHigh. tLow and tHigh access the same critical resource at different times; tMed does its own thing.
tLow is running, tMed and tHigh are presently blocked (but not in critical section).
tLow then enters the critical section.
tHigh unblocks and, since it is the highest-priority task in the system, it runs.
tHigh then attempts to enter the critical section but blocks, as tLow is in there.
tMed unblocks and since it is now the highest priority task in the system, it runs.
tHigh cannot run until tLow gives up the resource, and tLow cannot run until tMed blocks or ends. The priority of the tasks has been inverted: tHigh, though it has the highest priority, is at the bottom of the execution chain.
To "solve" priority inversion, the priority of tLow must be bumped up to be at least as high as tHigh. Some may bump its priority to the highest possible priority level. Just as important as bumping up the priority level of tLow, is dropping the priority level of tLow at the appropriate time(s). Different systems will take different approaches.
When to drop the priority of tLow ...
No other tasks are blocked on any of the resources that tLow has. This may be due to timeouts or the releasing of resources.
No other tasks contributing to raising the priority level of tLow are blocked on the resources that tLow has. This may be due to timeouts or the releasing of resources.
When there is a change in which tasks are waiting for the resource(s), drop the priority of tLow to match the priority of the highest priority level task blocked on its resource(s).
Method #2 is an improvement over method #1 in that it shortens the length of time that tLow has had its priority level bumped. Note that its priority level stays bumped at tHigh's priority level during this period.
Method #3 allows the priority level of tLow to step down in increments if necessary instead of in one all-or-nothing step.
Different systems will implement different methods depending upon what factors they consider important; a POSIX sketch of method #3 follows the list below:
memory footprint
complexity
real time responsiveness
developer knowledge
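On POSIX systems, bumping of this kind is available ready-made: a mutex can be created with the priority-inheritance protocol, whose behaviour (tracking the highest-priority waiter) corresponds to method #3 above. A minimal sketch, with error handling omitted:

    #include <pthread.h>

    pthread_mutex_t res_lock;

    void init_resource_lock(void)   /* hypothetical wrapper name */
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* Whoever holds res_lock inherits the priority of the
           highest-priority thread currently blocked on it, and
           drops back when the waiters change or the lock is freed. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&res_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }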
Hope this helps.
Priority inversion is a problem, not a solution. The typical example is a low priority process acquiring a resource that a high priority process needs, and then being preempted by a medium priority process, so the high priority process is blocked on the resource while the medium priority one finishes (effectively being executed with a lower priority).
A rather famous example was the problem experienced by the Mars Pathfinder mission: http://www.cs.duke.edu/~carla/mars.html. It's a pretty interesting read.
Suppose an application has three threads:
Thread 1 has high priority.
Thread 2 has medium priority.
Thread 3 has low priority.
Let's assume that Thread 1 and Thread 3 share the same critical-section code.
Thread 1 and thread 2 are sleeping or blocked at the beginning of the example. Thread 3 runs and enters a critical section.
At that moment, thread 2 starts running, preempting thread 3 because thread 2 has a higher priority. So thread 3 still owns the critical section.
Later, thread 1 starts running, preempting thread 2. Thread 1 tries to enter the critical section that thread 3 owns, but because it is owned by another thread, thread 1 blocks, waiting for the critical section.
At that point, thread 2 resumes running because it has a higher priority than thread 3 and thread 1 is not runnable. Thread 3 never releases the critical section that thread 1 is waiting for, because thread 2 continues to run.
Therefore, the highest-priority thread in the system, thread 1, becomes blocked waiting for lower-priority threads to run.
It is the problem rather than the solution.
It describes a situation in which a low-priority thread holds a lock that a high-priority thread needs, so the high-priority thread has to wait for it to finish (which might take especially long, since it is low-priority). The inversion is that the high-priority thread cannot continue until the low-priority thread does, so in effect it, too, now has low priority.
A common solution is to have the low-priority threads temporarily inherit the high priority of everyone who is waiting on locks they hold.
[ Assume, Low process = LP, Medium Process = MP, High process = HP ]
LP is executing a critical section. While entering the critical section, LP must have acquired a lock on some object, say OBJ.
LP is now inside the critical section.
Meanwhile, HP is created. Because of its higher priority, the CPU does a context switch, and HP is now executing (not the same critical section, but some other code). At some point during HP's execution it needs a lock on the same OBJ (which may or may not be in the same critical section), but the lock on OBJ is still held by LP, which was preempted while executing the critical section. LP cannot relinquish it now, because the process is in the READY state, not RUNNING. HP is therefore moved to the BLOCKED/WAITING state.
Now MP comes in and executes its own code. MP does not need a lock on OBJ, so it keeps executing normally. HP waits for LP to release the lock, and LP waits for MP to finish executing so that LP can come back to the RUNNING state (and execute and release the lock). Only after LP has released the lock can HP come back to READY (and then go to RUNNING by preempting the lower-priority tasks).
So, effectively, until MP finishes, LP cannot execute and hence HP cannot execute. It seems as if HP is waiting for MP, even though they are not directly related through any OBJ locks. -> Priority inversion.
A solution to Priority Inversion is Priority Inheritance -
increase the priority of a process (A) to the maximum priority of any
other process waiting for any resource on which A has a resource lock.
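A sketch of that rule as code; the task structure and all names are invented for illustration:

    #define MAX_WAITERS 8                  /* arbitrary, for the sketch */

    struct task {
        int base_priority;
        int n_waiters;                     /* tasks blocked on A's locks */
        struct task *waiters[MAX_WAITERS];
    };

    /* Effective priority of A = max of A's own priority and the
       (effective) priorities of everyone waiting on A's locks. */
    int effective_priority(const struct task *a)
    {
        int p = a->base_priority;
        for (int i = 0; i < a->n_waiters; i++) {
            int wp = effective_priority(a->waiters[i]);
            if (wp > p)
                p = wp;
        }
        return p;
    }

Real kernels compute this iteratively and guard against cycles; the recursion here is only to show that inheritance propagates along chains of lock holders.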
Let me make it very simple and clear. (This answer is based on the answers above but presented in a crisp way.)
Say there is a resource R and three processes L, M, and H, where p(L) < p(M) < p(H) (p(X) is the priority of X).
Say
L starts executing first and gets hold of R (exclusive access to R).
H comes later and also wants exclusive access to R; since L is holding it, H has to wait.
M comes after H and does not need R. Since M has everything it needs in order to execute, it forces L off the CPU, as it has a higher priority than L. But H cannot do the same, because it needs the resource that L has locked.
To make the problem clearer: M should actually have to wait for H to complete, since p(H) > p(M), but that is not what happens, and this itself is the problem. If many processes like M come along and never let L execute and release the lock, H will never execute, which can be hazardous in time-critical applications.
For solutions, refer to the answers above :)
Priority inversion is where a lower-priority process gets hold of a resource that a higher-priority process needs, preventing the higher-priority process from proceeding until the resource is freed.
E.g.:
FileA needs to be accessed by Proc1 and Proc2.
Proc1 has a higher priority than Proc2, but Proc2 manages to open FileA first.
Normally Proc1 would run maybe 10 times as often as Proc2, but won't be able to do anything because Proc2 is holding the file.
So what ends up happening is that Proc1 blocks until Proc2 finishes with FileA; essentially, their priorities are 'inverted' while Proc2 holds FileA's handle.
As far as 'Solving a problem' goes, priority inversion is a problem in itself if it keeps happening.
The worst case (most operating systems won't let this happen, though) is if Proc2 weren't allowed to run until Proc1 had finished. This would lock up the system: Proc1 would keep getting assigned CPU time (busy-waiting on the file), Proc2 would never get CPU time, and the file would never be released.
Priority inversion occurs as such:
Given processes H, M and L where the names stand for high, medium and low priorities,
only H and L share a common resource.
Say, L acquires the resource first and starts running. Since H also needs that resource, it enters the waiting queue.
M doesn't share the resource and can start to run, hence it does. When L is interrupted for any reason, M takes the running state, since it has a higher priority and it is runnable at the instant the interrupt happens.
Although H has a higher priority than M, since it is in the waiting queue it cannot be scheduled, which effectively gives it a lower priority than even M.
After M finishes, L will take over the CPU again, so H ends up waiting the whole time.
Priority inversion can be avoided if the blocked high-priority thread transfers its high priority to the low-priority thread that is holding the resource.
A scheduling challenge arises when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process—or a chain of lower-priority processes. Since kernel data are typically protected with a lock, the higher-priority process will have to wait for a lower-priority one to finish with the resource. The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a higher priority.

As an example, assume we have three processes—L, M, and H—whose priorities follow the order L < M < H. Assume that process H requires resource R, which is currently being accessed by process L. Ordinarily, process H would wait for L to finish using resource R. However, now suppose that process M becomes runnable, thereby preempting process L. Indirectly, a process with a lower priority—process M—has affected how long process H must wait for L to relinquish resource R.

This problem is known as priority inversion. It occurs only in systems with more than two priorities, so one solution is to have only two priorities. That is insufficient for most general-purpose operating systems, however. Typically these systems solve the problem by implementing a priority-inheritance protocol. According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values. In the example above, a priority-inheritance protocol would allow process L to temporarily inherit the priority of process H, thereby preventing process M from preempting its execution. When process L had finished using resource R, it would relinquish its inherited priority from H and assume its original priority. Because resource R would now be available, process H—not M—would run next.
Reference: Abraham Silberschatz, Operating System Concepts.
Consider a system with two processes: H with high priority and L with low priority. The scheduling rules are such that H runs whenever it is in the ready state, because of its high priority. At a certain moment, with L in its critical region, H becomes ready to run (e.g., an I/O operation completes). H now begins busy waiting, but since L is never scheduled while H is running, L never gets the chance to leave the critical section, so H loops forever.
This situation is called priority inversion, because a higher-priority process is waiting on a lower-priority process.