Why the Disable-Interrupts Synchronization Mechanism Satisfies Bounded Waiting - operating-system

Definition: Bounded waiting requires that once a process P_i has shown interest in entering its critical section (CS), there is a bound on the number of times other processes P_j can enter the CS before P_i gets its turn. If P_j can keep entering the CS while P_i waits forever, bounded waiting is violated.
Now, I understand why the lock-variable mechanism does not satisfy bounded waiting: after a process leaves the CS, another process might come along and take the CS first, so a waiting process might starve.
Algorithm:
NCS (Non-critical Section)
DISABLE INTERRUPTS
CS
ENABLE INTERRUPTS
NCS
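The steps above can be sketched as a toy C simulation. On real hardware, disabling interrupts is a privileged instruction (e.g. cli/sti on x86); here a global flag stands in for the CPU's interrupt-enable bit, and the function names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the CPU's interrupt-enable bit. */
static bool interrupts_enabled = true;
static int shared_counter = 0;          /* data protected by the CS */

static void disable_interrupts(void) { interrupts_enabled = false; }
static void enable_interrupts(void)  { interrupts_enabled = true;  }

void one_pass(void) {
    /* NCS: non-critical section runs with interrupts enabled */
    disable_interrupts();   /* no timer interrupt => no preemption */
    shared_counter++;       /* CS: only one context can execute this */
    enable_interrupts();    /* pending interrupts may now be delivered */
    /* NCS again */
}
```

The key property is that while interrupts are off, the timer cannot fire, so on a single CPU the running process cannot be preempted inside the CS.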
Edit: no more details are given about schedulers, etc. The question is just to get a sense of whether this satisfies bounded waiting or not.
Question: can you please explain why the disable-interrupts synchronization mechanism satisfies bounded waiting (i.e., why a process cannot starve waiting to enter the CS, as it can with the lock-variable mechanism)?

Your question has two contexts to consider:
Are interrupt handlers entering the Critical Section?
Is there an asynchronous scheduler involved?
No, No:
As soon as P_i releases the CS, the release mechanism can accept P_j, thus starvation is averted.
No, Yes:
Despite P_j's desire, once interrupts are re-enabled the scheduler could be invoked and decide that P_j is not to be executed next, so in at least a pathological case P_j could spend forever trying to enter the CS while others are selected.
Yes, No:
As soon as P_i re-enables interrupts, any pending interrupts will execute immediately. If they are to enter the CS, they will do so first (otherwise the system has halted [*]), so with the right timing a stream of interrupts could keep P_j starved forever.
Yes, Yes:
Here, starvation could happen for either of the reasons in the (No, Yes) or (Yes, No) cases.
[*] Interrupts have no a priori way of deferring work, so any resources required to complete the interrupt handler must be available when the handler runs. The handler's context is effectively nested, and a nested context cannot wait for the completion of its superior context.
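One common workaround for this limitation is a top-half/bottom-half split: the handler itself only records that work is pending, and a later, non-nested kernel context does the real work. A minimal sketch (the names `nic_isr` and `softirq_drain` are illustrative, not any real kernel's API):

```c
#include <assert.h>
#include <stdbool.h>

static bool work_pending = false;   /* set by the handler, cleared later */
static int packets_processed = 0;

/* Top half: runs in (possibly nested) interrupt context, so it must not
   sleep or wait; it only flags that work exists. */
void nic_isr(void) {
    work_pending = true;
}

/* Bottom half: runs later in an ordinary kernel context, where waiting
   for resources is allowed. */
void softirq_drain(void) {
    while (work_pending) {
        work_pending = false;
        packets_processed++;        /* the real, possibly blocking, work */
    }
}
```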


Can Context Switching happens between ISRs?

I am still a new learner in this field and would appreciate any opinions.
Let me introduce the situation of my problem:
1. The keyboard interrupt occurs while Process A is executing. As far as I know, it doesn't matter whether Process A was executing in user mode or kernel mode; the interrupt handler will be invoked to deal with the keyboard interrupt in kernel mode.
2. The interrupt handler will save the state of Process A on its kernel stack and execute the ISR corresponding to the keyboard interrupt (still using the kernel stack of Process A).
3. During the execution of the keyboard ISR, the clock interrupt occurs. The interrupts are now nested.
4. The interrupt handler will save the state of the keyboard ISR on the kernel stack of Process A and execute the ISR corresponding to the clock interrupt (still using the kernel stack of Process A).
5. The clock ISR updates the system time and finishes, but the OS finds that the time slice of Process A has been used up.
Question:
1. What will the OS do next? Will it schedule another process first, or will it finish the keyboard ISR first? I prefer the former, because the state of the interrupted keyboard ISR is saved on the kernel stack of Process A and can be restored when Process A is selected to run again later. Am I right?
2. Is there any difference in interrupt handling between a common OS (like Linux) and a real-time OS?
Unfortunately there is no single answer to this question. The exact behavior depends on the interrupt controller implementation, on whether the OS supports symmetric multiprocessing (SMP), and on the specific scheduler.
The interrupt controller implementation is important because some processors support nested ISRs while others do not. CPUs that do not support nested interrupts return to kernel operation after servicing an interrupt; if multiple interrupts are triggered in a narrow time window so that servicing overlaps, typically the CPU enters kernel mode very briefly and then returns to an interrupt context to handle the next interrupt. If nested interrupts are supported, the typical behavior is for the CPU to stay in the interrupt context until the "stack" of interrupts has been serviced, before returning to a kernel context.
Whether the OS supports SMP is also very important to the exact behavior of interrupt handling. When SMP is supported, it is possible, and probably very likely, that another core will be scheduled to handle the kernel and subsequent user-space workload for whatever the interrupt triggered. Suppose the ISR serviced an Ethernet port on core 1; upon completion of the ISR, core 1 could service another interrupt, while core 2 wakes up and runs the user process waiting on the network traffic from the Ethernet port.
To add a final wrinkle of complexity, interrupts can typically be routed to different CPUs, in a way that depends on the interrupt controller implementation. This is done to minimize interrupt latency by keeping all the interrupts from piling up on one CPU waiting for sequential handling.
Finally, typical scheduler implementations don't count ISR servicing time against the time slice of a given thread. As for the difference in handling between a traditional fair scheduler and an RTOS, there generally are no significant differences: ultimately it is the interrupt controller hardware that dictates the order in which interrupts are handled, not the software's thread scheduler.

What happens if a process preempts while executing wait and signal operations?

The main reason for using semaphores is to solve synchronization problems such as the producer-consumer problem.
But I wonder what would happen if a process gets preempted while executing the wait operation and another process also executes the wait operation.
Let's take the value of S as 1.
While executing Wait(), the value of S (1) is loaded into a register reg.
The register is decremented, so reg is now 0, but the result has not yet been stored back to S.
If at this point another process wants to execute Wait() to access the critical section, it still sees the value of S as 1, loads reg with 1, decrements it, and again gets 0.
Now both processes enter the critical section.
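The interleaving described above can be replayed deterministically in C. Each statement below models one machine step, with the "preemption" happening between the first process's load and its store:

```c
#include <assert.h>

int S = 1;                 /* semaphore value shared by both processes */

void race_replay(void) {
    int regA = S;          /* P1: load S into a register (reads 1)     */
    /* --- P1 is preempted here, before storing the result back ---    */
    int regB = S;          /* P2: also reads 1, since P1 never stored  */
    regB = regB - 1;       /* P2: decrement and store back             */
    S = regB;
    /* --- P2 sees a non-negative value and enters the CS; P1 resumes */
    regA = regA - 1;       /* P1: decrement its stale copy, store back */
    S = regA;
    /* Both computed 0, so both entered the critical section, and S   */
    /* ends at 0 instead of -1: one decrement was lost.               */
}
```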
The code for the wait function is
Down(Semaphore S) {
    S.value = S.value - 1;
    if (S.value < 0) {
        put PCB in suspended list;
        sleep;
    } else
        return;
}
The code for the signal function is
Signal(Semaphore S) {
    S.value = S.value + 1;
    if (S.value <= 0) {
        select a process from the suspended list;
        wakeup();
    }
}
Isn't the semaphore variable itself also a shared variable, common to two or more processes, and therefore a critical section of its own? How can we prevent such race conditions?
You are correct: if the code for the semaphore operations is as given above, there is indeed a risk that something bad could happen if a thread gets preempted in the middle of performing an operation. The reason this isn't a problem in practice is that the actual implementations of semaphore operations are a bit more involved than what you gave.
Some implementations of semaphores, for example, begin by physically disabling the interrupt mechanism on the machine to ensure that the current thread cannot possibly be preempted during execution of the operation. Others are layered on top of other synchronization primitives that use similar techniques to prevent preemption. Still others use mechanisms besides disabling interrupts that have the same effect of ensuring the process can't be halted midway through performing the needed synchronization, or at least of ensuring that any places where preemption can occur are well marked and properly thought through.
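User-space code cannot disable interrupts, so one common analogue of "an operation that cannot be interrupted midway" is an atomic read-modify-write. A sketch of a non-blocking wait built on C11 atomics (an illustration of the idea, not how any particular OS implements Down):

```c
#include <stdatomic.h>
#include <stdbool.h>

atomic_int S = 1;

/* Try to decrement S, but only if it is positive. The compare-exchange
   makes the load-test-store sequence a single indivisible step, so the
   race in the question cannot occur: only one caller can take S from
   1 to 0. */
bool sem_try_wait(atomic_int *s) {
    int old = atomic_load(s);
    while (old > 0) {
        if (atomic_compare_exchange_weak(s, &old, old - 1))
            return true;            /* we own the semaphore */
        /* CAS failed: 'old' now holds the current value; retry if > 0 */
    }
    return false;                   /* S was 0: a real Down would block */
}
```

With S starting at 1, the first caller succeeds and every subsequent caller fails until someone signals, no matter how the callers interleave.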
Hope this helps!

Who actually carries out the scheduling in a system

I came across the fact that processes ready for execution in the ready queue are given control of the CPU by the scheduler. The scheduler selects a process based on its scheduling algorithm, gives the selected process control of the CPU, and later preempts it if it follows a preemptive style. What I would like to know is: if the CPU's processing unit is being used by a process, who exactly preempts and schedules the processes while the processing unit is not available?
Now I want to share my thoughts about the OS (and I'm sorry my English is not very fluent).
What do you think the OS is? Do you think it is 'active'?
No. In my opinion, the OS is just a pile of passive code in memory, and this code largely consists of interrupt handler functions (we just call this code the 'kernel source code').
OK: now the CPU is executing process A, and suddenly an interrupt occurs. This interrupt may occur because of the timer clock or because of a read system call; either way, an interrupt occurs. The CPU will then jump to the corresponding interrupt handler function (the CPU jumps because its hardware is designed to do so). As said previously, this interrupt handler function is part of the OS kernel source code.
The CPU will execute this code. And what will this code do? This code will schedule: the scheduler is just more code that the CPU executes.
Everything happens in the context of a process (Linux calls these lightweight processes, but it's the same idea).
Process scheduling generally occurs either as part of a system service call or as part of an interrupt.
In the case of a system service call, the process may determine it cannot execute so it invokes the scheduler to change the context to a new process.
The OS schedules timer interrupts, during which it can do scheduling. Scheduling can also occur in other types of interrupts. Interrupts are handled in the context of the current process.
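The timer-interrupt path can be sketched as a toy tick handler. The names `time_slice` and `need_resched` are illustrative; real kernels keep per-thread accounting, but the shape is the same:

```c
#include <stdbool.h>

static int time_slice = 3;          /* ticks left for the current process */
static bool need_resched = false;   /* set when the scheduler should run  */

/* Runs in the context of whatever process the timer interrupted. */
void timer_interrupt(void) {
    if (time_slice > 0 && --time_slice == 0)
        need_resched = true;        /* on the way back out of the interrupt,
                                       the kernel will call the scheduler  */
}
```

Nothing "outside" the CPU does this bookkeeping; the interrupted process itself executes the handler, and the scheduler is just the code it runs next.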

What is the exact definition of 'process preemption'?

Wikipedia says:
In computing, preemption is the act of temporarily interrupting a task being carried out by a computer system, without requiring its cooperation, and with the intention of resuming the task at a later time.
Other sources say:
[...] preemption means forcefully taking away of the processor from one process and allocating it to another process. [Operating Systems (Self Edition 1.1), Sibsankar Haldar]
Preemption of a program occurs when an interrupt arises during its execution and the scheduler selects some other programs for execution. [Operating Systems: a Concept-based Approach, 2E, D. M. Dhamdhere]
So, what I understood is that we have process preemption if the process is interrupted (by a hardware interrupt, i.e. I/O interrupt or timer interrupt) and the scheduler, invoked after handling the interrupt, selects another process to run (according to the CPU scheduling algorithm). If the scheduler selects the interrupted process we have no process preemption (interrupts do not necessarily cause preemption).
But I found many other sources that define preemption in the following way:
Preemption is the forced deallocation of the CPU from a program. [Operating Systems: a Concept-based Approach, 2E, D. M. Dhamdhere]
You can see that the same book reports two different definitions of preemption. In the latter it is not mentioned that the CPU must be allocated to another process. According to this definition, preemption is just another name for 'interruption'. When a hardware interrupt arises, the process is interrupted (it switches from "Running" to "Ready" state) or preempted.
So my question is: which of the two definitions is correct? I'm quite confused.
The Wikipedia definition is pretty bad. The others are not much better. However, they are all saying essentially the same thing.
Preemption is simply one of the means by which the operating system changes the process executing on a CPU.
Such a change can occur either by the executing process voluntarily yielding the CPU or by the operating system preempting the executing process.
The mechanism for switching processes (context switch) is identical in both methods. The only difference is how the context switch is triggered.
A process can voluntarily yield the CPU when it can no longer execute, e.g. after starting I/O to disk (which will take a long time to complete). Some systems only support voluntary yielding (cooperative multitasking).
If a process is compute-bound, it would hog the CPU, not allowing other processes to execute. Most operating systems therefore use a timer interrupt: if the interrupt handler finds that the current process has executed for at least a specified period of time and there are other processes that can execute, the OS will switch processes.
Preemption is then a process (or thread) [context] switch on a CPU that is triggered by the operating system rather than by the process (or thread) itself.
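The point that both paths end in the same mechanism can be shown with a stub: the context-switch routine is identical, and only the trigger differs (all names here are illustrative):

```c
#include <string.h>

static int switches = 0;
static const char *last_trigger = "";

/* The single switching mechanism both paths share. */
static void context_switch(const char *trigger) {
    /* save current registers/stack, pick the next thread, load its context */
    switches++;
    last_trigger = trigger;
}

void sys_yield(void)     { context_switch("voluntary"); } /* process asks   */
void timer_expired(void) { context_switch("preempt");   } /* OS forces it   */
```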

What does the kernel do while another process is running

Consider this: When one task/process is running on a single processor system, another task has to wait for its turn till the first task is either suspended or terminates (depending on the scheduling algorithm).
The kernel also consists of various tasks that use the same CPU to do OS-related work, like scheduling, memory management, and responding to system calls.
So when the kernel schedules a particular task/process to give it CPU time, does it relinquish its control over the CPU? I.e., does it momentarily stop? If not, how does it keep running to do all the OS-related tasks while the other process is running on the CPU? Does the scheduler step aside to give the next task in line the CPU, and if so, what brings the scheduler back for further scheduling activities? This question is similar but does not contain enough details:
How can kernel run all the time?
I am confused about this part and I can't understand how this would work. Can somebody please explain this in detail? It would be helpful if you could explain it with an example.
Yeah... you should stop thinking of the OS kernel as a process and think of it instead as just code and data: a state machine that processes/threads call into at one end to obtain specific services (e.g. I/O requests), and that drivers call into at the other end to deliver service completions (e.g. I/O completion).
The kernel does not need any threads of execution of its own. It only runs when entered, via syscalls (interrupt-like calls from running user threads/processes) or via drivers (hardware interrupts from disk/NIC/keyboard/mouse and other hardware). Sometimes such calls will change the set of threads running on the available cores (e.g. if a thread waiting for a network buffer becomes ready because the NIC driver has completed the action, the OS will probably try to assign it to a core 'immediately', preempting some other thread if required).
If there are no syscalls, and no hardware interrupts, the kernel does nothing because it is not entered - there is nothing for it to do.
What you are missing is that few operating systems these days have a monitor process of the kind you are describing.
At the risk of gross oversimplification, operating systems run through exceptions and interrupts.
Assume you have two processes, P and Q. P is the running process and Q is the next to run. One way to switch processes is for the system timer to go off, triggering an interrupt. P switches to kernel mode and handles that interrupt. P runs the interrupt code for the timer and determines that Q should run. P then saves its context and loads Q's. At that moment, Q is the running process. The interrupt handler exits and picks up where Q left off.
In other words, process P becomes the kernel scheduler while the interrupt is being processed. Each process becomes the scheduler that loads the next process.
Another example: suppose Q has queued a read operation to a disk. That operation completes and triggers an interrupt. P, the running process, enters kernel mode to handle the interrupt, and P then processes Q's completed disk read.
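The P-to-Q hand-off described above can be modeled in a few lines: the handler runs "as" P, decides Q should run, and the switch amounts to updating which saved context will be resumed (a toy model with illustrative names, not real context-switch code):

```c
#include <string.h>

typedef struct {
    const char *name;
    unsigned long saved_pc;   /* where this process will resume */
} context_t;

static context_t procs[2] = { { "P", 0x1000 }, { "Q", 0x2000 } };
static int current = 0;       /* P is running */

/* Executes on the current process's kernel stack, in P's context. */
void timer_interrupt_handler(void) {
    int next = (current + 1) % 2;   /* scheduler decides Q should run  */
    /* save P's registers into procs[current], load Q's from procs[next] */
    current = next;                  /* returning resumes Q, not P      */
}
```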