What happens to a process and/or thread while it waits on a mutex?

When a process and/or thread waits on a mutex, which state is it in? Is it in WAIT or READY or some other state? I searched the web but could not find a clear, definitive answer; maybe there isn't one, or maybe there is, so I am posting the question here.

tl;dr: Nothing happens while it waits; the task is simply parked on a kernel data structure.
Without loss of generality, all operating systems have some sort of model where a unit of execution (a task) moves between the states Ready, Running, and Waiting. Each task has a data structure associated with it, in which its state and registers (among other things) are recorded.
When a task moves from Ready to Running, its saved registers are loaded onto a CPU, and it continues execution from its last saved state. Initially, its saved registers are set to reasonable values for the program to start.
From Running to Waiting or Ready, its registers are stored in its task data structure, and this structure is placed on the list of either Ready or Waiting tasks.
From Waiting to Ready, the task data structure is removed from the Waiting list and appended to the Ready list.
When a task tries to acquire a mutex that is unavailable, it moves from Running (how else could it try to get the mutex) to Waiting. If the mutex was available, it remains Running, and the mutex becomes unavailable.
When a task releases a mutex, and another task is Waiting for that mutex, the Waiting task becomes Ready, and acquires the mutex. If many tasks are Waiting, one is chosen to acquire the mutex and become Ready, the rest remain Waiting.
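To make the abstract description concrete, here is a minimal C sketch of the acquire and release paths just described. Every name in it (task_t, mutex_t, current, reschedule, ...) is invented for illustration; no particular kernel's API is implied.

typedef enum { READY, RUNNING, WAITING } state_t;

typedef struct task {
    state_t state;
    struct task *next;            /* link for the Ready or Waiting list */
} task_t;

typedef struct {
    int locked;
    task_t *waiters;              /* tasks Waiting on this mutex */
} mutex_t;

extern task_t *current;           /* the Running task */
extern void enqueue_ready(task_t *t);
extern void reschedule(void);     /* pick the next Ready task to run */

void mutex_lock(mutex_t *m) {
    if (!m->locked) {             /* available: caller stays Running */
        m->locked = 1;
        return;
    }
    current->state = WAITING;     /* Running -> Waiting */
    current->next = m->waiters;   /* park on the mutex's wait list */
    m->waiters = current;
    reschedule();                 /* resumes here once we own the mutex */
}

void mutex_unlock(mutex_t *m) {
    task_t *t = m->waiters;
    if (t) {                      /* hand the mutex to one waiter */
        m->waiters = t->next;
        t->state = READY;         /* Waiting -> Ready; mutex stays locked */
        enqueue_ready(t);
    } else {
        m->locked = 0;
    }
}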
This is a very abstract description; real systems are complicated by a plurality of synchronization mechanisms (mailboxes, queues, semaphores, pipes, ...), by the desire to optimize the various paths, and by the use of multiple CPUs.

Related

Where does the scheduler run?

Having just finished a book on computer architecture, I find myself still unclear on where the scheduler runs.
What I'm looking to have clarified is where the scheduler runs: does it have its own core assigned to run that and nothing else, or is the "scheduler" in fact just a more ambiguous algorithm, implemented in every thread being executed (e.g., upon preemption of a thread, a switchToFrom() command is run)?
I don't need specifics for Windows/Linux/macOS, just the general idea.
No, the scheduler does not run on its own core. In fact, multithreading was common long before multi-core CPUs were.
The best way to see how scheduler code interacts with thread code is to start with a simple, cooperative, single-core example.
Suppose thread A is running and thread B is waiting on an event. Thread A posts that event, which causes thread B to become runnable. The event logic has to call the scheduler, and, for the purposes of this example, we assume that it decides to switch to thread B. At this point in time the call stack will look something like this:
thread_A_main()
post_event(...)
scheduler(...)
switch_threads(threadA, threadB)
switch_threads will save the CPU state on the stack, save thread A's stack pointer, and load the CPU stack pointer with the value of thread B's stack pointer. It will then load the rest of the CPU state from the stack, where the stack is now stack B. At this point, the call stack has become
thread_B_main()
wait_on_event(...)
scheduler(...)
switch_threads(threadB, threadC)
In other words, thread B has now woken up in the state it was in when it previously yielded control to thread C. When switch_threads() returns, it returns control to thread B.
These kinds of manipulations of the stack pointer usually require some hand-coded assembler.
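To get a feel for what switch_threads() does without writing assembler, here is a small, runnable approximation in C using the POSIX ucontext API, which packages up exactly this save-registers-and-swap-stacks operation. The thread names mirror the example above; everything else is illustrative.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_b;
static char stack_b[64 * 1024];          /* thread B's private stack */

static void thread_b_main(void) {
    printf("thread B: running\n");
    swapcontext(&ctx_b, &ctx_main);      /* save B, resume A */
    printf("thread B: resumed where it left off\n");
}                                        /* returning resumes uc_link */

int main(void) {
    getcontext(&ctx_b);                  /* initialize B's saved state */
    ctx_b.uc_stack.ss_sp = stack_b;
    ctx_b.uc_stack.ss_size = sizeof stack_b;
    ctx_b.uc_link = &ctx_main;           /* where to go when B finishes */
    makecontext(&ctx_b, thread_b_main, 0);

    printf("thread A: switching to B\n");
    swapcontext(&ctx_main, &ctx_b);      /* the switch_threads() step */
    printf("thread A: running again, switching back to B\n");
    swapcontext(&ctx_main, &ctx_b);
    printf("thread A: done\n");
    return 0;
}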
Add Interrupts
Thread B is running and a timer interrupt occurs. The call stack is now:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
timer_isr()
interrupt_shell is a special function. It is not called. It is preemptively invoked by the hardware. foo() did not call interrupt_shell, so when interrupt_shell returns control to foo(), it must restore the CPU state exactly. This is different from a normal function, which returns leaving the CPU state according to calling conventions. Since interrupt_shell follows different rules to those stated by the calling conventions, it too must be written in assembler.
The main job of interrupt_shell is to identify the source of the interrupt and call the appropriate interrupt service routine (ISR), in this case timer_isr(); control then returns to the running thread.
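Rendered as hedged C (the real shell is assembler, and the names read_interrupt_source and isr_table are invented here), the body amounts to:

typedef void (*isr_t)(void);

extern int read_interrupt_source(void);  /* ask the interrupt controller */
extern isr_t isr_table[];                /* e.g. isr_table[TIMER] == timer_isr */

void interrupt_shell_body(void) {
    /* the hand-written prologue has already saved the full CPU state */
    int source = read_interrupt_source();
    isr_table[source]();                 /* e.g. timer_isr() */
    /* the hand-written epilogue restores the CPU state exactly */
}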
Add preemptive thread switches
Suppose timer_isr() decides that it's time for a time-slice, and thread D is to be given some CPU time:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
timer_isr()
scheduler()
Now, scheduler() can't call switch_threads() at this point because we are in interrupt context. However, switch_threads() can be called soon after, usually as the last thing interrupt_shell does. This leaves thread B's stack saved in this state:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
switch_threads(threadB, threadD)
Add Deferred Service Routines
Some OSes do not allow complex logic like scheduling within ISRs. One solution is to use a deferred service routine (DSR), which runs at a higher priority than threads but lower than interrupts. These are used so that, while scheduler() still needs to be protected from being preempted by DSRs, ISRs can execute without a problem. This reduces the number of places a kernel has to mask (switch off) interrupts to keep its logic consistent.
I once ported some software from an OS that had DSRs to one that didn't. The simple solution was to create a "DSR thread" that ran at a higher priority than all other threads; the "DSR thread" simply replaces the DSR dispatcher that the other OS used.
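In code, the pattern looks roughly like this sketch, where queue_dsr is a hypothetical primitive that runs a function later at DSR priority:

typedef void (*dsr_fn)(void);

extern void queue_dsr(dsr_fn fn);  /* hypothetical: run fn above threads, below ISRs */
extern void scheduler(void);

void timer_isr(void) {
    /* no scheduling logic here in interrupt context; just defer it */
    queue_dsr(scheduler);
}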
Add traps
You may have observed that in the examples so far we call the scheduler from both thread and interrupt contexts. There are two ways in and two ways out. It looks a bit weird, but it does work. However, moving forward, we may want to isolate our thread code from our kernel code, and we do this with traps. Here is the event posting redone with traps:
thread_A_main()
post_event(...)
user_space_scheduler(...)
trap()
interrupt_shell
kernel_space_scheduler(...)
switch_threads(threadA, threadB)
A trap causes an interrupt or an interrupt-like event. On the ARM CPU they are known as "software interrupts" and this is a good description.
Now all calls to switch_threads() begin and end in interrupt context, which, incidentally, usually happens in a special CPU mode. This is a step towards privilege separation.
As you can see, scheduling wasn't built in a day. You could go on:
Add a memory mapper
Add processes
Add multiple Cores
Add hyperthreading
Add virtualization
Happy reading!
Each core is separately running the kernel, and cooperates with other cores by reading / writing shared memory. One of the shared data structures maintained by the kernel is the list of tasks that are ready to run, and are just waiting for a timeslice to run in.
The kernel's process / thread scheduler runs on the core that needs to figure out what to do next. It's a distributed algorithm with no single decision-making thread.
Scheduling doesn't work by figuring out what task should run on which other CPU. It works by figuring out what this CPU should do now, based on which tasks are ready to run. This happens whenever a thread uses up its timeslice or makes a system call that blocks. In Linux, even the kernel itself is pre-emptible, so a high-priority task can be run even in the middle of a system call that takes a lot of CPU time to handle (e.g. checking the permissions on all the parent directories in an open("/a/b/c/d/e/f/g/h/file", ...); if they're hot in the VFS cache, this doesn't block, it just uses a lot of CPU time).
I'm not sure whether this is done by having the directory-walking loop in (a function called by) open() "manually" call schedule() to see if the current thread should be pre-empted, or whether tasks waking up set some kind of hardware timer to fire an interrupt, the kernel in general being pre-emptible if compiled with CONFIG_PREEMPT.
There's an inter-processor interrupt mechanism to ask another core to schedule something on itself, so the above description is an over-simplification. (e.g. for Linux run_on to support RCU sync points, and TLB shootdowns when a thread on another core uses munmap). But it's true that there isn't one "master control program"; generally the kernel on each core decides what that core should be running. (By running the same schedule() function on a shared data-structure of tasks that are ready to run.)
The scheduler's decision-making is not always as simple as taking the task at the front of the queue: a good scheduler will try to avoid bouncing a thread from one core to another (because its data will be hot in the caches of the core it was last running on, if that was recent). So to avoid cache thrashing, a scheduler algorithm might choose not to run a ready task on the current core if it was just running on a different core, instead leaving it for that other core to get to later. That way a brief interrupt-handler or blocking system call wouldn't result in a CPU migration.
This is especially important in a NUMA system, where running on the "wrong" core will be slower long-term, even once the caches populate.
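As a toy illustration of "each core decides what this CPU should do now" plus the cache-affinity preference, here is a sketch over a single shared ready list protected by a lock. All names are invented, and real kernels use per-CPU runqueues and far more elaborate policies.

#include <pthread.h>
#include <stddef.h>

struct task {
    struct task *next;
    int last_cpu;                          /* where this task last ran */
};

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static struct task *ready_head;            /* shared list of ready tasks */

/* Called by whichever core needs new work: pick a ready task,
   preferring one whose data is likely still hot in this core's cache. */
struct task *schedule(int this_cpu) {
    pthread_mutex_lock(&rq_lock);
    struct task **best = NULL;
    for (struct task **pp = &ready_head; *pp; pp = &(*pp)->next) {
        if (!best)
            best = pp;                     /* fallback: first ready task */
        if ((*pp)->last_cpu == this_cpu) {
            best = pp;                     /* affinity match wins */
            break;
        }
    }
    struct task *t = NULL;
    if (best) {
        t = *best;
        *best = t->next;                   /* unlink the chosen task */
        t->last_cpu = this_cpu;
    }
    pthread_mutex_unlock(&rq_lock);
    return t;                              /* NULL means run the idle loop */
}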
There are three types of general schedulers:
Job scheduler, also known as the long-term scheduler.
Short-term scheduler, also known as the CPU scheduler.
Medium-term scheduler, mostly used to swap jobs in and out so there can be non-blocking calls; this is usually to avoid having too many I/O-bound jobs, or too few.
Operating systems textbooks show a nice automaton of the states these schedulers move jobs between. The job scheduler puts things from the job queue onto the ready queue, and the CPU scheduler takes things from the ready queue to the running state. The scheduling algorithm is software like any other: it must run on a CPU/core, and it is most likely part of the kernel somewhere.
It doesn't make sense for the scheduler itself to be preempted; the jobs in the queues are what get preempted while running, for I/O and so on. The kernel does not have to schedule itself to allocate a task; it simply gets CPU time without scheduling itself. And yes, the scheduler's data is most likely in RAM; whether it is worth keeping in the CPU cache is another matter.

Why is it called a context switch? Why not process swap or swapping, since the resources are the same?

The process of a context switch: in computing, a context switch is the process of storing and restoring the state (more specifically, the execution context) of a process or thread so that execution can be resumed from the same point at a later time.
But if it is the process or thread that is being switched, and the resources are not shared, shouldn't it be called a process swap or thread swap?
It is not swapping the threads or processes themselves. It is swapping the execution context information stored within the CPU: various CPU registers, flag bits, etc. A process contains one or more threads, and each thread manipulates CPU state. A context switch captures the CPU state (the context) of the currently running thread and pauses it, then swaps in the state of another thread so it can resume where it previously left off.
Keep in mind that it was called a change in process context long before there were threads. Since we are talking history, I will ignore threads.
Making a process executable requires loading registers. The set of registers varies among systems but it typically includes:
General Registers
Processor Status
User Page Table Registers
When a process ceases to be current, its register values need to be saved. When the process becomes current, its register values need to be restored.
Nearly every processor has an instruction that performs each of these tasks to allow the switch to be performed atomically.
The processor then needs to define a data structure for holding the register values. For whatever reason, this data structure was called a "Process Context Block" or PCB (some idiotic textbooks use PCB to describe something else that does not really exist).
Someone probably thought calling it a Process "Context" was a cute name. It really does not make sense in English but that is what happened.
The instructions for loading and saving then became variants of Load Process Context and Save Process Context.
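A sketch of what such a context block might contain; the field names here are hypothetical, and real layouts are CPU-specific:

#include <stdint.h>

struct process_context_block {
    uint64_t general_regs[16];   /* general registers */
    uint64_t program_counter;
    uint64_t stack_pointer;
    uint64_t processor_status;   /* PSW / flags */
    uint64_t page_table_base;    /* page table register(s) */
};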
but if it is the process or thread that is being switched, and the resources are not shared, shouldn't it be called a process swap or thread swap?
Swapping is an entirely different concept, so the terms "swap" and "swapping" were already taken. In the days before paging, entire processes were moved (swapped) to disk. That is why Unix has a "swap" partition; the name never got updated to "page partition."

How does the scheduler know a task is in the blocked state?

I am reading "An Embedded Software Primer" by David E. Simon.
It discusses RTOSes and their building blocks, the scheduler and tasks. It says each task is in either the ready state, the running state, or the blocked state. My question is: how does the scheduler determine that a task is in the blocked state? Assume it's waiting on a semaphore; then presumably the semaphore call is in a state where it can't return. Does the scheduler see that a function does not return and then mark the task's state as blocked?
The implementation details will vary by RTOS. Generally, each task has a state variable that identifies whether the task is ready, running, or blocked. The scheduler simply reads the task's state variable to determine whether the task is blocked.
Each task has a set of parameters that determine the state and context of the task. These parameters are often stored in a struct and called the "task control block" (although the implementation varies by RTOS). The ready/run/block state variable may be a part of the task control block.
When the task attempts to get the semaphore and the semaphore is not available then the task will be set to the blocked state. More specifically, the semaphore-get function will change the task from running to blocked. And then the scheduler will be called to determine which task should run next. The scheduler will read through the task state variables and will not run those tasks that are blocked.
When another task eventually sets the semaphore then the task that is blocked on the semaphore will be changed from the blocked to the ready state and the scheduler may be called to determine if a context switch should occur.
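A minimal sketch of that semaphore-get path, using invented names (task_t, TASK_BLOCKED, current_task, schedule) rather than any particular RTOS's API:

typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

typedef struct task {
    task_state_t state;          /* lives in the task control block */
    struct task *next;           /* link for wait lists */
} task_t;

typedef struct {
    int count;
    task_t *wait_list;           /* tasks blocked on this semaphore */
} semaphore_t;

extern task_t *current_task;
extern void schedule(void);      /* runs next READY task, skips BLOCKED ones */

void semaphore_get(semaphore_t *sem) {
    if (sem->count > 0) {
        sem->count--;            /* available: caller keeps running */
        return;
    }
    current_task->state = TASK_BLOCKED;   /* the get function blocks the caller */
    current_task->next = sem->wait_list;  /* remember who is waiting */
    sem->wait_list = current_task;
    schedule();                  /* returns only after another task sets the semaphore */
}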
As I'm writing an RTOS ( http://distortos.org/ ), I thought I might chime in.
A variable which holds the state of each thread is indeed usually present in RTOSes, and that includes mine:
https://github.com/DISTORTEC/distortos/blob/master/include/distortos/ThreadState.hpp#L26
https://github.com/DISTORTEC/distortos/blob/master/include/distortos/internal/scheduler/ThreadControlBlock.hpp#L329
However, this variable is usually used only as a debugging aid or for additional checks (like preventing you from starting a thread that has already been started).
In RTOSes targeted at deeply embedded systems, the distinction between ready and blocked is usually made using the containers that hold the threads. Usually the threads are "chained" into linked lists, typically sorted by priority and insertion time. The scheduler has its own list of threads that are "ready" ( https://github.com/DISTORTEC/distortos/blob/master/include/distortos/internal/scheduler/Scheduler.hpp#L340 ). Each synchronization object (like a semaphore) also has its own list of threads which are "blocked" waiting for this object ( https://github.com/DISTORTEC/distortos/blob/master/include/distortos/Semaphore.hpp#L244 ). When a thread attempts to use a semaphore that is currently not available, it is simply moved from the scheduler's "ready" list to the semaphore's "blocked" list ( https://github.com/DISTORTEC/distortos/blob/master/source/synchronization/Semaphore.cpp#L82 ). The scheduler doesn't need to decide anything, as now, from the scheduler's perspective, this thread is just gone. When the semaphore is later released by another thread, the first thread waiting on the semaphore's "blocked" list is moved back to the scheduler's "ready" list ( https://github.com/DISTORTEC/distortos/blob/master/source/synchronization/Semaphore.cpp#L39 ).
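The essence of this list-based approach fits in a few lines of C. The names are illustrative, and real RTOS lists are priority-sorted rather than the LIFO used here:

typedef struct thread {
    struct thread *next;
} thread_t;

typedef struct {
    thread_t *head;
} thread_list_t;

static thread_t *list_pop(thread_list_t *l) {   /* take the first thread */
    thread_t *t = l->head;
    if (t)
        l->head = t->next;
    return t;
}

static void list_push(thread_list_t *l, thread_t *t) {
    t->next = l->head;
    l->head = t;
}

/* Blocking: move the running thread from the scheduler's "ready" list to
   the semaphore's "blocked" list; the scheduler no longer sees it at all. */
void block_on(thread_list_t *ready, thread_list_t *blocked) {
    thread_t *t = list_pop(ready);
    if (t)
        list_push(blocked, t);
}

/* Releasing: move the first blocked thread back to the "ready" list. */
void unblock_one(thread_list_t *blocked, thread_list_t *ready) {
    thread_t *t = list_pop(blocked);
    if (t)
        list_push(ready, t);
}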
Usually there's no need to make a special distinction between threads that are ready and the thread that is actually running. As the number of threads that can actually run is fixed and equal to the number of available CPU cores, all you need is a pointer per CPU core pointing to the thread from the "ready" list that is running on that core at that moment. In my system I do the same - the thread at the head of the "ready" list is the one that is running, but I also maintain an iterator which points to that thread ( https://github.com/DISTORTEC/distortos/blob/master/include/distortos/internal/scheduler/Scheduler.hpp#L337 ). You could have a separate list for running threads, but in most cases it would be a waste of space (there's usually just one of them) and would make other things slightly more complicated.
I've actually written an article about thread states and their transitions, if you're interested: http://distortos.org/documentation/task-states/ . The article makes no special distinction between a thread that is "ready" and the one that is actually running; I don't consider this distinction to be useful for anything, as long as you have other means to tell which of the "ready" threads is running.

How do you terminate an infinitely running process?

This question was asked to me in an NVIDIA interview.
Q) If a process has been running for an infinite amount of time and the OS has completely lost control of it, how would you deallocate resources from that process?
This is a very open question, and the answer depends on a lot of factors: the signal mechanism of the OS, how fork/wait/exit are implemented, the states a process can be in, etc. Your answer should have been along the following lines.
A process can be in one of these states: ready, running, waiting; apart from these there are the born state and the terminated/zombie state. During its lifetime, i.e. after fork() until exit(), a process has system resources under its control, e.g. entries in the process table, open files to I/O devices, allocated VM, mappings in the page table, etc.
"The OS has lost complete control of the process" - I interpret that as follows.
----- A process can execute a system call, e.g. read, and the OS will put the process into the sleep/waiting state. There are two kinds of sleep: INTERRUPTIBLE and UNINTERRUPTIBLE.
After the read is done, the hardware sends a signal to the OS, which gets redirected to the sleeping process, and the process springs into action again. There may also be a case where the read never completes, so the hardware never sends that signal.
Assume the process initiated a read (which will fail) and went into:
(a) INTERRUPTIBLE SLEEP: now the OS or the parent of this process can send SIGSTOP, SIGKILL, etc. to it; the process will be interrupted from its sleep, and the OS can then kill it and reclaim all the resources. This is fine and solves your problem of resources being held indefinitely.
(b) UNINTERRUPTIBLE SLEEP: the read has not completed and is taking an infinite amount of time, and even though the parent/OS suspects the read has failed, sending SIGSTOP or SIGKILL has no effect, since the process is sleeping UNINTERRUPTIBLY, waiting for the read to finish. So now the process holds system resources and the OS can't claim them back. See this for a clear understanding of UNINTERRUPTIBLE SLEEP. If a process is in this state for an infinite time, the OS or parent CANNOT kill/stop it and reclaim the resources; those resources are tied up indefinitely and can be reclaimed only when the system is powered off, or if you hack the driver in which the request is stuck and have the driver terminate it, which will then deliver the read-failed signal to the UNINTERRUPTIBLE process.
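The difference between the two cases can be shown with ordinary POSIX calls; this fragment (the function name is mine) is only an illustration:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

int try_to_reclaim(pid_t child) {
    /* queued immediately, but not acted upon while the child sits in
       UNINTERRUPTIBLE sleep (state "D" in ps) */
    kill(child, SIGKILL);
    int status;
    /* blocks until the child actually dies and can be reaped */
    return waitpid(child, &status, 0);
}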
----- After the process has completed its execution and is ready to die, but the parent which created it has not yet called wait on this child, the process is in the ZOMBIE state, in which case it is still hanging on to some system resources. Usually it is the job of the parent to ensure that the child it created terminates normally. But it may be the case that the parent which was supposed to call wait on this child was itself killed; then the onus of terminating this child falls to the OS, and that is exactly what the init process does on Unix-like OSes. Apart from its other jobs, init acts as a foster parent for processes whose parents are dead. It runs its zombie-reaping procedures every now and then (when system resources are low) to ensure that ZOMBIEs don't hang on to system resources.
From the Wikipedia article: when a process loses its parent, init becomes its new parent. Init periodically executes the wait system call to reap any zombies with init as parent. One of init's responsibilities is reaping orphans and parentless zombies.
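What "reaping" amounts to can be shown with the standard wait API; a minimal init-style sketch:

#include <sys/types.h>
#include <sys/wait.h>

void reap_zombies(void) {
    int status;
    /* collect every dead child without blocking; each successful call
       frees the child's process-table entry */
    while (waitpid(-1, &status, WNOHANG) > 0)
        ;
}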
So your answer should point at the way process states are defined, the way signal-handling mechanisms are implemented, and the way the init process reaps ZOMBIEs, etc.
Hope it cleared a thing or two..
Cheers....

How does the dispatcher work?

I have recently started my OS course. As far as I know, the job of the dispatcher is to save the context of the current process and load the context of the process to be run next. But how does it do that? When a process is preempted, then as soon as the dispatcher is loaded and executed (as it is also a program), won't the context of the previous process in the registers, PSW, etc. be lost? How can it save the context before loading itself?
The simple answer is that modern processors offer architectural extensions providing for several banks of registers that can be swapped in hardware, so up to X tasks get to retain their full set of registers.
The more complex answer is that the dispatcher, when triggered by an interrupt, receives the full register set of the program that was running at the time of interrupt (with the exception of the program counter, which is presumably propagated through a mutually-agreed-upon 'volatile' register or some such). Thus, the dispatcher must be carefully written to store the current state of register banks as its first operation upon being triggered. In short, the dispatcher itself has no immediate context and thus doesn't suffer from the same problem.
Here is an attempt at a simple description of what goes on during the dispatcher call:
The program that currently has context is running on the processor. Registers, program counter, flags, stack base, etc are all appropriate for this program; with the possible exception of an operating-system-native "reserved register" or some such, nothing about the program knows anything about the dispatcher.
The timed interrupt for the dispatcher is triggered. The only thing that happens at this point (in the vanilla architecture case) is that the program counter jumps immediately to the handler address listed in the interrupt table. This begins execution of the dispatcher's "dispatch" subroutine; everything else is left untouched, so the dispatcher sees the registers, stack, etc. of the program that was previously executing.
The dispatcher (like all programs) has a set of instructions that operate on the current register set. These instructions are written with the knowledge that the previously executing application has left all of its state behind. The first few instructions in the dispatcher store this state in memory somewhere.
The dispatcher determines which program should have the CPU next, takes all of that program's previously stored state, and fills the registers with it.
The dispatcher jumps to the program counter recorded for the task that now has its full context established on the CPU.
To (over)simplify in summary: the dispatcher doesn't need registers of its own; all it does is write the current CPU state to a predetermined memory location, load another process's CPU state from a predetermined memory location, and jump to where that process left off.
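Rendered as hedged C, with hypothetical helpers standing in for the hand-coded assembler parts, the dispatch step looks like this:

struct cpu_state {
    unsigned long regs[16];
    unsigned long pc;            /* program counter */
    unsigned long sp;            /* stack pointer */
    unsigned long flags;
};

extern struct cpu_state *current;               /* the interrupted task's save area */
extern struct cpu_state *pick_next_task(void);  /* the scheduling policy */
extern void save_cpu_state(struct cpu_state *); /* hand-coded assembler */
extern void load_cpu_state(struct cpu_state *); /* restores registers and jumps */

void dispatch(void) {
    save_cpu_state(current);     /* store the old task's registers in memory */
    current = pick_next_task();  /* decide who runs next */
    load_cpu_state(current);     /* fill registers and jump; never returns here */
}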
Does that make it any clearer?
It doesn't generally get loaded in such a way that you lose the information on the current process.
Often, it's an interrupt that happens in the context of the current process.
So the dispatcher (or scheduler) can save all relevant information in a task control block of some sort before loading up that information for the next process.
This includes the register contents, stack pointer and so on.
It's worth noting that the context of the next process includes its state of being in the dispatcher interrupt itself so that, when it returns from the interrupt, it's to a totally different process.
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context,
switching to user mode,
jumping to the proper location in the user program to restart that program
The operating system's principal responsibility is controlling the execution of processes. This includes determining the pattern for execution and allocating resources to the processes.
A process may be in one of two states:
Running or
Not Running
When the OS creates a new process, it creates a process control block for it and enters that process into the system in the Not Running state. The process exists, is known to the OS, and is waiting for an opportunity to execute.
From time to time, the currently running process will be interrupted, and the dispatcher portion of the OS will select some other process to run.
During execution, when the process is deprived of a resource it needs, it becomes blocked. Once provided with that resource, it re-enters the Ready state and later the Running state. This transition from Ready to Running is performed by the dispatcher: the dispatcher dispatches the process.