multiprogramming on single kernel thread operating systems

Suppose there is a system with a single processor and an operating system that has a single kernel thread, and I run a C program containing a scanf() call.
Now if I execute the program and don't supply values, how would the kernel handle this? I mean, since scanf() executes a blocking system call, read(), the kernel is executing on the processor to handle the system call and is blocked. How would the kernel make another process run when it is itself blocked?
How is multiprogramming supported on a single kernel thread operating system?

A kernel doesn't have to have a thread context for each user thread. In particular, many microkernels have no notion of a kernel thread context at all. These stateless kernels manipulate data structures representing threads, so when a thread wants to read, the kernel might start the I/O and enqueue a data structure (a continuation) recording what to do when the I/O is complete. The kernel is then free to select another thread to run while the I/O operation is in progress.
The classic heavyweight thread model has become ubiquitous due to Unix (later Linux), but it is by no means the only or even the best model.
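To make that concrete, here is a hedged C sketch of the continuation idea; every type and helper below is hypothetical, invented for illustration rather than taken from any real kernel:

struct thread;                                  /* opaque per-thread record */
struct continuation {
    struct thread *waiter;                      /* thread that issued read() */
    void (*on_complete)(struct thread *, int);  /* run when the I/O finishes */
    struct continuation *next;                  /* link in a device wait queue */
};
/* helpers the sketch assumes exist somewhere in the kernel */
void start_device_io(int fd, void *buf, unsigned len);
void enqueue_continuation(int fd, struct continuation *c);
void mark_runnable(struct thread *t);
void schedule_next_thread(void);

void sys_read_start(struct thread *t, int fd, void *buf, unsigned len,
                    struct continuation *c) {
    start_device_io(fd, buf, len);      /* kick off the transfer */
    c->waiter = t;
    enqueue_continuation(fd, c);        /* record what to do on completion */
    schedule_next_thread();             /* no kernel thread blocks here */
}
void io_completed(struct continuation *c, int bytes) {
    c->on_complete(c->waiter, bytes);   /* e.g. copy the result out */
    mark_runnable(c->waiter);           /* the waiting thread can run again */
}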

How would the kernel handle this? I mean, since scanf() executes a blocking system call, read(), the kernel is executing on the processor to handle the system call and is blocked. How would the kernel make another process run when it is itself blocked?
If the system supports "more threads than CPUs", then it must have something (a scheduler) to figure out which thread gets to use which CPU somewhere (maybe in the kernel, maybe not). In that case, hopefully (but not necessarily), when a thread blocks the scheduler will mark the thread as blocked (and record why) and assign the CPU to a different thread; later, when it's notified that whatever the blocked thread was waiting for has occurred, the scheduler will unblock the thread and let it use a CPU again.
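A minimal sketch of that bookkeeping, with every name invented for illustration:

enum thread_state { T_READY, T_RUNNING, T_BLOCKED };
struct sched_thread {
    enum thread_state state;
    int wait_reason;                     /* what the thread is waiting for */
};
/* assumed helpers */
struct sched_thread *pick_ready_thread(void);
void context_switch_to(struct sched_thread *t);
void enqueue_ready(struct sched_thread *t);

void block_current(struct sched_thread *cur, int reason) {
    cur->state = T_BLOCKED;              /* mark the thread blocked, and why */
    cur->wait_reason = reason;
    context_switch_to(pick_ready_thread()); /* give the CPU to someone else */
}
void event_occurred(struct sched_thread *t) {
    t->state = T_READY;                  /* what it waited for has happened */
    enqueue_ready(t);                    /* it may use a CPU again */
}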

Related

Does CPU always execute instructions in kernel mode?

This is perhaps a very silly question, but it's been bugging me for a while. My doubt is this: since the only threads that are scheduled onto the CPU are (typically) kernel threads, and since every user thread is mapped to a kernel thread, does that mean that all applications essentially run in kernel mode?
There are multiple scenarios, some less common than others. One possible example (of a less common scenario) is having a single kernel stack per CPU, with "save user-space thread state" as part of the privilege-level change from user space to kernel and "load a possibly different user-space thread's state" as part of the privilege-level change from kernel back to user space (where it's hard to say whether the kernel's code uses one thread/task per CPU, or whether the kernel's code doesn't use the concept of threads/tasks at all and only exists between the two halves of a potential user-space thread switch).
For the common scenario, every thread/task has a kernel part (with its own kernel stack, etc.) and a user-space part (with its own user-space stack, etc.). In this case it can be described as task switches only ever occurring while the thread is using its kernel part; but after the task switch the task's kernel part can switch to the same task's user-space part and back again whenever it wants (e.g. for kernel system calls and IRQs).
Note that this isn't the only way to describe the common scenario; but it is my preferred way (because it avoids "beginner pitfalls" - specifically, the false assumption that IRQs are a necessary part of task switching).
I suspect that this way of describing the common scenario (i.e. that task switches only happen between tasks running kernel code) has caused your doubts.
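For the common scenario, a rough sketch of what each task carries (field names are illustrative only, not from a real kernel):

struct task {
    void *kernel_sp;     /* saved stack pointer of the task's kernel part;
                            a task switch swaps this value */
    char *kernel_stack;  /* the task's own kernel stack */
    void *user_sp;       /* saved user-space stack pointer, reloaded on the
                            privilege-level change back to user space */
    /* user-space register save area, address-space pointer, etc. */
};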

Where does the scheduler run?

Having just finished a book on computer architecture, I find myself still not completely clear on where the scheduler runs.
What I'm looking to have clarified is where the scheduler is running: does it have its own core assigned to run that and nothing else, or is the "scheduler" in fact just a more abstract algorithm, implemented in every thread being executed - e.g. upon preemption of a thread, a switchToFrom() routine is run?
I don't need specifics according to windows x/linux x/mac os x, just in general.
No, the scheduler does not run on its own core. In fact, multi-threading was common long before multi-core CPUs were.
The best way to see how scheduler code interacts with thread code is to start with a simple, cooperative, single-core example.
Suppose thread A is running and thread B is waiting on an event. Thread A posts that event, which causes thread B to become runnable. The event logic has to call the scheduler and, for the purposes of this example, we assume that it decides to switch to thread B. At this point in time the call stack will look something like this:
thread_A_main()
post_event(...)
scheduler(...)
switch_threads(threadA, threadB)
switch_threads will save the CPU state on the stack, save thread A's stack pointer, and load the CPU stack pointer with the value of thread B's stack pointer. It will then load the rest of the CPU state from the stack, where the stack is now stack B. At this point, the call stack has become:
thread_B_main()
wait_on_event(...)
scheduler(...)
switch_threads(threadB, threadC)
In other words, thread B has now woken up in the state it was in when it previously yielded control to thread C. When switch_threads() returns, it returns control to thread B.
This kind of stack-pointer manipulation usually requires some hand-coded assembler.
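If you want to play with this idea without writing assembler, POSIX <ucontext.h> exposes the same save-state/swap-stacks mechanism from user space. Here is a runnable cooperative toy (assumes a POSIX system with working ucontext, e.g. glibc; this is a sketch of the concept, not how a kernel does it):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, a_ctx, b_ctx;
static char stack_a[64 * 1024], stack_b[64 * 1024];

static void thread_a(void) {
    puts("A: running, yielding to B");
    swapcontext(&a_ctx, &b_ctx);    /* save A's state, load B's */
    puts("A: resumed exactly where it left off");
}                                   /* returning goes to uc_link (main) */

static void thread_b(void) {
    puts("B: running, yielding back to A");
    swapcontext(&b_ctx, &a_ctx);    /* save B's state, load A's */
    puts("B: resumed later by main");
}

int main(void) {
    getcontext(&a_ctx);
    a_ctx.uc_stack.ss_sp = stack_a;
    a_ctx.uc_stack.ss_size = sizeof stack_a;
    a_ctx.uc_link = &main_ctx;
    makecontext(&a_ctx, thread_a, 0);
    getcontext(&b_ctx);
    b_ctx.uc_stack.ss_sp = stack_b;
    b_ctx.uc_stack.ss_size = sizeof stack_b;
    b_ctx.uc_link = &main_ctx;
    makecontext(&b_ctx, thread_b, 0);
    swapcontext(&main_ctx, &a_ctx); /* run A until it finishes */
    swapcontext(&main_ctx, &b_ctx); /* resume B where it yielded */
    puts("main: both done");
    return 0;
}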
Add Interrupts
Thread B is running and a timer interrupt occurs. The call stack is now:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
timer_isr()
interrupt_shell is a special function. It is not called. It is preemptively invoked by the hardware. foo() did not call interrupt_shell, so when interrupt_shell returns control to foo(), it must restore the CPU state exactly. This is different from a normal function, which returns leaving the CPU state according to calling conventions. Since interrupt_shell follows different rules to those stated by the calling conventions, it too must be written in assembler.
The main job of interrupt_shell is to identify the source of the interrupt and call the appropriate interrupt service routine (ISR) which in this case is timer_isr(), then control is returned to the running thread.
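The C half of that job is often just a dispatch table. A hedged sketch (names invented for illustration):

#define NUM_VECTORS 64
typedef void (*isr_t)(void);
static isr_t isr_table[NUM_VECTORS];    /* timer_isr() is registered here */

/* Called by the assembler part of interrupt_shell after it has saved the
   CPU state; the assembler epilogue later restores that state exactly. */
void interrupt_dispatch(int vector) {
    if (vector >= 0 && vector < NUM_VECTORS && isr_table[vector])
        isr_table[vector]();            /* e.g. timer_isr() */
}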
Add preemptive thread switches
Suppose timer_isr() decides that it's time for a time-slice, and that thread D is to be given some CPU time:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
timer_isr()
scheduler()
Now, scheduler() can't call switch_threads() at this point because we are in interrupt context. However, switch_threads() can be called soon after, usually as the last thing interrupt_shell does. This leaves thread B's stack saved in this state:
thread_B_main()
foo() //something thread B was up to
interrupt_shell
switch_threads(threadB, threadD)
Add Deferred Service Routines
Some OSes do not allow you to do complex logic like scheduling from within ISRs. One solution is to use a deferred service routine (DSR), which runs at a higher priority than threads but lower than interrupts. These are used so that while scheduler() still needs to be protected from being preempted by DSRs, ISRs can execute without a problem. This reduces the number of places where a kernel has to mask (switch off) interrupts to keep its logic consistent.
I once ported some software from an OS that had DSRs to one that didn't. The simple solution to this was to create a "DSR thread" that ran higher priority than all other threads. The "DSR thread" simply replaces the DSR dispatcher that the other OS used.
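A sketch of that workaround: the ISR only queues work, and the highest-priority "DSR thread" drains the queue and may call scheduler(). This is a single-producer/single-consumer ring using C11 atomics; wait_for_wakeup() and wake_dsr_thread() are assumed OS primitives, not real APIs:

#include <stdatomic.h>

#define QSIZE 32
typedef void (*dsr_fn)(void);
static dsr_fn dsr_queue[QSIZE];
static atomic_uint head, tail;

void wait_for_wakeup(void);   /* assumed: sleep until the ISR signals us */
void wake_dsr_thread(void);   /* assumed: safe to call from an ISR */

void isr_defer(dsr_fn fn) {                 /* called from interrupt context */
    unsigned t = atomic_load(&tail);
    dsr_queue[t % QSIZE] = fn;
    atomic_store(&tail, t + 1);             /* publish to the DSR thread */
    wake_dsr_thread();
}

void dsr_thread_main(void) {                /* runs above all other threads */
    for (;;) {
        while (atomic_load(&head) == atomic_load(&tail))
            wait_for_wakeup();
        dsr_fn fn = dsr_queue[atomic_load(&head) % QSIZE];
        atomic_fetch_add(&head, 1);
        fn();                               /* safe place to call scheduler() */
    }
}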
Add traps
You may have observed that in the examples given so far, we call the scheduler from both thread and interrupt contexts. There are two ways in and two ways out. It looks a bit weird, but it does work. However, moving forward, we may want to isolate our thread code from our kernel code, and we do this with traps. Here is the event posting redone with traps:
thread_A_main()
post_event(...)
user_space_scheduler(...)
trap()
interrupt_shell
kernel_space_scheduler(...)
switch_threads(threadA, threadB)
A trap causes an interrupt or an interrupt-like event. On the ARM CPU they are known as "software interrupts" and this is a good description.
Now all calls to switch_threads() begin and end in interrupt context, which, incidentally, usually happens in a special CPU mode. This is a step towards privilege separation.
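On Linux you can see the trap boundary from ordinary user code: the generic syscall(2) wrapper executes the trap instruction for you. This runnable snippet performs the same read() trap that scanf() eventually reaches:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    char buf[64];
    /* traps into the kernel and blocks until stdin has data */
    long n = syscall(SYS_read, 0, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("kernel returned %ld bytes: %s", n, buf);
    }
    return 0;
}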
As you can see, scheduling wasn't built in a day. You could go on:
Add a memory mapper
Add processes
Add multiple Cores
Add hyperthreading
Add virtualization
Happy reading!
Each core is separately running the kernel, and cooperates with other cores by reading / writing shared memory. One of the shared data structures maintained by the kernel is the list of tasks that are ready to run, and are just waiting for a timeslice to run in.
The kernel's process / thread scheduler runs on the core that needs to figure out what to do next. It's a distributed algorithm with no single decision-making thread.
Scheduling doesn't work by figuring out what task should run on which other CPU. It works by figuring out what this CPU should do now, based on which tasks are ready to run. This happens whenever a thread uses up its timeslice, or makes a system call that blocks. In Linux, even the kernel itself is pre-emptible, so a high-priority task can be run even in the middle of a system call that takes a lot of CPU time to handle. (e.g. checking the permissions on all the parent directories in an open("/a/b/c/d/e/f/g/h/file", ...), if they're hot in VFS cache so it doesn't block, just uses a lot of CPU time).
I'm not sure if this is done by having the directory-walking loop in (a function called by) open() "manually" call schedule() to see if the current thread should be pre-empted or not, or whether tasks waking up will have set some kind of hardware timer to fire an interrupt, the kernel in general being pre-emptible if compiled with CONFIG_PREEMPT.
There's an inter-processor interrupt mechanism to ask another core to schedule something on itself, so the above description is an over-simplification. (e.g. for Linux run_on to support RCU sync points, and TLB shootdowns when a thread on another core uses munmap). But it's true that there isn't one "master control program"; generally the kernel on each core decides what that core should be running. (By running the same schedule() function on a shared data-structure of tasks that are ready to run.)
The scheduler's decision-making is not always as simple as taking the task at the front of the queue: a good scheduler will try to avoid bouncing a thread from one core to another (because its data will be hot in the caches of the core it was last running on, if that was recent). So to avoid cache thrashing, a scheduler algorithm might choose not to run a ready task on the current core if it was just running on a different core, instead leaving it for that other core to get to later. That way a brief interrupt-handler or blocking system call wouldn't result in a CPU migration.
This is especially important in a NUMA system, where running on the "wrong" core will be slower long-term, even once the caches populate.
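On Linux, the user-visible knob for this placement is the thread's affinity mask. A small runnable sketch that pins the calling thread to core 0, so the scheduler can never migrate it:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* allow core 0 only */
    if (sched_setaffinity(0, sizeof set, &set) != 0) { /* 0 = this thread */
        perror("sched_setaffinity");
        return 1;
    }
    puts("pinned to CPU 0; no cross-core bouncing is possible now");
    return 0;
}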
There are three types of general schedulers:
Job scheduler, also known as the long-term scheduler.
Short-term scheduler, also known as the CPU scheduler.
Medium-term scheduler, mostly used to swap jobs in and out so there can be non-blocking calls. This is usually for avoiding having too many I/O-bound jobs or too few.
An operating systems book typically shows a nice automaton of the states these schedulers move jobs to and from. The job scheduler puts things from the job queue into the ready queue, and the CPU scheduler takes things from the ready queue to the running state. The scheduling algorithm is just like any other software: it must run on a CPU/core, and it is most likely part of the kernel somewhere.
It doesn't make sense for the scheduler itself to be preempted. The jobs in the queue can be preempted while running, for I/O, etc. No, the kernel does not have to schedule itself to allocate the task; it simply gets CPU time without scheduling itself. And yes, the data is most likely in RAM; I'm not sure whether it is worth storing in the CPU cache.

What does the kernel do while another process is running

Consider this: When one task/process is running on a single processor system, another task has to wait for its turn till the first task is either suspended or terminates (depending on the scheduling algorithm).
The kernel also consists of various tasks that use the same CPU to do OS-related work - like scheduling, memory management, responding to system calls, etc.
So when the kernel schedules a particular task/process to give it CPU time, does it relinquish its control over the CPU? I.e., does it momentarily stop? If not, how does it keep running to do all OS-related tasks while the other process is running on the CPU? Does the scheduler step aside to give the next task in line the CPU, and if so, what brings the scheduler back to carry on with further scheduling activities? This question is similar but it does not contain enough details -
How can kernel run all the time?
I am confused about this part and I can't understand how this would work. Can somebody please explain this in detail? It would be helpful if you could explain it with an example.
Yeah... you should stop thinking of the OS kernel as a process and think of it instead as just code and data - a state machine that processes/threads call into at one end in order to obtain specific services (e.g. I/O requests), and that drivers call into at the other end to deliver service results (e.g. I/O completion).
The kernel does not need any threads of execution of its own. It only runs when entered from syscalls (interrupt-like calls from running user threads/processes) or from drivers (hardware interrupts from disk/NIC/keyboard/mouse and other hardware). Sometimes such calls will change the set of threads running on the available cores (e.g. if a thread waiting for a network buffer becomes ready because the NIC driver has completed the action, the OS will probably try to assign it to a core 'immediately', preempting some other thread if required).
If there are no syscalls, and no hardware interrupts, the kernel does nothing because it is not entered - there is nothing for it to do.
What you are missing is that few operating systems these days have a monitor process as you are describing.
At the risk of gross oversimplification, operating systems run through exceptions and interrupts.
Assume you have two processes, P and Q. P is the running process and Q is the next to run. One way to switch processes is the system timer goes off triggering an interrupt. P switches to kernel mode and handles that interrupt. P runs the interrupt code handling the timer and determines that Q should run. P then saves its context and loads Q. At that moment, Q is the running process. The interrupt handler exits and picks up where Q was before.
In other words, process P becomes the kernel scheduler while the interrupt is being processed. Each process becomes the scheduler that loads the next process.
Another example, let us say that Q has queued a read operation to a disk. That operation completes and triggers an interrupt. P, the running process, enters kernel mode to handle the interrupt. P then processes Q's disk read operation.

Multicores and multithreads

How is process-based multitasking achieved by using multi-threading in each process?
For example, consider an operating system running two background processes, where each process internally supports multi-threading. Now, how does time slicing happen between and inside these processes, and how does time slicing happen between threads?
The scheduler typically works at the thread level. In simplest terms the scheduler gives each runnable thread its timeslice in turn.
So a process with two threads will get twice as much CPU time as a process with one thread (the sketch after the quotation below shows a rough way to observe this).
From:
http://msdn.microsoft.com/en-us/library/ms684259(VS.85).aspx
"A multitasking operating system divides the available processor time among the processes or threads that need it. The system is designed for preemptive multitasking; it allocates a processor time slice to each thread it executes. The currently executing thread is suspended when its time slice elapses, allowing another thread to run. When the system switches from one thread to another, it saves the context of the preempted thread and restores the saved context of the next thread in the queue.
The length of the time slice depends on the operating system and the processor. Because each time slice is small (approximately 20 milliseconds), multiple threads appear to be executing at the same time. This is actually the case on multiprocessor systems, where the executable threads are distributed among the available processors. However, you must use caution when using multiple threads in an application, because system performance can decrease if there are too many threads."
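Here is a rough, runnable way to watch per-thread scheduling yourself (POSIX threads assumed; exact counts vary with core count and load):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile unsigned long counts[2];

static void *spin(void *arg) {
    int id = *(int *)arg;
    for (;;)
        counts[id]++;               /* pure CPU burn; the scheduler hands
                                       each thread its own timeslices */
    return NULL;
}

int main(void) {
    int ids[2] = {0, 1};
    pthread_t t1, t2;
    pthread_create(&t1, NULL, spin, &ids[0]);
    pthread_create(&t2, NULL, spin, &ids[1]);
    sleep(1);                       /* let the scheduler dole out slices */
    printf("thread 0: %lu, thread 1: %lu\n", counts[0], counts[1]);
    return 0;                       /* exiting main ends both spinners */
}

On a single busy core the two counters grow at similar rates, and together they accumulate roughly twice what a one-thread process would get, which is the effect described above.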
The operating system decides when and for how long each thread executes. For Microsoft operating systems, there is no way to determine or predict which thread in which process will execute next. Each thread also has a priority that it runs at; higher-priority threads tend to get more time than lower-priority ones. This priority can be changed by the user or by a program.
"Now, how does time slicing happen between and inside these processes, and how does time slicing happen between threads?"
That's entirely up to the operating system to decide, really. A really basic OS might not do time-slicing at all, and just let each process run through to completion on a first-come, first-served basis.
However, most modern operating systems will use some flavor of scheduling algorithm to decide which thread gets to execute on which core and for how long, and perform the context-switching necessary to save and restore per-thread state when swapping out one thread for another.

Relationship between a kernel and a user thread

Is there a relationship between a kernel and a user thread?
Some operating system textbooks say that the OS "maps one (many) user thread(s) to one (many) kernel thread(s)". What does map mean here?
When they say map, they mean that each kernel thread is assigned to a certain number of user mode threads.
Kernel threads are used to provide privileged services to applications (such as system calls ). They are also used by the kernel to keep track of what all is running on the system, how much of which resources are allocated to what process, and to schedule them.
If your applications make heavy use of system calls, then the more user threads per kernel thread, the slower your applications will run. This is because the kernel thread becomes a bottleneck, since all system calls must pass through it.
On the flip side though, if your programs rarely use system calls (or other kernel services), you can assign a large number of user threads to a kernel thread without much performance penalty, other than overhead.
You can increase the number of kernel threads, but this adds overhead to the kernel in general, so while individual threads will be more responsive with respect to system calls, the system as a whole will become slower.
That is why it is important to find a good balance between the number of kernel threads and the number of user threads per kernel thread.
http://www.informit.com/articles/printerfriendly.aspx?p=25075
Implementing Threads in User Space
There are two main ways to implement a threads package: in user space and in the kernel. The choice is moderately controversial, and a hybrid implementation is also possible. We will now describe these methods, along with their advantages and disadvantages.
The first method is to put the threads package entirely in user space. The kernel knows nothing about them. As far as the kernel is concerned, it is managing ordinary, single-threaded processes. The first, and most obvious, advantage is that a user-level threads package can be implemented on an operating system that does not support threads. All operating systems used to fall into this category, and even now some still do.
All of these implementations have the same general structure, which is illustrated in Fig. 2-8(a). The threads run on top of a run-time system, which is a collection of procedures that manage threads. We have seen four of these already: thread_create, thread_exit, thread_wait, and thread_yield, but usually there are more.
When threads are managed in user space, each process needs its own private thread table to keep track of the threads in that process. This table is analogous to the kernel's process table, except that it keeps track only of per-thread properties such as each thread's program counter, stack pointer, registers, state, etc. The thread table is managed by the run-time system. When a thread is moved to the ready state or the blocked state, the information needed to restart it is stored in the thread table, exactly the same way the kernel stores information about processes in the process table.
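A sketch of such a thread table (field names are illustrative, not from any particular package):

#define MAX_THREADS 16
enum ut_state { UT_READY, UT_RUNNING, UT_BLOCKED };

/* One entry per thread, kept entirely in the process's own memory;
   the kernel never sees this table. */
struct user_thread {
    void *sp;                 /* saved stack pointer */
    void *pc;                 /* saved program counter */
    unsigned long regs[8];    /* other callee-saved registers */
    enum ut_state state;
};
static struct user_thread thread_table[MAX_THREADS];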
When a thread does something that may cause it to become blocked locally, for example, waiting for another thread in its process to complete some work, it calls a run-time system procedure. This procedure checks to see if the thread must be put into blocked state. If so, it stores the thread's registers (i.e., its own) in the thread table, looks in the table for a ready thread to run, and reloads the machine registers with the new thread's saved values. As soon as the stack pointer and program counter have been switched, the new thread comes to life again automatically. If the machine has an instruction to store all the registers and another one to load them all, the entire thread switch can be done in a handful of instructions. Doing thread switching like this is at least an order of magnitude faster than trapping to the kernel and is a strong argument in favor of user-level threads packages.
However, there is one key difference with processes. When a thread is finished running for the moment, for example, when it calls thread_yield, the code of thread_yield can save the thread's information in the thread table itself. Furthermore, it can then call the thread scheduler to pick another thread to run. The procedure that saves the thread's state and the scheduler are just local procedures, so invoking them is much more efficient than making a kernel call. Among other issues, no trap is needed, no context switch is needed, the memory cache need not be flushed, and so on. This makes thread scheduling very fast.
User-level threads also have other advantages. They allow each process to have its own customized scheduling algorithm. For some applications, for example, those with a garbage collector thread, not having to worry about a thread being stopped at an inconvenient moment is a plus. They also scale better, since kernel threads invariably require some table space and stack space in the kernel, which can be a problem if there are a very large number of threads.
Despite their better performance, user-level threads packages have some major problems. First among these is the problem of how blocking system calls are implemented. Suppose that a thread reads from the keyboard before any keys have been hit. Letting the thread actually make the system call is unacceptable, since this will stop all the threads. One of the main goals of having threads in the first place was to allow each one to use blocking calls, but to prevent one blocked thread from affecting the others. With blocking system calls, it is hard to see how this goal can be achieved readily.
The system calls could all be changed to be nonblocking (e.g., a read on the keyboard would just return 0 bytes if no characters were already buffered), but requiring changes to the operating system is unattractive. Besides, one of the arguments for user-level threads was precisely that they could run with existing operating systems. In addition, changing the semantics of read will require changes to many user programs.
Another alternative is possible in the event that it is possible to tell in advance if a call will block. In some versions of UNIX, a system call, select, exists, which allows the caller to tell whether a prospective read will block. When this call is present, the library procedure read can be replaced with a new one that first does a select call and then only does the read call if it is safe (i.e., will not block). If the read call will block, the call is not made. Instead, another thread is run. The next time the run-time system gets control, it can check again to see if the read is now safe. This approach requires rewriting parts of the system call library, is inefficient and inelegant, but there is little choice. The code placed around the system call to do the checking is called a jacket or wrapper.
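A sketch of such a jacket around the real select(2) call; thread_yield() is the run-time system entry point named in the text, assumed to be provided by the user-level threads package:

#include <sys/select.h>
#include <unistd.h>

void thread_yield(void);   /* assumed: user-level threads package */

ssize_t jacket_read(int fd, void *buf, size_t len) {
    for (;;) {
        fd_set rfds;
        struct timeval poll_now = {0, 0};  /* do not wait at all */
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        if (select(fd + 1, &rfds, NULL, NULL, &poll_now) > 0)
            return read(fd, buf, len);     /* now guaranteed not to block */
        thread_yield();                    /* run another thread; retry later */
    }
}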
Somewhat analogous to the problem of blocking system calls is the problem of page faults. We will study these in Chap. 4. For the moment, it is sufficient to say that computers can be set up in such a way that not all of the program is in main memory at once. If the program calls or jumps to an instruction that is not in memory, a page fault occurs and the operating system will go and get the missing instruction (and its neighbors) from disk. The process is blocked while the necessary instruction is being located and read in. If a thread causes a page fault, the kernel, not even knowing about the existence of threads, naturally blocks the entire process until the disk I/O is complete, even though other threads might be runnable.
Another problem with user-level thread packages is that if a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU. Within a single process, there are no clock interrupts, making it impossible to schedule threads in round-robin fashion (taking turns). Unless a thread enters the run-time system of its own free will, the scheduler will never get a chance.
One possible solution to the problem of threads running forever is to have the run-time system request a clock signal (interrupt) once a second to give it control, but this, too, is crude and messy to program. Periodic clock interrupts at a higher frequency are not always possible, and even if they are, the total overhead may be substantial. Furthermore, a thread might also need a clock interrupt, interfering with the run-time system's use of the clock.
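The clock-signal idea is easy to try on a POSIX system: setitimer() delivers SIGALRM periodically, and the handler is where the run-time system would regain control. This is a sketch of the mechanism only; a real package would switch threads inside the handler:

#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static void on_tick(int sig) {
    (void)sig;
    /* a user-level scheduler would pick and switch to another thread here */
    write(1, "tick: run-time system got control\n", 34);
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);
    struct itimerval period = { {1, 0}, {1, 0} };  /* fire every second */
    setitimer(ITIMER_REAL, &period, NULL);
    for (;;)
        pause();            /* stands in for the running user thread */
}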
Another, and probably the most devastating argument against user-level threads, is that programmers generally want threads precisely in applications where the threads block often, as, for example, in a multithreaded Web server. These threads are constantly making system calls. Once a trap has occurred to the kernel to carry out the system call, it is hardly any more work for the kernel to switch threads if the old one has blocked, and having the kernel do this eliminates the need for constantly making select system calls that check to see if read system calls are safe. For applications that are essentially entirely CPU bound and rarely block, what is the point of having threads at all? No one would seriously propose computing the first n prime numbers or playing chess using threads because there is nothing to be gained by doing it that way.
User threads are managed in userspace - that means scheduling, switching, etc. are not done by the kernel.
Since, ultimately, the OS kernel is responsible for context switching between "execution units", your user threads must be associated (i.e., "mapped") to a kernel-schedulable object - a kernel thread†1.
So, given N user threads - you could use N kernel threads (a 1:1 map). That allows you to take advantage of the kernel's hardware multi-processing (running on multiple CPUs) and be a pretty simplistic library - basically just deferring most of the work to the kernel. It does, however, make your app portable between OS's as you're not directly calling the kernel thread functions. I believe that POSIX Threads (PThreads) is the preferred *nix implementation, and that it follows the 1:1 map (making it virtually equivalent to a kernel thread). That, however, is not guaranteed as it'd be implementation dependent (a main reason for using PThreads would be portability between kernels).
Or, you could use only 1 kernel thread. That'd allow you to run on non-multitasking OSes, or be completely in charge of scheduling. Windows' User Mode Scheduling is an example of this N:1 map.
Or, you could map to an arbitrary number of kernel threads - an N:M map. Windows has Fibers, which let you map N fibers onto M kernel threads and cooperatively schedule them. A thread pool could also be an example of this: N work items for M threads.
†1: A process has at least 1 kernel thread, which is the actual execution unit. Also, a kernel thread must be contained in a process. The OS schedules threads to run - not processes.
This is a question about thread library implementation.
In Linux, a thread (or task) can be in user space or in kernel space. A process enters kernel space when it asks the kernel to do something via a syscall (read, write, or ioctl).
There are also so-called kernel threads that always run in kernel space and do not represent any user process.
According to Wikipedia and Oracle, user-level threads are actually a layer on top of kernel threads; it is not that kernel threads execute alongside user-level threads but that, generally speaking, the only entities actually executed by the processor/OS are kernel threads.
For example, assume that we have a program with 2 user-level threads, both mapped to (i.e. assigned) the same kernel thread. Sometimes, the kernel thread runs the first user-level thread (and it is said that currently this kernel thread is mapped to the first user-level thread) and some other times the kernel thread runs the second user-level thread. So we say that we have two user-level threads mapped to the same kernel thread.
As a clarification:
The core of an OS is called its kernel, so the threads at the kernel level (i.e. the threads that the kernel knows of and manages) are called kernel threads, the calls to the OS core for services can be called kernel calls, and ... . The only definite relation between kernel things is that they are strongly related to the OS core, nothing more.