Preemptive & Nonpreemptive Kernel vs. Preemptive & Nonpreemptive Scheduling

I'm struggling to understand the difference between preemptive and nonpreemptive kernels, and between preemptive and nonpreemptive scheduling.
From Operating System Concepts (Ninth Edition), Silberschatz, Galvin and Gagne:
A preemptive kernel is where the kernel allows a process to be removed and replaced while it is running in kernel mode.
A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. - This to me seems to be the exact same description as the nonpreemptive kernel.
Preemptive scheduling occurs in these two situations (from the same book):
When a process switches from the running state to the ready state (for example, when an interrupt occurs)
When a process switches from the waiting state to the ready state (for example, at completion of I/O)
The book simply states that there is a choice in this scenario, but I'm not sure what the choice is. Is the choice whether the same process can continue to run, or whether a different process from the ready queue can be selected to run?
Basically, a clear clarification on these 4 terms is what I'm looking for.
Thank you!

The problem you face is that these terms have no standard meaning. I suspect that your book is using them from the point of view of some specific operating system (which one? I couldn't say). If you have searched the internet, you have certainly found conflicting explanations.
For example, Preemptive scheduling can mean:
Scheduling that will interrupt a running process that does not yield the CPU.
Scheduling that will interrupt a running process before its quantum has expired.
Your book apparently has yet another definition. I cannot tell the meaning from the excerpt. It is entirely possible that the book is just confusing on this point (as it apparently is on so many points). One problem is that process states are system dependent, so defining the term using process states is bound to confuse.
This part of its definition makes sense:
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
The preemptive part of the definition makes no sense.
In the case of the term preemptive kernel, that is pretty standard, and the description you give is fairly conventional. That said, the book's statement should be a bit more refined, because every process ultimately has to be removed while it is in kernel mode. Normally, one would say something along the lines of: "In a non-preemptive kernel, a process cannot be removed when it has entered kernel mode through an exception."
A preemptive kernel is essential for real-time processing.
So you ask:
This to me seems to be the exact same description of the nonpreemptive kernel.
You have four theoretical combinations:
Preemptive Scheduling Preemptive Kernel
The operating system can forcibly switch processes at nearly any time.
Non-Preemptive Scheduling Preemptive Kernel
This combination does not exist.
Non-Preemptive Scheduling Non-Preemptive Kernel
The process has to explicitly yield to allow the operating system to switch to another process.
Preemptive Scheduling Non-Preemptive Kernel
The operating system can forcibly switch processes except when the process is executing in kernel mode to process an exception (there may be circumstances where the process cannot be switched while handling an interrupt as well).
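To make the "forcibly switch" in the first and last combinations concrete, here is a minimal user-space sketch of the mechanism that makes preemption possible: a periodic timer interrupt that takes control away from code that never yields. This is only an analogy under stated assumptions: SIGALRM stands in for the hardware timer interrupt, the handler body is where a real kernel would invoke its scheduler, and the 100 ms period is an arbitrary choice.

```c
/* Toy user-space illustration (Linux/POSIX): SIGALRM plays the role of the
 * periodic timer interrupt that makes preemptive scheduling possible. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t ticks;

static void on_tick(int sig) {
    (void)sig;
    ticks++;                 /* a real kernel would context-switch here */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it = {
        .it_interval = { 0, 100000 },    /* fire every 100 ms */
        .it_value    = { 0, 100000 },
    };
    setitimer(ITIMER_REAL, &it, NULL);

    /* This loop never yields, yet control is taken from it on every tick;
     * that forced transfer of control is the essence of preemption. */
    while (ticks < 5)
        ;

    printf("interrupted %d times without ever yielding\n", (int)ticks);
    return 0;
}
```

In a purely non-preemptive design there is no such tick-driven switch; the running process has to call the equivalent of yield() itself before anything else can run.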

Related

System startup of multicore computer

I would really like to know how a multicore CPU starts when the computer starts up. I imagine there is a "dominant core" that loads the BIOS and later on the kernel into RAM, and then wakes up the rest of the cores, leaving them waiting for code to run (like an infinite while loop?). But that's only how I guess it works.
Another question: after the kernel is loaded into memory, all cores can make system calls, right? And how does one core control the tasks of the other cores? Which instructions are used (on x86 / x86-64)?
Yes, there is a boot CPU. The firmware handles that. It's usually CPU 0, but what if that one is missing or defective? Then it gets trickier.
On x86 platforms there are the ACPI tables, which describe the CPU and memory layouts. The operating system starts the other CPUs with IPIs (inter-processor interrupts), which kick them out of idle into the interrupt handlers (which were set up in memory) and then into operating system functions, which then choose threads to run and start doing useful things.
If you really want to know how it all works read the source code for Linux or one of the BSDs.
Update: it looks like I was wrong about IPIs. It does use interrupts, but not the normal IPI ones. The Linux SMP boot code is here: https://github.com/torvalds/linux/blob/master/arch/x86/kernel/smpboot.c
It seems to use an NMI or to reset the CPU.
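For reference, the textbook wake-up mechanism on x86 is the INIT-SIPI-SIPI sequence sent through the local APIC's interrupt command register (ICR); teaching kernels such as xv6 do exactly this in lapicstartap(). Below is a compilable dry-run sketch under stated assumptions: on real hardware lapic_write would be a volatile MMIO store into the APIC page, but here it just logs, and the trampoline address 0x8000 is an arbitrary illustrative choice.

```c
/* Dry-run sketch of the classic x86 INIT-SIPI-SIPI startup sequence. */
#include <stdint.h>
#include <stdio.h>

#define ICR_HI  0x310        /* destination APIC ID goes in bits 24..31 */
#define ICR_LO  0x300        /* delivery mode, level bits, vector       */
#define INIT    0x0500       /* delivery mode 101b: INIT                */
#define STARTUP 0x0600       /* delivery mode 110b: start-up IPI (SIPI) */
#define ASSERT  0x4000       /* level: assert                           */
#define LEVEL   0x8000       /* trigger mode: level                     */

static void lapic_write(uint32_t reg, uint32_t val) {
    printf("lapic[%#05x] <- %#010x\n", reg, val);   /* real code: MMIO store */
}

static void delay_us(unsigned us) { (void)us; }     /* busy-wait on hardware */

/* Wake one application processor (AP); it starts executing in real mode at
 * `trampoline`, a page-aligned physical address below 1 MiB. */
static void start_ap(uint8_t apic_id, uint32_t trampoline) {
    lapic_write(ICR_HI, (uint32_t)apic_id << 24);
    lapic_write(ICR_LO, INIT | LEVEL | ASSERT);     /* put AP in wait-for-SIPI */
    delay_us(200);
    lapic_write(ICR_LO, INIT | LEVEL);              /* de-assert INIT          */
    delay_us(10000);

    for (int i = 0; i < 2; i++) {                   /* two STARTUP IPIs        */
        lapic_write(ICR_HI, (uint32_t)apic_id << 24);
        lapic_write(ICR_LO, STARTUP | (trampoline >> 12)); /* vector = page #  */
        delay_us(200);
    }
}

int main(void) { start_ap(1, 0x8000); return 0; }
```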

What are some of the advantages and disadvantages of user mode and kernel mode

In an Operating System, threads are typically handled in user mode or kernel mode. What are some of the advantages and disadvantages of each?
User-mode threads are scheduled in user mode by something in the process, and the process itself is the only thing handled by the kernel scheduler.
That means your process gets a certain amount of grunt from the CPU and you have to share it amongst all your user mode threads.
Simple case: you have two processes, one with a single thread and one with a hundred threads.
With a simplistic kernel scheduling policy, the thread in the single-thread process gets 50% of the CPU and each thread in the hundred-thread process gets 0.5% each.
With kernel mode threads, the kernel itself manages your threads and schedules them independently. Using the same simplistic scheduler, each thread would get just a touch under 1% of the CPU grunt (101 threads to share the 100% of CPU).
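To make "scheduled in user mode by something in the process" concrete, here is a minimal cooperative user-mode threading sketch using the POSIX ucontext API (obsolescent, but still available on Linux/glibc; the names and step counts are illustrative). Both "threads" run inside one kernel-visible thread: every switch is a plain swapcontext() call in user space, invisible to the kernel scheduler.

```c
/* Minimal cooperative user-mode threads via ucontext (Linux/glibc). */
#include <stdio.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, t1_ctx, t2_ctx;

static void worker(int id, ucontext_t *self, ucontext_t *next) {
    for (int i = 0; i < 3; i++) {
        printf("user thread %d, step %d\n", id, i);
        swapcontext(self, next);   /* voluntary yield to the other thread */
    }
}

static void thread1(void) { worker(1, &t1_ctx, &t2_ctx); }
static void thread2(void) { worker(2, &t2_ctx, &t1_ctx); }

int main(void) {
    static char stack1[STACK_SIZE], stack2[STACK_SIZE];

    getcontext(&t1_ctx);
    t1_ctx.uc_stack.ss_sp   = stack1;
    t1_ctx.uc_stack.ss_size = sizeof stack1;
    t1_ctx.uc_link = &main_ctx;            /* where to go when it returns */
    makecontext(&t1_ctx, thread1, 0);

    getcontext(&t2_ctx);
    t2_ctx.uc_stack.ss_sp   = stack2;
    t2_ctx.uc_stack.ss_size = sizeof stack2;
    t2_ctx.uc_link = &main_ctx;
    makecontext(&t2_ctx, thread2, 0);

    swapcontext(&main_ctx, &t1_ctx);       /* hand the CPU to thread 1 */
    return 0;
}
```

The flip side is exactly the 50% / 0.5% arithmetic above: however many contexts you create this way, the kernel still schedules the whole thing as one thread of one process.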
In an Operating System, threads are typically handled in user mode or kernel mode.
Typically threads are handled in kernel mode.
What are some of the advantages and disadvantages of each?
In theory, the advantage of handling threads in user mode is that it avoids the cost of switching to/from the kernel when a thread needs to wait for something (which can be relatively expensive, as it involves privilege-level switches). In practice this "advantage" often doesn't materialize, because the thread has to switch to the kernel anyway to ask it to do whatever the thread would be waiting for (e.g. switching to the kernel to ask it to read data from a file, then returning to user space to block/wait, instead of just blocking/waiting in the kernel while you're already there). It only really helps when the kernel isn't involved at all, which essentially means user-space threads communicating with, or sharing locks with, other threads in the same process.
The advantage of handling threads in the kernel is that the kernel can support thread priorities properly. For example, if you have two processes that each have a very high-priority thread and a very low-priority thread, then the kernel can make sure CPU time is given to the high-priority threads whenever possible (including preempting low-priority threads when a high-priority thread unblocks), because it knows about all threads. User space can't do this: one process doesn't know about the threads belonging to a different process, so user-level threading will get it wrong and ruin performance (one process giving CPU time to its own very low-priority thread while a very high-priority thread belonging to a different process needs the CPU and doesn't get it).
The other advantage of handling threads in the kernel is that (especially for systems with multiple CPUs) the kernel has access to better information and can make smarter scheduling decisions. This includes balancing the load (from any number of processes) across all CPUs while taking "CPU topology" into account (NUMA, SMT, etc.; possibly including heterogeneous CPUs, e.g. "big.LITTLE" arrangements), and making trade-offs between thread priorities, CPU temperatures and power consumption (e.g. if one of the CPUs is getting too hot, reduce that CPU's clock speed to let it cool down and use it for low-priority threads so that the performance of high-priority threads isn't affected).
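As a concrete illustration of kernel-visible priorities, this sketch creates two Linux pthreads with explicit SCHED_FIFO priorities; because the kernel schedules them, the priorities are honored system-wide, across processes. The priority values are arbitrary, and setting real-time policies normally requires root or CAP_SYS_NICE; a sketch, not production code.

```c
/* Kernel-scheduled threads with explicit real-time priorities (Linux). */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void *work(void *name) {
    printf("%s running\n", (const char *)name);
    return NULL;
}

static pthread_t spawn(int prio, const char *name) {
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };
    pthread_t t;

    pthread_attr_init(&attr);
    /* Without PTHREAD_EXPLICIT_SCHED the new thread silently inherits the
     * creator's policy and the parameters below are ignored. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    if (pthread_create(&t, &attr, work, (void *)name) != 0) {
        fprintf(stderr, "pthread_create failed (missing CAP_SYS_NICE?)\n");
        exit(1);
    }
    pthread_attr_destroy(&attr);
    return t;
}

int main(void) {
    pthread_t hi = spawn(80, "high-priority thread");
    pthread_t lo = spawn(10, "low-priority thread");
    pthread_join(hi, NULL);
    pthread_join(lo, NULL);
    return 0;
}
```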

Is there a critical-section race condition in the case of non-preemptive kernels?

I am reading from the book "Operating system concepts", which says that
a non-preemptive kernel is free from race conditions on kernel data structures, as only one process is active at a time
I want to ask
Is this true only for a single processor? Because if it is a multiprocessor system, then it could have multiple processes running concurrently, which could be accessing the same kernel data.
Is there any need (if it is even possible) to use semaphores in a non-preemptive kernel system?
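The concern in point 1 is easy to demonstrate in user space. In this toy stand-in (my construction, not from the book), two threads play the role of two CPUs updating the same "kernel" counter with no lock; counter++ is a load-modify-write sequence, so concurrent increments get lost even though neither thread is ever preempted by the other.

```c
/* Toy demonstration of an SMP data race on shared "kernel" data. */
#include <pthread.h>
#include <stdio.h>

static long counter;   /* stands in for a shared kernel data structure */

static void *cpu(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;     /* unprotected read-modify-write: a data race */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, cpu, NULL);
    pthread_create(&b, NULL, cpu, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* On a multiprocessor this usually prints far less than 2000000. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

This suggests the book's guarantee only holds for a single processor; SMP kernels protect shared structures with spinlocks or semaphores even when each individual CPU runs kernel code non-preemptively.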

Medium term scheduler

I have read about the medium-term scheduler in Galvin's operating systems book.
It says:
Sometimes, it is advantageous to swap out the process when it is not executing[waiting for I/O or waiting for CPU] in order to decrease the degree of multiprogramming.
Also, we get more free physical memory, which makes the execution of other processes faster by decreasing the number of page faults (as we have more memory).
So, it's the job of the medium-term scheduler to swap out and swap in partially executed processes.
But my question is: is the work of the medium-term scheduler really important in scenarios where we have plenty of available physical/main memory?
The purpose of the medium-term scheduler is to improve multiprogramming by allowing multiple processes to reside in main memory: it swaps out processes that are waiting (need I/O) or have low priority, and swaps in other processes that were in the ready queue.
So you can see that we require a medium-term scheduler when we have limited memory. This swapping in and out does not take place when we are running a single small program and have large memory.
Similarly, if we are running multiple programs and we have very large memory (larger than the size of all processes plus additional space for other requirements), then the medium-term scheduler is not needed. Modern operating systems use paging, so instead of swapping whole processes they swap pages in and out of memory; in the same way, a system with very large (effectively infinite) memory would not suffer from page faults.
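Here is a toy simulation of the policy just described, with invented numbers, watermarks and victim-selection rule: when memory is oversubscribed, blocked processes are "swapped out" until free memory is back above a low watermark.

```c
/* Toy medium-term scheduler simulation (all values are illustrative). */
#include <stdio.h>

enum state { READY, BLOCKED, SWAPPED };
struct proc { int pid; int frames; enum state st; };

#define TOTAL_FRAMES  100
#define LOW_WATERMARK  20
#define NPROC           4

int main(void) {
    struct proc p[NPROC] = {
        { 1, 40, READY }, { 2, 30, BLOCKED }, { 3, 25, READY }, { 4, 20, BLOCKED },
    };
    int free_frames = TOTAL_FRAMES;
    for (int i = 0; i < NPROC; i++)
        free_frames -= p[i].frames;          /* memory is oversubscribed */

    /* Medium-term decision: prefer swapping out processes that are blocked
     * anyway, until we are back above the low watermark. */
    for (int i = 0; i < NPROC && free_frames < LOW_WATERMARK; i++) {
        if (p[i].st == BLOCKED) {
            free_frames += p[i].frames;
            p[i].st = SWAPPED;
            printf("swap out pid %d -> %d free frames\n", p[i].pid, free_frames);
        }
    }
    return 0;
}
```

With plenty of memory the loop body never runs, which matches the questioner's intuition: if free memory never drops below the watermark, the medium-term scheduler has nothing to do.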
Medium-term scheduling is part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
From the Tutorials Point operating systems tutorial:
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Advice on using hypervisor to run a Real Time OS in parallel with Windows/Linux

What is your advice/experience of using a hypervisor (e.g. the RTS Real-Time Hypervisor) to run an RTOS in parallel with a non-real-time OS? Are there any performance implications? Are there any risks involved? (For example, how can you ensure that the non-real-time OS will not interfere with the real-time behavior of the RTOS?)
From what I understand, a dual core (or hyperthreading) CPU has to be used so that you can assign each OS its own core.
No, it doesn't need dual cores or hyperthreading.
No, the non-RT tasks don't interfere with the RT ones.
The main idea is to have one RTOS, which executes tasks written specifically for this OS, using its own API. These tasks are assigned strict priority levels, where a higher-priority task will always take precedence over a lower-priority one. The lowest-priority tasks execute only as long as there is no other task available to run (that is, all the others are waiting for some event, either a timeout or an external signal).
All of this is just like a usual multitasking OS scheduler; it doesn't need multiple cores or hardware threads. It's just that the timing guarantees are radically different, and the available API reflects this fact.
In those hybrid implementations, there's a single lowest-priority task that runs a full non-RT OS kernel, usually Linux or some other Unix-like kernel (I don't know about Windows, but it should work the same way). Nowadays we call this architecture a hypervisor.
So, since the whole non-RT OS runs as the lowest-priority task, it has no guarantee of getting processing time at all; any RT task can interrupt it at any moment, even when it is accessing hardware. To keep this workable, the RT tasks usually have very limited access to the hardware, or there is minimal arbitration at a very low level. For example, an RT task might be allowed to interrupt a disk access (possibly resulting in an access error), but not a PCI transaction (as long as those are short-lived and time-bounded).
There have also been soft-RT extensions to the Linux scheduler for some time now, but the timing guarantees aren't as tight as in hard-RT OSes built with that goal in mind.
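As a small, concrete taste of the priority mechanics described above, the Linux sketch below promotes the calling process to a real-time SCHED_FIFO priority; from then on it preempts all ordinary SCHED_OTHER work on its CPU, much as the RT tasks preempt the guest OS in these hybrid designs. The priority value 50 is an arbitrary choice, and the call needs root or CAP_SYS_NICE.

```c
/* Promote this process to a real-time priority on Linux (sketch). */
#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param sp = { .sched_priority = 50 };  /* 1..99 for SCHED_FIFO */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) { /* 0 = this process */
        perror("sched_setscheduler");
        return 1;
    }
    /* This thread now runs before any SCHED_OTHER thread and is preempted
     * only by higher-priority real-time threads; the same rule lets an RTOS
     * starve the non-RT guest whenever it has work to do. */
    puts("running with SCHED_FIFO priority 50");
    return 0;
}
```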