How can the kernel run all the time when the CPU can execute only one process at a time? That is, if the kernel is occupying the CPU all the time, how do other processes get to run?
Please explain.
Thank you!
In the same way that you can run multiple userspace processes "at the same time": only one of them is actually using the CPU at any given moment, and timer interrupts force the running one to give it up.
Code that is part of the operating system is no different here (except that it is in control of setting up this scheduling in the first place).
You also have to distinguish between processes run by the OS in the background (I suppose that is what you are talking about here) and system calls (which run as part of "normal" processes that temporarily switch into supervisor mode).
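To make that concrete, here is a toy userspace simulation of the basic idea (my own sketch, not real kernel code): a periodic timer tick forcibly takes the CPU away from the running task and hands it to the next one, round robin.

```c
/* Toy sketch of time slicing: a periodic "timer tick" forces the
 * running task off the CPU and the scheduler picks the next one.
 * Userspace simulation for illustration only; the task count and
 * the round-robin policy are my own, not any real kernel's. */
#include <stdio.h>

#define NTASKS 3
#define NTICKS 6

int main(void) {
    int current = 0;                      /* task holding the CPU */
    for (int tick = 0; tick < NTICKS; tick++) {
        printf("tick %d: task %d is using the CPU\n", tick, current);
        /* The "timer interrupt": the running task is preempted
         * whether it likes it or not, and the next task runs. */
        current = (current + 1) % NTASKS;
    }
    return 0;
}
```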
I'm struggling to understand the difference between preemptive and nonpreemptive kernels, and between preemptive and nonpreemptive scheduling.
From Operating System Concepts (Ninth Edition), Silberschatz, Galvin and Gagne:
A preemptive kernel is where the kernel allows a process to be removed and replaced while it is running in kernel mode.
A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. - This seems to me to be the exact same description as the nonpreemptive kernel.
Preemptive scheduling occurs in these two situations (from the same book):
When a process switches from the running state to the ready state (for example, when an interrupt occurs)
When a process switches from the waiting state to the ready state (for example, at completion of I/O)
The book simply states that there is a choice in this scenario, but I'm not sure what the choice is. Is the choice whether the same process from the ready queue continues to run, or whether a different process from the ready queue is selected to run?
Basically, what I'm looking for is a clear explanation of these four terms.
Thank you!
The problem you face is that these terms have no standard meaning. I suspect that your book is using them from the point of view of some specific operating system (which one? I couldn't say). If you have searched the internet, you have certainly found conflicting explanations.
For example, preemptive scheduling can mean:
Scheduling that will interrupt a running process that does not yield the CPU.
Scheduling that will interrupt a running process before its quantum has expired.
Your book apparently has yet another definition; I cannot tell the meaning from the excerpt. It is entirely possible that the book is just confusing on this point (as it apparently is on so many others). One issue is that process states are system dependent, so defining the term in terms of process states is quite confusing.
This part of its definition makes sense:
Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
The preemptive part of the definition makes no sense.
In the case of the term preemptive kernel, that is pretty standard, and the description of it you give is fairly normal. That said, the book's statement should be a bit more refined, because every process has to be removed while it is in kernel mode. Normally, one would say something along the lines of: "In a non-preemptive kernel, a process cannot be removed when it has entered kernel mode through an exception."
A preemptive kernel is essential for real-time processing.
So you ask:
This seems to me to be the exact same description as the nonpreemptive kernel.
You have four theoretical combinations:
Preemptive scheduling + preemptive kernel:
The operating system can forcibly switch processes at nearly any time.
Non-preemptive scheduling + preemptive kernel:
This combination does not exist.
Non-preemptive scheduling + non-preemptive kernel:
The process has to explicitly yield to allow the operating system to switch to another process.
Preemptive scheduling + non-preemptive kernel:
The operating system can forcibly switch processes except when the process is executing in kernel mode to process an exception (there may be circumstances where the process cannot be switched while handling an interrupt as well).
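To see the behavioral difference, here is a small self-contained toy (my own illustration, not from the book) that schedules the same three jobs both ways: non-preemptively, where each job runs to completion, and preemptively, with a fixed one-tick quantum and round-robin selection.

```c
#include <stdio.h>

#define NJOBS 3

/* Run three jobs needing 3, 1 and 2 ticks of CPU. quantum == 0 means
 * non-preemptive (each job runs to completion); quantum == 1 means
 * preemptive round robin with a one-tick time slice. */
static void run(const char *label, int quantum)
{
    int left[NJOBS] = {3, 1, 2};
    int done = 0, j = 0;

    printf("%s:", label);
    while (done < NJOBS) {
        if (left[j] > 0) {
            int slice = (quantum > 0 && left[j] > quantum) ? quantum : left[j];
            left[j] -= slice;
            for (int t = 0; t < slice; t++)
                printf(" J%d", j);
            if (left[j] == 0)
                done++;
        }
        j = (j + 1) % NJOBS;            /* the scheduler moves on */
    }
    printf("\n");
}

int main(void)
{
    run("non-preemptive", 0);           /* prints J0 J0 J0 J1 J2 J2 */
    run("preemptive    ", 1);           /* prints J0 J1 J2 J0 J2 J0 */
    return 0;
}
```

In the non-preemptive run, J1 and J2 simply wait until J0 finishes; in the preemptive run, the scheduler forcibly rotates the CPU every tick.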
I would really like to know how a multicore CPU starts when the computer boots up. I imagine there is a "dominant core" that loads the BIOS and, later on, the kernel into RAM, and then wakes up the rest of the cores, leaving them waiting for code to run (like an infinite while loop?). But that's only how I guess it works.
Another question: after the kernel is loaded into memory, all cores can make system calls, right? And how does one core control the tasks of the other cores? Which instructions are used (on x86 / x86-64)?
Yes, there is a boot CPU; the firmware handles that. It's usually CPU 0, but what if that one is missing or defective? Then it gets trickier.
On x86 platforms there are the ACPI tables, which describe the CPU and memory layouts. The operating system starts the other CPUs with IPIs (inter-processor interrupts), which kick them out of idle into the interrupt handlers (which were set up in memory) and then into operating-system functions, which then choose threads to run and start doing useful things.
If you really want to know how it all works, read the source code for Linux or one of the BSDs.
Update: it looks like I was wrong about IPI. The startup does use interrupts, but not the normal IPI ones. The Linux SMP boot is here: https://github.com/torvalds/linux/blob/master/arch/x86/kernel/smpboot.c
It seems to use an NMI or a CPU reset.
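For concreteness, the classic wake-up mechanism from the Intel MultiProcessor specification is the "INIT-SIPI-SIPI" sequence sent through the boot CPU's local APIC. The sketch below is freestanding kernel-style code and cannot run from userspace; lapic, delay_us, and the trampoline address are illustrative assumptions, not code from any real kernel.

```c
#include <stdint.h>

/* Typical local APIC MMIO base; real code reads it from the
 * IA32_APIC_BASE MSR instead of hard-coding it. */
static volatile uint32_t *const lapic =
    (volatile uint32_t *)(uintptr_t)0xFEE00000u;

static void lapic_write(uint32_t reg, uint32_t val)
{
    lapic[reg / 4] = val;     /* registers are 32-bit, 16-byte spaced */
}

extern void delay_us(unsigned us);   /* assumed platform timer helper */

/* Wake one application processor (AP); `trampoline` is the physical
 * address of its real-mode startup code, page-aligned below 1 MiB
 * (e.g. 0x8000). */
void start_ap(uint8_t apic_id, uint32_t trampoline)
{
    lapic_write(0x310, (uint32_t)apic_id << 24);   /* ICR high: target  */
    lapic_write(0x300, 0x00004500);                /* INIT IPI, assert  */
    delay_us(10000);

    for (int i = 0; i < 2; i++) {                  /* MP spec: 2 SIPIs  */
        lapic_write(0x310, (uint32_t)apic_id << 24);
        lapic_write(0x300, 0x00004600 | (trampoline >> 12)); /* SIPI    */
        delay_us(200);
    }
    /* The AP now starts executing in real mode at `trampoline`. */
}
```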
Without going into details, how is a Monitor different from an OS?
I read that first there was serial processing in the earlier days, then monitors, and now operating systems.
Monitor in this context means Batch Monitor.
In the 1950s to mid-1960s, before we had true operating systems, we had batch monitors. You would "program" the job onto punch cards and put them on an input queue that the machine would process one by one.
The programmer would sit in front of a monitor, which would display memory dumps, debugging information, etc. It was an incredibly tedious process.
Of course, the major drawback of a batch monitor was that the CPU was often idle. Because CPU speeds are so much faster than I/O speeds, the machine would spend the majority of its time reading in the cards (I/O) while the CPU waited.
Nowadays, modern operating systems can run several processes at once and optimize CPU utilization. When a process on the run queue needs to do I/O, the OS puts it on another queue, and the CPU starts processing the next job. When the I/O is done, that process is moved back to the run queue. This way, the CPU is always doing something.
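As a toy illustration of that queue shuffle (the process names and the fake I/O events are mine, purely for illustration):

```c
/* Toy sketch of the run-queue / wait-queue shuffle described above.
 * When the running process starts I/O it is parked on the wait queue
 * and the CPU immediately takes the next ready process, so the CPU
 * stays busy while the I/O is in flight. */
#include <stdio.h>

#define N 3

int main(void)
{
    const char *proc[N] = {"P0", "P1", "P2"};
    int blocks_on_io[N] = {1, 0, 1};   /* P0 and P2 will request I/O */

    for (int i = 0; i < N; i++) {      /* walk the run queue */
        printf("%s gets the CPU\n", proc[i]);
        if (blocks_on_io[i])
            printf("  %s starts I/O -> moved to the wait queue; "
                   "CPU switches to the next ready process\n", proc[i]);
        else
            printf("  %s runs to completion\n", proc[i]);
    }
    printf("I/O completes -> P0 and P2 go back on the run queue\n");
    return 0;
}
```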
Edit:
After looking up "batch monitor" and not finding many references to it, it seems that it is more commonly referred to as a "batch system". Here's a book for reference (you should be able to find a PDF version online):
Modern Operating Systems.
If a process causes a lot of context switches, will the CPU cycles used in the context switch be shown in the process CPU utilization?
In other words, if I run a process that essentially executes a system call over and over, should the output of top show an increase in CPU utilization for the process because of the increased switching between user space and kernel space?
Yes, it should: the CPU time spent executing system calls on a process's behalf is charged to that process as system time, which is included in its per-process %CPU figure.
Look at the man pages for top and time on Linux (and possibly other *nix systems); time(1) reports that portion separately as "sys".
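An easy way to check this yourself: the loop below does nothing but issue system calls, so nearly all of its CPU time is spent crossing into the kernel, and on Linux that cost shows up as system time when you watch the process in top or run it under time. Issuing the syscall directly via syscall(SYS_getpid) sidesteps any library-side caching of getpid.

```c
/* Minimal sketch: burn CPU in system calls so the cost shows up as
 * system ("sys") time rather than user time. Linux-specific. */
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* A cheap syscall issued directly, bypassing any libc caching. */
        syscall(SYS_getpid);
    }
    return 0;   /* never reached; interrupt with Ctrl-C */
}
```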
What is your advice/experience with using a hypervisor (e.g. RTS Real-Time Hypervisor) to run an RTOS in parallel with a non-real-time OS? Are there any performance implications? Are there any risks involved? (For example, how can you ensure that the non-real-time OS will not interfere with the real-time behavior of the RTOS?)
From what I understand, a dual-core (or hyperthreaded) CPU has to be used so that you can assign each OS its own core.
No, it doesn't need a dual-core or hyperthreaded CPU.
No, the non-RT tasks don't interfere with the RT ones.
The main idea is to have one RTOS, which executes tasks written specifically for this OS, using its own API. These tasks are assigned strict priority levels, where a higher-priority task will always take precedence over a lower-priority one. The lowest-priority tasks will execute only as long as there is no other task available to run (that is, all the others are waiting for some event, either a timeout or an external signal).
All of this is just like a usual multitasking OS scheduler; it doesn't need multiple cores or hardware threads. It's just that the timing guarantees are radically different, and the available API reflects this fact.
In those hybrid implementations, there's a single lowest-priority task that runs a full non-RT OS kernel, usually Linux or some other Unix-like kernel (I don't know about Windows, but it should work the same). Nowadays, we call this architecture a hypervisor.
So, since the whole non-RT OS runs as the lowest-priority task, it doesn't have any guarantee of getting processing time at all; any RT task can interrupt it at any moment, even when it is accessing hardware. To keep this safe, the RT tasks usually have very limited access to the hardware, or there is minimal arbitration at a very low level. For example, an RT task can interrupt a disk access (possibly resulting in an access error), but not a PCI access (as long as those are short-lived and time-bounded).
There have also been some soft-RT extensions to the Linux scheduler for some time now, but the timing guarantees aren't as tight as in hard-RT OSes built with that in mind.
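On the Linux side, the strict-priority idea above maps onto the POSIX real-time scheduling classes. A minimal sketch follows (soft-RT only; the priority value 50 is an arbitrary choice of mine): a SCHED_FIFO task always preempts ordinary SCHED_OTHER work on its CPU, which is a small-scale version of "the non-RT world runs only when no RT task wants the CPU".

```c
/* Minimal sketch: give the current process a strict real-time priority
 * under the POSIX SCHED_FIFO policy (Linux soft-RT, not hard-RT).
 * SCHED_OTHER (normal) tasks on this CPU now run only when this task
 * is blocked. Needs root or CAP_SYS_NICE. Priority 50 is arbitrary. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };  /* 1..99 on Linux */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) { /* 0 = this process */
        perror("sched_setscheduler");
        return 1;
    }
    /* ... real-time work would go here; if it never blocks, it can
     * starve everything else on this CPU, so block or yield often. */
    return 0;
}
```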