Practical ways of implementing preemptive scheduling without hardware support? - operating-system

I understand that using hardware support for implementing preemptive scheduling is great for efficiency.
I want to know: what are practical ways we can do preemptive scheduling without support from hardware? I think one way is software timers.
Another way, in a multiprocessor system, is to have one processor act as a master that keeps watching what the slave processors are doing.
Note that I'm fine with an inefficient approach.
Please elaborate on all the ways you think or know can work, preferably (but not necessarily) ones that work on a single-processor system.

In order to preempt a process, the operating system has to somehow get control of the CPU without the process's cooperation. Or viewed from the other perspective: The CPU has to somehow decide to stop running the process's code and start running the operating system's code.
Just like processes can't run at the same time as other processes, they can't run at the same time as the OS. The CPU executes instructions in order, that's all it knows. It doesn't run two things at once.
So, here are some reasons for the CPU to switch to executing operating system code instead of process code:
A hardware device sends an interrupt to this CPU - such as a timer, a keypress, a network packet, or a hard drive finishing its operation.
The software running on a different CPU sends an inter-processor interrupt to this CPU.
The running process decides to call a function in the operating system. Depending on the CPU architecture, it could work like a normal call, or it could work like a fake interrupt.
The running process executes an instruction which causes an exception, like accessing unmapped memory, or dividing by zero.
Some kind of hardware debugging interface is used to overwrite the instruction pointer, causing the CPU to suddenly execute different code.
The CPU is actually a simulation and the OS is interpreting the process code, in which case the OS can decide to stop interpreting whenever it wants.
If none of the above things happen, OS code doesn't run. Most OSes re-evaluate which process should be running when a hardware event occurs that causes a process to be woken up, and also use a timer interrupt as a last resort to prevent one program from hogging all the CPU time.
Generally, when OS code runs, it has no obligation to return to the same place it was called from. "Preemption" is simply when the OS decides to jump somewhere other than the place it was called from.
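The last case in the list above (the OS interpreting the process's code) is the one that needs no hardware help at all, and it can be sketched in a few lines. This is an illustrative simulation, not any real OS: the "instructions" are just list items, and the interpreter is free to preempt at any instruction boundary.

```python
# Illustrative sketch (all names made up): an OS that *interprets* process
# "instructions" can preempt whenever it likes, with no hardware support.

def run_round_robin(processes, quantum):
    """Interpret each process's instruction list, preempting after `quantum` steps."""
    trace = []                              # which process ran which instruction
    ready = [(pid, iter(prog)) for pid, prog in processes.items()]
    while ready:
        pid, prog = ready.pop(0)            # pick the next ready process
        for _ in range(quantum):            # run at most one timeslice
            instr = next(prog, None)
            if instr is None:               # process finished: drop it
                break
            trace.append((pid, instr))      # "execute" one instruction
        else:
            ready.append((pid, prog))       # quantum expired: preempt, requeue

    return trace

trace = run_round_robin({"A": ["a1", "a2", "a3"], "B": ["b1", "b2"]}, quantum=2)
# → [("A","a1"), ("A","a2"), ("B","b1"), ("B","b2"), ("A","a3")]
```

Because the "CPU" here is the interpreter loop itself, control returns to the scheduler after every instruction; this is exactly how runtimes with green threads can multitask without any timer hardware.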

Related

Does the Operating System run on a CPU without being context-switched? [duplicate]

This question already has answers here:
How does the OS scheduler regain control of CPU?
I know that a single-CPU system can run only one process at any instant. My doubt is: how does the OS, being itself a separate process, run on the CPU while also managing to schedule other processes (which seems impossible, as only one process can run at a time on a single-CPU system)?
In other words, if another process is consuming the CPU at some moment, does the OS get context-switched out? And where does the OS run, given that it has to be active at all times to monitor the system?
I don't even know whether this is an appropriate question, but kindly let me know if you have an answer, or correct me if I am wrong!
Thanks in advance!
In a modern operating system the kernel, the core of the OS, is in complete control of how much time it allocates to the various user processes it's managing. It can interrupt the execution of a user process through various mechanisms provided by the CPU itself. This is called preempting the process, and it can be done on a schedule, like executing a user process for a particular number of nanoseconds before automatically interrupting it.
Older operating systems, like DOS, Windows 1.0 through 3.11, Mac OS 9 and earlier, plus many others, employed a different model where the user process is responsible for yielding control. If the process doesn't yield, there may be little recourse for reasserting control of the system. This can lead to crashes or lock-ups, a frequent problem with non-preemptive operating systems of all stripes.
Even then there is often hardware support for things like hardware timers that can trigger a particular chunk of code on a regular basis which can be used to rescue the system from a run-away process. Just because a bit of code is running is no guarantee that it will continue to run indefinitely, without interruption.
A modern CPU is a fantastically complicated piece of equipment. Those with support for things like CPU virtualization can make a single physical CPU behave as if it's a number of virtual CPUs all sharing the same hardware. Each of these virtual CPUs is free to do whatever it wants, including dividing up its time using either a pre-emptive or cooperative model, as well as splitting itself into even more virtual CPUs.
The long and the short of it here is to not assume that the kernel must be actively executing to be in control. It has a number of tools at its disposal to wrest control of the CPU back from any process that might be running.
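The cooperative model described above can be sketched with generators, where `yield` plays the role of the process voluntarily handing control back to the OS. This is a toy illustration, not a real scheduler; note that a task which never yields would monopolize this loop, which is exactly the failure mode described for DOS and Windows 3.x.

```python
# Cooperative multitasking sketch (illustrative names): each "process" is a
# generator, and yield is the voluntary "relinquish control to the OS" call.

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"         # yield = give control back to the scheduler

def cooperative_run(tasks):
    log = []
    while tasks:
        t = tasks.pop(0)
        try:
            log.append(next(t))     # let the task run until it yields...
            tasks.append(t)         # ...then requeue it at the back
        except StopIteration:
            pass                    # task finished; drop it from the queue
    return log

log = cooperative_run([task("A", 2), task("B", 1)])
# → ["A:0", "B:0", "A:1"]
```

If `task` contained an infinite loop with no `yield`, `next(t)` would never return and no other task would ever run again, and nothing in `cooperative_run` could do anything about it.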

How does a single processor execute the Operating System as well as the user program Simultaneously? [closed]

Okay, as we know, a single processor can execute one instruction at a time, which means a single processor can execute either the operating system's instructions or the user program's instructions at any given moment.
Now how is it possible that an operating system and a user program can run at the same time on a single processor?
Is the CPU assigned to a user program when you open the program, and assigned back to the operating system when you close the program?
Basically it is impossible to run two threads on a single processor core at once. However it is possible for the system to pretend to do this by swapping threads on and off the CPU. There are basically two ways to do this. Cooperative and Preemptive multitasking.
In the days of Windows 3, CPUs had a single core (I'm sure some big, expensive machines had more, but not ones that normal people got to see). Windows 3 didn't interrupt processes. Instead, processes had to periodically relinquish control to the OS, which would then continue the process again at a later time. This model is called cooperative multitasking.
Cooperative multitasking has a bit of an issue though. If a process fails to relinquish control to the OS (normally due to a bug) it can hog the system's resources and the system needs rebooting. This is why when Windows 95 was released Microsoft switched to a pre-emptive multitasking model.
With pre-emptive multitasking the hardware allows the OS to set an interrupt for a future time (how this is done varies by hardware system). This means that the OS can guarantee to get back on the CPU. When it does this, it stores the state (mainly the CPU registers) of the running thread and then loads a different one. This means that the OS always has control as it does not rely on the processes relinquishing control.
I'm sure other OS used pre-emptive multitasking before Windows 95 but it was Win 95 that really brought it to the mainstream on the PC.
Another issue that can occur is that one process tries to write to the memory used by another process, or tries to directly access some hardware without the operating system's permission. When the CPU starts up it is in real mode and loads the OS; the OS can then set up certain restrictions and switch the CPU to protected mode before running a process. While in protected mode, the CPU will stop the process from accessing memory addresses and hardware that the OS has not allowed, forcing the process to call back into the OS to access these resources.
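The "stores the state (mainly the CPU registers)" step mentioned above is the heart of a context switch, and it can be modelled in a few lines. This is a simulation with made-up names (`pc`, `regs`); a real kernel saves and restores the hardware register file in assembly.

```python
# Hedged sketch of the save/restore at the core of a preemptive context
# switch. The "CPU" is a dict standing in for the hardware register file.

class Thread:
    def __init__(self, name):
        self.name = name
        self.pc = 0                 # saved program counter
        self.regs = {}              # saved general-purpose registers

def context_switch(cpu, current, nxt):
    # 1. save the running thread's state out of the (simulated) CPU
    current.pc, current.regs = cpu["pc"], dict(cpu["regs"])
    # 2. load the next thread's previously saved state into the CPU
    cpu["pc"], cpu["regs"] = nxt.pc, dict(nxt.regs)
    return nxt                      # nxt is now the running thread

cpu = {"pc": 40, "regs": {"r0": 7}}          # thread A is mid-execution
a, b = Thread("A"), Thread("B")
b.pc, b.regs = 100, {"r0": 1}                # B was suspended earlier
running = context_switch(cpu, a, b)          # preempt A, resume B
```

After the switch, thread A's state is parked in its `Thread` object, so a later `context_switch(cpu, b, a)` would resume A exactly where it left off; the thread itself never has to cooperate.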
This is called preemption, or time slicing.
In simple terms:
There are multi-threaded CPUs which can manage multiple threads (streams of instructions).
But even that's not enough. The CPU has to split the workload; it does this by pausing a thread (via an interrupt) and working on another.
An average computer might have over a thousand threads running, but only 4 CPU cores (which can run only 4 threads at a time).
How does it do it?
A CPU that can run only 4 threads at a time must, to manage the thousands of other threads, pause one thread and work on another, then pause that one and work on yet another. This is called time slicing. Time is not the only factor; priorities and usage come into play too. CPUs are really fast and can do a switch in under 1 ms.
EDIT: "System Interrupts" is what manages all of this. It's not really a process in the usual sense, but this piece of Windows is what controls all thread execution.
Here is a simple explanation from http://doc.qt.io/qt-5/thread-basics.html:
So how is concurrency implemented? Parallel work on single-core CPUs is an illusion which is somewhat similar to the illusion of moving images in cinema. For processes, the illusion is produced by interrupting the processor's work on one process after a very short time. Then the processor moves on to the next process. In order to switch between processes, the current program counter is saved and the next process's program counter is loaded.

Why do we need a software interrupt to start the execution of a system call?

This may be a very foolish question to ask, but I want to clarify my doubts as I am new to this.
As per my understanding, the CPU executes the instructions of a process step by step, incrementing the program counter.
Now suppose we have a system call as one of those instructions: why do we need to issue a software interrupt when this instruction is encountered? Can't this system call (a sequence of instructions) be executed just as other instructions are? As far as I understand, interrupts are for signalling certain asynchronous events, but here the system call is part of the process's instructions, which is not asynchronous.
It doesn't require an interrupt. You could make an OS which uses a simple call. But most don't for several reasons. Among them might be:
On many architectures, interrupts elevate or change the CPU's access level, allowing the OS to implement protection of its memory from the unprivileged user code.
Preemptive operating systems already make use of interrupts for scheduling processes. It might be convenient to use the same mechanism.
Interrupts are something present on most architectures. Alternatives might require significant redesign across architectures.
Here is one example of a "system call" which doesn't use an interrupt (if you define a system call as requesting functionality from the OS):
Older versions of ARM do not provide atomic instructions to increment a counter. This means that an atomic increment requires help from the OS. The naive approach would be to make it a system call which makes the process uninterruptible during the load-add-store instructions, but this has a lot of overhead from the interrupt handler. Instead, the Linux kernel has chosen to map a small bit of code into every process at a fixed address. This code contains the atomic increment instructions and can be called directly from user code. The kernel scheduler takes care of ensuring that any operations interrupted in this block are properly restarted.
First of all, system calls are synchronous software interrupts, not asynchronous. When the processor executes the trap machine instruction to go to kernel space, some of the kernel registers get changed by the interrupt handler functions. Modification of these registers requires privileged mode execution, i.e. these can not be changed using user space code.
A user-space program cannot read data directly from disk, as it has no control over the device driver, and it should not bother with driver code. Communication with the device driver should take place through kernel code itself. We tend to believe that kernel code is pristine and entirely trustworthy; user code is always suspect.
Hence, privileged instructions are required to change the contents of these registers and/or access driver functionality, so the user cannot invoke a system call as a normal function call. The processor has to know that you are in kernel mode before it lets you access these devices.
I hope this is clear to some extent.
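One reason the trap is a single controlled entry point can be illustrated with a toy dispatch table. This is a simulation with invented names (`trap`, `SYSCALL_TABLE`); in a real kernel the table maps syscall numbers to privileged routines, and user code can only choose a number, never jump to arbitrary kernel addresses.

```python
# Illustrative sketch: user code passes only a syscall *number* plus
# arguments; the "kernel" decides which privileged routine (if any) runs.

SYSCALL_TABLE = {
    0: lambda *args: f"read({args})",    # stand-ins for privileged kernel code
    1: lambda *args: f"write({args})",
}

def trap(syscall_number, *args):
    """Single kernel entry point: validate the request, then dispatch."""
    handler = SYSCALL_TABLE.get(syscall_number)
    if handler is None:
        return "EINVAL"                  # unknown request is rejected, not run
    return handler(*args)

result = trap(1, "fd=1", "hello")        # user side of a "write" system call
```

Because every request funnels through `trap`, the kernel gets a chance to validate arguments and reject bad requests, which is exactly what a plain function call into kernel code would not guarantee.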

Is kernel a special program that executes always? and why are these CPU modes?

I am new to this OS stuff. Since the kernel controls the execution of all other programs and the resources they need, I think it should also be executed by the CPU. If so, where does it get executed? And if what the CPU executes is controlled by the kernel, then how does the kernel control the CPU while the CPU is executing the kernel itself?
It seems like a paradox to me; please explain. And by the way, I didn't get these CPU modes at all. If the kernel is controlling all the processes, why are there CPU modes? If they exist, are they implemented by the software (OS) or by the hardware itself?
Thank you.
A quick answer. On platforms like x86, the kernel has full control of the CPU's interrupt and context-switching abilities. So, although the kernel is not running most of the time, every so often it has a chance to decide which program the CPU will switch to and allow some running for that program. This part of the kernel is called the scheduler. Other than that the kernel gets a chance to execute every time a program makes a system call (such as a request to access some hardware, e.g. disk drive, etc.)
P.S. The fact that the kernel can stop a running program, seize control of the CPU, and schedule a different program is called preemptive multitasking.
UPDATE: About CPU modes, I assume you mean the x86-style rings? These are permission levels on the CPU for currently executing code, allowing the CPU to decide whether the program that is currently running is "the kernel" and can do whatever it wants, or perhaps it is a lower-permission-level program that cannot do certain things (such as force a context switch or fiddle with virtual memory)
There is no paradox:
The kernel is a "program" that runs on the machine it controls. It is loaded by the boot loader at the startup of the machine.
Its task is to provide services to applications and control applications.
To do so, it must control the machine that it is running on.
For details, read here: http://en.wikipedia.org/wiki/Operating_System

How can preemptive multitasking work, when OS is just one of the processes?

I am now reading materials about preemptive multitasking, and one thing escapes me.
All of the materials imply that the operating system somehow interrupts the running processes on the CPU from the "outside", thereby causing context switches and the like.
However, I can't imagine how that would work when the operating system's kernel is just another process on the CPU. When another process is already occupying the CPU, how can the OS cause the switch from the "outside"?
The OS is not just another process. The OS controls the behavior of the system when an interrupt occurs.
Before the scheduler starts a process, it arranges for a timer interrupt to be sent when the timeslice ends. Assuming nothing else happens before then, the timer will fire, and the kernel will take over control. If it elects to schedule a different process, it will switch things out to allow the other process to run and then return from the interrupt.
Hardware can signal the processor - this is called an "interrupt" - and when it occurs, control is transferred to the kernel, regardless of which process was executing at the time. This capability is built into the processor. Specifically, control is transferred to an "interrupt handler", which is a function within the kernel. The kernel can schedule a timer interrupt, for instance, so that this happens periodically. Once an interrupt occurs and control is transferred to the kernel, the kernel can pass control back to the originally executing process, or to another process that is scheduled.
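The "arrange a timer, then take control back" pattern can even be demonstrated from user space on Unix, where a `SIGALRM` signal stands in for the hardware timer interrupt and the signal handler plays the role of the kernel's interrupt handler. This is a Unix-only sketch and an analogy, not how a kernel is actually written.

```python
# Unix-only sketch: SIGALRM ~ timer interrupt, handler ~ kernel regaining
# control from a busy loop that never cooperates.

import signal

preempted = []

def timer_handler(signum, frame):
    preempted.append("kernel got control")   # a scheduler could switch here
    raise KeyboardInterrupt                  # force the "process" loop to stop

signal.signal(signal.SIGALRM, timer_handler)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # one-shot timer: fire in 50 ms

try:
    while True:                              # "user process" hogging the CPU
        pass
except KeyboardInterrupt:
    pass                                     # control was wrested back
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # cancel any pending timer
```

The busy loop never yields, yet the handler still runs, which is the whole point of preemption: the loop's cooperation was never required.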