How does the kernel gain control?

As far as I know, external events such as hardware interrupts or system calls can trigger a control transfer from user space to kernel space. But what if there is no external event or system call? Will the running process keep running forever?

It could, but typically the kernel programs a timer device to cause an interrupt at some point in the future. When that interrupt arrives, the kernel has the opportunity to decide what to do next.
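To make that concrete, here is a hedged sketch of what a toy x86 kernel might do to guarantee it regains control: program the legacy PIT (Intel 8253/8254) timer so that channel 0 raises IRQ0 at a fixed rate. The outb helper and the pit_set_frequency name are just illustrative, and this is freestanding kernel-style code rather than something you would run as a normal user process.

    #include <stdint.h>

    /* Minimal port-output helper, as a toy x86 kernel would define it. */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    #define PIT_BASE_HZ 1193182u   /* input clock of the legacy PIT */
    #define PIT_CMD     0x43       /* mode/command register         */
    #define PIT_CH0     0x40       /* channel 0 data port           */

    /* Ask the PIT for a periodic interrupt roughly `hz` times per second.
     * Once IRQ0 is routed to the kernel's timer handler, the kernel is
     * guaranteed to get control back no matter what the process does. */
    static void pit_set_frequency(uint32_t hz)
    {
        uint16_t divisor = (uint16_t)(PIT_BASE_HZ / hz);

        outb(PIT_CMD, 0x36);                      /* channel 0, lo/hi byte, mode 3 */
        outb(PIT_CH0, (uint8_t)(divisor & 0xff)); /* low byte of the divisor       */
        outb(PIT_CH0, (uint8_t)(divisor >> 8));   /* high byte of the divisor      */
    }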

Related

Practical ways of implementing preemptive scheduling without hardware support?

I understand that using hardware support for implementing preemptive scheduling is great for efficiency.
I want to know: what are practical ways to do preemptive scheduling without support from hardware? I think one way is software timers.
Another way, in a multiprocessor system, is to have one processor act as a master that keeps watching the slave processors.
I'm fine with an inefficient approach.
Please elaborate on all the ways you think or know can work, preferably (but not necessarily) ones that also work on a single-processor system.
In order to preempt a process, the operating system has to somehow get control of the CPU without the process's cooperation. Or viewed from the other perspective: The CPU has to somehow decide to stop running the process's code and start running the operating system's code.
Just like processes can't run at the same time as other processes, they can't run at the same time as the OS. The CPU executes instructions in order; that's all it knows. It doesn't run two things at once.
So, here are some reasons for the CPU to switch to executing operating system code instead of process code:
A hardware device sends an interrupt to this CPU - such as a timer, a keypress, a network packet, or a hard drive finishing its operation.
The software running on a different CPU sends an inter-processor interrupt to this CPU.
The running process decides to call a function in the operating system. Depending on the CPU architecture, it could work like a normal call, or it could work like a fake interrupt.
The running process executes an instruction which causes an exception, like accessing unmapped memory, or dividing by zero.
Some kind of hardware debugging interface is used to overwrite the instruction pointer, causing the CPU to suddenly execute different code.
The CPU is actually a simulation and the OS is interpreting the process code, in which case the OS can decide to stop interpreting whenever it wants.
If none of the above things happen, OS code doesn't run. Most OSes re-evaluate which process should be running when a hardware event occurs that wakes a process up, and they also use a timer interrupt as a last resort to prevent one program from hogging all the CPU time.
Generally, when OS code runs, it has no obligation to return to the same place it was called from. "Preemption" is simply when the OS decides to jump somewhere other than the place it was called from.
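As a user-space illustration of the "software timers" idea from the question, here is a toy preemptive round-robin scheduler built on POSIX setitimer and ucontext. All of the names here (worker, preempt, the 10 ms slice) are made up for this sketch, and calling swapcontext from a signal handler is not strictly async-signal-safe, but it shows the essential trick: a timer-driven handler switches contexts without any cooperation from the running code.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <ucontext.h>

    #define NTHREADS 2
    #define STACK_SZ (64 * 1024)

    static ucontext_t ctx[NTHREADS], main_ctx;
    static volatile int current = 0;

    /* The "timer interrupt": delivered as SIGVTALRM, it saves the running
     * context and resumes the next one, round-robin. */
    static void preempt(int sig)
    {
        (void)sig;
        int prev = current;
        current = (current + 1) % NTHREADS;
        swapcontext(&ctx[prev], &ctx[current]);
    }

    /* A worker that never yields voluntarily; it gets preempted anyway. */
    static void worker(int id)
    {
        for (;;) {
            for (volatile long i = 0; i < 50000000L; i++) ;
            printf("thread %d still running\n", id);
        }
    }

    int main(void)
    {
        for (int i = 0; i < NTHREADS; i++) {
            getcontext(&ctx[i]);
            ctx[i].uc_stack.ss_sp = malloc(STACK_SZ);
            ctx[i].uc_stack.ss_size = STACK_SZ;
            ctx[i].uc_link = &main_ctx;
            makecontext(&ctx[i], (void (*)(void))worker, 1, i);
        }

        signal(SIGVTALRM, preempt);

        /* 10 ms timeslice, measured in process CPU time. */
        struct itimerval tv = { .it_interval = {0, 10000}, .it_value = {0, 10000} };
        setitimer(ITIMER_VIRTUAL, &tv, NULL);

        swapcontext(&main_ctx, &ctx[0]);  /* start the first "thread"      */
        return 0;                         /* not reached; stop with Ctrl-C */
    }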

In an operating system, what is the difference between a system call and an interrupt?

In an operating system, what is the difference between a system call and an interrupt? Are all system calls interrupts? Are all interrupts system calls?
Short Answer:
They are different things.
A system call is a call made by software running on the OS to services provided by the OS.
An interrupt is usually an external hardware component notifying the CPU/microprocessor about an event that needs handling in software (usually a driver).
I say usually external because some interrupts can be raised by software (soft interrupts).
Are all system calls interrupts? Depends
Are all interrupts system calls? No
Long answer:
The OS manages CPU time and other hardware connected to the CPU (Memory (RAM), HDD, keyboard, to name a few). It exposes services that allow user programs to access the underlying hardware and these are system calls. Usually these deal with allocating memory, reading/writing files, printing a document and so on.
When the OS interacts with other hardware, it usually does so through a driver layer which sets up the task for the hardware to perform and arranges for an interrupt once the job is done; the printer, for example, may interrupt once the document is printed or when it runs out of pages. It is therefore often the case that a system call leads to the generation of interrupts.
Are all system calls interrupts? It depends, as they may be implemented as soft interrupts: when a user program makes a system call, it raises a soft interrupt that causes the OS to suspend the calling process, handle the request itself, and then resume the process. But, to quote Wikipedia,
"For many RISC processors this (interrupt) is the only technique provided, but
CISC architectures such as x86 support additional techniques. One
example is SYSCALL/SYSRET, SYSENTER/SYSEXIT (the two mechanisms were
independently created by AMD and Intel, respectively, but in essence
do the same thing). These are "fast" control transfer instructions
that are designed to quickly transfer control to the OS for a system
call without the overhead of an interrupt"
The answer to your question depends upon the underlying hardware (and sometimes operating system implementation). I will return to that in a bit.
In an operating system, what is the difference between a system call and an interrupt?
The purpose of an interrupt handler and a system call (and a fault handler) is largely the same: to switch the processor into kernel mode while providing protection from inadvertent or malicious access to kernel structures.
An interrupt is triggered by an asynchronous external event.
A system call (or fault or trap) is triggered synchronously by executing code.
Are all system calls interrupts? Are all interrupts system calls?
System calls are not interrupts because they are not triggered asynchronously by the hardware. A process continues to execute its code stream in a system call, but not in an interrupt.
That being said, Intel's documentation often lumps interrupts, system calls, traps, and faults together as "interrupts."
Some processors treat system calls, traps, faults and interrupts largely the same way. Others (notably Intel) provide different methods for implementing system calls.
In processors that handle all of the above in the same way, each type of interrupt, trap, and fault has a unique number. The processor expects the operating system to set up a vector (an array) of pointers to handlers. In addition, one or more handlers are available for the operating system to use to implement system calls.
Depending upon the number of available handlers, the OS may have a separate handler for each system call or use a register value to determine what specific system function to execute.
In such a system, one can execute an interrupt handler synchronously the same way one invokes a system call.
For example, on the VAX the CHMK #4 instruction invokes the 4th kernel-mode handler. In Intel land there is an INT instruction that does roughly the same thing.
Intel processors also support the SYSCALL mechanism, which provides a different way to implement system calls.
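To give a concrete feel for the "fast" path mentioned above, here is a hedged x86-64 Linux sketch that invokes write(2) directly with the SYSCALL instruction, bypassing libc. The older int $0x80 gate of the 32-bit ABI is the soft-interrupt style of entry; conceptually it does the same job with a different mechanism and different syscall numbers. The inline assembly follows the standard x86-64 Linux convention: syscall number in rax, arguments in rdi/rsi/rdx, with rcx and r11 clobbered by the instruction.

    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from a raw syscall\n";
        long ret;

        __asm__ volatile (
            "syscall"                      /* transfer control to the kernel */
            : "=a"(ret)                    /* return value comes back in rax */
            : "0"(1L),                     /* rax = 1 -> __NR_write          */
              "D"((long)STDOUT_FILENO),    /* rdi = file descriptor          */
              "S"(msg),                    /* rsi = buffer                   */
              "d"(sizeof msg - 1)          /* rdx = length                   */
            : "rcx", "r11", "memory");     /* SYSCALL clobbers rcx and r11   */

        return ret < 0 ? 1 : 0;
    }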

Do we have to enable or disable PCI interrupts on every layer, or only at the closest to hardware?

I'm implementing a PCIe driver, and I'd like to understand at what level interrupts can or should be enabled/disabled. I intentionally do not specify an OS, as I assume this is relevant for any platform. By levels I mean the following:
OS specific interrupts handling framework
Interrupts can be disabled or enabled in the PCI/PCIe configuration space registers, e.g. COMMAND register
Interrupts can also be masked at the device level; for instance, we can configure the device not to trigger certain interrupts to the host.
I understand that whatever interrupt type is being used on PCIe (INTx emulation, MSI or MSI-X), it has to be delivered to the host OS.
So my question really is: do we actually have to enable or disable interrupts on every layer, or is it sufficient to do so only at the layer closest to the hardware, e.g. in the relevant PCI registers?
Disabling interrupts at the various levels usually has completely different purposes.
Disabling interrupts:
In the OS (really, this means in the CPU) - This is generally about avoiding race conditions. In particular, if state/memory corruption could occur during a particular section of code if the CPU happened to be interrupted, then that section of code will need to disable interrupt handling. Interrupt handlers must not acquire normal locks (by definition they can't be suspended), and they must not attempt to acquire a spin-lock that is held by the thread currently scheduled on the same CPU (because that thread is blocked from progressing by the very same interrupt handler!) so ensuring data safety with interrupt handlers can be tricky. Handling interrupts promptly is generally a good thing, so you want to absolutely minimise such sections in any code you write. Do as much of your interrupt handling in secondary interrupt handlers as possible to avoid such situations. Secondary interrupt handlers are really just callbacks on a regular OS thread which doesn't have any of the restrictions of a primary interrupt handler.
PCI/PCIe configuration - It's my understanding this is mainly about routing interrupts, and is something you normally do once when your driver loads (or is activated by a client) and again when your driver unloads (or is deactivated). This may also be affected by power management events. In some OSes, the PCI(e) level is actually handled for you when you activate PCI device interrupts via higher-level APIs.
On-device - This is usually an optimisation to avoid interrupting the CPU when it doesn't need to be interrupted. The most common scenario is that an event happens on the device, so an interrupt is generated. The driver's primary interrupt handler checks the device registers to see whether the driver needs to do any processing. If so, it disables interrupts on the device and schedules the driver's secondary interrupt handler to run. The OS eventually runs the secondary handler, which processes whatever information the device has provided until it runs out of things to do. Then it enables interrupts again, checks once more whether any work is pending from the device, and if there is none, it terminates. (If there are items to process in this last check, it re-disables interrupts and starts over from the beginning.) The idea is that until the secondary interrupt handler has finished processing, there is no point in triggering the primary interrupt handler for additional events - it would just waste resources, because the driver is already busy processing the event queue. The final check after re-enabling interrupts avoids a race between an event arriving and interrupts being re-enabled.
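Here is a hedged, user-space sketch of that mask/drain/unmask/re-check pattern. The struct fake_device, its fields, and queue_secondary are hypothetical stand-ins (a real driver would read device registers and schedule a tasklet, workqueue item, or DPC); only the control flow is the point.

    #include <stdbool.h>
    #include <stdio.h>

    struct fake_device {
        bool irq_enabled;     /* device-level interrupt mask          */
        int  pending_events;  /* events the device has queued for us  */
    };

    static struct fake_device dev = { .irq_enabled = true, .pending_events = 3 };

    static void secondary_handler(void);

    /* Stand-in for "schedule the secondary handler on a normal thread". */
    static void queue_secondary(void) { secondary_handler(); }

    /* Primary (hard) interrupt handler: do the minimum, then defer. */
    static void primary_isr(void)
    {
        if (dev.pending_events == 0)
            return;                   /* spurious or shared interrupt      */
        dev.irq_enabled = false;      /* mask further on-device interrupts */
        queue_secondary();            /* defer the real work               */
    }

    /* Secondary (soft) handler: drain events, re-enable, re-check. */
    static void secondary_handler(void)
    {
        for (;;) {
            while (dev.pending_events > 0)
                printf("processing event, %d left\n", --dev.pending_events);

            dev.irq_enabled = true;         /* unmask on-device interrupts   */
            if (dev.pending_events == 0)    /* re-check to close the race    */
                break;
            dev.irq_enabled = false;        /* an event slipped in: go again */
        }
    }

    int main(void)
    {
        primary_isr();   /* pretend the device just raised an interrupt */
        return 0;
    }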
I hope that answers your question.

Why do we need a software interrupt to start the execution of a system call?

This may be a very foolish question to ask.
However, I want to clarify my doubts as I am new to this.
As per my understanding, the CPU executes the instructions of a process step by step, incrementing the program counter.
Now suppose one of those instructions is a system call. Why do we need to raise a software interrupt when this instruction is encountered? Can't this system call (a sequence of instructions) be executed just like the other instructions? As far as I understand, interrupts are meant to signal asynchronous events, but here the system call is part of the process's instruction stream, which is not asynchronous.
It doesn't require an interrupt. You could make an OS which uses a simple call. But most don't for several reasons. Among them might be:
On many architectures, interrupts elevate or change the CPU's access level, allowing the OS to implement protection of its memory from the unprivileged user code.
Preemptive operating systems already make use of interrupts for scheduling processes. It might be convenient to use the same mechanism.
Interrupts are something present on most architectures. Alternatives might require significant redesign across architectures.
Here is one example of a "system call" which doesn't use an interrupt (if you define a system call as requesting functionality from the OS):
Older versions of ARM do not provide atomic instructions to increment a counter. This means that an atomic increment requires help from the OS. The naive approach would be to make it a system call which makes the process uninterruptible during the load-add-store instructions, but this has a lot of overhead from the interrupt handler. Instead, the Linux kernel has chosen to map a small bit of code into every process at a fixed address. This code contains the atomic increment instructions and can be called directly from user code. The kernel scheduler takes care of ensuring that any operations interrupted in this block are properly restarted.
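To make that concrete: on 32-bit ARM Linux the kernel maps a helper page at a fixed address, and __kernel_cmpxchg lives at 0xffff0fc0. The typedef below follows the kernel's documented kuser-helper ABI; it only exists on 32-bit ARM kernels with the helpers enabled, so treat this as an illustration rather than portable code. A user program calls it like an ordinary function, with no trap into the kernel on the fast path, and can build an atomic increment on top of it.

    #include <stdio.h>

    /* Documented kuser-helper ABI (32-bit ARM Linux only): returns 0 if *ptr
     * was changed from oldval to newval, non-zero otherwise. */
    typedef int (__kernel_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
    #define __kernel_cmpxchg (*(__kernel_cmpxchg_t *)0xffff0fc0)

    /* Atomic increment built on the kernel-provided compare-and-swap. */
    static int atomic_inc(volatile int *ptr)
    {
        int old;
        do {
            old = *ptr;
        } while (__kernel_cmpxchg(old, old + 1, ptr) != 0); /* retry if we raced */
        return old + 1;
    }

    int main(void)
    {
        volatile int counter = 0;
        printf("counter is now %d\n", atomic_inc(&counter));
        return 0;
    }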
First of all, system calls are synchronous software interrupts, not asynchronous ones. When the processor executes the trap machine instruction to enter kernel space, some kernel registers are changed by the interrupt handler functions. Modifying these registers requires privileged-mode execution, i.e. they cannot be changed from user-space code.
A user-space program cannot read data directly from disk, because it does not have control over the device driver, and it should not have to bother with driver code. Communication with the device driver should take place through kernel code itself. We tend to believe that kernel code is pristine and entirely trustworthy, while user code is always suspect.
Hence, privileged instructions are needed to change the contents of these registers and/or to access driver functionality; the user cannot invoke system-call functions as normal function calls. The processor has to know whether you are in kernel mode before it allows access to these devices.
I hope this is clear to some extent.

How can preemptive multitasking work when the OS is just one of the processes?

I am now reading materials about preemptive multitasking - and one thing escapes me.
All of the materials imply that the operating system somehow interrupts the running processes on the CPU from the "outside", thereby causing context switching and the like.
However, I can't imagine how that would work when the operating system's kernel is just another process on the CPU. When another process is already occupying the CPU, how can the OS cause the switch from the "outside"?
The OS is not just another process. The OS controls the behavior of the system when an interrupt occurs.
Before the scheduler starts a process, it arranges for a timer interrupt to be sent when the timeslice ends. Assuming nothing else happens before then, the timer will fire, and the kernel will take over control. If it elects to schedule a different process, it will switch things out to allow the other process to run and then return from the interrupt.
Hardware can signal the processor - this is called an "interrupt" - and when it occurs, control is transferred to the kernel (regardless of which process was executing at the time). This capability is built into the processor. Specifically, control is transferred to an "interrupt handler", which is a function within the kernel. The kernel can schedule a timer interrupt, for instance, so that this happens periodically. Once an interrupt occurs and control is transferred to the kernel, the kernel can pass control back to the originally executing process, or to another process that is scheduled.
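A small runnable illustration of that last point, assuming Linux or another POSIX system: the program below spins without ever blocking or yielding, yet getrusage() will usually report involuntary context switches afterwards, i.e. moments when the kernel preempted it from the "outside". (The count is typically non-zero only when other runnable tasks are competing for the same CPU.)

    #include <stdio.h>
    #include <sys/resource.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);

        /* Busy-wait for about 2 seconds without ever blocking or yielding. */
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while (now.tv_sec - start.tv_sec < 2);

        /* ru_nivcsw counts involuntary context switches, i.e. preemptions. */
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }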