In an operating system, what is the difference between a system call and an interrupt? - operating-system

In an operating system, what is the difference between a system call and an interrupt? Are all system calls interrupts? Are all interrupts system calls?

Short Answer:
They are different things.
A system call is a call made by software running on the OS to services
provided by the OS.
An interrupt is usually an external hardware component notifying the CPU/microprocessor about an event that needs handling in software (usually a driver).
I say usually external, because some interrupts can be raised by software (a soft interrupt).
Are all system calls interrupts? It depends.
Are all interrupts system calls? No
Long answer:
The OS manages CPU time and other hardware connected to the CPU (Memory (RAM), HDD, keyboard, to name a few). It exposes services that allow user programs to access the underlying hardware and these are system calls. Usually these deal with allocating memory, reading/writing files, printing a document and so on.
When the OS interacts with other hardware it usually does so through a driver layer, which sets up the task for the hardware to perform; the hardware interrupts once the job is done, so the printer may interrupt once the document is printed or when it runs out of paper. It is therefore often the case that a system call leads to the generation of interrupts.
Are all system calls interrupts? It depends, as they may be implemented as soft interrupts. When a user program makes a system call, it raises a soft interrupt that results in the OS suspending the calling process, handling the request itself, and then resuming the process. But, and I quote from Wikipedia,
"For many RISC processors this (interrupt) is the only technique provided, but
CISC architectures such as x86 support additional techniques. One
example is SYSCALL/SYSRET, SYSENTER/SYSEXIT (the two mechanisms were
independently created by AMD and Intel, respectively, but in essence
do the same thing). These are "fast" control transfer instructions
that are designed to quickly transfer control to the OS for a system
call without the overhead of an interrupt"
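For concreteness, here is a minimal user-space sketch (assuming Linux with glibc) that requests the same kernel service through the ordinary C library wrapper and through the generic syscall(2) wrapper; which trap mechanism is used underneath (software interrupt, SYSENTER, or SYSCALL) is chosen by the C library and the kernel for the particular architecture.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>   /* SYS_write, SYS_getpid */

    int main(void)
    {
        const char msg[] = "hello via write()\n";

        /* Ordinary library wrapper: eventually traps into the kernel. */
        write(STDOUT_FILENO, msg, sizeof msg - 1);

        /* Generic wrapper: same kernel service, selected by number. */
        syscall(SYS_write, STDOUT_FILENO, "hello via syscall()\n", 20);

        printf("pid via syscall(SYS_getpid) = %ld\n", (long)syscall(SYS_getpid));
        return 0;
    }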

The answer to your question depends upon the underlying hardware (and sometimes operating system implementation). I will return to that in a bit.
In an operating system, what is the difference between a system call and an interrupt?
The purpose of an interrupt handler and a system call (and a fault handler) is largely the same: to switch the processor into kernel mode while providing protection from inadvertent or malicious access to kernel structures.
An interrupt is triggered by an asynchronous external event.
A system call (or fault or trap) is triggered synchronously by executing code.
Are all system calls interrupts? Are all interrupts system calls?
System calls are not interrupts because they are not triggered asynchronously by the hardware. A process continues to execute its code stream in a system call, but not in an interrupt.
That being said, Intel's documentation often conflates interrupts, system calls, traps, and faults under the single term "interrupt."
Some processors treat system calls, traps, faults and interrupts largely the same way. Others (notably Intel) provide different methods for implementing system calls.
In processors that handle all of the above in the same way, each type of interrupt, trap, and fault has a unique number. The processor expects the operating system to set up a vector (array) of pointers to handlers. In addition, there are one or more handlers available for an operating system to use to implement system calls.
Depending upon the number of available handlers, the OS may have a separate handler for each system call or use a register value to determine what specific system function to execute.
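To illustrate the dispatch-by-number idea, here is a hedged, self-contained C sketch of how a kernel might map a system-call number (taken from a register) onto a table of handler functions; the handler names and the three-argument convention are purely hypothetical.

    #include <errno.h>
    #include <stddef.h>

    typedef long (*syscall_fn)(long a0, long a1, long a2);

    /* Hypothetical handlers; a real kernel has one per service it offers. */
    static long sys_getpid_stub(long a0, long a1, long a2) { (void)a0; (void)a1; (void)a2; return 42; }
    static long sys_write_stub(long fd, long buf, long len) { (void)fd; (void)buf; return len; }

    static const syscall_fn syscall_table[] = {
        sys_getpid_stub,    /* system call number 0 (made-up numbering) */
        sys_write_stub,     /* system call number 1                     */
    };

    /* Called from the single trap/interrupt handler; 'nr' was loaded from
       the register the calling convention designates for the call number. */
    long syscall_dispatch(long nr, long a0, long a1, long a2)
    {
        if (nr < 0 || (size_t)nr >= sizeof syscall_table / sizeof syscall_table[0])
            return -ENOSYS;             /* unknown system call */
        return syscall_table[nr](a0, a1, a2);
    }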
In such a system, one can execute an interrupt handler synchronously the same way one invokes a system call.
For example, on the VAX, the CHMK #4 instruction invokes the 4th kernel-mode handler. In Intel land, the INT instruction does roughly the same.
Intel processors have supported the SYSCALL mechanism that provides a different way to implement system calls.
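To show what the "fast" path looks like from user space, here is a hedged sketch assuming x86-64 Linux (where write is system call number 1) that issues the SYSCALL instruction directly via GCC/Clang inline assembly; on other architectures or operating systems the numbers and registers differ.

    #include <stddef.h>

    /* Invoke write(2) with the raw SYSCALL instruction. x86-64 Linux ABI:
       call number in rax, arguments in rdi/rsi/rdx, rcx and r11 clobbered. */
    static long raw_write(int fd, const void *buf, size_t len)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1L) /* __NR_write on x86-64 */,
                            "D"((long)fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void)
    {
        raw_write(1, "no INT, no libc wrapper\n", 24);
        return 0;
    }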

Related

Practical ways of implementing preemptive scheduling without hardware support?

I understand that using hardware support for implementing preemptive scheduling is great for efficiency.
I want to know: what are practical ways to do preemptive scheduling without support from hardware? I think one way is software timers.
Another way, in a multiprocessor system, is to have one processor act as a master that keeps watching the slave processors.
I'm fine with inefficient approaches.
Please elaborate on all the ways you think can work, preferably (but not necessarily) ones that work on a single-processor system.
In order to preempt a process, the operating system has to somehow get control of the CPU without the process's cooperation. Or viewed from the other perspective: The CPU has to somehow decide to stop running the process's code and start running the operating system's code.
Just like processes can't run at the same time as other processes, they can't run at the same time as the OS. The CPU executes instructions in order, that's all it knows. It doesn't run two things at once.
So, here are some reasons for the CPU to switch to executing operating system code instead of process code:
A hardware device sends an interrupt to this CPU - such as a timer, a keypress, a network packet, or a hard drive finishing its operation.
The software running on a different CPU sends an inter-processor interrupt to this CPU.
The running process decides to call a function in the operating system. Depending on the CPU architecture, it could work like a normal call, or it could work like a fake interrupt.
The running process executes an instruction which causes an exception, like accessing unmapped memory, or dividing by zero.
Some kind of hardware debugging interface is used to overwrite the instruction pointer, causing the CPU to suddenly execute different code.
The CPU is actually a simulation and the OS is interpreting the process code, in which case the OS can decide to stop interpreting whenever it wants.
If none of the above things happen, OS code doesn't run. Most OSes will re-evaluate which process should be running when a hardware event occurs that causes a process to be woken up, and will also use a timer interrupt as a last resort to prevent one program from hogging all the CPU time.
Generally, when OS code runs, it has no obligation to return to the same place it was called from. "Preemption" is simply when the OS decides to jump somewhere other than the place it was called from.
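The last item in the list above (the OS interpreting the process's code) is the one approach that needs no hardware help at all. A hedged sketch of the idea, with a made-up bytecode and a fixed instruction budget per time slice, might look like this:

    #include <stdio.h>

    /* Two "processes" running on a purely interpreted CPU: the "OS" (this
       loop) preempts each one after a fixed instruction budget, with no
       interrupts involved at all. The bytecode is made up.               */
    enum { OP_INC, OP_PRINT, OP_JMP0 };

    struct vcpu { const int *code; int pc; long acc; };

    static void step(struct vcpu *v, int id)
    {
        switch (v->code[v->pc++]) {
        case OP_INC:   v->acc++;                                    break;
        case OP_PRINT: printf("process %d: acc=%ld\n", id, v->acc); break;
        case OP_JMP0:  v->pc = 0;                                   break;
        }
    }

    int main(void)
    {
        static const int prog[] = { OP_INC, OP_INC, OP_PRINT, OP_JMP0 };
        struct vcpu procs[2] = { { prog, 0, 0 }, { prog, 0, 100 } };

        for (int slice = 0; slice < 6; slice++) {
            int id = slice % 2;                        /* "scheduler" choice  */
            for (int budget = 0; budget < 5; budget++) /* preempt after 5 ops */
                step(&procs[id], id);
        }
        return 0;
    }

Real systems that work this way are interpreters and virtual machines that count executed instructions (or check a flag) and hand control back to their scheduler once the budget runs out.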

How do the operating system's code and user applications' code run on the same processor?

We all know that the operating system is responsible for managing the resources needed by user applications. But the OS is itself a piece of code that runs, so how does it manage other user programs?
Does the OS run on a dedicated processor and monitor the user programs on some other processor?
How does the OS actually handle user applications?
It depends upon the structure of the operating system. In any modern operating system, the kernel is invoked through exceptions or interrupts. The operating system "monitors" processes during interrupts. An operating system schedules timer interrupts; when the timer goes off, the interrupt handler determines whether it needs to switch to a different process.
Another OS management path is through exceptions. An application invokes the operating system through exceptions, and an exception handler can also cause the operating system to switch to another process. If a process invokes a read-and-wait system service, that exception handler will certainly switch to a new process.
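A hedged, pseudo-kernel sketch of the timer-interrupt path described above (every name here is hypothetical rather than any particular kernel's API) could look like this:

    /* Hypothetical kernel-style sketch: the timer interrupt handler runs in
       kernel mode, charges the tick to the current process, and switches to
       another runnable process once the time slice is used up.             */
    #define DEFAULT_SLICE 10            /* ticks per time slice (made up)    */

    struct process {
        int time_slice;
        /* ... saved registers, state, accounting ... */
    };

    extern struct process *current;     /* process that was interrupted      */
    extern void schedule(void);         /* pick and switch to the next
                                           runnable process (hypothetical)   */

    void timer_interrupt_handler(void)
    {
        if (--current->time_slice > 0)
            return;                     /* let the same process continue     */

        current->time_slice = DEFAULT_SLICE;
        schedule();                     /* preemption happens here           */
    }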
In ye olde days, it was common for multi-processors to have one processor that was the dedicated master and was the only processor to handle certain tasks. Now, all normal operating systems use symmetric multi-processing where any processor can handle any task.
An entire book is needed to answer your too broad question.
Read Operating System: Three Easy Pieces (a freely downloadable book).
Does the OS run on a dedicated processor and monitor the user programs on some other processor?
In general, no. The same processor (or core) is either in user mode (for user programs; read about user space, process isolation, and protection rings) or in supervisor mode (for the operating system kernel).
How does the OS actually handle user applications?
Often by providing system calls, which are invoked in a controlled way from applications.
Some academic OSes, e.g. Singularity, have been designed with other principles in mind (formal proof techniques for isolation).
Read also about micro-kernels, unikernels, etc.

Hardware supported mutual exclusion

I'm currently taking a class in Operating Systems, and everything has been smooth until I encountered Concurrency and Mutual Exclusion.
Up until this chapter in the text I am currently reading, I was under the impression that the OS handled calls to certain I/O operations such as printers through queues and interrupts, and the OS also handled the scheduling of processes.
But in the section "Mutual exclusion: Hardware support", it states that for a process to guarantee mutual exclusion it is sufficient to block all interrupts, which can be done through interrupt disabling; however, the cost is high since the processor is limited in its ability to interleave (Stallings, p. 211).
If this is a capability, what's stopping a programmer from placing his entire program within a critical section by disabling interrupts? And why can't the OS handle calls to critical resources in the way that was previously stated (I/O queues & interrupts), instead of relying on programmers to identify their critical sections?
I understand the need to identify critical sections involving shared variables and memory, but I am baffled as to why a program needs to identify its critical sections with regard to I/O devices such as printers, and why the OS can't.
This is not [entirely] correct:
But in the section "Mutual exclusion: Hardware support", it states that for a process to guarantee mutual exclusion it is sufficient to block all interrupts, which can be done through interrupt disabling; however, the cost is high since the processor is limited in its ability to interleave.
Processors generally support multiple means of synchronization. The simplest is uninterruptible instructions. These are generally short instructions, such as "set a bit" or "branch if the bit was already set". Such instructions allow synchronization within a single processor.
As you mention, disabling interrupts is another method. Generally, interrupts have priorities, and usually you can disable all interrupts that have a priority lower than a specified level. That allows disabling all, or only some, interrupts.
Disabling interrupts only works when locking resources that are not shared by multiple processors.
That is why the quote, in the context you give it, is not [entirely] correct. Disabling interrupts on one processor does not synchronize anything when there are multiple processors. In theory, an operating system could disable all interrupts on all processors, but such a system would be seriously brain-damaged because that would hamper the performance of a multi-processor system. It might still work in, say, a quick-and-dirty student-project operating system.
If this is a capability, what's stopping a programmer from placing his entire program within a critical section by disabling interrupts?
Disabling interrupts is only possible in kernel mode.
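A quick way to convince yourself of this (assuming x86 or x86-64 Linux): the CLI instruction, which disables interrupts, is privileged, so executing it from user mode does not disable anything; the CPU raises a general-protection fault and the kernel delivers SIGSEGV to the process.

    /* Sketch: attempting the privileged CLI instruction from user mode.
       On x86/x86-64 Linux this raises a general-protection fault, which
       the kernel delivers to the process as SIGSEGV.                    */
    #include <stdio.h>

    int main(void)
    {
        puts("about to execute CLI from user mode...");
        __asm__ volatile ("cli");   /* privileged: faults here in user mode */
        puts("this line is never reached");
        return 0;
    }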
Another method of hardware synchronization is interlocked instructions. These are instructions that lock the memory of their operands and prevent other processors from accessing that memory while the instruction is executing. Typical examples are interlocked integer add, and interlocked bit-set (or clear) and branch instructions.
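As a user-level illustration of such interlocked instructions, here is a minimal test-and-set spinlock sketch using C11 atomics; the compiler lowers atomic_flag_test_and_set to the platform's interlocked instruction (for example a locked XCHG on x86 or an LDREX/STREX loop on ARM).

    #include <stdatomic.h>

    /* Minimal test-and-set spinlock built on C11 atomics. */
    typedef struct {
        atomic_flag locked;
    } spinlock_t;

    #define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

    static void spin_lock(spinlock_t *l)
    {
        /* Atomically set the flag; keep spinning while it was already set. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;   /* busy-wait; real code would add a pause/yield hint here */
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

Unlike disabling interrupts, this works across multiple processors, because exclusion is enforced by the memory system rather than by one CPU's interrupt state.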

Do we have to enable or disable PCI interrupts on every layer, or only at the closest to hardware?

I'm implementing a PCIe driver, and I'd like to understand at what level interrupts can be or should be enabled/disabled. I intentionally do not specify the OS, as I assume the question is relevant for any platform. By levels I mean the following:
The OS-specific interrupt-handling framework
Interrupts can be disabled or enabled in the PCI/PCIe configuration space registers, e.g. the COMMAND register
Interrupts can also be masked at the device level; for instance, we can configure the device not to trigger certain interrupts to the host
I understand that whatever interrupt type is being used on PCIe (INTx emulation, MSI or MSI-X), it has to be delivered to the host OS.
So my question really is: do we actually have to enable or disable interrupts on every layer, or is it sufficient to do so only at the level closest to the hardware, e.g. in the relevant PCI registers?
Disabling interrupts at the various levels usually has completely different purposes.
Disabling interrupts:
In the OS (really, this means in the CPU) - This is generally about avoiding race conditions. In particular, if state/memory corruption could occur during a particular section of code if the CPU happened to be interrupted, then that section of code will need to disable interrupt handling. Interrupt handlers must not acquire normal locks (by definition they can't be suspended), and they must not attempt to acquire a spin-lock that is held by the thread currently scheduled on the same CPU (because that thread is blocked from progressing by the very same interrupt handler!) so ensuring data safety with interrupt handlers can be tricky. Handling interrupts promptly is generally a good thing, so you want to absolutely minimise such sections in any code you write. Do as much of your interrupt handling in secondary interrupt handlers as possible to avoid such situations. Secondary interrupt handlers are really just callbacks on a regular OS thread which doesn't have any of the restrictions of a primary interrupt handler.
PCI/PCIe configuration - It's my understanding this is mainly about routing interrupts, and is something you normally do once when your driver loads (or is activated by a client) and again when your driver unloads (or is deactivated). This may also be affected by power management events. In some OSes, the PCI(e) level is actually handled for you when you activate PCI device interrupts via higher-level APIs.
On-device - This is usually an optimisation to avoid interrupting the CPU when it doesn't need to be interrupted. The most common scenario is that an event happens on the device, so an interrupt is generated. The driver's primary interrupt handler checks the device registers to see whether the driver needs to do any processing. If so, it disables interrupts on the device and schedules the driver's secondary interrupt handler to run. The OS eventually runs the secondary handler, which processes whatever information the device has provided until it runs out of things to do. Then it enables interrupts again, checks once more whether there is any work pending from the device, and if there is none, it terminates. (If there are items to process in this last check, it re-disables interrupts and starts over from the beginning.) The idea is that until the secondary interrupt handler has finished processing, there is no point in triggering the primary interrupt handler when additional events arrive; it would only waste resources, because the driver is already busy processing the event queue. The final check after re-enabling interrupts avoids a race condition between an event arriving and interrupts being re-enabled.
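To make the on-device pattern above concrete, here is a hedged, Linux-flavoured sketch. The MYDEV_* registers and the mydev structure are hypothetical stand-ins for a real device, while request_irq, schedule_work and friends are the usual kernel interfaces for pairing a primary handler with a deferred (secondary) handler.

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>
    #include <linux/io.h>
    #include <linux/kernel.h>

    #define MYDEV_ISTATUS 0x00       /* hypothetical register: pending-event bits */
    #define MYDEV_IMASK   0x04       /* hypothetical register: nonzero = enabled  */

    struct mydev {
        void __iomem *regs;
        struct work_struct bh;       /* deferred ("secondary") handler */
    };

    /* Primary handler: runs in interrupt context, does minimal work.
       Registered at probe time with something like:
           INIT_WORK(&d->bh, mydev_work);
           request_irq(irq, mydev_irq, IRQF_SHARED, "mydev", d);          */
    static irqreturn_t mydev_irq(int irq, void *data)
    {
        struct mydev *d = data;

        if (!readl(d->regs + MYDEV_ISTATUS))
            return IRQ_NONE;                    /* not our device               */

        writel(0, d->regs + MYDEV_IMASK);       /* mask interrupts on the device */
        schedule_work(&d->bh);                  /* defer the real processing    */
        return IRQ_HANDLED;
    }

    /* Secondary handler: ordinary process context, may sleep and take locks. */
    static void mydev_work(struct work_struct *w)
    {
        struct mydev *d = container_of(w, struct mydev, bh);

        for (;;) {
            /* ... drain and process the device's event queue here ... */

            writel(~0u, d->regs + MYDEV_IMASK); /* re-enable device interrupts  */
            if (!readl(d->regs + MYDEV_ISTATUS))
                break;                          /* nothing raced in: done       */
            writel(0, d->regs + MYDEV_IMASK);   /* events raced in: mask again
                                                   and go around once more      */
        }
    }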
I hope that answers your question.

Why do we need a software interrupt to start the execution of a system call?

This may be a very foolish question to ask.
However, I want to clarify my doubts, as I am new to this.
As per my understanding, the CPU executes the instructions of a process step by step by incrementing the program counter.
Now suppose one of those instructions is a system call: why do we need to issue a software interrupt when this instruction is encountered? Can't this system call (a sequence of instructions) be executed just as other instructions are executed? As far as I understand, interrupts are there to signal certain asynchronous events, but here the system call is part of the process's instructions, which is not asynchronous.
It doesn't require an interrupt. You could make an OS which uses a simple call. But most don't for several reasons. Among them might be:
On many architectures, interrupts elevate or change the CPU's access level, allowing the OS to implement protection of its memory from the unprivileged user code.
Preemptive operating systems already make use of interrupts for scheduling processes. It might be convenient to use the same mechanism.
Interrupts are something present on most architectures. Alternatives might require significant redesign across architectures.
Here is one example of a "system call" which doesn't use an interrupt (if you define a system call as requesting functionality from the OS):
Older versions of ARM do not provide atomic instructions to increment a counter. This means that an atomic increment requires help from the OS. The naive approach would be to make it a system call which makes the process uninterruptible during the load-add-store instructions, but this has a lot of overhead from the interrupt handler. Instead, the Linux kernel has chosen to map a small bit of code into every process at a fixed address. This code contains the atomic increment instructions and can be called directly from user code. The kernel scheduler takes care of ensuring that any operations interrupted in this block are properly restarted.
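A hedged sketch of what calling that kernel-provided code looks like from a 32-bit ARM Linux process: the fixed address (commonly documented as 0xffff0fc0 for __kernel_cmpxchg) and its signature are taken from the kernel's kuser-helper documentation, so treat them as assumptions of this example rather than portable facts.

    /* 32-bit ARM Linux only: the kernel maps a few helper routines into
       every process at fixed addresses ("kuser helpers"). This sketch uses
       __kernel_cmpxchg, the compare-and-swap helper the answer refers to;
       the address and signature follow the kernel's documentation.        */
    typedef int (kernel_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
    #define __kernel_cmpxchg (*(kernel_cmpxchg_t *)0xffff0fc0)

    /* Atomic increment built on the kernel-assisted compare-and-swap:
       retry until the swap succeeds (the helper returns 0 on success). */
    static int atomic_add_one(volatile int *counter)
    {
        int old;
        do {
            old = *counter;
        } while (__kernel_cmpxchg(old, old + 1, counter) != 0);
        return old + 1;
    }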
First of all, system calls are synchronous software interrupts, not asynchronous ones. When the processor executes the trap machine instruction to go to kernel space, some of the kernel registers get changed by the interrupt handler functions. Modifying these registers requires privileged-mode execution, i.e. they cannot be changed from user-space code.
A user-space program cannot read data directly from disk, as it doesn't have control over the device driver, and it should not have to bother with driver code. Communication with the device driver should take place through kernel code itself. We tend to believe that kernel code is pristine and entirely trustworthy; user code is always suspect.
Hence, changing the contents of these registers and/or accessing driver functionality requires privileged instructions, and the user cannot execute system-call functions as a normal function call. The processor has to know whether you are in kernel mode before it lets you access these devices.
I hope this is clear to some extent.