What are the differences between clock and I/O interrupts?
As I understand it, a clock interrupt uses the system clock to interrupt the CPU, while an I/O interrupt is sent to the CPU when a program's input or output operation completes. This was helpful in understanding interrupts in general, but I'm trying to compare these two kinds.
edit:
In a multiprogramming context, using a uniprocessor (to make things simple)
Timer/clock interrupts are often used for scheduling. These interrupts invoke the scheduler, which may switch the currently executing thread/process to another by saving the current context and loading another one.
Other than the purpose, an interrupt is an interrupt.
The main purpose of the clock interrupt is to support what we call "multitasking". It deceives us into thinking that things are running in parallel internally (i.e. that many applications are running at the same time), but in reality they are not. The clock sends an interrupt to the processor after a specified fraction of a second (depending on system speed), causing it to suspend the current thread, save its address and data to the stack, and switch to another application.
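To make this concrete, here is a minimal C sketch of what a clock-tick handler might look like. The names (current, time_slice, need_resched, schedule) are invented for illustration and don't correspond to any real kernel's API:

```c
/* Hypothetical sketch of a clock-tick handler, not any real kernel's code.
 * Every tick, the running task's remaining time slice is decremented; when
 * it reaches zero the handler asks the scheduler to pick another task. */
struct task {
    int time_slice;              /* ticks remaining for this task */
};

static struct task *current;     /* task that was interrupted by the tick */
static int need_resched;         /* set when a context switch is wanted */

extern void schedule(void);      /* saves the current context, loads another */

void clock_tick_handler(void)
{
    if (current->time_slice > 0)
        current->time_slice--;

    if (current->time_slice == 0)
        need_resched = 1;        /* this task's turn is over */

    if (need_resched)
        schedule();              /* hand the CPU to the next task */
}
```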
I hope this helps.
Related
I am still new to this field and would appreciate anyone who can offer some opinions.
Let me introduce the situation of my problem:
1. The keyboard interrupt occurs while Process A is executing. As far as I know, it doesn't matter whether Process A was executing in user mode or kernel mode; the interrupt handler will be invoked in kernel mode to deal with the keyboard interrupt.
2. The interrupt handler saves the state of Process A on its kernel stack and executes the ISR corresponding to the keyboard interrupt (still using the kernel stack of Process A).
3. During the execution of the keyboard ISR, the clock interrupt occurs, so the interrupts become nested.
4. The interrupt handler saves the state of the keyboard ISR on the kernel stack of Process A and executes the ISR corresponding to the clock interrupt (still using the kernel stack of Process A).
5. The clock ISR updates the system time and finishes, but the OS finds that the time slice of Process A has been used up.
Question:
1. What will the OS do next?
Will the OS schedule another process first, or will it finish the keyboard ISR first? I prefer the former, because the state of the interrupted keyboard ISR is saved on the kernel stack of Process A; it can be restored when Process A is selected to run again later. Am I right?
2. Is there any difference in interrupt handling between a general-purpose OS (like Linux) and a real-time OS?
Unfortunately there is no single answer to this question. The exact behavior depends on the interrupt controller implementation, on whether the OS supports symmetric multiprocessing (SMP), and on the specific scheduler.
The interrupt controller implementation is important because some processors support nested ISRs while others do not. CPUs that do not support nested interrupts return to kernel operation after servicing each interrupt. If multiple interrupts are triggered in a narrow time window, so that their servicing overlaps, the CPU will typically enter kernel mode very briefly and then return to an interrupt context to handle the next interrupt. If nested interrupts are supported, the typical behavior is for the CPU to stay in the interrupt context until the "stack" of interrupts has been serviced before returning to a kernel context.
Whether the OS supports SMP is also very important to the exact behavior of interrupt handling. When SMP is supported, it is possible, and in fact quite likely, that another core will be scheduled to handle the kernel and subsequent user-space work for whatever the interrupt triggered. Suppose the ISR serviced an Ethernet port on core 1: upon completion of the ISR, core 1 could service another interrupt while core 2 wakes up and runs the user process waiting on the network traffic from that Ethernet port.
To add a final wrinkle of complexity, interrupts can typically be routed to different CPUs, with the exact mechanism depending on the interrupt controller implementation. This is done to minimize interrupt latency by keeping all the interrupts from piling up on one CPU waiting for sequential handling.
Finally, typical scheduler implementations don't count ISR servicing time against the time slice of a given thread. As for the difference in handling between a traditional fair scheduler and an RTOS, there generally are no significant differences. Ultimately it is the interrupt controller hardware that dictates the order in which interrupts are handled, not the software's thread scheduler.
In the kernel of an operating system, we have an interrupt table that contains many interrupt handlers, which handle interrupts from I/O devices and processes. But why can't we just have one interrupt handler? Are interrupt handlers any different from each other?
Another issue is that, with one interrupt-handler, it gets very messy to prioritize interrupts.
Usually, interrupts are disabled in hardware once an interrupt is acknowledged by the CPU that handles it, thus preventing multiple, reentrant invocations of the same interrupt and any issues with data/buffer overwrites that would likely ensue. It's also common for an interrupt handler to promptly re-enable interrupts of a higher priority, improving the response to those interrupts (they can then interrupt the lower-priority handlers).
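As a rough illustration of that "block this level, re-enable higher" pattern: the priority-mask register, its address, and the priority values below are invented for this sketch; real controllers expose this differently.

```c
/* Hypothetical keyboard ISR showing the priority re-enable pattern.
 * PRIO_MASK and KEYBOARD_PRIORITY are made up for illustration. */
#include <stdint.h>

static volatile uint32_t *const PRIO_MASK =
    (volatile uint32_t *)0x40001000;        /* assumed MMIO address */

#define KEYBOARD_PRIORITY 5u

void keyboard_isr(void)
{
    uint32_t saved = *PRIO_MASK;

    /* Block this priority and anything lower, but let higher-priority
       interrupts preempt us while we do the slower part of the work. */
    *PRIO_MASK = KEYBOARD_PRIORITY;

    /* ... read the scancode, hand it to the keyboard driver ... */

    *PRIO_MASK = saved;                     /* restore the previous mask */
}
```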
Using only one interrupt handler would make prioritizing interrupts extremely messy, if possible at all :(
Getting interrupt handlers and drivers to cooperate harmoniously is difficult enough as it is!
Are interrupt handlers any different from each other?
Well, yes. They may all be forced to conform to a set of rules/constraints by the OS design, but yes, they are generally different. A handler that manages an interrupt from a disk DMA controller will surely have different code than a keyboard input handler. They manage different hardware, to start with:)
If you have one interrupt handler, the decision for how the interrupt should be processed is made in code instead of in hardware.
And there are a LOT of things that can trigger an interrupt - so the code would almost certainly reduce overall performance.
In principle there is no reason why you could not have a single interrupt handler that gets called for all interrupts. Such a handler would have to check every single interrupt source. Since most of the time only a small fraction of the possible interrupt sources are active, many cycles would be wasted checking which interrupt was triggered. Since ISRs are among the most frequently executed pieces of code, you would take a large (probably unacceptable) performance hit.
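For comparison, a single "check everything" handler would look something like the sketch below; the per-device pending registers and handle_device() are invented for illustration:

```c
/* Sketch of the single-handler approach: every interrupt pays for a scan of
 * every possible source, even though usually only one of them is pending. */
#include <stdint.h>

#define NUM_DEVICES 32

static volatile uint32_t *device_pending[NUM_DEVICES]; /* one status register per device */
extern void handle_device(int i);                      /* per-device service code */

void single_interrupt_handler(void)
{
    for (int i = 0; i < NUM_DEVICES; i++) {
        /* Most iterations find nothing pending, so this loop is
           mostly wasted work on every single interrupt. */
        if (device_pending[i] && (*device_pending[i] & 1u))
            handle_device(i);
    }
}
```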
The way a specific interrupt controller handles interrupts can vary quite a bit. To get a very solid understanding you'd have to read the manuals for a variety of different architectures' interrupt controller implementations.
However, some interrupt controllers do end up sharing a common ISR. The common ISR reads registers in the interrupt controller to determine which vector (essentially the source of the interrupt) was triggered. The common ISR then calls another function (often also referred to as an interrupt service routine) that handles that specific interrupt source based on the vector value. Then, depending on the implementation of the interrupt controller, when the vector-specific routine returns control to the common ISR, the common ISR deasserts the interrupt on the interrupt controller and returns execution to the interrupted place in the code. Thus, by reading the vector from a register in the interrupt controller, cycles are saved because the common ISR knows what caused the interrupt rather than having to examine every possible interrupt source.
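To illustrate that pattern, here is a rough sketch of a common ISR that dispatches through a table; the register names (INTC_VECTOR, INTC_EOI) and their addresses are invented, and real controllers differ:

```c
/* Hypothetical common ISR that dispatches on a vector number read from the
 * interrupt controller. Register names and addresses are made up. */
#include <stdint.h>

#define NUM_VECTORS 64

typedef void (*isr_t)(void);

static isr_t vector_table[NUM_VECTORS];   /* per-source handlers, filled at boot */

static volatile uint32_t *const INTC_VECTOR =
    (volatile uint32_t *)0xFFFF0010;      /* assumed: which source fired */
static volatile uint32_t *const INTC_EOI =
    (volatile uint32_t *)0xFFFF0014;      /* assumed: end-of-interrupt register */

void common_isr(void)
{
    uint32_t vec = *INTC_VECTOR;          /* ask the controller which source fired */

    if (vec < NUM_VECTORS && vector_table[vec] != 0)
        vector_table[vec]();              /* run the source-specific routine */

    *INTC_EOI = vec;                      /* deassert/acknowledge on the controller */
}
```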
I read that the processes ready for execution in the ready queue are given control of the CPU by the scheduler. The scheduler selects a process based on its scheduling algorithm, gives the selected process control of the CPU, and later preempts it if it follows a preemptive style. I would like to know: if the CPU's processing unit is being used by a process, then who exactly preempts and schedules the processes while the processing unit is not available?
Now I want to share my thoughts about the OS with you
(and I'm sorry my English is not very fluent).
What do you think the OS is? Do you think it's 'active'?
No. In my opinion, the OS is just a pile of dead code in memory,
and this dead code is largely made up of interrupt handling functions (we call this dead code the 'kernel source code').
OK, now the CPU is executing process A, and suddenly an interrupt occurs. The interrupt may occur because of the clock or because of a read system call; in any case, an interrupt occurs. The CPU will then jump to the corresponding interrupt handling function (it jumps because the CPU's hardware is designed to do so). As said previously, this interrupt handling function is part of the OS kernel source code.
The CPU will execute this code. And what will this code do? This code will schedule, and the CPU will execute it.
Everything happens in the context of a process (Linux calls these lightweight processes, but it's the same thing).
Process scheduling generally occurs either as part of a system service call or as part of an interrupt.
In the case of a system service call, the process may determine it cannot execute so it invokes the scheduler to change the context to a new process.
The OS schedules timer interrupts, during which it can do scheduling. Scheduling can also occur in other types of interrupts. Interrupts are handled in the context of the current process.
Consider this: When one task/process is running on a single processor system, another task has to wait for its turn till the first task is either suspended or terminates (depending on the scheduling algorithm).
The kernel also consists of various tasks that use the same CPU to do OS-related work, like scheduling, memory management, responding to system calls, etc.
So when the kernel schedules a particular task/process to give it CPU time, does it relinquish its control over the CPU? That is, does it momentarily stop? If not, how does it keep running to do all the OS-related tasks while the other process is running on the CPU? Does the scheduler move aside to give the next task in line the CPU, and if so, what brings the scheduler back to carry on with further scheduling activities? This question is similar but does not contain enough details:
How can kernel run all the time?
I am confused about this part and I can't understand how this would work. Can somebody please explain this in detail? It would be helpful if you could explain it with an example.
Yeah.. you should stop thinking of the OS kernel as a process and think of it instead as just code and data - a state machine that processes/threads call into in order to obtain specific services at one end (e.g. I/O requests), and that drivers call into at the other end to provide service solutions (e.g. I/O completion).
The kernel does not need any threads of execution in itself. It only runs when entered from syscalls, (interrupt-like calls from running user threads/processes), or drivers, (hardware interrupts from disk/NIC/KB/mouse etc hardware). Sometimes, such calls will change the set of threads running on the available cores, (eg. if a thread waiting for a network buffer becomes ready because the NIC driver has completed the action, the OS will probably try to assign it to a core 'immediately', preempting some other thread if required).
If there are no syscalls, and no hardware interrupts, the kernel does nothing because it is not entered - there is nothing for it to do.
What you are missing is that few operating systems these days have a monitor process as you are describing.
At the risk of gross oversimplification, operating systems run through exceptions and interrupts.
Assume you have two processes, P and Q. P is the running process and Q is the next to run. One way to switch processes is for the system timer to go off, triggering an interrupt. P switches to kernel mode and handles that interrupt. P runs the interrupt code handling the timer and determines that Q should run. P then saves its context and loads Q's. At that moment, Q is the running process. The interrupt handler exits and picks up where Q left off.
In other words, process P becomes the kernel scheduler while the interrupt is being processed. Each process becomes the scheduler that loads the next process.
Another example: let us say that Q has queued a read operation to a disk. That operation completes and triggers an interrupt. P, the running process, enters kernel mode to handle the interrupt. P then processes Q's disk read operation.
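To make the timer example above a bit more concrete, here is a rough C sketch of that flow; pick_next(), save_context(), switch_to(), and acknowledge_timer() are invented names for the steps described, not any real kernel's API:

```c
/* Hypothetical outline of the timer-interrupt path described above.
 * "current" is P, the interrupted process; "next" is whatever pick_next() returns. */
struct task;                                /* process control block, details omitted */

extern struct task *current;                /* P, the process that took the interrupt */
extern struct task *pick_next(void);        /* scheduler policy chooses Q */
extern void save_context(struct task *t);   /* stash P's registers and program counter */
extern void switch_to(struct task *t);      /* load Q's context; we "return" into Q */
extern void acknowledge_timer(void);        /* tell the timer hardware we saw the tick */

void timer_interrupt(void)
{
    acknowledge_timer();

    struct task *next = pick_next();
    if (next != current) {
        save_context(current);              /* P's state is preserved... */
        switch_to(next);                    /* ...and the handler resumes Q instead */
    }
}
```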
I have a basic doubt regarding interrupts. Imagine a computer that does not have any interrupts; in order for it to do I/O, the CPU would have to poll* the keyboard for a key press, the mouse for a click, etc. at regular intervals. Now, if it has interrupts, the CPU will keep checking whether the interrupt line went high (or low) at regular intervals. So how are CPU cycles saved by using interrupts? As per my understanding, instead of checking the device, we are now checking the interrupt line. Can someone explain what basic logic I am getting wrong?
*Here by polling I don't mean that the CPU is in a busy-wait. To quote Wikipedia "Polling also refers to the situation where a device is repeatedly checked for readiness, and if it is not the computer returns to a different task"
@David Schwartz and @RKT are right: it doesn't take any CPU cycles to check the interrupt line.
Basically, the processor has a set of interrupt wires which are connected to a bunch of devices. When one of the devices has something to say, it turns its interrupt wire on, which triggers the processor (without the help of any software) to pause the execution of current instructions and start running a handler function.
Here's how it works. When the operating system boots, it registers a set of callbacks (a table of function pointers, actually) with the processor using a special instruction which takes the address of the first entry of the table. When interrupt N is triggered, the processor pulls the Nth entry from the table and runs the code at the location in memory it refers to. The code inside the function is written by the OS authors in assembly, but typically all it does is save the state of the stack and registers so that the current task can be resumed after the interrupt handler has run, and then call a higher-level common interrupt handler written in C that handles the logic of "if this is a page fault, do X", "if this is a keyboard interrupt, do Y", "if this is a system call, do Z", etc. Of course there are variations on this with different architectures and languages, but the gist of it is the same.
The idea with software interrupts ("signals", in Unix parlance) is the same, except that the OS does the work of setting up the stack for the signal handler to run. The basic procedure is that the userland process registers signal handlers one at a time to the OS via a system call which takes the address of the handler function as an argument, then some time in the future the OS recognizes that it should send that process a signal. The next time that process is run, the OS will set its instruction pointer to the beginning of the handler function and save all its registers to somewhere the process can restore them from before resuming the execution of that process. Usually, the handler will have some sort of routing logic to alert the relevant bit of code that it received a signal. When the process finishes executing the signal handler, it restores the register state that existed previous to the signal handler running, and resumes execution where it left off. Hence, software interrupts are also more efficient than polling for learning about events coming from the kernel to this process (however this is not really a general-use mechanism since most of the signals have specific uses).
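As a small, runnable illustration of the signal side of this, using the standard POSIX sigaction() call (the handler name and message are just for the example):

```c
/* The process registers a handler with the OS; the kernel later redirects
 * the process into that handler when the signal is delivered, then execution
 * resumes where it left off. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_signal = 1;                    /* only async-signal-safe work in here */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;         /* address of the handler function */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);      /* system call that registers the handler */

    puts("Press Ctrl-C to deliver SIGINT...");
    while (!got_signal)
        pause();                       /* sleep until some signal arrives */

    puts("Handler ran; execution resumed where it left off.");
    return 0;
}
```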
It doesn't take any CPU cycles to check the interrupt line. It's done by dedicated hardware, not CPU instructions. The reason it's called an interrupt is because if the interrupt line is asserted, the CPU is interrupted.
"CPU is interrupted" : It will leave (put on hold) the normal program execution and then execute the ISR( interrupt subroutine) and again get back to execution of suspended program.
CPU come to know about interrupts through IRQ(interrupt request) and IF(interrupt flag)
Interrupt: an event generated by a device in a computer to get the attention of the CPU.
It is provided to improve processor utilization.
To handle an interrupt, there is an Interrupt Service Routine (ISR) associated with it.
To interrupt the processor, the device sends a signal on its IRQ line and continues doing so until the processor acknowledges the interrupt.
The CPU then performs a context switch by pushing the Program Status Word (PSW) and PC onto the control stack.
The CPU executes the ISR.
Polling, on the other hand, is the process where the computer waits on an external device, checking it for readiness.
The computer does nothing other than check the status of the device.
Polling is often used with low-level hardware.
Example: when a printer is connected via a parallel port, the computer waits until the printer has received the next character.
These transfers can be as small as reading a single byte.
There are two different methods (polling and interrupts) to service the I/O of a computer system. In polling, the CPU continuously remains busy: it checks whether input data has been given to an I/O device, and if so, it checks the source port of the corresponding device and the priority of that input in order to serve it.
In the interrupt-driven approach, when data is given to an I/O device, an interrupt is generated and the CPU checks the priority of that input in order to serve it.
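A hedged sketch contrasting the two approaches is below. The status/data registers, their addresses, and the READY bit are invented for illustration; real hardware and ISR installation differ by platform.

```c
#include <stdint.h>

#define READY 0x01u

static volatile uint8_t *const DEVICE_STATUS =
    (volatile uint8_t *)0x40000000;   /* assumed memory-mapped status register */
static volatile uint8_t *const DEVICE_DATA =
    (volatile uint8_t *)0x40000004;   /* assumed memory-mapped data register */

/* Polling: the CPU must keep coming back to ask "are you ready yet?".
 * Returns -1 if no data was available on this visit. */
int poll_device(void)
{
    if ((*DEVICE_STATUS & READY) == 0)
        return -1;                    /* nothing yet; caller goes off to other work */
    return *DEVICE_DATA;              /* data was waiting */
}

/* Interrupt-driven: the CPU never asks. This routine runs only when the
 * device asserts its IRQ line, so no cycles are spent checking in between. */
static volatile uint8_t last_byte;

void device_isr(void)                 /* assumed to be installed in the vector table */
{
    last_byte = *DEVICE_DATA;         /* the device is known to be ready here */
}
```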