Why do we have multiple Interrupt Handlers in a system rather than just one? - operating-system

In the Kernel of an operating system, we have an Interrupt table that contains many interrupt handlers that handle interrupts from I/O devices and processes. But why can't we just have one interrupt handler? Are interrupt handlers any different from each other?

Another issue is that, with one interrupt-handler, it gets very messy to prioritize interrupts.
Usually, interrupts are disabled in hardware once an interrupt is acknowledged by the CPU that handles it, preventing multiple, reentrant invocations of the same interrupt and the data/buffer overwrites that would likely ensue. It's also common for an interrupt handler to promptly re-enable interrupts of a higher priority, improving response to those interrupts (they can then interrupt the lower-priority handlers).
Using only one interrupt handler would make prioritizing interrupts extremely messy, if possible at all :(
Getting interrupt handlers and drivers to cooperate harmoniously is difficult enough as it is!
Are interrupt handlers any different from each other?
Well, yes. They may all be forced to conform to a set of rules/constraints by the OS design, but yes, they are generally different. A handler that manages an interrupt from a disk DMA controller will surely have different code from a keyboard input handler. They manage different hardware, to start with :)

If you have one interrupt handler, the decision for how the interrupt should be processed is made in code instead of in hardware.
And there are a LOT of things that can trigger an interrupt - so the code would almost certainly reduce overall performance.

In principle there is no reason why you could not have a single interrupt handler that gets called for all interrupts. Such a handler would have to check every single interrupt source. Since most of the time only a small fraction of the possible interrupt sources are active, many cycles would get wasted checking to see which interrupt was triggered. Since ISR routines are generally very frequently called code you would take a large (probably unacceptable) performance hit.
The way a specific interrupt controller handles interrupts can vary quite a bit. To get a very solid understanding you'd have to read the manuals for a variety of different architectures' interrupt controller implementations.
However, some interrupt controllers do end up sharing a common ISR. The common ISR reads registers in the interrupt controller to determine which vector (essentially the source of the interrupt) was triggered. It then calls another function (often also referred to as an interrupt service routine) that handles that specific interrupt source based on the vector value. Depending on the implementation of the interrupt controller, when the vector-specific routine returns control to the common ISR, the common ISR deasserts the interrupt on the interrupt controller and returns execution to the interrupted code. By reading the vector from a register in the interrupt controller, cycles are saved: the common ISR knows what caused the interrupt rather than examining every possible interrupt source.

Related

Can context switching happen between ISRs?

I am still a new learner in this field and would appreciate any opinions.
Let me introduce the situation of my problem:
1. A keyboard interrupt occurs while Process A is executing. As far as I know, it doesn't matter whether Process A was executing in user mode or kernel mode; the interrupt handler will be invoked to deal with the keyboard interrupt in kernel mode.
2. The interrupt handler will save the state of Process A on its kernel stack and execute the ISR corresponding to the keyboard interrupt (still using the kernel stack of Process A).
3. During the execution of the keyboard ISR, a clock interrupt occurs. The interrupts are then nested.
4. The interrupt handler will save the state of the keyboard ISR on the kernel stack of Process A and execute the ISR corresponding to the clock interrupt (still using the kernel stack of Process A).
5. The clock ISR updates the system time and finishes, but the OS finds out that the time slice of Process A has been used up.
Question:
1. What will the OS do next?
Will the OS schedule another process first, or will it finish the keyboard ISR first? I prefer the former, because the state of the interrupted keyboard ISR is saved on the kernel stack of Process A and can be restored when Process A is selected to run again after some time. Am I right?
2. Is there any difference in interrupt handling between a common OS (like Linux) and a real-time OS?
Unfortunately there is no single answer to this question. The exact behavior depends on the interrupt controller implementation, on whether the OS supports symmetric multiprocessing (SMP), and on the specific scheduler.
The interrupt controller implementation is important because some processors support nested ISRs while others do not. CPUs that do not support nested interrupts return to kernel operation after servicing an interrupt. If multiple interrupts are triggered in a narrow time window so that their servicing overlaps, the CPU typically enters kernel mode very briefly and then returns to an interrupt context to handle the next interrupt. If nested interrupts are supported, the typical behavior is for the CPU to stay in the interrupt context until the "stack" of interrupts has been serviced, and only then return to a kernel context.
Whether the OS supports SMP is also very important to the exact behavior of interrupt handling. When SMP is supported, it is possible, and in fact quite likely, that another core will be scheduled to handle the kernel and subsequent user-space work triggered by the interrupt. Suppose the ISR serviced an Ethernet port on core 1; upon completion of the ISR, core 1 could service another interrupt while core 2 wakes up and runs the user process waiting on the network traffic from the Ethernet port.
To add a final wrinkle of complexity, interrupts can typically be routed to different CPUs, with the exact mechanism depending on the interrupt controller implementation. This is done to minimize interrupt latency by keeping all the interrupts from piling up on one CPU waiting for sequential handling.
Finally, typical scheduler implementations don't count ISR servicing time against the time slice of a given thread. As for the difference in handling between a traditional fair scheduler and an RTOS, there generally are no significant differences. Ultimately it's the interrupt controller hardware that dictates the order in which interrupts are handled, not the software's thread scheduler.

Uart dma receive interrupt stops receiving data after several minutes

I have a project using an STM32F746G Discovery board. It receives fixed-size data from a UART sequentially, and to inform the application about each completed receive, a DMA callback is used (the HAL_UART_RxCpltCallback function). It works fine at the beginning, but after several minutes of running, the DMA callback stops being called and, as a result, the specified parameter value doesn't get updated. Because the parameter is also used in another thread (actually an RTOS-defined timer), I believe this problem is caused by a lack of thread safety. But mutexes and semaphores are not supported in ISRs, and I need to protect my variable in the DMA callback, which is an interrupt routine. I am using Keil RTX to handle multithreading, and the timer I use is an osTimer defined in RTX. How can I handle this issue?
Generally, only one thread should communicate with the ISR. If multiple threads are accessing a variable shared with an ISR, your design is wrong and needs to be fixed. In case of DMA, only one thread should access the buffer.
You'll need to protect the variables shared between that thread and the ISR, not necessarily with a mutex/semaphore but perhaps with something simpler, like guaranteeing atomic access (the best solution if possible) or using the non-interruptible nature that many ISRs have, an approach that works for simple, single-threaded MCU applications. Alternatively, just temporarily disable interrupts during the access, but that may not be possible, depending on real-time requirements.

How does OS select an interrupt handler?

I have read a few pages about interrupt handling and I am getting more and more confused about how the OS actually selects an interrupt handler to execute.
I read the following:
The CPU asks this question ('Where is the interrupt service routine?') to the hardware by issuing an interrupt acknowledge, and the hardware answers the question by placing an interrupt vector number on the data bus. The CPU uses the interrupt vector number to find out where the interrupt service routine is.
This is one of the explanations; the others included:
Each device having a hardcoded IRQ, i.e., each device actually has an interrupt number determined by the line that connects the device to the CPU. That number is then used to find the handler in the IDT.
The hardware that causes an interrupt places the interrupt number in a special register, which the CPU then reads and uses as the interrupt number to look up the handler in the IDT.
Does any of this make sense and which one is actually correct?
This actually varies quite substantially based on the actual hardware you're using.
The overview is this:
1. A hardware event occurs in a particular device.
2. The device asserts a signal on its interrupt line.
3. The interrupt line is often connected to an interrupt controller, a dedicated piece of hardware that decides whether to signal the processor.
4. The interrupt controller decides to signal the processor.
5. The processor switches to interrupt mode and begins executing the interrupt handler installed by the OS at a predefined location.
6. The interrupt handler asks the interrupt controller which interrupt line was actually signaled, which tells it which device sent the interrupt.
7. The interrupt handler dispatches the interrupt message to the device driver.
You are asking about steps 6 and 7. Step 6 depends on the interrupt controller. Some interrupt controllers are actually inside the processor die physically, in which case the "ask" is simply a matter of reading the right memory addresses. Some are on a bus, in which case the processor has to take ownership of the bus, signal the interrupt controller, and have it reply with the interrupt line number.
Step 7 is defined by the OS entirely. The OS might have a table mapping interrupt lines to interrupt function handlers, and that table might be predefined (as is usually the case on embedded systems where the hardware layout is fixed), or it might have been determined during startup as the system discovered what devices were attached to it.

What are the differences between Clock and I/O interrupts?

What are the differences between clock and I/O interrupts?
As I understand it, a clock interrupt uses the system clock to interrupt the CPU, and an I/O interrupt is sent to the CPU based on program input or output completion. This was helpful in understanding interrupts in general, but I'm trying to compare these two kinds.
edit:
In a multiprogramming context, using a uniprocessor (to make things simple)
Timer/clock interrupts are often used for scheduling. These interrupts invoke the scheduler and it may switch the currently executing thread/process to another by saving the current context and loading another one.
Other than the purpose, an interrupt is an interrupt.
The main purpose of the clock interrupt is to support what we call "multitasking". It gives the illusion that many applications are running internally in parallel at the same time, but in reality they are not. The clock sends an interrupt to the processor after a specified fraction of a second (depending on system speed); the processor then suspends its current thread, saves its address and data to the stack, and lets the scheduler hand the CPU to another application.
I hope this will help you.

Polling vs Interrupt

I have a basic doubt regarding interrupts. Imagine a computer that does not have any interrupts; in order to do I/O, the CPU would have to poll* the keyboard for a key press, the mouse for a click, etc. at regular intervals. Now, if it has interrupts, the CPU will keep checking whether the interrupt line went high (or low) at regular intervals. So how are CPU cycles saved by using interrupts? As per my understanding, instead of checking the device we are now checking the interrupt line. Can someone explain what basic logic I am getting wrong?
*Here by polling I don't mean that the CPU is in a busy-wait. To quote Wikipedia "Polling also refers to the situation where a device is repeatedly checked for readiness, and if it is not the computer returns to a different task"
@David Schwartz and @RKT are right: it doesn't take any CPU cycles to check the interrupt line.
Basically, the processor has a set of interrupt wires which are connected to a bunch of devices. When one of the devices has something to say, it turns its interrupt wire on, which triggers the processor (without the help of any software) to pause the execution of current instructions and start running a handler function.
Here's how it works. When the operating system boots, it registers a set of callbacks (a table of function pointers, actually) with the processor using a special instruction which takes the address of the first entry of the table. When interrupt N is triggered, the processor pulls the Nth entry from the table and runs the code at the location in memory it refers to. The code inside the function is written by the OS authors in assembly, but typically all it does is save the state of the stack and registers so that the current task can be resumed after the interrupt handler has been called, and then call a higher-level common interrupt handler written in C that handles the logic of "If this is a page fault, do X", "If this is a keyboard interrupt, do Y", "If this is a system call, do Z", etc. Of course there are variations on this with different architectures and languages, but the gist of it is the same.
The idea with software interrupts ("signals", in Unix parlance) is the same, except that the OS does the work of setting up the stack for the signal handler to run. The basic procedure is that the userland process registers signal handlers one at a time to the OS via a system call which takes the address of the handler function as an argument, then some time in the future the OS recognizes that it should send that process a signal. The next time that process is run, the OS will set its instruction pointer to the beginning of the handler function and save all its registers to somewhere the process can restore them from before resuming the execution of that process. Usually, the handler will have some sort of routing logic to alert the relevant bit of code that it received a signal. When the process finishes executing the signal handler, it restores the register state that existed previous to the signal handler running, and resumes execution where it left off. Hence, software interrupts are also more efficient than polling for learning about events coming from the kernel to this process (however this is not really a general-use mechanism since most of the signals have specific uses).
It doesn't take any CPU cycles to check the interrupt line. It's done by dedicated hardware, not CPU instructions. The reason it's called an interrupt is because if the interrupt line is asserted, the CPU is interrupted.
"CPU is interrupted": the CPU will leave (put on hold) the normal program execution, execute the ISR (interrupt service routine), and then get back to the execution of the suspended program.
The CPU comes to know about interrupts through the IRQ (interrupt request) line and the IF (interrupt flag).
Interrupt: an event generated by a device in a computer to get the attention of the CPU.
Provided to improve processor utilization.
To handle an interrupt, there is an interrupt service routine (ISR) associated with it.
To interrupt the processor, the device sends a signal on its IRQ line and continues doing so until the processor acknowledges the interrupt.
The CPU then performs a context switch by pushing the program status word (PSW) and PC onto the control stack.
The CPU executes the ISR.
Polling, in contrast, is the process where the computer waits for an external device and checks for its readiness.
The computer does nothing other than check the status of the device.
Polling is often used with low-level hardware.
Example: when a printer is connected via a parallel port, the computer waits until the printer has received the next character.
These operations can be as small as reading a single byte.
There are two different methods (polling and interrupts) to service the I/O of a computer system. In polling, the CPU continuously remains busy checking whether input data has been presented by an I/O device; if so, it reads the source port of the corresponding device and serves the input according to its priority.
In the interrupt-driven approach, when data is given to an I/O device, an interrupt is generated and the CPU checks the priority of that input in order to serve it.