Hardware interrupt when Power Button pressed? - operating-system

When we first press the power-on button on a laptop, does a hardware interrupt occur?
I have read in multiple places that:
"Once the system receives a "Power Good" signal from the power supply, the CPU will seek instructions from the BIOS about initializing the system"
But even before the BIOS instructions get loaded into the CPU, we have the bootstrap address loaded into the program counter. So for that memory address to be loaded into the program counter, doesn't there have to be a hardware interrupt at the very start?

When we first press the power-on button on a laptop, does a hardware interrupt occur?
No.
The CPU has to do various things (built-in self-test, determine whether it's the "boot CPU" or not, etc.), then the firmware has to configure various things (interrupt controller, etc.), then something (firmware) has to create a table of interrupt vectors. All of that has to happen before any interrupt is possible.
So for the memory address to be loaded into the program counter, doesn't there have to be a hardware interrupt at the very start?
For the main "boot CPU": after its power-on sequence (self-test, etc.) the CPU ends up in a well-defined default state that was built into it by the manufacturer, which includes a default/initial value for each register (including the instruction pointer). This also means that something (firmware) must exist at the address that was built into the CPU by its manufacturer.
For other CPUs ("application processors"): after their power-on sequence they just wait until software wakes them up somehow. For 80x86, waking an AP is done by software on another CPU sending it a sequence of interrupts. On modern 80x86 CPUs (Pentium and newer), part of the address to put into the instruction pointer is included in the message sent to the CPU as part of a "startup interrupt" (SIPI); the rest of the instruction pointer (and all other registers) still comes from the well-defined default state built into the CPU by the manufacturer.
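For the curious, here is a minimal, hedged sketch of what that INIT-SIPI-SIPI sequence looks like on x86 using the xAPIC's memory-mapped Interrupt Command Register. The register offsets are the standard xAPIC layout; delay_us() and the trampoline address are assumptions (the trampoline must be page-aligned physical code below 1 MiB).

    /* Hedged sketch: waking an x86 application processor with the
       INIT-SIPI-SIPI sequence via the xAPIC's ICR. */
    #include <stdint.h>

    #define LAPIC_BASE    0xFEE00000u   /* default xAPIC MMIO base */
    #define LAPIC_ICR_LO  ((volatile uint32_t *)(LAPIC_BASE + 0x300))
    #define LAPIC_ICR_HI  ((volatile uint32_t *)(LAPIC_BASE + 0x310))

    extern void delay_us(unsigned us);  /* assumed: calibrated busy-wait */

    static void start_ap(uint8_t apic_id, uint32_t trampoline_phys)
    {
        /* Destination APIC ID lives in bits 24-31 of the high dword. */
        *LAPIC_ICR_HI = (uint32_t)apic_id << 24;
        *LAPIC_ICR_LO = 0x00004500;     /* INIT, level-asserted */
        delay_us(10000);

        for (int i = 0; i < 2; i++) {   /* two SIPIs, per convention */
            *LAPIC_ICR_HI = (uint32_t)apic_id << 24;
            /* The SIPI "vector" is the physical start page: the AP begins
               executing at (vector << 12), so code must sit below 1 MiB. */
            *LAPIC_ICR_LO = 0x00004600 | (trampoline_phys >> 12);
            delay_us(200);
        }
    }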

Related

Practical ways of implementing preemptive scheduling without hardware support?

I understand that using hardware support to implement preemptive scheduling is great for efficiency.
I want to know: what are practical ways to do preemptive scheduling without support from hardware? I think one way is software timers.
Another way, in a multiprocessor system, is to have one processor act as a master that keeps watching the slave processors.
Note that I'm fine with inefficient approaches.
Please elaborate on all the ways you think could work, preferably (but not necessarily) ones that also work on a single-processor system.
In order to preempt a process, the operating system has to somehow get control of the CPU without the process's cooperation. Or viewed from the other perspective: The CPU has to somehow decide to stop running the process's code and start running the operating system's code.
Just like processes can't run at the same time as other processes, they can't run at the same time as the OS. The CPU executes instructions in order; that's all it knows. It doesn't run two things at once.
So, here are some reasons for the CPU to switch to executing operating system code instead of process code:
A hardware device sends an interrupt to this CPU - such as a timer, a keypress, a network packet, or a hard drive finishing its operation.
The software running on a different CPU sends an inter-processor interrupt to this CPU.
The running process decides to call a function in the operating system. Depending on the CPU architecture, it could work like a normal call, or it could work like a fake interrupt (see the sketch after this list).
The running process executes an instruction which causes an exception, like accessing unmapped memory, or dividing by zero.
Some kind of hardware debugging interface is used to overwrite the instruction pointer, causing the CPU to suddenly execute different code.
The CPU is actually a simulation and the OS is interpreting the process's code, in which case the OS can decide to stop interpreting whenever it wants (a sketch of this appears at the end of this answer).
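To make the "fake interrupt" item concrete, here is a hedged sketch of a process voluntarily handing control to the OS through the legacy 32-bit Linux int 0x80 gate (syscall number 4 is sys_write in that ABI); on entry the CPU switches to kernel mode exactly as it would for a hardware interrupt:

    /* Hedged sketch (32-bit x86 Linux only): entering the kernel through
       the "fake interrupt" software gate. Build with gcc -m32. */
    int main(void)
    {
        static const char msg[] = "entering the kernel\n";
        long ret;
        __asm__ volatile(
            "int $0x80"                 /* trap into the OS */
            : "=a"(ret)
            : "a"(4),                   /* sys_write in the 32-bit ABI */
              "b"(1),                   /* fd 1: stdout */
              "c"(msg),
              "d"(sizeof msg - 1)
            : "memory");
        return 0;
    }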
If none of the above things happen, OS code doesn't run. Most OSes will re-evaluate which process should be running when a hardware event occurs that causes a process to be woken up, and will also use a timer interrupt as a last resort to prevent one program from hogging all the CPU time.
Generally, when OS code runs, it has no obligation to return to the same place it was called from. "Preemption" is simply when the OS decides to jump somewhere other than the place it was called from.
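As an illustration of the last option in the list above (the OS as an interpreter), here is a minimal, hedged sketch of software-only preemption: the "OS" gives each interpreted task an instruction budget, and simply returning from run_slice() is the preemption. The opcode set and structures are invented for this example.

    /* Hedged sketch: a bytecode VM that preempts a task purely in
       software by budgeting instructions per timeslice. */
    #include <stddef.h>

    enum op { OP_NOP, OP_ADD, OP_JMP, OP_HALT };

    struct vm_task {
        const unsigned char *code;
        size_t pc;              /* the *interpreted* program counter */
        long acc;
        int halted;
    };

    #define SLICE 1000          /* instructions per timeslice */

    /* Run one task for at most SLICE instructions, then return to the
       "OS" (the caller), which may pick another task: that return is
       the preemption. */
    void run_slice(struct vm_task *t)
    {
        for (int budget = SLICE; budget > 0 && !t->halted; budget--) {
            switch (t->code[t->pc++]) {
            case OP_NOP:                          break;
            case OP_ADD:  t->acc++;               break;
            case OP_JMP:  t->pc = t->code[t->pc]; break;
            case OP_HALT: t->halted = 1;          break;
            }
        }
    }

A round-robin scheduler is then just a loop calling run_slice() on each non-halted task in turn; no timer hardware is involved, at the cost of interpreting every instruction.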

How do interrupts work in multi-core system?

I want to write code to handle interrupts from the buttons on a Raspberry Pi 2. This board uses a quad-core Broadcom BCM2836 CPU (ARM architecture); that is, there is only one CPU package on the board. But I don't know how interrupts work in a multi-core system. I wonder whether the interrupt line is connected to each core or to just one. I found the paragraph below via Google:
Interrupts on multi-core systems
On a multi-core system, each interrupt is directed to one (and only one) CPU, although it doesn't matter which. How this happens is under control of the programmable interrupt controller chip(s) on the board. When you initialize the PICs in your system's startup, you can program them to deliver the interrupts to whichever CPU you want to; on some PICs you can even get the interrupt to rotate between the CPUs each time it goes off.
Does this mean that interrupts can happen on each CPU? I can't understand the info above exactly. If interrupts can be delivered to every core, I must take critical sections into account for data shared between the buttons' interrupt service routines. If each interrupt is delivered to only one CPU, I don't have to worry about critical sections for that shared data. Which is correct?
To sum up: how do interrupts work in a multi-core system? Is the interrupt line connected to each core, or to one CPU? And do I have to take critical sections into account for the same interrupt?
Your quote from Google looks quite generic, or perhaps even leaning toward x86, but it doesn't really matter if that were the case.
You should be able to control interrupts per CPU, so that you can have one type go to one core and another type go to another.
Likewise, there is usually a choice to have all of the cores interrupted, in case you want that.
Interrupts are irrelevant to shared resources: you have to handle shared resources whether you are in an ISR or not, so the interrupt doesn't change what you have to deal with. Having the ability to route the interrupts from one peripheral to one CPU could make the sharing easier, in that you could have one CPU own a resource and the other CPUs make requests to the CPU that owns it, for example.
Dual, quad, etc. cores doesn't matter: treat each core as a single CPU, which it is, and solve the interrupt problems as you would for a single CPU. Again, shared resources are shared resources, during interrupts or not. Solve the problem for one CPU, then deal with any sharing.
This being an ARM, each chip vendor's implementation can vary from the others, so there cannot be one universal answer. You have to read the ARM docs for the ARM core (and, if possible, for the specific version, as they can and do vary) as well as the chip vendor's docs for whatever they put around the ARM core. This being Broadcom, good luck with chip vendor docs; they are limited at best, especially for the Raspberry Pi 2. You might have to dig through the Linux sources. No matter what (ARM, x86, MIPS, etc.), you have to just read the documentation and do some experiments. Start off by treating each core as a standalone CPU, then deal with sharing of resources if required.
If I remember right, the default case is that just the first core runs kernel7.img off the SD card; the other three spin in a loop, each waiting for its own mailbox address to be written, at which point they jump to it and start doing something else. So you can quite literally start off with a single CPU and no sharing and figure that out. If you choose not to have code on the other CPUs that touches the resource, you're done; if you do, then figure out how to share it.
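For reference, a hedged sketch of that wake-up, assuming the BCM2836 per-core mailboxes that the stock firmware stub polls; the addresses come from the BCM2836 "Quad-A7" local-peripherals document, so verify them against your firmware/stub version before relying on this:

    /* Hedged sketch: waking core n on a Raspberry Pi 2 (BCM2836) by
       writing an entry address to its mailbox 3 write-set register,
       which the firmware's spin loop is polling. */
    #include <stdint.h>

    #define CORE_MBOX3_SET(n) ((volatile uint32_t *)(0x4000008Cu + 0x10u * (n)))

    extern void secondary_entry(void);  /* assumed: code for the woken core */

    static void wake_core(unsigned n)   /* n = 1, 2, or 3 */
    {
        *CORE_MBOX3_SET(n) = (uint32_t)(uintptr_t)&secondary_entry;
        __asm__ volatile("sev");        /* wake the core if it's in WFE */
    }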

Do we have to enable or disable PCI interrupts on every layer, or only at the closest to hardware?

I'm implementing a PCIe driver, and I'd like to understand at which levels interrupts can be, or should be, enabled/disabled. I intentionally do not specify an OS, as I'm assuming the answer is relevant for any platform. By levels I mean the following:
The OS-specific interrupt-handling framework
The PCI/PCIe configuration space registers, e.g. the COMMAND register, where interrupts can be disabled or enabled
The device level, where interrupts can also be masked; for instance, we can configure the device not to trigger certain interrupts to the host
I understand that whatever interrupt type is being used on PCIe (INTx emulation, MSI or MSI-X), it has to be delivered to the host OS.
So my question really is: do we actually have to enable or disable interrupts at every layer, or is it sufficient to do so only at the layer closest to the hardware, e.g. in the relevant PCI registers?
Disabling interrupts at the various levels usually has completely different purposes.
Disabling interrupts:
In the OS (really, this means in the CPU) - This is generally about avoiding race conditions. In particular, if state/memory corruption could occur if the CPU happened to be interrupted during a particular section of code, then that section of code will need to disable interrupt handling. Interrupt handlers must not acquire normal locks (by definition they can't be suspended), and they must not attempt to acquire a spin lock that is held by the thread currently scheduled on the same CPU (because that thread is blocked from progressing by the very same interrupt handler!), so ensuring data safety with interrupt handlers can be tricky. Handling interrupts promptly is generally a good thing, so you want to absolutely minimise such sections in any code you write. Do as much of your interrupt handling in secondary interrupt handlers as possible to avoid such situations. Secondary interrupt handlers are really just callbacks on a regular OS thread, which doesn't have any of the restrictions of a primary interrupt handler.
PCI/PCIe configuration - It's my understanding this is mainly about routing interrupts, and is something you normally do once when your driver loads (or is activated by a client) and again when your driver unloads (or is deactivated). This may also be affected by power management events. In some OSes, the PCI(e) level is actually handled for you when you activate PCI device interrupts via higher-level APIs.
On-device - This is usually an optimisation to avoid interrupting the CPU when it doesn't need to be interrupted. The most common scenario is that an event happens on the device, so an interrupt is generated. The driver's primary interrupt handler checks the device's registers to see whether the driver needs to do any processing. If so, it disables interrupts on the device and schedules the driver's secondary interrupt handler to run. The OS eventually runs the secondary handler, which processes whatever information the device has provided until it runs out of things to do. Then it re-enables interrupts, checks once more whether there is any work pending from the device, and if there is none, it terminates. (If there are items to process in this last check, it re-disables interrupts and starts over from the beginning.) The idea is that until the secondary interrupt handler has finished processing, there really is no point in triggering the primary interrupt handler when additional events arrive - it would be a waste of resources, because the driver is already busy processing the event queue. The final check after re-enabling interrupts avoids a race condition between an event arriving and interrupts being re-enabled.
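Here's a hedged sketch of that on-device masking dance; every name in it (dev_irq_status, dev_irq_enable, schedule_work, process_one_event) is a hypothetical stand-in for whatever your OS and device actually provide:

    /* Hedged sketch of the primary/secondary handler pattern. */
    #include <stdbool.h>
    #include <stdint.h>

    extern uint32_t dev_irq_status(void);        /* assumed: read pending events */
    extern void dev_irq_enable(bool on);         /* assumed: device-level mask */
    extern void schedule_work(void (*fn)(void)); /* assumed: queue secondary handler */
    extern bool process_one_event(void);         /* assumed: true if it did work */

    static void secondary_handler(void);

    /* Primary handler: runs in interrupt context, does as little as possible. */
    void primary_handler(void)
    {
        if (dev_irq_status() == 0)
            return;                    /* spurious / shared line: not ours */
        dev_irq_enable(false);         /* quiet the device while we work */
        schedule_work(secondary_handler);
    }

    /* Secondary handler: runs on a normal thread, so it may block and take locks. */
    static void secondary_handler(void)
    {
        for (;;) {
            while (process_one_event())
                ;                      /* drain the event queue */
            dev_irq_enable(true);      /* re-arm the device */
            if (dev_irq_status() == 0)
                return;                /* nothing slipped in while re-arming */
            dev_irq_enable(false);     /* an event did slip in: go around again */
        }
    }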
I hope that answers your question.

OS guard on hardware interrupt - how does it work?

I'm reading about interrupt handling in modern CPUs and operating systems, but I can't figure out one point:
As soon as some hardware device changes the state (current/voltage?) on an interrupt pin of the CPU, the CPU finishes the current instruction and jumps to execute the interrupt handler code. Now imagine the interrupt handler code has to change some state in the scheduler's data structures; however, before the OS was interrupted, it was itself fumbling around in the same structures. That would lead to messed-up data, so there must be a solution.
I would guess the OS and the interrupt handler both use a semaphore, implemented through some atomic compare-and-set memory operation, to protect the shared data structures. However, if the OS gets interrupted while holding such a semaphore, the interrupt handler could not do anything, and the interrupt would just vanish, because busy-waiting for that semaphore would never return control to the OS, so the lock would never be released.
How is this problem solved? There must be some trick that I'm missing...
Maybe a hardware detail you are missing can explain your confusion.
Whenever a hardware interrupt occurs, something along these lines happens:
1 - The CPU switches to a privileged mode, further hardware interrupts are disabled (normally via a bit in the processor's flags register), and execution jumps to the interrupt handler.
2 - Once the OS's interrupt handling is done, it re-enables hardware interrupts so further interrupts can happen.
So, in short, the OS/interrupt handler can control when hardware interrupts are allowed to interrupt the normal flow.
An easy solution to your problem would be to just have the OS disable hardware interrupts while messing with those data structures.
In practice, things get more complex to minimize interrupt latency.
Things can change from one architecture to another, but the basic principle is still that further hardware interrupts are disabled when one happens, and that they can be enabled/disabled at will (provided the CPU is running in the required privileged mode).
Check the end part of this: http://en.wikibooks.org/wiki/X86_Assembly/Advanced_Interrupts
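A minimal sketch of that "just disable interrupts" solution, assuming x86-64 and GCC/Clang inline assembly; runqueue and task are invented placeholders. Note this only shuts out the local CPU's interrupt handler; on a multiprocessor, a real kernel pairs it with a spin lock to keep other CPUs out as well.

    /* Hedged sketch: protect scheduler data that an interrupt handler
       also touches by modifying it only with local interrupts off. */
    struct task { struct task *next; };
    static struct task *runqueue;

    static inline unsigned long irq_save(void)
    {
        unsigned long flags;
        __asm__ volatile("pushfq; popq %0; cli" : "=r"(flags) :: "memory");
        return flags;  /* remembers whether interrupts were already off */
    }

    static inline void irq_restore(unsigned long flags)
    {
        __asm__ volatile("pushq %0; popfq" :: "r"(flags) : "memory");
    }

    void enqueue(struct task *t)
    {
        unsigned long flags = irq_save();  /* handler can't run on this CPU now */
        t->next = runqueue;
        runqueue = t;
        irq_restore(flags);                /* back to the previous interrupt state */
    }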

How can preemptive multitasking work, when OS is just one of the processes?

I am now reading materials about preemptive multitasking, and one thing escapes me.
All of the materials imply that the operating system somehow interrupts the running processes on the CPU from the "outside", thereby causing context switches and the like.
However, I can't imagine how that would work when the operating system's kernel is just another process on the CPU. When another process is already occupying the CPU, how can the OS cause the switch from the "outside"?
The OS is not just another process. The OS controls the behavior of the system when an interrupt occurs.
Before the scheduler starts a process, it arranges for a timer interrupt to be sent when the timeslice ends. Assuming nothing else happens before then, the timer will fire, and the kernel will take over control. If it elects to schedule a different process, it will switch things out to allow the other process to run and then return from the interrupt.
Hardware can signal the processor - this is called an "interrupt" - and when it occurs, control is transferred to the kernel (regardless of which process was executing at the time). This capability is built into the processor. Specifically, control is transferred to an "interrupt handler", which is a function within the kernel. The kernel can schedule a timer interrupt, for instance, so that this happens periodically. Once an interrupt occurs and control is transferred to the kernel, the kernel can pass control back to the originally executing process, or to another process that is scheduled.
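To tie it together, here is a hedged sketch of the timer path described above; every name (current, pick_next, switch_context, the 10 ms slice) is a placeholder, not any particular kernel's API:

    /* Hedged sketch: the kernel's timer interrupt handler, entered by
       the CPU regardless of which process was running; the interrupted
       process's registers were already saved by the entry stub. */
    struct task {
        void *saved_sp;          /* kernel stack pointer saved at switch time */
        /* ... registers, state, ... */
    };

    extern struct task *current;
    extern struct task *pick_next(void);           /* scheduler policy */
    extern void switch_context(struct task *from,
                               struct task *to);   /* swaps saved_sp etc. */
    extern void timer_ack(void);                   /* tell the timer hw we saw it */
    extern void timer_arm(unsigned ns);            /* schedule the next tick */

    void timer_interrupt(void)
    {
        timer_ack();
        timer_arm(10 * 1000 * 1000);       /* next tick in 10 ms (a common slice) */

        struct task *next = pick_next();
        if (next != current)
            switch_context(current, next); /* "return" happens on next's stack */
        /* Returning from here resumes whichever task is now current. */
    }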