I'm currently taking a class in Operating Systems, and everything has been smooth until I encountered Concurrency and Mutual Exclusion.
Up until this chapter in the text I am currently reading, I was under the impression that the OS handled calls to certain I/O operations, such as printers, through queues and interrupts, and that the OS also handled the scheduling of processes.
But in this section, "Mutual exclusion: Hardware support", it states that for a process to guarantee mutual exclusion it is sufficient to block all interrupts, which can be done through interrupt disabling; however, the cost is high, since the processor is limited in its ability to interleave (Stallings, p. 211).
If this is a capability, what's stopping a programmer from placing his entire program within a critical section by disabling interrupts? And why can't the OS handle calls to critical resources in the way previously stated (I/O queues & interrupts), instead of relying on programmers to identify their critical sections?
I understand the need to identify critical sections involving shared variables and memory, but I am baffled as to why a program needs to identify its critical section with regard to I/O devices such as printers, and why the OS can't.
This is not [entirely] correct:
But in this section, "Mutual exclusion: Hardware support", it states that for a process to guarantee mutual exclusion it is sufficient to block all interrupts, which can be done through interrupt disabling; however, the cost is high, since the processor is limited in its ability to interleave.
Processors generally support multiple means of synchronization. The simplest is uninterruptible instructions. These will generally be short instructions, such as "set a bit" or "branch if the bit was already set". Such instructions allow synchronization within a single processor.
As you mention, disabling interrupts is another method. Generally, interrupts have priorities, and usually you can disable all interrupts that have a priority lower than a specified level. That allows disabling all or only some interrupts.
Disabling interrupts only works when locking resources that are not shared by multiple processors.
That is why the quote, in the context you have it, is not [entirely] correct. Disabling interrupts on one processor does not synchronize when there are multiple processors. In theory, an operating system could disable all interrupts on all processors, but such a system would be seriously brain-damaged, because it would hamper the performance of a multi-processor system. It might work, though, in, say, a quick-and-dirty student-project operating system.
If this is a capability, what's stopping a programmer from placing his entire program within a critical section by disabling interrupts?
Disabling interrupts is only possible in kernel mode.
Another method of hardware synchronization is interlocked instructions. These are instructions that lock the memory of their operands and prevent other processors from accessing that memory while the instruction is executing. Common examples are interlocked add-integer instructions and interlocked bit-set (or clear) and branch instructions.
Related
I want to write code for the interrupts of the buttons on a Raspberry Pi 2. This board uses a quad-core Broadcom BCM2836 CPU (ARM architecture); that is, there is only one CPU chip on this board (the Raspberry Pi 2). But I don't know how interrupts work in a multi-core system. I wonder whether the interrupt line is connected to each core or to one CPU. So, I found the paragraph below via Google:
Interrupts on multi-core systems
On a multi-core system, each interrupt is directed to one (and only one) CPU, although it doesn't matter which. How this happens is under control of the programmable interrupt controller chip(s) on the board. When you initialize the PICs in your system's startup, you can program them to deliver the interrupts to whichever CPU you want to; on some PICs you can even get the interrupt to rotate between the CPUs each time it goes off.
Does this mean that interrupts can be delivered to each CPU? I can't understand the info above exactly. If interrupts can be delivered to each core, I must account for critical sections around shared data in each interrupt service routine for the buttons.
If interrupts are only ever delivered to one CPU, I don't have to account for critical sections around shared data. Which is correct?
To sum up, I wonder: how do interrupts work in a multi-core system? Is the interrupt line connected to each core, or to one CPU? And do I have to account for critical sections for the same interrupt?
Your quote from Google looks quite generic, or perhaps even leaning toward the x86 side, but it doesn't really matter if that were the case.
I would certainly hope that you are able to control interrupts per CPU, such that you can have one type go to one core and another type to another. Likewise, that there is the choice to have all of them interrupted, in case you want that.
Interrupts are irrelevant to shared resources: you have to handle shared resources whether you are in an ISR or not, so the interrupt doesn't matter; you have to deal with the sharing either way. Having the ability to direct interrupts from one peripheral to one CPU could make the sharing easier, in that you could have one CPU own a resource and the other CPUs make requests to the CPU that owns it, for example.
Dual, quad, etc. cores don't matter; treat each core as a single CPU, which it is, and solve the interrupt problems as you would for a single CPU. Again, shared resources are shared resources, during interrupts or not. Solve the problem for one CPU, then deal with any sharing.
This being ARM, each chip vendor's implementation can vary from another's, so there cannot be one universal answer. You have to read the ARM docs for the ARM core (and, if possible, for the specific version, as they can and do vary), as well as the chip vendor's docs for whatever they have wrapped around the ARM core. This being a Broadcom part, good luck with chip vendor docs; they are at best limited, especially with the Raspberry Pi 2. You might have to dig through the Linux sources. No matter what (ARM, x86, MIPS, etc.), you have to just read the documentation and do some experiments. Start off by treating each core as a standalone CPU, then deal with sharing of resources if required.
If I remember right, the default case is to have just the first core running kernel7.img off the SD card; the other three spin in a loop, each waiting for an address (each core has its own) to be written, to make them jump to it and start doing something else. So you quite literally can start off with a single CPU and no sharing, and figure that out. If you choose not to have code on the other CPUs that touches that resource, you're done; if you do, THEN figure out how to share the resource.
In an operating system, what is the difference between a system call and an interrupt? Are all system calls interrupts? Are all interrupts system calls?
Short Answer:
They are different things.
A system call is a call by software running on the OS to services provided by the OS.
An interrupt is usually an external hardware component notifying the CPU/microprocessor about an event that needs handling in software (usually a driver).
I say usually external, because some interrupts can be raised by software (soft interrupts).
Are all system calls interrupts? Depends
Are all interrupts system calls? No
Long answer:
The OS manages CPU time and other hardware connected to the CPU (Memory (RAM), HDD, keyboard, to name a few). It exposes services that allow user programs to access the underlying hardware and these are system calls. Usually these deal with allocating memory, reading/writing files, printing a document and so on.
When the OS interacts with other hardware, it usually does so through a driver layer, which sets up the task for the hardware to perform and has it interrupt once the job is done; so the printer may interrupt once the document is printed or it runs out of pages. It is therefore often the case that a system call leads to the generation of interrupts.
Are all system calls interrupts? It depends, as they may be implemented as soft interrupts. So when a user program makes a system call, it causes a soft interrupt that results in the OS suspending the calling process, handling the request itself, and then resuming the process. But, and I quote from Wikipedia,
"For many RISC processors this (interrupt) is the only technique provided, but CISC architectures such as x86 support additional techniques. One example is SYSCALL/SYSRET, SYSENTER/SYSEXIT (the two mechanisms were independently created by AMD and Intel, respectively, but in essence do the same thing). These are "fast" control transfer instructions that are designed to quickly transfer control to the OS for a system call without the overhead of an interrupt"
The answer to your question depends upon the underlying hardware (and sometimes operating system implementation). I will return to that in a bit.
In an operating system, what is the difference between a system call and an interrupt?
The purpose of an interrupt handler and a system call (and a fault handler) is largely the same: to switch the processor into kernel mode while providing protection from inadvertent or malicious access to kernel structures.
An interrupt is triggered by an asynchronous external event.
A system call (or fault or trap) is triggered synchronously by executing code.
Are all system calls interrupts? Are all interrupts system calls?
System calls are not interrupts because they are not triggered asynchronously by the hardware. A process continues to execute its code stream in a system call, but not in an interrupt.
That being said, Intel's documentation often conflates interrupts, system calls, traps, and faults under the term "interrupt."
Some processors treat system calls, traps, faults and interrupts largely the same way. Others (notably Intel) provide different methods for implementing system calls.
In processors that handle all of the above in the same way, each type of interrupt, trap, and fault has a unique number. The processor expects the operating system to set up a vector (array) of pointers to handlers. In addition, there are one or more handlers available for an operating system to implement system calls.
Depending upon the number of available handlers, the OS may have a separate handler for each system call or use a register value to determine what specific system function to execute.
In such a system, one can execute an interrupt handler synchronously the same way one invokes a system call.
For example, on the VAX, the CHMK #4 instruction invokes the 4th kernel-mode handler. In Intel land, there is an INT instruction that does roughly the same.
Intel processors have also long supported the SYSCALL mechanism, which provides a different way to implement system calls.
I'm implementing a PCIe driver, and I'd like to understand at what level interrupts can or should be enabled/disabled. I intentionally do not specify an OS, as I'm assuming this is relevant for any platform. By levels I mean the following:
OS specific interrupts handling framework
Interrupts can be disabled or enabled in the PCI/PCIe configuration space registers, e.g. COMMAND register
Interrupts can also be masked at the device level; for instance, we can configure the device not to trigger certain interrupts to the host
I understand that whatever interrupt type is being used on PCIe (INTx emulation, MSI or MSI-X), it has to be delivered to the host OS.
So my question really is: do we actually have to enable or disable interrupts at every layer, or is it sufficient only at the layer closest to the hardware, e.g. in the relevant PCI registers?
Disabling interrupts at the various levels usually has completely different purposes.
Disabling interrupts:
In the OS (really, this means in the CPU) - This is generally about avoiding race conditions. In particular, if state/memory corruption could occur during a particular section of code if the CPU happened to be interrupted, then that section of code will need to disable interrupt handling. Interrupt handlers must not acquire normal locks (by definition they can't be suspended), and they must not attempt to acquire a spin-lock that is held by the thread currently scheduled on the same CPU (because that thread is blocked from progressing by the very same interrupt handler!) so ensuring data safety with interrupt handlers can be tricky. Handling interrupts promptly is generally a good thing, so you want to absolutely minimise such sections in any code you write. Do as much of your interrupt handling in secondary interrupt handlers as possible to avoid such situations. Secondary interrupt handlers are really just callbacks on a regular OS thread which doesn't have any of the restrictions of a primary interrupt handler.
PCI/PCIe configuration - It's my understanding this is mainly about routing interrupts, and is something you normally do once when your driver loads (or is activated by a client) and again when your driver unloads (or is deactivated). This may also be affected by power management events. In some OSes, the PCI(e) level is actually handled for you when you activate PCI device interrupts via higher-level APIs.
On-device - This is usually an optimisation to avoid interrupting the CPU when it doesn't need to be interrupted. The most common scenario is that an event happens on the device, so an interrupt is generated. The driver's primary interrupt handler checks the device registers if the driver needs to do any processing. If so, it disables interrupts on the device, and schedules the driver's secondary interrupt handler to run. The OS eventually runs the secondary handler, which processes whatever information the device has provided, until it runs out of things to do. Then it enables interrupts again, checks once more if there's any work pending from the device and if there are none, it terminates. (If there are items to process in this last check, it re-disables interrupts and starts over from the beginning.) The idea is that until the secondary interrupt handler has finished processing, there really is no point triggering the primary interrupt handler, and a waste of resources, if additional events arrive, because the driver is already busy processing the event queue. The final check after re-enabling interrupts is to avoid a race condition between an event arriving and re-enabling interrupts.
I hope that answers your question.
I am trying to learn about RTOSes from scratch, and for this I use freeRTOS.org as a reference. I found this site to be the best resource for learning about RTOSes. However, I have some doubts that I was not able to find exact answers to.
1) How can I find out whether a device has real-time capability? E.g. some controllers do (TI Hercules) and others don't (MSP430).
2) Does that depend upon the architecture of the CORE (ARM Cortex-R CPU in TI Hercules TMS570)?
I know that these questions may seem like a nuisance, but I don't know how else to get them answered.
Thanks in advance
EDIT:
One more query I have: what is meant by the "OS" in RTOS? Does it mean the same as other OSes, or does it just contain the source code files for the APIs?
Figuring out whether a device has a "Real-Time" capability is somewhat arbitrary and depends on your project's timing requirements. If you have timing requirements that are very high, you'll want to use a faster microcontroller/processor.
Using an RTOS (e.g. FreeRTOS, eCOS, or uCOS-X) can help ensure that a given task will execute at a predictable time. The FreeRTOS website provides a good discussion of what operating systems are and what it means for an operating system to claim Real-Time capabilities. http://www.freertos.org/about-RTOS.html
You can also see from the ports pages of uC/OS-X and FreeRTOS that they can run on a variety target microcontrollers / microprocessors.
Real-time capability is a matter of degree. A 32-bit DSP running at 1 GHz has more real-time capability than an 8-bit microcontroller running at 16 MHz. The more powerful microcontroller could be paired with faster memories and ports and could manage applications requiring large amounts of data and computations (such as real-time video image processing). The less powerful microcontroller would be limited to less demanding applications requiring a relatively small amount of data and computations (perhaps real-time motor control).
The MSP430 has real-time capabilities and it's used in a variety of real-time applications. There are many RTOS that have been ported to the MSP430, including FreeRTOS.
When selecting a microcontroller for a real-time application you need to consider the data bandwidth and computational requirements of the application. How much data needs to be processed in what amount of time? Also consider the range and precision of the data (integer or floating point). Then figure out which microcontroller can support those requirements.
While the Cortex-R is optimised for hard real-time, that does not imply that other processors are not suited to real-time applications, or even better suited to a specific application. What you need to consider is whether a particular combination of RTOS and processor will meet the real-time constraints of your application; and even then the most critical factor is your software design rather than the platform.
The main goal you want to obtain from an RTOS is determinism, most other features are already available in most other non-RTOS operating systems.
The -OS part in RTOS means Operating System, simply put, and as all other operating systems, RTOSes provide the required infrastructure for managing processor resources so you work on a higher level when designing your application. For accessing those functionalities the OS provides an API. Using that API you can use semaphores, message queues, mutexes, etc.
An RTOS has one requirement to be an RTOS: it must be pre-emptive. This means that it must support task priorities, so that when a higher-priority task becomes ready to run (one of the possible task states), the scheduler switches the current context to that task.
This operation has two implications. One is the requirement of a precise, dedicated timer, the tick timer; the other is that there is considerable memory-operation overhead during context switching. The current CPU state (or CPUs' state, in the case of multi-core SoCs) must be copied into the pre-empted task's context information, and the new ready-to-run task's context must be restored in the CPU.
ARM processors already provide support for the System Timer, which is intended for dedicated use as an OS tick timer. Not so long ago, the tick timer had to be implemented with a regular, non-dedicated timer.
One optimization in cores designed for RTOSes with real-time capabilities is the ability to save/restore the CPU context state with minimum code, so it results in much less execution time than that in regular processors.
It is possible to implement an RTOS on nearly any processor, and there are some implementations targeted at resource-constrained cores. You mainly need a timer with interrupt capability and some RAM. If the CPU is very fast you can run the OS tick at high rates (sub-millisecond in some real-time applications with DSPs), or at a lower rate, like just 10~100 ticks per second, for applications with low timing requirements on low-end CPUs.
Some theoretical knowledge would be quite useful too, e.g. figuring out whether a given task set is schedulable under given scheduling approach (sometimes it may not), differences between static-priority and dynamic-priority scheduling, priority inversion problem, etc.
This may be very foolish question to ask.
However, I want to clarify my doubts, as I am new to this.
As per my understanding, the CPU executes the instructions of a process step by step, incrementing the program counter.
Now suppose one of those instructions is a system call: why do we need to raise a software interrupt when this instruction is encountered? Can't this system call (a sequence of instructions) be executed just as other instructions are? As far as I understand, interrupts are there to signal certain asynchronous events, but here the system call is part of the process's instructions, which is not asynchronous.
It doesn't require an interrupt. You could make an OS which uses a simple call. But most don't for several reasons. Among them might be:
On many architectures, interrupts elevate or change the CPU's access level, allowing the OS to implement protection of its memory from the unprivileged user code.
Preemptive operating systems already make use of interrupts for scheduling processes. It might be convenient to use the same mechanism.
Interrupts are something present on most architectures. Alternatives might require significant redesign across architectures.
Here is one example of a "system call" which doesn't use an interrupt (if you define a system call as requesting functionality from the OS):
Older versions of ARM do not provide atomic instructions to increment a counter. This means that an atomic increment requires help from the OS. The naive approach would be to make it a system call that renders the process uninterruptible during the load-add-store sequence, but this has a lot of overhead from the interrupt handler. Instead, the Linux kernel maps a small bit of code into every process at a fixed address. This code contains the atomic increment instructions and can be called directly from user code. The kernel scheduler takes care of ensuring that any operations interrupted in this block are properly restarted.
First of all, system calls are synchronous software interrupts, not asynchronous ones. When the processor executes the trap machine instruction to enter kernel space, some kernel registers are changed by the interrupt handler functions. Modifying these registers requires privileged-mode execution, i.e. they cannot be changed by user-space code.
A user-space program cannot read data directly from disk, as it doesn't have control over the device driver. Nor should a user-space program have to bother with driver code; communication with the device driver should take place through kernel code itself. We tend to believe that kernel code is pristine and entirely trustworthy, while user code is always suspect.
Hence, privileged instructions are required to change the contents of these registers and/or access driver functionality, and the user cannot invoke system-call functions as normal function calls. The processor needs to know whether you are in kernel mode before it allows access to these devices.
I hope this is clear to some extent.