ISR execution in a non-preemptive system

In a non-preemptive system, after an ISR finishes execution, will the interrupted task continue execution even if a higher-priority task was activated?

This answer is specific to FreeRTOS and may not be relevant to other RTOSes.
FreeRTOS is preemptive by default. However, it can also be configured to be non-preemptive via this option in FreeRTOSConfig.h:
#define configUSE_PREEMPTION 0
Normally, returning from an ISR does not trigger a context switch. But in preemptive systems it's often desirable, so in most FreeRTOS examples you see portYIELD_FROM_ISR(xHigherPriorityTaskWoken); at the end of the ISR, which triggers a context switch if xHigherPriorityTaskWoken is pdTRUE.
xHigherPriorityTaskWoken is initialized to pdFALSE at the start of the ISR (manually, by the user), and operations which can cause a context switch, such as vTaskNotifyGiveFromISR(), xQueueSendToBackFromISR(), etc., take it as an argument and set it to pdTRUE if a context switch is required after the call.
In a non-preemptive configuration, you simply pass NULL to such system calls instead of xHigherPriorityTaskWoken, and do not call portYIELD_FROM_ISR() at the end of the ISR. In this case, even if a higher priority task is awakened by the ISR, execution returns to the currently running task and remains there until this task yields or makes a blocking system call.
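For reference, here is a minimal sketch of the usual preemptive-configuration ISR pattern. The handler name, the queue handle and read_uart_byte() are assumptions for illustration; the FreeRTOS calls themselves are the real API:

#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

static QueueHandle_t xUartRxQueue;   /* assumed created elsewhere with xQueueCreate() */
extern uint8_t read_uart_byte(void); /* hypothetical driver helper */

void UART_IRQHandler(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    uint8_t ucByte = read_uart_byte();

    /* May unblock a higher-priority task waiting on the queue; if so,
       it sets xHigherPriorityTaskWoken to pdTRUE. */
    xQueueSendToBackFromISR(xUartRxQueue, &ucByte, &xHigherPriorityTaskWoken);

    /* Request a context switch on exit only if one is needed. In a
       non-preemptive build, pass NULL above and omit this call. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}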
You can mix the ISR yield mechanism with either preemption setting. For example, you can force a context switch (preemption) from an ISR even when configUSE_PREEMPTION is 0, but this may cause problems if the interrupted/preempted task doesn't expect it to happen, so I don't recommend it.

Related

Why disable interrupt before context switch

I was reading the synchronization chapter of an OS textbook, and it says:
In particular,
most implementations of thread systems enforce the invariant that a thread
always disables interrupts before performing a context switch
Hence, Aquire() first disables interrupts before going to sleep.
My question is: why must interrupts be disabled before a context switch? Is it used to protect the registers and keep Aquire() atomic?
Aquire() is used before the critical section as:
Aquire() {
    disable interrupt;
    if (is busy) {
        put on wait queue;
        sleep();
    }
    else set_busy;
    enable interrupt;
}
Going to sleep performs a context switch, so why should we disable interrupts during the context switch? Can we change the code to:
Aquire() {
    disable interrupt;
    if (is busy) {
        enable interrupt;
        put on wait queue;
        sleep();
    }
    else set_busy;
    enable interrupt;
}
That is, enable interrupts in thread A instead of letting some other thread B enable them after the context switch (after A goes to sleep)?
Typically, a synchronization primitive requires updating multiple data locations simultaneously. For example, a semaphore Acquire might require changing the state of the current thread to blocked, updating the count of the semaphore, and removing the current thread from one queue and placing it on another. Since simultaneously isn't really possible (*), it is necessary to devise an access protocol to simulate this. In a single-CPU system, the easiest way to do this is to disable interrupts, perform the updates, then re-enable interrupts. All software following this protocol will see the updates at once.
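As a minimal single-CPU sketch of that disable/update/re-enable protocol (all the names here, such as sem_t, disable_interrupts(), current and schedule(), are assumptions for illustration, not a real kernel API):

typedef struct {
    int count;              /* permits remaining */
    thread_queue_t waiters; /* threads blocked on this semaphore */
} sem_t;

void sem_acquire(sem_t *s)
{
    unsigned long flags = disable_interrupts(); /* begin atomic section */
    if (s->count > 0) {
        s->count--;                             /* fast path: take a permit */
        restore_interrupts(flags);
        return;
    }
    /* Slow path: block. The state change, queue update and scheduler
       call all happen with interrupts still disabled, so no interrupt
       handler can observe a half-updated semaphore. */
    current->state = BLOCKED;
    queue_push(&s->waiters, current);
    schedule();                /* switches away; we resume here on release */
    restore_interrupts(flags);
}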
Multi-CPU systems typically need something extra to keep threads on separate CPUs from interfering. Disabling interrupts is insufficient, since that only affects the current CPU. The something extra is typically a spin lock, which behaves much like a mutex or binary semaphore, except that the caller sits in a retry loop until it becomes available.
Even in a multi-CPU system, the operation has to be performed with interrupts disabled. Imagine thread #0 has acquired a spinlock on CPU #0; then an interrupt on CPU #0 causes thread #1 to preempt it, and thread #1 then attempts to acquire the same spinlock. Many scenarios amount to this.
(*) Transactional memory provides something like this, but with limited applicability, and the implementation has to provide an independent fallback to ensure forward progress. Also, since transactions do not nest, they really need to disable interrupts as well.

Can a process/thread run while interrupts are disabled?

I have the following "pretend" implementation of a semaphore's wait() operation. Assume a single-core, single-processor environment:
wait() {
    Disable interrupts
    sem->value--
    if (sem->value < 0) {
        save_state(current);                    // "Manually" save the context of the currently running process
        State[current] = Blocked;               // Block it
        Queue current to block queue;
        current = Select from the ready queue;  // Select another process to run
        State[current] = Running;               // Put the retrieved process in the running state
        restore_state(current);                 // "Manually" restore the context of the new process
    }
    Enable interrupts
}
The implementation is to test our knowledge on disabling interrupts to protect the critical section. One of the questions is to determine whether the new process that is selected from the ready queue in wait() runs while interrupts are disabled or after they are enabled.
I'm struggling with the answer as I see it in two ways.
(Obvious answer): The process is allowed to run while interrupts are disabled since clearly this is what the code is intended to do. But I have my doubts...
When interrupts are disabled, the kernel is not aware of any changes made to the running/blocked states. The register and other resource allocations can only be done after interrupts have been enabled.
Any tips would be greatly appreciated.
If a process/thread is able to run with interrupts disabled, then that process/thread is able to prevent the operating system from interrupting it, is therefore able to hog all CPU time, and can therefore become an unstoppable, malicious denial-of-service attack.
For some CPUs under some conditions (e.g. 80x86 with IOPL set to 3) it is possible for an OS to allow a process/thread to disable IRQs; and it is possible to let a process/thread run with IRQs disabled but without the ability to enable/disable IRQs itself (e.g. by disabling IRQs in the kernel just before returning to user-space). But because these are security disasters, very few operating systems allow either.
However, semaphores also involve interaction with the scheduler (blocking a task until it can acquire the semaphore, and unblocking a task when it can acquire the semaphore); and the scheduler (its "ready to run" queues, process/thread states, etc.) and the full process/thread state (e.g. special "kernel only" registers, like whichever register controls which virtual address space is currently selected) are typically accessible only from kernel code, not from user-space by a process/thread.
In other words, it's reasonable (ignoring bizarre and unlikely cases) to assume that over 50% of the code in your wait() function cannot be implemented in user-space and must be implemented in the kernel; and therefore it's reasonable to assume that your wait() function is intended to be implemented in the kernel (and not in user-space, by a process or thread).

The reason why Task deletion of uCOS should not occur during ISR

I'm modifying some functionality (mainly scheduling) of uC/OS-II,
and I found out that the OSTaskDel function does nothing when it is called from an ISR.
Though I've learned some basic OS features, I really don't understand why that should be prohibited.
All it does is withdraw the task from the ready list and release acquired resources like the TCB or semaphores...
Is there any reason for this to be banned while handling an interrupt?
It is not clear from the documentation why it is prohibited in this case, but OSTaskDel() explicitly calls OS_Sched(), and in an ISR this should only happen when the outermost nested interrupt handler exits (handled by OSIntExit()).
I don't think the following is advisable, because there may be other reasons why this is prohibited, but you could remove this check:
if (OSIntNesting > 0) {
    return (OS_TASK_DEL_ISR);
}
then make the OS_Sched() call conditional as follows:
if (OSIntNesting == 0) {
    OS_Sched();
}
If this dies horribly, remember I said it was ill-advised!
This operation will extend your interrupt processing time in any case, so it is probably a bad idea for that reason alone.
It is a bad idea in general (not just from an ISR) to asynchronously delete another task regardless of that task's state or resource usage. uC/OS-II provides the OSTaskDelReq() function to manage task deletion in a way that allows a task to delete itself on request and therefore correctly release all its resources (see the sketch below). Even without that, sending a request via the task's normal IPC mechanisms is usually better (and more portable).
If a task is not designed for self-deletion on demand, then you might simply use OSTaskSuspend().
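A minimal sketch of that request-based deletion pattern; the task body, do_work() and release_my_resources() are assumptions for illustration, while OSTaskDelReq(), OS_PRIO_SELF and OSTaskDel() are the real uC/OS-II API:

void MyTask(void *p_arg)
{
    (void)p_arg;
    for (;;) {
        do_work();  /* hypothetical application work */

        /* Check whether another task has asked us to die; if so, clean
           up at a point where we hold no resources, then delete self. */
        if (OSTaskDelReq(OS_PRIO_SELF) == OS_TASK_DEL_REQ) {
            release_my_resources();   /* hypothetical cleanup */
            OSTaskDel(OS_PRIO_SELF);  /* deletes self; does not return */
        }
    }
}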
Generally, you cannot do a few things in ISRs:
block on a semaphore and the like
block while acquiring a spin lock, if it's a single-CPU system
cause a page fault that has to be resolved by the virtual memory subsystem (one backed by on-disk storage, that is)
If you do any of the above in an ISR, you'll have a deadlock.
OSTaskDel() is probably doing some of those things.

Restrictions while kernel is running an ISR routine

What are some of the important dos and don'ts inside kernel mode and an ISR routine?
For example -
Is context switching disabled while running an interrupt handler?
Can a context switch happen when a process is inside a critical section?
What circumstances inside kernel mode merit disabling further interrupts?
How come a process switch can occur on a page fault, where a process fetches data from the disk, but does not happen during other occurrences of interrupts?
How do you classify whether an executable path can be interrupted/rescheduled/preempted?
What are the other things one has to remember when a process is in kernel mode or handling an ISR routine?
In short: NO CONTEXT SWITCH, EVER.
This means:
No preemption
No taking of mutexes (use spin locks instead, and ensure your non-ISR counterparts acquire them with spin_lock_irqsave to disable IRQs)
No calls to any kernel function that can sleep (check the function's documentation; some functions also have _cansleep variants).
A process switch can occur on a page fault, but it happens after the corresponding ISR has been processed. Basically, a path can be scheduled if it is not an ISR and if it does not hold a locked spinlock. If you hold a spinlock, you must avoid sleeping until it is released.
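A minimal sketch of that spin_lock/spin_lock_irqsave pairing, using the real Linux kernel API; the shared counter and handler names are assumptions for illustration:

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(dev_lock);
static int shared_count;

/* ISR side: a plain spin_lock suffices here, because this handler
   cannot be re-entered on the same CPU while it runs. */
static irqreturn_t my_isr(int irq, void *dev_id)
{
    spin_lock(&dev_lock);
    shared_count++;
    spin_unlock(&dev_lock);
    return IRQ_HANDLED;
}

/* Process-context side: must disable IRQs while holding the lock,
   or the ISR could fire on this CPU and deadlock on the same lock. */
void update_from_task(void)
{
    unsigned long flags;
    spin_lock_irqsave(&dev_lock, flags);
    shared_count++;
    spin_unlock_irqrestore(&dev_lock, flags);
}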
Since ISRs are so restricted, the handling of IRQs is usually split between a top half (which runs in ISR context and does the critical work) and a bottom half (which runs later as a kernel thread, does whatever can be delayed, and can sleep). See this page for more information:
http://www.makelinux.net/ldd3/chp-10-sect-4
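A minimal sketch of that top-half/bottom-half split using the real Linux workqueue API; the device-specific helpers are assumptions for illustration:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

/* Bottom half: runs later in process context, so it may sleep. */
static void my_bottom_half(struct work_struct *work)
{
    process_buffered_data();  /* hypothetical deferred work */
}
static DECLARE_WORK(my_work, my_bottom_half);

/* Top half: runs in ISR context, so it does only the urgent part
   and defers the rest. */
static irqreturn_t my_top_half(int irq, void *dev_id)
{
    grab_urgent_data();       /* hypothetical: read device registers */
    schedule_work(&my_work);  /* queue the bottom half */
    return IRQ_HANDLED;
}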

Interrupt masking: why?

I was reading up on interrupts. It is possible to suspend non-critical interrupts via a special interrupt mask. This is called interrupt masking. What I don't know is when/why you might want or need to temporarily suspend interrupts. Possibly for semaphores, or for programming in a multi-processor environment?
The OS does that when it prepares to run its own "let's orchestrate the world" code.
For example, at some point the OS thread scheduler has control. It prepares the processor registers and everything else that needs to be done before it lets a thread run so that the environment for that process and thread is set up. Then, before letting that thread run, it sets a timer interrupt to be raised after the time it intends to let the thread have on the CPU elapses.
After that time period (quantum) has elapsed, the interrupt is raised and the OS scheduler takes control again. It has to figure out what needs to be done next. To do that, it needs to save the state of the CPU registers so that it knows how to undo the side effects of the code it executes. If another interrupt is raised for any reason (e.g. some async I/O completes) while state is being saved, this would leave the OS in a situation where its world is not in a valid state (in effect, saving the state needs to be an atomic operation).
To avoid being caught in that situation, the OS kernel therefore disables interrupts while any such operations that need to be atomic are performed. After it has done whatever needs doing and the system is in a known state again, it reenables interrupts.
I used to program on an ARM board that had about 10 interrupts that could occur. Each particular program that I wrote was never interested in more than 4 of them. For instance, there were 2 timers on the board, but my programs only used 1, so I would mask the 2nd timer's interrupt. If I didn't mask that timer, it might have been enabled and kept generating interrupts, which would slow down my code.
Another example was that I would use the UART receive REGISTER full interrupt and so would never need the UART receive BUFFER full interrupt to occur.
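As a sketch of what that masking amounts to at the hardware level (the register address and bit position below are invented for illustration; real values come from the board's reference manual):

#define INT_MASK_REG    (*(volatile unsigned int *)0x4000A010)  /* hypothetical address */
#define TIMER2_IRQ_BIT  (1u << 5)                               /* hypothetical bit */

static inline void mask_timer2_irq(void)
{
    /* Clearing the enable bit stops TIMER2 from raising interrupts. */
    INT_MASK_REG &= ~TIMER2_IRQ_BIT;
}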
I hope this gives you some insight as to why you might want to disable interrupts.
In addition to answers already given, there's an element of priority to it. There are some interrupts you need or want to be able to respond to as quickly as possible and others you'd like to know about but only when you're not so busy. The most obvious example might be refilling the write buffer on a DVD writer (where, if you don't do so in time, some hardware will simply write the DVD incorrectly) versus processing a new packet from the network. You'd disable the interrupt for the latter upon receiving the interrupt for the former, and keep it disabled for the duration of filling the buffer.
In practice, quite a lot of CPUs have interrupt priority built directly into the hardware. When an interrupt occurs, the disable flags are set for lesser interrupts (and often for that same interrupt) at the same time as the interrupt vector is read and the jump to the relevant address occurs. Dictating that receipt of an interrupt also implicitly masks that interrupt until the end of the interrupt handler has the nice side effect of loosening restrictions on interrupting hardware. E.g. you can simply say that a high signal triggers the interrupt, and leave the external hardware to decide how long it wants to hold the line high without worrying about inadvertently triggering multiple interrupts.
In many antiquated systems (including the Z80 and 6502) there tend to be only two levels of interrupt: maskable and non-maskable, which I think is where the language of enabling or disabling interrupts comes from. But even as far back as the original 68000 you've got eight levels of interrupt and a current priority level in the CPU that dictates which levels of incoming interrupt will actually be allowed to take effect.
Imagine your CPU is in the "int3" handler when "int2" occurs, and the newly arrived "int2" has a lower priority than "int3". How would we handle this situation?
One way is to mask out the lower-priority interrupts while handling "int3". That is, we see "int2" signaling the CPU, but the CPU is not interrupted by it. After we finish handling "int3", we return from "int3" and unmask the lower-priority interrupts.
The place we return to can be:
Another process (in a preemptive system)
The process that was interrupted by "int3" (in a non-preemptive or a preemptive system)
An interrupt handler that was interrupted by "int3", say "int1"'s handler.
In cases 1 and 2, because we have unmasked the lower-priority interrupts and "int2" is still signaling the CPU ("hi, there is something for you to handle immediately"), the CPU is interrupted again, while executing instructions from a process, to handle "int2".
In case 3, if the priority of "int2" is higher than "int1"'s, the CPU is interrupted again, while executing instructions from "int1"'s handler, to handle "int2".
Otherwise, "int1"'s handler runs without interruption (because we are also masking out the interrupts with priority lower than "int1"), and the CPU returns to a process after handling "int1" and unmasking. At that point, "int2" is handled.
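A pseudocode sketch of that mask/handle/unmask sequence; the helper names here are assumptions, not any specific CPU's interface:

void handle_interrupt(int level)
{
    int old_mask = get_priority_mask();
    set_priority_mask(level);     /* mask this level and everything lower */
    run_handler(level);           /* e.g. int2 stays pending while int3 runs */
    set_priority_mask(old_mask);  /* unmask; a pending lower interrupt can now fire */
    /* the return-from-interrupt then resumes whatever was running before */
}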