On the Cortex-M3 platform, why does UCOS-III not use SVC to trigger PendSV? - cortex-m3

Recently I have been reading the source code of UCOS-III, and I have a question about how UCOS-III performs task switching when running on the Cortex-M3 platform. It uses PendSV for task switching by directly writing the SCB_ICSR register (Interrupt Control and State Register), but accessing SCB_ICSR requires the privileged operation level. This means the processor is running in Thread mode at the privileged level even outside exceptions and interrupts, which I don't think is safe. Why does UCOS-III not use SVC to trigger PendSV? Is it a matter of efficiency? Could somebody please explain this for me? Thanks.
Background:
Software:UCOS-III
Hardware:Cortex-M3(STM32F103)
Code:
.thumb_func
OSStartHighRdy:
    LDR     R0, =NVIC_SYSPRI14          # Set the PendSV exception priority
    LDR     R1, =NVIC_PENDSV_PRI
    STRB    R1, [R0]
    MOVS    R0, #0                      # Set the PSP to 0 for initial context switch call
    MSR     PSP, R0
    LDR     R0, =OS_CPU_ExceptStkBase   # Initialize the MSP to OS_CPU_ExceptStkBase
    LDR     R1, [R0]
    MSR     MSP, R1
    LDR     R0, =NVIC_INT_CTRL          # Trigger the PendSV exception (causes context switch)
    LDR     R1, =NVIC_PENDSVSET
    STR     R1, [R0]
    CPSIE   I                           # Enable interrupts at processor level
I think this method would be better:
Cortex-M3 task switch using SVC and PendSV:
1. Task A calls SVC for task switching (for example, waiting for some work to complete).
2. The OS receives the request, prepares for the context switch, and pends the PendSV exception.
3. When the CPU exits SVC, it enters PendSV immediately and does the context switch.
4. When PendSV finishes and returns to Thread level, it executes Task B.
5. An interrupt occurs and the interrupt handler is entered.
6. While running the interrupt handler routine, a SysTick exception (for the OS tick) takes place.
7. The OS carries out the essential operations, then pends the PendSV exception and gets ready for the context switch.
8. When the SysTick exception exits, it returns to the interrupt service routine.
9. When the interrupt service routine completes, PendSV starts and does the actual context switch operations.
10. When PendSV is complete, the program returns to Thread level; this time it returns to Task A and continues processing.

Unless you are using the MPU extensions, it really does not make that much difference whether you run in user or privileged mode. Yes, it is a bit safer to run in user mode, since you cannot modify all the registers, but then you have to provide an SVC call to be able to raise the privilege level, plus the ability to create tasks that are either user mode or privileged mode. I expect this is provided when you have the MPU extensions.
I don't know UCOS-III, but I would assume that all tasks run privileged, as in most Cortex-M RTOSes, unless the MPU is supported.

SAFERTOS, for example, uses the MPU on the Cortex-M3.

Related

Azure RTOS ThreadX with STM32L476VG

I would like to install ThreadX on an STM32L476VG. I am quite new to RTOS programming.
While setting up some simple applications, I ran into a HardFault whenever I called the tx_thread_resume function within an interrupt routine (let's say the USART3 interrupt).
I figured out that this happens when SVC 0 is called at the end of the tx_thread_resume function. The problem can be solved by setting the priority of the interrupt whose routine makes the tx_thread_resume call to 0xF (the USART1 interrupt in this case).
I suspect this is because ThreadX sets the priority of the SVC exception to 0xF, and an SVC call made at a lower priority inside the not-yet-terminated USART1 interrupt routine causes the HardFault. But I can't find any evidence for this, neither in the ThreadX documentation nor in the STM32L4 documentation on the SVC call.
Is my suspicion correct that the illegal SVC call causes the HardFault, or does the reason for the HardFault lie elsewhere?
Thank you for your time.
We don't actually call SVC in the normal ThreadX Cortex-M ports (only in the Modules ports). We do set the PENDSV bit in ICSR. This is done in tx_thread_system_return.s which is called at the end of tx_thread_system_resume.c.
By default, ThreadX sets PendSV to the lowest priority (0xFF) in tx_initialize_low_level.s, which should not be changed. All of your interrupts should be higher priority than this. What is intended to happen is that once your (e.g. USART1) ISR completes, the Cortex-M hardware detects the pending PendSV exception and does tail-chaining, immediately processing the PendSV interrupt once the USART1 interrupt completes.
I do not believe your HardFault is caused by the PendSV interrupt. Can you step through the tx_thread_system_resume.c and tx_thread_system_return.s functions and find exactly where the HardFault is hit?
Also, what is the size of the thread stack that has been interrupted?

Can a process/thread run while interrupts are disabled?

I have the following "pretend" implementation of a semaphore's wait() operation. Assume a single-core, single-processor environment:
wait() {
    Disable interrupts
    sem->value--
    if (sem->value < 0) {
        save_state(current)                   // "Manually" save the context of the current running process
        State[current] = Blocked              // Block it
        Queue current to blocked queue
        current = Select from the ready queue // Select another process to run
        State[current] = Running              // Put the retrieved process in the running state
        restore_state(current)                // "Manually" restore the context of the new process
    }
    Enable interrupts
}
The implementation is to test our knowledge on disabling interrupts to protect the critical section. One of the questions is to determine whether the new process that is selected from the ready queue in wait() runs while interrupts are disabled or after they are enabled.
I'm struggling with the answer as I see it in two ways.
(Obvious answer): The process is allowed to run while interrupts are disabled since clearly this is what the code is intended to do. But I have my doubts...
When interrupts are disabled the kernel is not aware of any changes made to the running state/blocked state. The register and other resource allocations can only be done after interrupts have been enabled.
Any tips would be greatly appreciated.
If a process/thread is able to run with interrupts disabled, then that process/thread is able to prevent the operating system from interrupting it, and therefore able to hog all CPU time, and can therefore be an unstoppable malicious denial of service attack.
For some CPUs under some conditions (e.g. 80x86 with IOPL set to 3) it is possible for an OS to allow a process/thread to disable IRQs, and is possible to let a process/thread run with IRQs disabled but without the ability to enable/disable IRQs (e.g. disable IRQs in the kernel just before returning to user-space); but because they're security disasters very few operating systems will allow either.
However, semaphores also involve interaction with the scheduler (blocking a task until it can acquire the semaphore, and unblocking a task when it can acquire the semaphore); and the scheduler (its "ready to run" queues, process/thread states, etc.) and the full process/thread state (e.g. special "kernel only" registers, like whichever register controls which virtual address space is currently selected) are also typically accessible only from kernel code (and not allowed to be accessed from user-space by a process/thread).
In other words, it's reasonable (ignoring bizarre and unlikely cases) to assume that over 50% of the code in your wait() function cannot be implemented in user-space and must be implemented in the kernel; and therefore it's reasonable to assume that your wait() function is intended to be implemented in the kernel (and not in user-space, by a process or thread).

how does the processor know an instruction is making a system call

system call -- an instruction that generates an interrupt that causes the OS to gain control of the processor.
So if a running process issues a system call (e.g. create/terminate/read/write etc.), an interrupt is generated which causes the kernel to take control of the processor, which then executes the required interrupt handler routine. Correct?
Then can anyone tell me how the processor knows that this instruction is supposed to block the process, go to privileged mode, and bring in kernel code?
I mean, as a programmer I would just type stream1=system.io.readfile(ABC) or something, which translates to open and read file ABC.
Now, what is monitoring the execution of this process? Is there a magical power in the CPU to detect this?
From what I have read, a processor can only execute one process at a time, so where is the monitor program running?
How can the kernel monitor whether a system call is made when it is not in the running state?
Or does the computer have a system call instruction table which it compares with before executing any instruction?
Please help. Thank you.
The kernel doesn't monitor the process to detect a system call. Instead, the process generates an interrupt which transfers control to the kernel, because that's what software-generated interrupts do according to the instruction set reference manual.
For example, on 32-bit x86 Linux the process stuffs the syscall number in eax and runs an int 0x80 instruction, which generates interrupt 0x80. The CPU reacts to this by looking in the Interrupt Descriptor Table to find the kernel's handler for that interrupt. This handler is the entry point for system calls.
So, to call _exit(0) (the raw system call, not the glibc exit() function which flushes buffers) in 32-bit x86 Linux:
movl $1, %eax # The system-call number. __NR_exit is 1 for 32-bit
xor %ebx,%ebx # put the arg (exit status) in ebx
int $0x80
Let's analyse each question you have posed.
Yes, your understanding is correct.
If any process/thread wants to get into the kernel, there are only two mechanisms: executing a TRAP machine instruction, or an interrupt. Interrupts are usually generated by hardware, so any process/thread that wants to get into the kernel goes through a TRAP. When the process executes a TRAP, it issues an interrupt (usually a software interrupt) to the kernel. Along with the trap you also pass the system call number, which acts as input to the interrupt handler inside the kernel. Based on the system call number, the kernel finds the system call function in the system call table and executes it. The kernel sets the mode bits in the CS register as soon as it starts handling the interrupt, to tell the processor that the current instructions are privileged; that is how the processor knows whether the current instruction is privileged or not. Once the system call function finishes executing, the kernel executes an IRET instruction, which restores the mode bits in the CS register so that the instructions from then on run in user mode.
There is no magical power inside the processor; the switching between user and kernel context just makes it seem magical. It is simply a piece of hardware with the capability to execute instructions at a very high rate.
4, 5, 6: the answers to these questions follow from the cases above.
I hope I've answered your questions up to some extent.
The interrupt controller signals the CPU that an interrupt has occurred and passes the interrupt number (interrupts are assigned priorities to handle simultaneous interrupts), which determines which handler to start. The CPU jumps to the interrupt handler; when the handler is done, the program state is reloaded and execution resumes.
[Reference: Silberchatz, Operating System Concepts 8th Edition]
What you're looking for is the mode bit. Basically, there is a register called the CS register. Normally its privilege level is 3 (user mode); for privileged instructions, the kernel sets it to 0. Looking at this value, the processor knows which kind of instruction it is executing. If you're interested in digging deeper, please refer to this excellent article.
Other Ref.
Where is mode bit
Modern hardware supports multiple user sessions. If your hardware supports multi-user mode, it provides a mechanism called an interrupt. An interrupt basically stops the execution of the current code to execute other code (e.g. kernel code).
Which code is executed is decided by parameters that are passed to the interrupt by the code that issues it. The hardware raises the run level, loads the kernel code into memory, and forces the CPU to execute that code. When the kernel code returns, it again directly informs the hardware, and the run level is decreased.
The hardware then restores the CPU state from before the interrupt and sets the CPU to the next line of the code that issued the interrupt. Done.
Since the code actively calls the hardware, which in turn actively calls the kernel, no monitoring needs to be done by the kernel itself.
Side note:
Try to keep your question short. Make clear what you want. The first answer was correct for the question you posted; you just didn't phrase it well. Make it clear that you are new to the topic and need a detailed explanation of basic concepts instead of explaining what you understood so far, and don't use caps lock.
Please accept the answer cnicutar provided. Thank you.

Where to return from an interrupt

I've read (and studied) about Interrupt Handling.
What I always fail to understand is how we know where to return to (PC/IP) from the interrupt handler.
As I understand it:
An Interrupt is caused by a device (say the keyboard)
The relevant handler is called - under the running process. That is, no context switch to the OS is performed.
The Interrupt Handler finishes, and passes control back to the running application.
The process depicted above, which is my understanding of interrupt handling, takes place within the current running process's context. So it's akin to a method call rather than a context switch.
However, since we didn't actually make a CALL to the interrupt handler, we didn't have a chance to push the current IP onto the stack.
So how do we know where to jump back to from an interrupt? I'm confused.
Would appreciate any explanation, including one-liners that simply point to a good pdf/ppt addressing this question specifically.
[I'm generally referring to above process under Linux and C code - but all good answers are welcomed]
It's pretty architecture dependent.
On Intel processors, the interrupt return address is pushed on the stack when an interrupt occurs. You would use an iret instruction to return from the interrupt context.
On ARM, an interrupt causes a processor mode change (to the INT, FIQ, or SVC mode, for example), saving the current CPSR (current program status register) into the SPSR (saved program status register), putting the current execution address into the new mode's LR (link register), and then jumping to the appropriate interrupt vector. Therefore, returning from an interrupt is done by moving the SPSR into the CPSR and then jumping to an address saved in LR - usually done in one step with a subs or movs instruction:
movs pc, lr
When an interrupt is triggered, the CPU pushes several registers onto the stack, including the instruction pointer (EIP) of the code that was executing before the interrupt. You can put iret at the end of your ISR to pop these values and restore EIP (as well as CS, EFLAGS, and, on a privilege change, SS and ESP).
By the way, interrupts aren't necessarily triggered by devices. In Linux and DOS, user-space programs use interrupts (via int) to make system calls. Some kernel code uses interrupts too, for example intentionally triple-faulting in order to force a shutdown.
The interrupt triggering mechanism in the CPU pushes the return address on the stack (among other things).

How to save value of the user stack pointer into variable from the IRQ mode

I am trying to build simple and small preemptive OS for the ARM processor (for experimenting with the ARM architecture).
I have a TCB that holds a pointer to the proper thread's stack, which I update/read in my dispatch() method - something like this (mixed C and assembly):
asm("
    ldr r5, =oldSP
    str sp, [r5]    @ save the current stack pointer into oldSP
");
myThread->sp = oldSP;
myThread = getNewThread();
newSP = myThread->sp;
asm("
    ldr r5, =newSP
    ldr sp, [r5]    @ load the new thread's stack pointer
");
When I call this dispatch() from user mode (an explicit call), everything works fine - threads lose and regain the processor as they should.
However, I am trying to build a preemptive OS, so I need the timer IRQ to call dispatch - and that is my problem: in IRQ mode, the r13_usr register is hidden, so I can't access it. I can't change to SVC mode either - it is hidden there, too.
One solution I see is switching to user mode (after I enter the dispatch method), updating/changing the sp fields, and switching back to IRQ mode to continue where I left off. Is that possible?
Another solution is to try not to enter IRQ mode again - I just handle the hardware side (set the proper status bit in the timer peripheral) and call dispatch() (still in IRQ mode), in which I mask the timer interrupt, change to user mode, do the context switch, unmask the timer interrupt and continue. The resumed thread should continue where it was suspended (before entering the IRQ). Is this correct? (This would be correct if, on interrupt, the processor pushed r4-r11 and lr onto the user stack, but I think I am wrong here...)
Thank you.
I think I might have answered a similar question here: "ARM - access R13 and R14 from Supervisor Mode"
In your case, just use "IRQ mode" instead of "Supervisor mode", but I think the same principle applies.
Long & short of it, if you switch from IRQ to user mode, that's a one-way trap door, you can't simply switch back to IRQ mode under software control.
But by manipulating the CPSR and switching to system mode, you can get r13_usr and then switch back to the previous mode (in your case, IRQ mode).
Normally you set all of this up at boot: the various stack registers, handlers, etc. If there is a reason to go to user mode and get out of it, you can use swi from user mode and have the swi handler do whatever it was you wanted to do in user mode, or task-switch to an svc-mode handler, or something like that.
Which ARM variant do you use? On modern variants like the Cortex-M3 you have the MRS/MSR instructions. This way you can access the MSP and PSP registers by moving them to/from a general-purpose register.
CMSIS even defines __get_MSP() and __get_PSP() as C functions, as well as their __set[...] counterparts.
EDIT: This seems to work in Thumb-2 only. Sorry for the noise.