How to save the value of the user stack pointer into a variable from IRQ mode - operating-system

I am trying to build a small, simple preemptive OS for an ARM processor (for experimenting with the ARM architecture).
Each TCB has a pointer to its thread's stack, which I update/read in my dispatch() method - something like this (mixed C and assembly):
asm("
ldr r5, =oldSP
ldr sp, [r5]
");
myThread->sp = oldSP;
myThread = getNewThread();
newSP = myThread->sp
asm("
ldr r5, =newSP
str sp, [r5]
");
When I call dispatch() explicitly from user mode, everything works fine - threads give up and regain the processor as they should.
However, since the OS is meant to be preemptive, I need the timer IRQ to call dispatch() - and that is my problem: in IRQ mode the r13_usr register is banked out, so I can't access it. I can't switch to SVC mode either - it is hidden there, too.
One solution I can see is switching to user mode (after entering the dispatch method), updating the sp fields, and switching back to IRQ mode to continue where I left off. Is that possible?
Another solution is to try not to go back into IRQ mode at all - I just need to handle the hardware side (set the proper status bit in the timer peripheral), call dispatch() (still in IRQ mode), and inside it mask the timer interrupt, switch to user mode, do the context switch, unmask the timer interrupt and continue. The resumed thread should continue where it was suspended (before the IRQ). Is this correct? (It would be, if the processor pushed r4-r11 and lr onto the user stack on an interrupt, but I think I am wrong about that...)
Thank you.

I think I might have answered a similar question here: "ARM - access R13 and R14 from Supervisor Mode"
In your case, just use "IRQ mode" instead of "Supervisor mode", but I think the same principle applies.
Long & short of it, if you switch from IRQ to user mode, that's a one-way trap door, you can't simply switch back to IRQ mode under software control.
But by manipulating the CPSR and switching to system mode, you can get r13_usr and then switch back to the previous mode (in your case, IRQ mode).
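For illustration, here is a minimal sketch of that round trip in GNU-style inline assembly (the function name is made up; 0xDF is System mode with IRQ/FIQ masked):

/* Read r13_usr from a privileged mode (e.g. IRQ) by briefly dropping into
   System mode, which shares r13/r14 with User mode but stays privileged. */
static unsigned int get_user_sp(void)
{
    unsigned int usr_sp;
    __asm__ volatile (
        "mrs  r2, cpsr        \n\t"   /* remember the current mode        */
        "msr  cpsr_c, #0xDF   \n\t"   /* System mode, IRQ and FIQ masked  */
        "mov  %0, sp          \n\t"   /* sp here is r13_usr               */
        "msr  cpsr_c, r2      \n\t"   /* back to the previous (IRQ) mode  */
        : "=r" (usr_sp) : : "r2", "cc", "memory");
    return usr_sp;
}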

Normally you set all of this up at boot: the various stack pointers, handlers, and so on. If there is a reason to go to user mode and then get back out of it, you can use swi from user mode and have the swi handler do whatever it was you wanted done, or task-switch to an SVC-mode handler, or something like that. A minimal sketch follows.
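For example, a small sketch (not from the question's code; the function name is made up) of requesting kernel service from user mode via swi:

/* Called from user-mode code; the SWI handler installed at boot runs in
   SVC mode and could call dispatch() there. */
static inline void yield(void)
{
    __asm__ volatile ("swi #0" ::: "memory");
}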

Which ARM variant are you using? On modern cores like the Cortex-M3 you have the MRS/MSR instructions, so you can access the MSP and PSP registers by moving them to/from a general-purpose register.
CMSIS even defines __get_MSP() and __get_PSP() as C functions, as well as their __set[...] counterparts.
EDIT: This seems to work in Thumb-2 only. Sorry for the noise.
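For the Cortex-M case, a minimal sketch assuming CMSIS is available (normally pulled in by the device header); the intrinsics compile down to MRS/MSR on PSP:

#include <stdint.h>
#include "cmsis_gcc.h"          /* assumed include; often comes in via the device header */

static uint32_t saved_psp;

void save_and_switch_psp(uint32_t new_psp)
{
    saved_psp = __get_PSP();    /* MRS Rn, PSP */
    __set_PSP(new_psp);         /* MSR PSP, Rn */
}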

Related

Motorola 68K TRAP instruction as a bridge to OS

I'm not an expert, just a hobbyist. I played with the 68000 architecture in the past and I've always wondered about its TRAP instruction. This instruction is always described as a "bridge" to an OS (in some systems it's not used that way, but that's a different story). How is this achieved? TRAP itself is a privileged instruction, so how does this OS-invoking mechanism work in user mode? My guess is that a privilege violation exception is triggered and the exception handler checks which particular instruction caused the exception. If it's a TRAP instruction, then the instruction is simply executed (maybe TRAP's operand, i.e. the TRAP vector number, is checked as well), of course now in supervisor mode. Am I right?
The TRAP instruction is not privileged; you can execute it from either user mode or supervisor mode.
It's the TRAP instruction itself that forces the CPU into supervisor mode, and then, depending on the #xx number you used, it jumps through one of the 16 possible vectors in the memory area $80 to $BC.
TRAP also pushes the PC and SR onto the stack, so when the handler finishes it returns to whatever mode was set up before you executed TRAP.

how does the processor know an instruction is making a system call

system call -- an instruction that generates an interrupt that causes the OS to gain control of the processor.
So if a running process issues a system call (e.g. create/terminate/read/write), an interrupt is generated which causes the KERNEL TO TAKE CONTROL of the processor, which then executes the required interrupt handler routine. Correct?
Then can anyone tell me how the processor knows that this instruction is supposed to block the process, go to privileged mode, and bring in kernel code?
I mean, as a programmer I would just type stream1=system.io.readfile(ABC) or something, which translates to opening and reading the file ABC.
Now what is monitoring the execution of this process? Is there a magical power in the CPU to detect this?
From what I have read, a PROCESSOR can only execute one process at a time, so WHERE IS THE MONITOR PROGRAM RUNNING?
How can the KERNEL monitor whether a system call is made when IT IS NOT IN A RUNNING STATE!!
Or does the computer have a SYSTEM CALL INSTRUCTION TABLE which it compares against before executing any instruction?
Please help.
Thank you.
The kernel doesn't monitor the process to detect a system call. Instead, the process generates an interrupt which transfers control to the kernel, because that's what software-generated interrupts do according to the instruction set reference manual.
For example, on 32-bit x86 Linux the process puts the syscall number in eax and runs an int 0x80 instruction, which raises interrupt 0x80. The CPU reacts to this by looking in the Interrupt Descriptor Table to find the kernel's handler for that interrupt. This handler is the entry point for system calls.
So, to call _exit(0) (the raw system call, not the glibc exit() function which flushes buffers) in 32-bit x86 Linux:
movl $1, %eax # The system-call number. __NR_exit is 1 for 32-bit
xor %ebx,%ebx # put the arg (exit status) in ebx
int $0x80
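The same raw call can be issued from C with GCC inline assembly - a minimal sketch assuming 32-bit x86 Linux, with a made-up wrapper name:

static void raw_exit(int status)
{
    __asm__ volatile (
        "int $0x80"
        :
        : "a" (1),          /* eax = __NR_exit (1 on 32-bit x86) */
          "b" (status)      /* ebx = exit status                 */
        : "memory");
    __builtin_unreachable();  /* the kernel never returns from _exit */
}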
Let's analyse each question you have posed.
1. Yes, your understanding is correct.
2. If a process or thread wants to get into the kernel, there are only two mechanisms: executing a TRAP machine instruction, or an interrupt. Interrupts are usually generated by hardware, so a process or thread that wants to enter the kernel goes through TRAP. When the process executes TRAP, a (software) interrupt is raised to the kernel. Along with the trap you also supply a system call number, which acts as the input to the interrupt handler inside the kernel. Based on that number, the kernel finds the corresponding function in the system call table and executes it. When the interrupt is taken, the CPU loads the kernel's code segment into the CS register, and the privilege level recorded there tells the processor that the current code is privileged. Once the system call function has finished, the kernel executes an IRET instruction, which restores the previous (user-mode) CS from the stack, so instructions from then on run in user mode again.
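To illustrate the dispatch step just described, here is a minimal kernel-side sketch of a system call table in C (all names and numbers are made up for illustration; they are not from any particular kernel):

typedef long (*syscall_fn)(long, long, long);

/* trivial stubs, just so the example is self-contained */
static long sys_exit(long code, long unused1, long unused2)  { (void)unused1; (void)unused2; return code; }
static long sys_read(long fd, long buf, long len)            { (void)fd; (void)buf; (void)len; return 0; }
static long sys_write(long fd, long buf, long len)           { (void)fd; (void)buf; (void)len; return 0; }

static const syscall_fn syscall_table[] = {
    [1] = sys_exit,     /* numbers chosen to mirror the 32-bit Linux layout, */
    [3] = sys_read,     /* purely as an example                              */
    [4] = sys_write,
};

/* called from the trap/interrupt handler with the number the user code
   placed in a register (eax in the int 0x80 example above) */
long handle_syscall(long nr, long a0, long a1, long a2)
{
    if (nr < 0 || nr >= (long)(sizeof syscall_table / sizeof syscall_table[0])
               || syscall_table[nr] == 0)
        return -1;      /* unknown system call, e.g. -ENOSYS */
    return syscall_table[nr](a0, a1, a2);
}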
3. There is no magical power inside the processor; the switching between user and kernel context just makes it seem that way. It is simply a piece of hardware capable of executing an enormous number of instructions at a very high rate.
4, 5, 6: These are answered by the points above.
I hope this answers your questions to some extent.
The interrupt controller signals the CPU that an interrupt has occurred and passes the interrupt number (interrupts are assigned priorities so that simultaneous interrupts can be handled); the number determines which handler to start. The CPU jumps to the interrupt handler, and when the handler is done, the saved program state is reloaded and execution resumes.
[Reference: Silberschatz, Operating System Concepts, 8th Edition]
What you're looking for is the mode bit. On x86 the low bits of the CS register hold the current privilege level: in user mode it is 3, and when kernel code is running it is 0. Looking at this value, the processor knows whether the currently executing code is privileged. If you're interested in digging deeper, please refer to this excellent article.
Other Ref.
Where is mode bit
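As a small illustration, the current privilege level can be read from the low two bits of CS - a hedged sketch in C with GCC inline assembly (x86 assumed; the helper name is made up):

static inline unsigned current_cpl(void)
{
    unsigned short cs;
    __asm__ ("mov %%cs, %0" : "=r" (cs));
    return cs & 3;      /* 0 = kernel mode, 3 = user mode */
}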
Modern hardware supports multiple user sessions. If your hardware supports a multi-user mode, it provides a mechanism called an interrupt. An interrupt basically stops the execution of the current code in order to execute other code (e.g. kernel code).
Which code gets executed is decided by parameters passed along with the interrupt by the code that raises it. The hardware raises the privilege level and forces the CPU to execute the kernel code. When the kernel code returns, it informs the hardware and the privilege level is lowered again.
The hardware then restores the CPU state from before the interrupt and sets the CPU to the next instruction in the code that raised the interrupt. Done.
Since the code actively calls into the hardware, which in turn actively calls the kernel, no monitoring needs to be done by the kernel itself.
Side note:
Try to keep your question short and make clear what you want. The first answer was correct for the question you posted; you just didn't phrase it well. Make it clear that you are new to the topic and need a detailed explanation of basic concepts, rather than explaining what you have understood so far, and don't use caps lock.
Please accept the answer cnicutar provided. Thank you.

Where to return from an interrupt

I've read (and studied) about Interrupt Handling.
What I always fail to understand is how we know where to return to (PC / IP) from the interrupt handler.
As I understand it:
An Interrupt is caused by a device (say the keyboard)
The relevant handler is called - under the running process. That is, no context switch to the OS is performed.
The Interrupt Handler finishes, and passes control back to the running application.
The process depicted above, which is my understanding of Interrupt Handling, takes place within the current running process' context. So it's akin to a method call, rather than to a context switch.
However, since we didn't actually make a CALL to the interrupt handler, we never had a chance to push the current IP onto the stack.
So how do we know where to jump back to after an interrupt? I'm confused.
Would appreciate any explanation, including one-liners that simply point to a good pdf/ppt addressing this question specifically.
[I'm generally referring to above process under Linux and C code - but all good answers are welcomed]
It's pretty architecture dependent.
On Intel processors, the interrupt return address is pushed on the stack when an interrupt occurs. You would use an iret instruction to return from the interrupt context.
On ARM, an interrupt causes a processor mode change (to IRQ, FIQ, or SVC mode, for example), saving the current CPSR (current program status register) into the SPSR (saved program status register), putting the current execution address into the new mode's LR (link register), and then jumping to the appropriate interrupt vector. Therefore, returning from an interrupt is done by moving the SPSR into the CPSR and then jumping to the address saved in LR - usually done in one step with a subs or movs instruction:
movs pc, lr
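For an IRQ specifically, the return is subs pc, lr, #4, since the banked LR points one instruction past the return address. A minimal sketch assuming the GNU ARM toolchain, which can emit that epilogue for you:

/* The interrupt("IRQ") attribute makes the compiler generate the special
   prologue/epilogue; the epilogue is effectively subs pc, lr, #4, which
   restores the SPSR into the CPSR and resumes the interrupted code. */
void __attribute__((interrupt("IRQ"))) timer_irq_handler(void)
{
    /* acknowledge the interrupting peripheral here, then return normally */
}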
When an interrupt is triggered, the CPU pushes several registers onto the stack, including the instruction pointer (EIP) of the code that was executing before the interrupt. You can put iret at the end of your ISR to pop these values and restore EIP (as well as CS, EFLAGS, SS and ESP).
By the way, interrupts aren't necessarily triggered by devices. In Linux and DOS, user-space programs use interrupts (via int) to make system calls. Some kernel code uses interrupts too, for example intentionally triple-faulting in order to force a reset.
The interrupt triggering mechanism in the CPU pushes the return address on the stack (among other things).

FreeRTOS Sleep Mode hazards while using MSP430f5438

I wrote an idle hook, shown here:
void vApplicationIdleHook( void )
{
    asm("nop");
    P1OUT &= ~0x01;  // go to sleep, lights off!
    LPM3;            // LPM mode - remove to make debugging a little easier...
    asm("nop");
}
That should turn the LED off and put the MSP430 to sleep when there is nothing to do. I turn the LED on during some tasks.
I also made sure to clear the sleep-mode bits in the SR on exit from any interrupt that could possibly wake the MCU (with the exception of the scheduler tick ISR in portext.s43). The macro in IAR is:
__bic_SR_register_on_exit(LPM3_bits); // Exit Interrupt as active CPU
However, it seems as though putting the MCU to sleep causes some irregular behavior. The LED stays on all the time, although when I scope it, it turns off for a couple of instruction cycles whenever I wake the MCU via one of the interrupts (UART), and then turns back on.
If I comment out the LPM3 instruction, things go as planned: the LED stays off most of the time and only comes on when a task is running.
I am using an MSP430F5438.
Any ideas?
Perhaps the problem is the call __bic_SR_register_on_exit(LPM3_bits). This macro changes the LPM bits in the stacked SR, so it must know where to find the saved SR on the stack. I believe that __bic_SR_register_on_exit() is designed for the standard interrupt stack frame generated by the compiler when you use the __interrupt directive. However, a preemptive RTOS, like FreeRTOS, uses its own stack frame typically bigger than the stack frame generated by the compiler, because an RTOS must store the complete context. In this case __bic_SR_register_on_exit() called from an ISR might not find the SR on the stack. Worse, it probably corrupts some other saved register value on the stack.
For a preemptive kernel I would not call __bic_SR_register_on_exit() from the ISRs. The consequence is that the idle callback is called only once and never again, because every time the RTOS performs a context switch back to the idle task the side effect is restoring the SR with the LPM bits turned on. This causes a sleep mode (which is what you want), but your LED won't get toggled.
Miro Samek
state-machine.com
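To illustrate the stack-frame point above: __bic_SR_register_on_exit() is designed for an ISR whose frame is laid out by the compiler's own __interrupt machinery, as in this bare-metal (non-RTOS) sketch. The vector name and IAR-style syntax are assumptions, not taken from the question:

#include "msp430.h"

#pragma vector = USCI_A0_VECTOR           /* vector name assumed */
__interrupt void uart_rx_isr(void)
{
    /* ...handle the UART event... */
    __bic_SR_register_on_exit(LPM3_bits); /* clears the LPM bits in the SR the
                                             compiler saved on entry, so the
                                             CPU stays awake after returning */
}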

How could an assembly OUTB function cause a triple fault?

In my systems programming class we are working on a small, simple hobby OS. Personally I have been working on an ATA hard disk driver. I have discovered that a single line of code seems to cause a fault which then immediately reboots the system. The code in question is at the end of my interrupt service routine for the IDE interrupts. Since I am using the IDE channels, their interrupts come in through the slave PIC (which is cascaded through the master). Originally my code was only sending the end-of-interrupt byte to the slave, but then my professor told me that I should be sending it to the master PIC as well.
So here is my problem: when I uncomment the line which sends the EOI byte to the master PIC, the system triple faults and then reboots. If I leave it commented out, the system stays running.
_outb( PIC_MASTER_CMD_PORT, PIC_EOI ); // this causes (or at least sets off) a triple fault reboot
_outb( PIC_SLAVE_CMD_PORT, PIC_EOI );
Without seeing the rest of the system, is it possible for someone to explain what could possibly be happening here?
NOTE: Just as a shot in the dark, I replaced the _outb() call with another _outb() call which just made sure that interrupts were enabled for the IDE controller; the generated assembly would have been almost identical. This did not cause a fault.
*_outb() is a wrapper for the x86 OUTB instruction.
What is so special about sending EOI to the master PIC that it causes an issue?
I realize without seeing the code this may be impossible to answer, but thanks for looking!
Triple faults usually point to a stack overflow or odd stack pointer. When a fault or interrupt occurs, the system immediately tries to push some more junk onto the stack (before invoking the fault handler). If the stack is hosed, this will cause another fault, which then tries to push more stuff on the stack, which causes another fault. At this point, the system gives up on you and reboots.
I know this because I actually have a silly patent (while working at Dell about 20 years ago) on a way to cause a CPU reset without external hardware (used to be done through the keyboard controller):
MOV ESP,1
PUSH EAX ; triple fault and reset!
An OUTB instruction can't cause a fault on its own. My guess is you are re-enabling an interrupt, and the interrupt gets triggered while something is wrong with your stack.
When you re-enable the PIC, are you doing it with the CPU's interrupt flag set or cleared (i.e. sometime after a CLI instruction, or sometime after an STI instruction)?
Assuming that the CPU's interrupt flag is set, your act of re-enabling the PIC allows any pending interrupts to reach the CPU, which would interrupt your code, dispatch through a vector specified by the IDT, etc.
So I expect it's not your instruction that's directly causing the fault; rather, what's faulting is code that runs as the result of an interrupt taken once you re-enable the PIC.
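For reference, the conventional epilogue for an interrupt that arrives through the slave PIC acknowledges both controllers. A minimal sketch with the standard 8259A port numbers; the wrapper and function names are made up and differ from the question's _outb():

#include <stdint.h>

#define PIC_MASTER_CMD_PORT 0x20
#define PIC_SLAVE_CMD_PORT  0xA0
#define PIC_EOI             0x20

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a" (val), "Nd" (port));
}

static void ide_irq_epilogue(void)
{
    outb(PIC_SLAVE_CMD_PORT, PIC_EOI);    /* acknowledge the slave first ...    */
    outb(PIC_MASTER_CMD_PORT, PIC_EOI);   /* ... then the cascade on the master */
    /* assuming the handler was entered through an interrupt gate and has not
       executed sti, IF stays clear until iret restores EFLAGS, so nothing can
       preempt this epilogue while the stack frame is still live */
}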