How could an assembly OUTB function cause a triple fault?

In my systems programming class we are working on a small, simple hobby OS. Personally, I have been working on an ATA hard disk driver. I have discovered that a single line of code seems to cause a fault which then immediately reboots the system. The code in question is at the end of my interrupt service routine for the IDE interrupts. Since I am using the IDE channels, the interrupts are sent through the slave PIC (which is cascaded through the master). Originally my code was only sending the end-of-interrupt byte to the slave, but then my professor told me that I should be sending it to the master PIC as well.
So here is my problem: when I uncomment the line which sends the EOI byte to the master PIC, the system triple faults and then reboots. Likewise, if I leave it commented, the system stays running.
_outb( PIC_MASTER_CMD_PORT, PIC_EOI ); // this causes (or at least sets off) a triple fault reboot
_outb( PIC_SLAVE_CMD_PORT, PIC_EOI );
Without seeing the rest of the system, is it possible for someone to explain what could possibly be happening here?
NOTE: Just as a shot in the dark, I replaced the _outb() call with another _outb() call which just made sure that the interrupts were enabled for the IDE controller; however, the generated assembly would have been almost identical. This did not cause a fault.
*_outb() is a wrapper for the x86 OUTB instruction.
What is so special about sending the EOI to the master PIC that it causes an issue?
I realize without seeing the code this may be impossible to answer, but thanks for looking!

Triple faults usually point to a stack overflow or odd stack pointer. When a fault or interrupt occurs, the system immediately tries to push some more junk onto the stack (before invoking the fault handler). If the stack is hosed, this will cause another fault, which then tries to push more stuff on the stack, which causes another fault. At this point, the system gives up on you and reboots.
I know this because I actually have a silly patent (while working at Dell about 20 years ago) on a way to cause a CPU reset without external hardware (used to be done through the keyboard controller):
MOV ESP,1
PUSH EAX ; triple fault and reset!
An OUTB instruction can't cause a fault on its own. My guess is you are re-enabling an interrupt, and the interrupt gets triggered while something is wrong with your stack.

When you re-enable the PIC, are you doing it with the CPU's interrupt flag set or cleared (i.e. are you doing it sometime after a CLI opcode, or sometime after an STI opcode)?
Assuming that the CPU's interrupt flag is enabled, your act of re-enabling the PIC allows any pending interrupts to reach the CPU, which would interrupt your code, dispatch to a vector specified by the IDT, etc.
So I expect that it's not your opcode that's directly causing the fault: rather, what's faulting is code that's run as the result of an interrupt which happens as a result of your re-enabling the PIC.
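For reference, here is a minimal sketch (not the poster's actual code) of a complete EOI sequence in C, assuming the standard 8259A port assignments; the inline-asm outb() stands in for the question's _outb() wrapper:

#include <stdint.h>

#define PIC_MASTER_CMD_PORT 0x20   /* standard 8259A master command port */
#define PIC_SLAVE_CMD_PORT  0xA0   /* standard 8259A slave command port */
#define PIC_EOI             0x20   /* non-specific end-of-interrupt command */

/* Minimal stand-in for the question's _outb() wrapper. */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* An IRQ that arrives on the slave PIC (IRQ 8-15; the IDE channels are
 * conventionally IRQ 14 and 15) needs an EOI sent to BOTH controllers:
 * the slave that raised it, and the master it cascades through. */
static void pic_send_eoi(unsigned int irq)
{
    if (irq >= 8)
        outb(PIC_SLAVE_CMD_PORT, PIC_EOI);
    outb(PIC_MASTER_CMD_PORT, PIC_EOI);
}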

Related

Computer Reboots After "sti" Instruction

I am trying to implement interrupts in an x86 operating system project. However, after loading the interrupt descriptor table with lidt, I issue an sti instruction, and this "sti" instruction reboots the computer. Also, I am in protected mode. Any idea what might be happening?
Some things cause exceptions. When the CPU can't start the corresponding exception handler, it falls back to a generic "double fault" exception; and when the CPU can't start that exception handler, the CPU falls back to a "triple fault" condition, which mostly means that the computer is reset.
It's likely that there are pending IRQs (that occurred while interrupts were masked with "cli" and have been waiting for the CPU to be ready to receive them); so when you do "sti" the interrupt controller sees that the CPU is ready to receive an IRQ now and immediately sends one to the CPU; and it's likely that the interrupt handler for whichever IRQ the CPU receives is causing an exception (which leads to a double fault, which leads to a triple fault/reset).
The easiest way to figure out what is happening is to run it under an emulator that tells you what happened in its logs. The alternative is to write usable exception handler/s for any exceptions that are involved (most likely, a general protection fault exception handler); so that the exception handler can give you information about what went wrong (e.g. the "error code" provided by the CPU to the general protection fault handler may indicate which IDT entry the CPU tried to use for the IRQ).
Note that during boot the best sequence is to mask all IRQs in the interrupt controller/s, then let firmware handle any pending IRQs (e.g. with interrupts enabled, do some "NOP" instructions). That way there can't be any pending IRQs when you "sti" later (and you can unmask individual IRQ sources when you actually want them unmasked - e.g. when you install a device driver that uses a specific IRQ). Sadly most people (tutorials, GRUB, etc) do everything wrong and just "cli" without masking IRQs in the interrupt controller/s (and then do things like remap the PIC chips, etc; which makes things even more confusing), and then end up having to cope with the consequences of doing everything wrong. ;-)
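A minimal sketch of that boot sequence, assuming legacy 8259A PICs with their interrupt mask registers at the standard data ports 0x21 (master) and 0xA1 (slave); the function name is hypothetical:

#include <stdint.h>

static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

static void quiesce_pics(void)
{
    outb(0x21, 0xFF);   /* mask every IRQ line on the master PIC */
    outb(0xA1, 0xFF);   /* mask every IRQ line on the slave PIC */

    /* With the CPU's interrupt flag set, execute a few NOPs so the
     * firmware's handlers can drain any IRQ already in flight; only
     * then disable interrupts and take over. */
    __asm__ volatile ("sti\n\tnop\n\tnop\n\tnop\n\tnop\n\tcli");
}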

how does the operating system treat a few interrupts and keep processes going?

I'm learning computer organization and structure (I'm using Linux with the x86-64 architecture). We've studied that when an interrupt occurs in user mode, the OS is notified and switches between the user stack and the kernel stack by loading the kernel's rsp from the TSS; afterwards it saves the necessary registers (such as rip), and in the case of a software interrupt it also saves the error code. In the end, just before jumping to the appropriate handler routine, it zeroes the TF, and in the case of a hardware interrupt it also zeroes the IF. I wanted to ask about a few things:
the error code is saved in the rip, so why load both?
if I consider a case where a few interrupts happen together, which causes the IF and TF to turn on, and I zero the TF and IF but treat only one interrupt at a time, aren't I leaving all the other interrupts untreated? In general, how does the OS treat several interrupts that occur at the same time when using the method of an IDT with a specific vector for each interrupt?
does this happen because each program has its own virtual memory and thus the interrupt handling of all the programs is unrelated? Where can I read more about it?
how does an operating system keep other necessary processes running while handling the interrupt?
thank you very much for your time and attention!
the error code is saved in the rip, so why load both?
You're misunderstanding some things about the error code. Specifically:
it's not generated by software interrupts (e.g. instructions like int 0x80)
it is generated by some exceptions (page fault, general protection fault, double fault, etc.)
the error code (if used) is not saved in the RIP; it's pushed on the stack so that the exception handler can use it to get more information about the cause of the exception, as sketched below
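To make that last point concrete, here is a sketch of the frame the CPU builds (in 64-bit mode) for an exception that pushes an error code, such as a general protection fault; the struct name is hypothetical:

#include <stdint.h>

/* What a 64-bit exception handler finds on its stack when the CPU pushes
 * an error code (e.g. for #GP, #PF, #DF). The error code sits below the
 * saved RIP; it is never placed in RIP itself. */
struct exception_frame {
    uint64_t error_code;   /* pushed by the CPU for error-code exceptions */
    uint64_t rip;          /* saved instruction pointer to resume at */
    uint64_t cs;           /* saved code segment selector */
    uint64_t rflags;       /* saved flags register */
    uint64_t rsp;          /* saved user stack pointer */
    uint64_t ss;           /* saved stack segment selector */
};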
2a. if I consider a case where a few interrupts happen together, which causes the IF and TF to turn on, and I zero the TF and IF but treat only one interrupt at a time, aren't I leaving all the other interrupts untreated?
When the IF flag is clear, maskable IRQs (which doesn't include other types of interrupts: software interrupts, exceptions) are postponed (not disabled) until the IF flag is set again. They're "temporarily untreated" until they're treated later.
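As a small sketch of what "postponed, not disabled" means in practice (hypothetical function, x86 inline assembly):

static void critical_section(void)
{
    __asm__ volatile ("cli");   /* clear IF: maskable IRQs are now held back */
    /* ... touch data shared with an interrupt handler ... */
    __asm__ volatile ("sti");   /* set IF: any IRQ raised meanwhile is
                                 * delivered right here; nothing was lost */
}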
The TF flag only matters for debugging (e.g. single-step debugging, where you want the CPU to generate a trap after every instruction executed). It's only cleared in case the process (in user-space) was being debugged, so that you don't accidentally continue debugging the kernel itself; but most processes aren't being debugged like this so most of the time the TF flag is already clear (and clearing it when it's already clear doesn't really do anything).
2b. In general, how does the OS treat several interrupts that occur at the same time when using the method of an IDT with a specific vector for each interrupt? Does this happen because each program has its own virtual memory and thus the interrupt handling of all the programs is unrelated? Where can I read more about it?
There are complex rules that determine when an interrupt can interrupt (including when it can interrupt another interrupt). These rules mostly only apply to IRQs (not software interrupts, which the kernel won't ever use itself, and not exceptions, which are taken as soon as they occur). Understanding the rules means understanding the IF flag and the interrupt controller (e.g. how interrupt vectors and the "task priority register" in the local APIC influence the "processor priority register" in the local APIC, which determines which groups of IRQs will be postponed even when the IF flag is set). Information about this can be obtained from Intel's manuals, but how Linux uses it can only be obtained from Linux source code and/or Linux-specific documentation.
On top of that there's "whatever mechanisms and practices the OS felt like adding on top" (e.g. deferred procedure calls, tasklets, softIRQs, additional stack management) that add more complications (which can also only be obtained from Linux source code and/or Linux specific documentation).
Note: I'm not a Linux kernel developer so can't/won't provide links to places to look for Linux specific documentation.
how does an operating system keep other necessary processes running while handling the interrupt?
A single CPU can't run 2 different pieces of code (e.g. an interrupt handler and user-space code) at the same time. Instead it runs them one at a time (e.g. it runs user-space code, then switches to an IRQ handler for a very short amount of time, then returns to the user-space code). Because the IRQ handler only runs for a very short amount of time, it creates the illusion that everything is happening at the same time (even though it's not).
Of course when you have multiple CPUs, different CPUs can/do run different pieces of code at the same time.

What exactly happens when an OS goes into kernel mode?

I find that neither my textbooks nor my googling skills give me a proper answer to this question. I know it depends on the operating system, but on a general note: what happens, and why?
My textbook says that a system call causes the OS to go into kernel mode, given that it's not already there. This is needed because kernel mode is what has control over I/O devices and other things outside of a specific process's address space. But if I understand it correctly, a switch to kernel mode does not necessarily mean a process context switch (where you save the current state of the process somewhere other than the CPU so that some other process can run).
Why is this? I was kinda thinking that some "admin"-process was switched in and took care of the system call from the process and sent the result to the process' address space, but I guess I'm wrong. I can't seem to grasp what ACTUALLY is happening in a switch to and from kernel mode and how this affects a process' ability to operate on I/O-devices.
Thanks a lot :)
EDIT: bonus question: does a library call necessarily end up in a system call? If no, do you have any examples of library calls that do not end up in system calls? If yes, why do we have library calls?
Historically, system calls have been issued with interrupts. Linux used the 0x80 vector and Windows NT used the 0x2E vector to access system calls, storing the function's index in the eax register. More recently, we started using the SYSENTER and SYSEXIT instructions. User applications run in Ring 3, i.e. userspace/usermode. The CPU is very tricky here, and switching from kernel mode to user mode requires special care: it actually involves fooling the CPU into thinking it came from usermode when issuing a special instruction called iret. The only way to get back from usermode to kernelmode is via an interrupt or the already mentioned SYSENTER/SYSEXIT instruction pair. Both use a special structure called the Task State Segment, or TSS for short. This allows the CPU to find where the kernel's stack is, so yes, it essentially requires a task switch.
But what really happens?
When you issue a system call, the CPU looks for the TSS, gets its esp0 value (which is the kernel's stack pointer) and places it into esp. The CPU then looks up the interrupt vector's index in another special structure, the Interrupt Descriptor Table (IDT for short), and finds the address of the function that handles the system call. The CPU pushes the flags register, the code segment, the user's stack pointer, and the instruction pointer of the instruction after the int instruction. After the system call has been serviced, the kernel issues an iret. The CPU then returns to usermode and your application continues as normal.
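A minimal sketch of that mechanism as seen from user space on 32-bit Linux, where the call number for sys_write is 4; the wrapper name is hypothetical:

/* Invoke sys_write via the legacy int 0x80 gate: the call number goes in
 * eax, the arguments in ebx/ecx/edx, and the return value comes back in
 * eax after the kernel's iret. */
static long sys_write_i386(int fd, const char *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4), "b"(fd), "c"(buf), "d"(len)
                      : "memory");
    return ret;
}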
Do all library calls end in system calls?
Well, most of them do, but there are some which don't. For example, take a look at memcpy and similar functions.
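memcpy is a good example because it is pure computation over memory the caller already owns, so no kernel service (and therefore no system call) is needed. A naive sketch:

#include <stddef.h>

/* A byte-by-byte memcpy: every instruction here runs in user mode. */
void *my_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}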

Is low latency mode safe to use with Linux serial ports?

Is it safe to use the low_latency tty mode with Linux serial ports? The documentation for the tty_flip_buffer_push function says that it "must not be called from IRQ context if port->low_latency is set." Nevertheless, many low-level serial port drivers call it from an ISR whether or not the flag is set. For example, the mpc52xx driver calls flip buffer unconditionally after each read from its FIFO.
A consequence of the low-latency flip buffer push in the ISR is that the line discipline driver is entered within the IRQ context. My goal is to get a latency of one millisecond or less, reading from a high-speed mpc52xx serial port. Setting low_latency achieves the latency goal, but it also violates the documented precondition for tty_flip_buffer_push.
This question was asked on linux-serial on Fri, 19 Aug 2011.
No, low latency is not safe in general.
However, in the particular case of 3.10.5 low_latency is safe.
The comments above tty_flip_buffer_push read:
"This function must not be called from IRQ context if port->low_latency is set."
However, the code (3.10.5, drivers/tty/tty_buffer.c) contradicts this:
void tty_flip_buffer_push(struct tty_port *port)
{
        struct tty_bufhead *buf = &port->buf;
        unsigned long flags;

        spin_lock_irqsave(&buf->lock, flags);
        if (buf->tail != NULL)
                buf->tail->commit = buf->tail->used;
        spin_unlock_irqrestore(&buf->lock, flags);

        if (port->low_latency)
                flush_to_ldisc(&buf->work);
        else
                schedule_work(&buf->work);
}
EXPORT_SYMBOL(tty_flip_buffer_push);
The use of spin_lock_irqsave/spin_unlock_irqrestore makes this code safe to call from interrupt context.
There is a test for low_latency and if it is set, flush_to_ldisc is called directly. This flushes the flip buffer to the line discipline immediately, at the cost of making the interrupt processing longer. The flush_to_ldisc routine is also coded to be safe for use in interrupt context. I guess that an earlier version was unsafe.
If low_latency is not set, then schedule_work is called. Calling schedule_work is the classic way to invoke the "bottom half" handler from the "top half" in interrupt context. This causes flush_to_ldisc to be called later from the "bottom half" handler, the next time the workqueue runs.
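A hedged sketch of that top-half/bottom-half pattern with hypothetical names (the IRQ handler only queues the work; the workqueue later runs the bottom half in process context, where sleeping and taking mutexes are allowed):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static void my_bottom_half(struct work_struct *work)
{
        /* Runs later in process context: safe to push data to the line
         * discipline, sleep, or take mutexes here. */
}

static DECLARE_WORK(my_work, my_bottom_half);

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
        /* Top half: read the device FIFO quickly, then defer the rest. */
        schedule_work(&my_work);
        return IRQ_HANDLED;
}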
Looking a little deeper, both the comment and the test seem to be in Alan Cox's original e0495736 commit of tty_buffer.c. This commit was a re-write of earlier code, so it seems that at one time there wasn't a test. Whoever added the test and fixed flush_to_ldisc to be interrupt-safe did not bother to fix the comment.
So, always believe the code, not the comments.
However, in the same code in 3.12-rc* (as of October 23, 2013) it looks like the problem was opened again when the spin_lock_irqsave's in flush_to_ldisc were removed and mutex_locks were added. That is, setting UPF_LOW_LATENCY in the serial_struct flags and calling the TIOCSSERIAL ioctl will again cause "scheduling while atomic".
The latest update from the maintainer is:
On 10/19/2013 07:16 PM, Jonathan Ben Avraham wrote:
> Hi Peter,
> "tty_flip_buffer_push" is called from IRQ handlers in most drivers/tty/serial UART drivers.
>
> "tty_flip_buffer_push" calls "flush_to_ldisc" if low_latency is set.
> "flush_to_ldisc" calls "mutex_lock" in 3.12-rc5, which cannot be used in interrupt context.
>
> Does this mean that setting "low_latency" cannot be used safely in 3.12-rc5?
Yes, I broke low_latency.
Part of the problem is that the 3.11- use of low_latency was unsafe; too many shared
data areas were simply accessed without appropriate safeguards.
I'm working on fixing it but probably won't make it for 3.12 final.
Regards,
Peter Hurley
So, it looks like you should not depend on low_latency unless you are sure that you are never going to change your kernel from a version that supports it.
Update: February 18, 2014, kernel 3.13.2
Stanislaw Gruszka wrote:
Hi,
setserial has a low_latency option which should minimize receive latency
(scheduler delay). AFAICT it is used if someone talks to an external device
via RS-485/RS-232 and needs to have quick requests and responses. In the
kernel this feature was implemented by direct tty processing from
interrupt context:
void tty_flip_buffer_push(struct tty_port *port)
{
        struct tty_bufhead *buf = &port->buf;

        buf->tail->commit = buf->tail->used;
        if (port->low_latency)
                flush_to_ldisc(&buf->work);
        else
                schedule_work(&buf->work);
}
But after the 3.12 tty locking changes, calling flush_to_ldisc() from
interrupt context is a bug (we got a "scheduling while atomic" bug report
here: https://bugzilla.redhat.com/show_bug.cgi?id=1065087 )
I'm not sure how this should be solved. After Peter got rid of all of those
race conditions in the tty layer, we probably don't want to go back to
using spin_locks there. Maybe we can create a WQ_HIGHPRI workqueue and
schedule flush_to_ldisc() work there. Or perhaps users that need low
latency should switch to threaded irqs and prioritize the serial irq to
meet requirements. Anyway, setserial low_latency is now broken and all who
used this feature in the past cannot do so any longer on 3.12+ kernels.
Thoughts?
Stanislaw
A patch has been posted to LKML to address the problem. It removes the generic code for handling low_latency but keeps the parameter for the low-level drivers to use.
http://www.kernelhub.org/?p=2&msg=419071
I tried forcing low_latency on Linux 3.12 with a serial console. The kernel was very unstable. If preemption was enabled, it would hang after a few minutes of use.
So the answer for now is to stay away.

Where to return from an interrupt

I've read (and studied) about Interrupt Handling.
What I always fail to understand is how we know where to return to (PC/IP) from the Interrupt Handler.
As I understand it:
An Interrupt is caused by a device (say the keyboard)
The relevant handler is called - under the running process. That is, no context switch to the OS is performed.
The Interrupt Handler finishes, and passes control back to the running application.
The process depicted above, which is my understanding of Interrupt Handling, takes place within the current running process' context. So it's akin to a method call, rather than to a context switch.
However, since we didn't actually make the CALL to the Interrupt Handler, we didn't have a chance to push the current IP onto the stack.
So how do we know where to jump back to from an Interrupt? I'm confused.
Would appreciate any explanation, including one-liners that simply point to a good pdf/ppt addressing this question specifically.
[I'm generally referring to above process under Linux and C code - but all good answers are welcomed]
It's pretty architecture dependent.
On Intel processors, the interrupt return address is pushed on the stack when an interrupt occurs. You would use an iret instruction to return from the interrupt context.
On ARM, an interrupt causes a processor mode change (to the IRQ, FIQ, or SVC mode, for example), saving the current CPSR (current program status register) into the SPSR (saved program status register), putting the current execution address into the new mode's LR (link register), and then jumping to the appropriate interrupt vector. Therefore, returning from an interrupt is done by moving the SPSR into the CPSR and then jumping to the address saved in LR - usually done in one step with a subs or movs instruction:
movs pc, lr
When an interrupt is triggered, the CPU pushes several registers onto the stack, including the instruction pointer (EIP) of the code that was executing before the interrupt. You can put iret at the end of your ISR to pop these values and restore EIP (as well as CS, EFLAGS, SS and ESP).
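As a sketch (not from the answers above), GCC's x86 interrupt attribute shows this frame from C: compiled with -mgeneral-regs-only, the compiler emits the register saves and the final iret, and the frame parameter maps onto exactly the values the CPU pushed. Names here are hypothetical:

#include <stdint.h>

/* The values the CPU pushed for a same-privilege 32-bit interrupt. */
struct interrupt_frame {
    uint32_t eip;      /* return address: where iret will resume */
    uint32_t cs;       /* saved code segment */
    uint32_t eflags;   /* saved flags */
};

__attribute__((interrupt))
void keyboard_isr(struct interrupt_frame *frame)
{
    /* Handle the device here; returning from this function compiles to
     * an iret, which pops eip/cs/eflags and resumes at frame->eip. */
    (void)frame;
}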
By the way, interrupts aren't necessarily triggered by devices. In Linux and DOS, user space programs use interrupts (via int) to make system calls. Some kernel code uses interrupts as well, for example intentionally triple faulting in order to force a reset.
The interrupt triggering mechanism in the CPU pushes the return address on the stack (among other things).