Interrupt Service Routine - operating-system

I studied that we can't really tamper with the interrupt vector table, but what happens when we install a new device driver on our computer? How does its address get stored in the interrupt vector table?

Related

STM32F4 Timer Triggered DMA SPI – NSS Problem

I have an STM32F417IG microcontroller and an external 16-bit DAC (TI DAC81404) that is supposed to generate a signal with a sampling rate of 32 kHz. The communication via SPI should not involve any CPU resources. That is why I want to use timer-triggered DMA to shift the data at a rate of 32 kHz into the SPI data register, in order to send the data to the DAC.
Information about the DAC
Whenever the DAC receives a channel address and the corresponding new 16-bit value, it updates its output voltage to the newly received value. This is achieved by:
Pulling the CS/NSS/SYNC pin low,
Sending the 24-bit (3-byte) message, and
Pulling CS back to a high state.
The first 8 bits of the message contain, among other information, the channel address where the output voltage should be applied. The last 16 bits contain the new value.
Information about STM32
Unfortunately, ST's microcontrollers have a hardware problem with the NSS pin. When communication via SPI starts, the NSS pin is pulled low, and it then stays low for as long as SPI is enabled (reference manual, page 877). That is sadly not the right way to communicate with a device that needs NSS to rise after each message. A "solution" would be to toggle the NSS pin manually, as suggested in the manual: "When a master is communicating with SPI slaves which need to be de-selected between transmissions, the NSS pin must be configured as GPIO or another GPIO must be used and toggled by software."
Problem
If DMA is used the ordinary way, the CPU is only involved when starting the process. Toggling NSS twice every 1/32000 s, however, leads to correspondingly frequent CPU interactions.
My question is whether I missed something in order to achieve a communication without CPU.
If not, my goal is now to reduce the CPU processing time to a minimum. My plan is to trigger the DMA with a timer, so that every 1/32000 s the SPI data register is filled with the 24-bit data for the DAC.
The NSS could be toggled by a timer interrupt.
I have problems achieving it because I do not know how to link the timer with the DMA of the SPI using HAL-functions. Can anyone help me?
This is a tricky one. It might be difficult to avoid having one interrupt per sample with this combination of DAC and microcontroller.
However, one approach I would look at is to have the CS signal created as a timer output-compare (like PWM). You can use multiple channels of the same timer or link multiple timers to create a delay between the CS output and the DMA trigger. You should allow some room for jitter, because depending on what else is happening the DMA might not respond instantly. This won't hurt your DAC output signal though, because it only outputs the value on the rising edge of chip select (called SYNC in the DAC datasheet) which will still be from your first timer.
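To make the timer/DMA wiring concrete, here is a minimal sketch using HAL, under explicit assumptions: CubeMX-style handles htim2 (update event at 3 × 32 kHz, one event per byte, bound to a circular memory-to-peripheral DMA stream hdma_tim2_up), htim3 (32 kHz PWM generating the active-low CS/SYNC pulse), and hspi1 (8-bit master). The handle names, the waveform length, and the two-timer topology are assumptions for illustration, not a verified configuration.

```c
/* Minimal sketch, assuming CubeMX-generated handles and a circular,
 * memory-to-peripheral DMA stream bound to TIM2's update request.
 * htim2: update event at 3 x 32 kHz (one event per byte).
 * htim3: 32 kHz PWM on the CS/SYNC pin (active-low pulse sized to
 *        cover one 3-byte transfer plus some jitter margin).
 * hspi1: 8-bit SPI master; hardware NSS unused, CS comes from htim3. */
#include "stm32f4xx_hal.h"

extern TIM_HandleTypeDef htim2;
extern TIM_HandleTypeDef htim3;
extern SPI_HandleTypeDef hspi1;
extern DMA_HandleTypeDef hdma_tim2_up;

#define WAVE_SAMPLES 256 /* hypothetical waveform length */
/* One 3-byte DAC frame (command byte + 16-bit value) per sample. */
static uint8_t dac_frames[3 * WAVE_SAMPLES];

void dac_stream_start(void)
{
    /* SPI just shifts out whatever the DMA writes into its data register. */
    __HAL_SPI_ENABLE(&hspi1);

    /* Each TIM2 update event moves one byte into SPI1->DR; circular mode
     * wraps around the waveform with no CPU involvement. */
    HAL_DMA_Start(&hdma_tim2_up, (uint32_t)dac_frames,
                  (uint32_t)&hspi1.Instance->DR, sizeof dac_frames);
    __HAL_TIM_ENABLE_DMA(&htim2, TIM_DMA_UPDATE);

    /* CS framing first, then byte pacing; the phase offset between the two
     * timers has to be tuned on real hardware (jitter caveat above). */
    HAL_TIM_PWM_Start(&htim3, TIM_CHANNEL_1);
    HAL_TIM_Base_Start(&htim2);
}
```

Keeping the two timers phase-locked (for example by driving one from the other's TRGO in master/slave mode, as suggested above) is what keeps the CS pulse framing exactly three bytes per sample period.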

How to figure out the interrupt source on I/O APIC?

I understand that the I/O APIC chip has 24 pins; usually a single-chip system will map pins 0~23 to IRQs 32~55 respectively. Furthermore, I can edit the related RTEs to allocate interrupt handler functions.
But how can I figure out the interrupt source on each I/O APIC pin?
I understand that it is related to ACPI, but in detail, how should I do this? Is it mapped in some ACPI table, or should I use AML to check it?
Thank you very much!!
The general steps (for a modern OS) are:
Preparation
a) Parse the ACPI "APIC/MADT" table to determine if PIC chips exist (the PCAT_COMPAT flag), how many IO APICs there are, and how many inputs each IO APIC has. If ACPI doesn't exist, you might want to try searching for/parsing the older "MultiProcessor Spec." table and extracting the same information; however, if ACPI does exist, it's possible that the "MultiProcessor Spec." table is designed to provide a "minimum stub" that contains no real information (so you must check ACPI first and prefer using ACPI if it exists), and it may not be worth the hassle of supporting systems that don't support ACPI (especially if the operating system requires a 64-bit CPU, etc.).
b) Parse the ACPI "FADT" to determine if MSI may (or must not) be enabled
c) Determine if the OS will use PIC alone, IO APICs alone, or IO APIC plus MSI. Note that this can (should?) take into account the operating system's own boot parameters and/or configuration (e.g. so if there's a compatibility problem the end user can work around the problem).
d) If PIC chips exist, mask all IRQs in the PIC chips, then reconfigure the PIC chips to set whatever "base vector number" you want them to use (e.g. so that the master PIC uses interrupt vectors 32 to 39 and the slave uses vectors 40 to 47); see the sketch after this list. If IO APIC/s exist, mask all IRQs in each IO APIC. Note: if the PIC chips exist they both have a "spurious IRQ" that can't be masked, so even if you don't want to use the PIC chips it's still a good idea to reconfigure them so that their spurious IRQs (and the interrupt handlers for them) aren't going to be in the way.
e) Use an ACPI AML interpreter to execute the _PIC object; to inform ACPI/AML that you will be using either IO APIC or PIC. Note that "OS uses PIC" is the default for backward compatibility, so this step could be skipped if you're not using IO APIC.
f) Configure the local APIC in each CPU (not covered here).
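As a concrete illustration of step d), here is a minimal sketch of the usual 8259 remap-and-mask sequence (the outb helper wraps the x86 "out" instruction; port numbers and ICW values are the standard ones for PC-compatible PICs):

```c
#include <stdint.h>

/* Port I/O helper wrapping the x86 "out" instruction. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

#define PIC1_CMD  0x20
#define PIC1_DATA 0x21
#define PIC2_CMD  0xA0
#define PIC2_DATA 0xA1

/* Remap the master PIC to vectors 32..39 and the slave to 40..47
 * (step d above), then mask every IRQ line on both chips. */
void pic_remap_and_mask(void)
{
    outb(PIC1_CMD,  0x11); /* ICW1: begin initialization, expect ICW4 */
    outb(PIC2_CMD,  0x11);
    outb(PIC1_DATA, 32);   /* ICW2: master base vector */
    outb(PIC2_DATA, 40);   /* ICW2: slave base vector */
    outb(PIC1_DATA, 0x04); /* ICW3: slave attached to master IRQ2 */
    outb(PIC2_DATA, 0x02); /* ICW3: slave's cascade identity */
    outb(PIC1_DATA, 0x01); /* ICW4: 8086/88 mode */
    outb(PIC2_DATA, 0x01);
    outb(PIC1_DATA, 0xFF); /* mask all master IRQs */
    outb(PIC2_DATA, 0xFF); /* mask all slave IRQs */
}
```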
Devices
Before starting a device driver for a device:
a) Figure out the device's details (e.g. use "class, subclass and programming interface" fields from PCI configuration space to figure out what the device is) and check if you actually have a device driver for it; and decide if you want the device to use PCI IRQs or MSI.
b1) If the device will be using PCI IRQs and if the OS is using PIC chips (and not IO APICs); get the "Interrupt Line" field from the device's PCI configuration space and determine which interrupt vector it will be by adding the corresponding PIC chip's "base interrupt vector" to it.
b2) If the device will be using PCI IRQs (and not MSI) and if the OS is using IO APIC and not PIC; determine which "interrupt pin at the PCI slot" the device uses by reading the "Interrupt Pin" field from the device's PCI configuration space. Then use an ACPI AML interpreter to execute the _PRT object and get a current (not forgetting that PCI-E supports "hot-plug") PCI IRQ routing table. Use this table (and the PCI device's "bus:device:function" address and which "interrupt pin" it uses) to determine where the PCI IRQ is connected (e.g. which global interrupt, which determines which input of which IO APIC). Then, if you haven't already (because the same interrupt line is shared by a different device), use some kind of "interrupt vector manager" to allocate an interrupt vector for the PCI IRQ, and configure the IO APIC input to generate that interrupt vector. Note that (for IO APIC and MSI) "interrupt vector" determines "IRQ priority", so for high-speed/latency-sensitive devices (e.g. network card) you'll want interrupt vectors that imply "high IRQ priority", and for slower/less latency-sensitive devices (e.g. USB controller) you'll want interrupt vectors that imply "lower IRQ priority".
b3) If the device will be using MSI; determine how many consecutive interrupt vectors the device wants, then use some kind of "interrupt vector manager" (sketched after this list) to try to allocate as many consecutive interrupt vectors as the device wants. Note that it is possible to give the device fewer interrupts than it wants.
c) Regardless of how it happened, you now know which interrupt vector/s the device will use. Start the device driver that's suitable for the device, and tell the device driver which interrupt vectors its device will use (and which MMIO regions, etc).
Note: There are more advanced ways to assign interrupt vectors than "first come, first served"; and there's probably no technical reason why you can't re-evaluate/re-assign interrupt vectors later as some kind of dynamic optimization scheme (e.g. re-allocating interrupt vectors so they're given to frequently used PCI devices instead of idle/unused ones).
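As a sketch of the "interrupt vector manager" mentioned in steps b2/b3 (a deliberately naive first-fit bitmap; a real allocator would also track priority classes): MSI hands out power-of-two blocks that must be naturally aligned, so the search steps through aligned candidates only.

```c
#include <stdbool.h>

/* Naive first-fit allocator over vectors 32..255 (0..31 are reserved for
 * CPU exceptions). MSI blocks must be power-of-two sized and naturally
 * aligned, so candidates advance in aligned strides. */
static bool vector_used[256];

int alloc_vectors(int count) /* count: power of two (1 for a plain IRQ) */
{
    int first = (32 + count - 1) & ~(count - 1); /* first aligned candidate */

    for (int base = first; base + count <= 256; base += count) {
        bool free_run = true;
        for (int i = 0; i < count; i++)
            if (vector_used[base + i]) { free_run = false; break; }
        if (free_run) {
            for (int i = 0; i < count; i++)
                vector_used[base + i] = true;
            return base; /* first vector of the allocated block */
        }
    }
    return -1; /* nothing left; retry with fewer vectors (see b3) */
}
```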

Zilog Z80 I, R registers purpose

There are I and R registers in the control section of the Z80 CPU; what is their purpose and usage?
The R register is the memory refresh register. It's used to refresh dynamic RAM. Essentially, it is incremented on each instruction and placed on the address bus (when not in use for fetching or storing data) so that dynamic RAM chips can be refreshed.
You can ignore the R register, although people did use it as a source of semi random numbers.
The I register is the interrupt vector base register. In interrupt mode 2, the Z80 has a table of 128 interrupt vectors. The I register tells you the page in RAM where that table is.
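As a hedged illustration of the mode 2 lookup (plain C over a simulated memory array, not real hardware access):

```c
#include <stdint.h>

/* Z80 interrupt mode 2 vector fetch: the CPU concatenates I (high byte)
 * with the byte the interrupting device places on the bus (low byte),
 * then reads the 16-bit handler address from that table entry. Zilog
 * documents the low bit as 0, hence the 128 two-byte entries. */
uint16_t im2_handler_address(const uint8_t memory[65536],
                             uint8_t i_register, uint8_t bus_byte)
{
    uint16_t entry = (uint16_t)((i_register << 8) | (bus_byte & 0xFEu));

    /* Table entries are little-endian 16-bit handler addresses. */
    return (uint16_t)(memory[entry] | (memory[(uint16_t)(entry + 1)] << 8));
}
```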

How is the reset sequence carried out on the Cortex-M3 in the case of boot code?

After getting a grip on the various fault handlers in the Cortex-M3,
I'm now studying the reset sequence and the reset handler.
In the normal case, after power-on reset, the PC points to 0x00000000, where the initial MSP value is stored.
Then the reset vector is kept at 0x00000004.
This means that after initializing the MSP, the reset handler is called.
In the case of boot code, what does the reset sequence look like, and how is the vector table relocated after the boot process?
The reset sequence of the processor is the same regardless of the code that is being run. Often, boot code may choose to relocate the vector table and this is done using the "Vector Table Offset Register". The vector table can be relocated to some place in RAM or another ROM location. Boot code must at a minimum define the initial main stack pointer value, the reset vector address, the NMI vector address and the hard fault address. The last two because they can occur during the boot process.
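To make the relocation concrete, here is a minimal sketch using VTOR; the 64-entry table, the 256-byte alignment, and the __flash_vectors linker symbol are assumptions to be sized for the actual device:

```c
#include <stdint.h>

/* Vector Table Offset Register (Cortex-M3, address 0xE000ED08). */
#define SCB_VTOR (*(volatile uint32_t *)0xE000ED08UL)

/* Assumed linker symbol for the original table in flash at 0x00000000. */
extern const uint32_t __flash_vectors[];

/* 64 entries (initial MSP + 63 exception vectors) need 256-byte alignment;
 * adjust both for your device's actual vector count. */
static uint32_t ram_vectors[64] __attribute__((aligned(256)));

void relocate_vector_table(void)
{
    for (int i = 0; i < 64; i++)
        ram_vectors[i] = __flash_vectors[i]; /* MSP, reset, NMI, faults, ... */

    SCB_VTOR = (uint32_t)ram_vectors;        /* point the hardware at RAM */
    __asm__ volatile ("dsb");                /* complete the write before
                                                the next exception */
}
```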

Interrupt Vector. Location / Who sets it? [closed]

Knowing that an interrupt vector is the memory address of an interrupt handler, or an index into an array called an interrupt vector table that contains the memory addresses of interrupt handlers, and that when an interrupt is generated, the operating system saves its execution state via a context switch and begins execution of the interrupt handler at the interrupt vector:
I have some questions that I have been searching hard on, but with no answer yet.
Is the interrupt vector stored in RAM? And if it is stored in RAM, who sets it there? The OS?
interrupt vector is the memory address of an interrupt handler
Memory is a synonym for RAM here, so yes, the interrupt vector is stored in RAM. If a device driver wants to register an interrupt handler function, it needs to call the appropriate OS call (in the case of Linux it is request_irq), which creates an entry in the interrupt vector table. This entry points to wherever your interrupt handler function resides in memory/RAM. It's the OS that holds the responsibility for managing the interrupt vector table.
So, whenever that specific interrupt occurs, your interrupt handler function will be called.
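For illustration, a sketch of that registration in a Linux driver (kernel-module context; the IRQ number and cookie would come from the bus in a real driver, and "my_device" is a placeholder name):

```c
#include <linux/interrupt.h>

/* Sketch of a Linux driver hooking an interrupt line with request_irq().
 * The handler runs whenever the line fires; the kernel, not the driver,
 * maintains the dispatch tables behind it. */
static irqreturn_t my_handler(int irq, void *dev_id)
{
    /* read the device's status register, acknowledge the interrupt, ... */
    return IRQ_HANDLED;
}

static int install_handler(unsigned int irq, void *dev_cookie)
{
    /* IRQF_SHARED allows other devices on the same line; dev_cookie must
     * then be non-NULL so the kernel can tell the handlers apart. */
    return request_irq(irq, my_handler, IRQF_SHARED, "my_device", dev_cookie);
}
```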
On a microcontroller it is in flash, not in RAM. RAM is there to store the data, while flash stores the program, and the interrupt vectors are generated by the compiler.
It depends on the hardware.
If there's only one address that the CPU can jump to on an interrupt then whether that's ROM or RAM depends on the memory map that the system has built around the CPU. Ditto for a predefined interrupt vector table. If the CPU allows a base address to be set for the interrupt table then it's obviously up to the OS.
Generally speaking, an OS that loads completely from disk — like Windows — will obviously keep it in RAM.
OSes that are stored partly or wholly in ROM generally keep the vector table in RAM so that it can be modified at runtime. On very constrained and well-defined systems like the 8-bit Acorn MOS, that's because software might conceivably want to take full control of the hardware. If memory serves, that specific system has the hardware vector in ROM because of the fundamentals of the memory map, but puts a routine there that then soft-vectors through RAM; so it was a very deliberate decision.
On relatively more spacious systems like the classic Mac OS that's because it allows the ROM to be patched after the fact. If a bug is found in a particular interrupt routine after a machine has shipped then an OS update could be issued that would load a RAM replacement for the routine and just change the vector table. That's especially useful in Mac OS because all calls into the system use a trap mechanism that's analogous to an interrupt.
On the PC under modern Windows OSes, the interrupt vectors are stored in the Interrupt Descriptor Table (IDT). You can find out where this table is located using the SIDT instruction (Store Interrupt Descriptor Table); the corresponding LIDT (Load Interrupt Descriptor Table) instruction is what changes it. But you cannot change a value there unless you can get your code to run at privilege level zero (ring 0), and Microsoft and Intel have conspired to make that almost impossible under Windows, as all instructions which would change the Code Segment register (CS) to ring 0 are blocked to user programs. That's why WINTEL, like Australopithecus, might prove to be a dead end in evolutionary terms (I hope).
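For illustration, a minimal x86-64 sketch that reads the IDT's location with SIDT (on CPUs/OSes that enable UMIP this will fault in user mode, so treat it as a demonstration only):

```c
#include <stdint.h>
#include <stdio.h>

/* The IDTR layout SIDT stores: 16-bit limit followed by the 64-bit base. */
struct __attribute__((packed)) idtr {
    uint16_t limit;
    uint64_t base;
};

int main(void)
{
    struct idtr idtr;
    __asm__ volatile ("sidt %0" : "=m"(idtr)); /* store current IDTR */
    printf("IDT base: %#llx, limit: %u\n",
           (unsigned long long)idtr.base, idtr.limit);
    return 0;
}
```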
Basically, PCs are nothing more than smart terminals; you have to use them just as a terminal to your own machine to do REAL work, like controlling something.