Using HAL_GetTick in an interrupt - STM32

I'm working on an STM32F411CEU6 using STM32CubeIDE. I'm making a library that works with the UART interrupt, and inside the UART interrupt I'm using the HAL_GetTick function to keep track of time. When I use this function outside the interrupt it works properly, but when I try to use it inside the interrupt, uwTick halts.
I understand that uwTick is a global variable that is incremented in an interrupt. My first guess was that the UART interrupt had greater priority than the SysTick timer interrupt (I'm guessing that interrupt is the one that triggers the uwTick increment), but the SysTick timer interrupt has a higher priority in the pinout configuration UI.
What is going on?
Should I change my approach and use a timer (reading the counter inside)?
Additional information:
- I'm triggering the interrupt with HAL_UART_Receive_IT(&huartx, &USART_receive[0], 1), where USART_receive is a receive buffer
- The function that uses HAL_GetTick is called in the void USART1_IRQHandler(void) handler, after the HAL_UART_IRQHandler(&huart1) call
Thanks in advance!

A higher interrupt priority is represented by a lower number and vice-versa. Maybe you need to switch the priorities around to do what you want.
However, please note:
It is conventional for SysTick to be one of the lowest priority interrupts in the system, with only PendSV/SVCall lower.
It is generally considered a bad idea to try to delay within an interrupt, especially for several milliseconds. It is probably better to set a flag or something in the interrupt and let your main context carry out the delayed action.
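For example, here is a minimal sketch of that flag approach, assuming the single-byte HAL_UART_Receive_IT() setup described in the question (the names uart_event_flag, rx_byte and uart_background_task are made up for illustration):
#include "main.h"   /* assumed CubeMX-generated header that declares huart1 */

volatile uint8_t uart_event_flag = 0;   /* set in the ISR, cleared in the main loop */
static uint8_t rx_byte;

/* Called by the HAL from the UART ISR once a single byte has arrived. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1)
    {
        uart_event_flag = 1;                       /* just record the event */
        HAL_UART_Receive_IT(huart, &rx_byte, 1);   /* re-arm reception for the next byte */
    }
}

/* Main-context work: HAL_GetTick() keeps counting here, because no
   long-running ISR is blocking the SysTick interrupt. */
void uart_background_task(void)
{
    if (uart_event_flag)
    {
        uart_event_flag = 0;
        uint32_t received_at = HAL_GetTick();      /* timestamp the event */
        /* ...process rx_byte / measure elapsed time here... */
        (void)received_at;
    }
}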

Related

How does the computer implement callbacks?

I already know the general usage of callbacks. First, I register a "callback function"; when some event occurs, this function will be triggered (executed).
What confuses me is how I know whether the event has occurred. The only solution I can come up with is polling. Is there a better way to check whether the event has occurred in less than O(n) time?
All right, maybe the above question is too abstract. A more concrete version: does epoll_wait avoid using O(n) time to check for ready file descriptors?
If so, how does it do it?
Is there a callback mechanism that is essentially different from polling?
Usually, but not exclusively, callbacks get called after some peripheral I/O device signals an operation completion by raising a hardware interrupt. A long chain of stuff involving things like driver interrupt handlers, semaphores, protection ring changes, thread and process context changes, message assembly/enqueueing/dequeueing/handling/dispatching etc etc then causes your callback to be called, maybe by some system thread, or from a message-handling or signal-handling thread of your own that has to conform to a specific structure or constraint.
So no, polling is generally unnecessary, and unwanted.
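As a very rough sketch of the mechanism (every name here is illustrative, not a real driver API): the application registers a function pointer once, and the interrupt handler calls through it only when the hardware actually signals an event, so nothing has to sit in a loop checking a status flag.
#include <stddef.h>
#include <stdint.h>

typedef void (*rx_callback_t)(uint8_t byte);

static volatile rx_callback_t registered_cb = NULL;

extern uint8_t read_uart_data_register(void);   /* hypothetical hardware access */

/* The application registers its callback once, up front. */
void uart_register_rx_callback(rx_callback_t cb)
{
    registered_cb = cb;
}

/* Hypothetical interrupt handler: it runs only when the hardware raises
   the RX interrupt, so no polling loop is needed anywhere. */
void UART_RX_IRQHandler(void)
{
    uint8_t byte = read_uart_data_register();
    rx_callback_t cb = registered_cb;
    if (cb != NULL)
    {
        cb(byte);   /* the "callback" is just an indirect call on the interrupt path */
    }
}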

How do the callback functions work in the STM32 HAL library?

As we all know, the HAL library provides some callback functions to manage hardware interrupts, but I don't know how they work.
The fact is that I am using the HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) function to receive other devices' data and check that data, so I use the USART interrupt to receive it.
But I don't know when the callback function will be executed. Does it depend on the receive buffer's length or on the data buffer?
I guess the hardware interrupt is triggered when a character has been received, but the callback function is executed after the receive buffer is full.
PS: I am using the STM32 Nucleo-F410 development board to communicate with an AT command device, and I am a novice at this.
(So sorry for my poor English!)
Thanks a lot.
The callback you are referring to is called when the amount of data specified in the receive function (the third argument to HAL_UART_Receive_IT) has been received. You are correct that the UART interrupt service routine (ISR) is called every time a character is received, but when using the HAL that happens internally to the library and doesn't need to be managed by you. Every time the ISR is called, the received character is moved into the array you provide via the second argument of HAL_UART_Receive_IT, and when the number of characters specified by the call has been reached, the callback is called from that ISR (so make sure not to do anything that takes too long to complete - ISRs should be short, and the ISRs in the HAL library are already pretty lengthy to handle every possible use case).
Further, if you find that the callback is not being triggered even though you are sending enough data to the peripheral, make sure the interrupt is actually enabled - the HAL_UART_Receive_IT function doesn't actually enable the interrupt in the NVIC; that has to be done during initialization of the peripheral.
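As a rough sketch of that pattern (huart2, the 4-byte frame length and the rx_frame buffer are assumptions for illustration, not values from your project; the HAL calls themselves are the standard ones):
#include "main.h"   /* assumed CubeMX-generated header that declares huart2 */

#define FRAME_LEN 4
static uint8_t rx_frame[FRAME_LEN];

void uart_rx_start(void)
{
    /* The NVIC side must already be enabled, typically in the CubeMX-generated
       HAL_UART_MspInit():
           HAL_NVIC_SetPriority(USART2_IRQn, 5, 0);
           HAL_NVIC_EnableIRQ(USART2_IRQn);                                   */

    /* Ask for FRAME_LEN bytes; the HAL ISR collects them one by one. */
    HAL_UART_Receive_IT(&huart2, rx_frame, FRAME_LEN);
}

/* Called from the UART ISR only after FRAME_LEN bytes have been received. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2)
    {
        /* ...check the data quickly, or hand it off to the main loop... */
        HAL_UART_Receive_IT(huart, rx_frame, FRAME_LEN);   /* re-arm for the next frame */
    }
}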

Interrupt masking: why?

I was reading up on interrupts. It is possible to suspend non-critical interrupts via a special interrupt mask. This is called interrupt masking. What I don't know is when/why you might want to, or need to, temporarily suspend interrupts. Possibly for semaphores, or when programming in a multi-processor environment?
The OS does that when it prepares to run its own "let's orchestrate the world" code.
For example, at some point the OS thread scheduler has control. It prepares the processor registers and everything else that needs to be done before it lets a thread run so that the environment for that process and thread is set up. Then, before letting that thread run, it sets a timer interrupt to be raised after the time it intends to let the thread have on the CPU elapses.
After that time period (quantum) has elapsed, the interrupt is raised and the OS scheduler takes control again. It has to figure out what needs to be done next. To do that, it needs to save the state of the CPU registers so that it knows how to undo the side effects of the code it executes. If another interrupt is raised for any reason (e.g. some async I/O completes) while state is being saved, this would leave the OS in a situation where its world is not in a valid state (in effect, saving the state needs to be an atomic operation).
To avoid being caught in that situation, the OS kernel therefore disables interrupts while any such operations that need to be atomic are performed. After it has done whatever needs doing and the system is in a known state again, it reenables interrupts.
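On a small single-core MCU the same idea boils down to briefly disabling interrupts around an update that must look atomic. A minimal sketch using the standard CMSIS intrinsics (the shared counter is just an illustration, and the device header is an assumption):
#include <stdint.h>
#include "stm32f4xx.h"   /* assumed device header; any CMSIS device header works */

static volatile uint32_t shared_counter;   /* also modified from an ISR */

void increment_shared_counter(void)
{
    uint32_t primask = __get_PRIMASK();   /* remember whether interrupts were enabled */
    __disable_irq();                      /* enter the critical section */

    shared_counter++;                     /* this read-modify-write must not be torn
                                             apart by an interrupt */

    if (primask == 0U)
    {
        __enable_irq();                   /* re-enable only if they were enabled before */
    }
}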
I used to program on an ARM board that had about 10 interrupts that could occur. Each particular program that I wrote was never interested in more than 4 of them. For instance there were 2 timers on the board, but my programs only used 1. I would mask the 2nd timer's interrupt. If I didn't mask that timer, it might have been enabled and continued making interrupts which would slow down my code.
Another example was that I would use the UART receive REGISTER full interrupt and so would never need the UART receive BUFFER full interrupt to occur.
I hope this gives you some insight as to why you might want to disable interrupts.
In addition to answers already given, there's an element of priority to it. There are some interrupts you need or want to be able to respond to as quickly as possible and others you'd like to know about but only when you're not so busy. The most obvious example might be refilling the write buffer on a DVD writer (where, if you don't do so in time, some hardware will simply write the DVD incorrectly) versus processing a new packet from the network. You'd disable the interrupt for the latter upon receiving the interrupt for the former, and keep it disabled for the duration of filling the buffer.
In practice, quite a lot of CPUs have interrupt priority built directly into the hardware. When an interrupt occurs, the disable flags are set for lesser interrupts (and often for that interrupt itself) at the same time as the interrupt vector is read and the jump to the relevant address is made. Dictating that receipt of an interrupt also implicitly masks that interrupt until the end of the interrupt handler has the nice side effect of loosening restrictions on the interrupting hardware. E.g. you can simply say that a high signal triggers the interrupt and leave the external hardware to decide how long it wants to hold the line high, without worrying about inadvertently triggering multiple interrupts.
In many antiquated systems (including the Z80 and 6502) there tend to be only two levels of interrupt, maskable and non-maskable, which I think is where the language of enabling or disabling interrupts comes from. But even as far back as the original 68000 you have eight levels of interrupt and a current priority level in the CPU that dictates which levels of incoming interrupt will actually be allowed to take effect.
Imagine your CPU is in the "int3" handler and at that moment "int2" occurs, where the newly raised "int2" has a lower priority than "int3". How do we handle this situation?
One way is that while handling "int3" we mask out the lower-priority interrupts. That is, we see "int2" signaling the CPU, but the CPU will not be interrupted by it. After we finish handling "int3", we return from "int3" and unmask the lower-priority interrupts.
The place we return to can be:
Another process (in a preemptive system)
The process that was interrupted by "int3" (in a non-preemptive or preemptive system)
An interrupt handler that was interrupted by "int3", say "int1"'s handler.
In cases 1 and 2, because we have unmasked the lower-priority interrupts and "int2" is still signaling the CPU ("hi, there is something for you to handle immediately"), the CPU will be interrupted again, while it is executing instructions from a process, to handle "int2".
In case 3, if the priority of "int2" is higher than "int1"'s, then the CPU will be interrupted again, while it is executing instructions from "int1"'s handler, to handle "int2".
Otherwise, "int1"'s handler is executed without being interrupted (because we are also masking out the interrupts with priority lower than "int1"), and the CPU returns to a process after handling "int1" and unmasks. At that point "int2" will be handled.
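On a Cortex-M part (such as the STM32 in the question at the top) you can get the same behaviour in software with the BASEPRI register, which masks every interrupt whose priority number is greater than or equal to a chosen threshold. A hedged sketch using the standard CMSIS intrinsics (the IRQ choices and priority values are arbitrary examples, and the device header is an assumption):
#include "stm32f4xx.h"   /* assumed device header; pulls in the core CMSIS intrinsics */

void configure_example_priorities(void)
{
    NVIC_SetPriority(USART1_IRQn, 2);   /* plays the role of "int3": more urgent (lower number) */
    NVIC_SetPriority(EXTI0_IRQn, 5);    /* plays the role of "int2": less urgent */
    NVIC_EnableIRQ(USART1_IRQn);
    NVIC_EnableIRQ(EXTI0_IRQn);
}

void do_time_critical_work(void)
{
    /* Mask priority 3 and anything numerically higher; priorities 0-2 (the more
       urgent ones) can still preempt. BASEPRI holds the priority value shifted
       into the implemented priority bits. */
    uint32_t old_basepri = __get_BASEPRI();
    __set_BASEPRI(3U << (8U - __NVIC_PRIO_BITS));

    /* ...work that the less urgent interrupt must not disturb... */

    __set_BASEPRI(old_basepri);          /* unmask again */
}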

How to handle class methods being called again before they are finished?

What is the best way to handle this situation on an iPhone device: My program ramps the pitch of a sound between two values. A button press calls a method that has a while loop that does the ramping in small increments. It will take some time to finish. In the meantime the user has pressed another button, calling the same method. Now I want the loop in the first call to stop and the second to start from the current state. Here is something like what the method should look like:
- (void)changePitchSample:(float)newPitch {
    float oldPitch = channel.pitch;
    if (oldPitch > newPitch) {
        while (channel.pitch > newPitch) {
            channel.pitch = channel.pitch - 0.001;
        }
    }
    else if (oldPitch < newPitch) {
        while (channel.pitch < newPitch) {
            channel.pitch = channel.pitch + 0.001;
        }
    }
}
Now how do I best handle the situation where the method is called again? Do I need some kind of multithreading? I do not need two processes going at the same time, so it seems there must be some easier solution that I cannot find (being new to this language).
Any help greatly appreciated!
You cannot do this like that. While your loop is running, no events will be processed, so if the user pushes the button again nothing will happen before your loop is finished. Also, like this you can’t control the speed of your ramp. I’d suggest using an NSTimer. In your changePitchSample: method you store the new pitch somewhere (don’t overwrite the old one) and start a timer that fires once. When the timer fires you increment your pitch, and if it is still less than the new pitch you restart the timer.
Have a look at NSOperation and the Concurrency Programming Guide. You can first start your operation to increase the pitch and store the operation object. On the second call you can call [operation cancel] to stop the previous operation, then start a second operation to, e.g., decrease the pitch and store the new object.
Btw: What you are doing right now is very bad since you "block the main thread". Calculations that take some time should not be directly executed. You should probably also have a look at NSTimer to make your code independent of the processor speed.
Don't use a while loop; it blocks everything else. Use a timer and a state machine. The timer can call the state machine at the rate at which you want things to change. The state machine can look at the last ramp value and the time of the last button hit (or even an array of UI event times) and decide whether and how much to ramp the volume during the next time step (logic is often just a pile of if and select/case statements if the control algorithm isn't amenable to a nice table). Then the state machine can call the object or routine that handles the actual sound level.
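Here is a hedged, plain-C sketch of that state-machine idea, independent of any particular UI or audio framework (the step size and all names are illustrative; on iOS the ramp_tick() function would be what an NSTimer fires, and apply_pitch() stands in for setting channel.pitch):
extern void apply_pitch(float pitch);   /* hypothetical hook into the audio channel */

typedef struct {
    float current;   /* last ramp value actually applied */
    float target;    /* most recent value requested by a button press */
    float step;      /* change applied per timer tick */
} pitch_ramp_t;

static pitch_ramp_t ramp = { 1.0f, 1.0f, 0.001f };

/* Called from the button handler: just record the new goal and return.
   A second press simply overwrites the target, so there is nothing to cancel. */
void ramp_set_target(float new_pitch)
{
    ramp.target = new_pitch;
}

/* Called periodically by a timer (for example every few milliseconds):
   move one small step toward the target and apply it. */
void ramp_tick(void)
{
    if (ramp.current < ramp.target)
    {
        ramp.current += ramp.step;
        if (ramp.current > ramp.target) ramp.current = ramp.target;
    }
    else if (ramp.current > ramp.target)
    {
        ramp.current -= ramp.step;
        if (ramp.current < ramp.target) ramp.current = ramp.target;
    }
    apply_pitch(ramp.current);
}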

FreeRTOS Sleep Mode hazards while using MSP430f5438

I wrote an idle hook, shown here:
void vApplicationIdleHook( void )
{
    asm("nop");
    P1OUT &= ~0x01;  // go to sleep, lights off!
    LPM3;            // enter LPM3 - remove to make debugging a little easier...
    asm("nop");
}
That should cause the LED to turn off and the MSP430 to go to sleep when there is nothing to do. I turn the LED on during some tasks.
I also made sure to modify the sleep mode bits in the SR upon exit of any interrupt that could possibly wake the MCU (with the exception of the scheduler tick ISR in portext.s43). The macro in IAR is
__bic_SR_register_on_exit(LPM3_bits); // Exit Interrupt as active CPU
However, it seems as though putting the MCU to sleep causes some irregular behavior. The LED stays on all the time, although when I scope it, it turns off for a couple of instruction cycles whenever I wake the MCU via one of the interrupts (UART), and then turns back on.
If I comment out the LPM3 instruction, things go as planned. The LED stays off most of the time and only comes on when a task is running.
I am using an MSP430F5438.
Any ideas?
Perhaps the problem is the call __bic_SR_register_on_exit(LPM3_bits). This macro changes the LPM bits in the stacked SR, so it must know where to find the saved SR on the stack. I believe that __bic_SR_register_on_exit() is designed for the standard interrupt stack frame generated by the compiler when you use the __interrupt directive. However, a preemptive RTOS, like FreeRTOS, uses its own stack frame typically bigger than the stack frame generated by the compiler, because an RTOS must store the complete context. In this case __bic_SR_register_on_exit() called from an ISR might not find the SR on the stack. Worse, it probably corrupts some other saved register value on the stack.
For a preemptive kernel I would not call __bic_SR_register_on_exit() from the ISRs. The consequence is that the idle callback is called only once and never again, because every time the RTOS performs a context switch back to the idle task the side effect is restoring the SR with the LPM bits turned on. This causes a sleep mode (which is what you want), but your LED won't get toggled.
Miro Samek
state-machine.com