Some confusion about how FreeRTOS schedules tasks on the STM32 platform

I'm currently developing software for a project on an ARM Cortex-M4 MCU, where three main tasks need to be executed:
1) Init_all.c - Must run first and ONLY once at startup
2) Task1.c - Runs infrequently; Once every 10 seconds
3) Task2.c - Must run most frequently
I've decided to experiment with FreeRTOS to see if there's any benefit to using FreeRTOS scheduling over a simple infinite while(1) loop.
Based on what I've read from the documentation so far (correct me if I'm wrong),
1) FreeRTOS runs tasks based on priority - if no interrupts or stop conditions are coded into the highest priority task, any task of lower priority will never get queued and run
2) If nothing is placed in the infinite for(;;) loop, that task will only run once (the code outside the loop, say to initialise the peripherals once)
Since FreeRTOS selects and queues tasks by priority, the initial solution I came up with for creating and prioritising the tasks was this:
1) Init_all.c - Highest priority; for(;;) loop only contains code to trigger LED
2) Task1.c - Second highest priority, **but** I include a 10-second delay, vTaskDelay( xDelay10000ms ), right at the start of the for(;;) loop
3) Task2.c - Lowest priority
Upon testing this, if my understanding of FreeRTOS is correct, I should expect never to see Task1.c or Task2.c run, since the Init_all.c task never ends: it has code to trigger the LED inside its for(;;) loop.
Which leads to my question: two puzzling observations appeared when I implemented the above.
Observation 1:
1) Init_all.c task runs
2) Followed by Task1.c, which is then immediately blocked for 10 seconds.
3) During which Task2.c runs until the 10 seconds are up; then Task1.c takes over and runs the code after the vTaskDelay( xDelay6000ms ) call.
All this while, Init_all.c is still running, but I don't know where in the queue it sits. The LED does indeed trigger every second, but again, I'm thoroughly confused about why the Init_all.c task is even running.
The code for Task1.c is as below to provide a better illustration:
// Task1.c
void Task1(void const * argument)
{
// Timer interrupt
const TickType_t xDelay10000ms = pdMS_TO_TICKS( 10000 );
for( ;; )
{
/** Immediately block this task for 10secs upon starting it **/
vTaskDelay( xDelay10000ms );
// code below executes AFTER Task1 resumes from the delay
}
}
Observation 2:
Task2 takes 1 second to run, so it should theoretically run 10 times in the 10-second window granted to it by the delay.
However, I'd see a strange result where Task2 sometimes ran 9 times, sometimes 10 times.
Am I understanding the FreeRTOS concepts wrongly? Thanks! Code is below:
Init_all.c task:
void StartDefaultTask(void const * argument)
{
init_sensor1();
init_sensor2();
init_sensor3();
init_sensor4();
for( ; ; )
{
GREEN_LED_ON();
osDelay(50);
GREEN_LED_OFF();
osDelay(1000);
}
}
Task1:
void Task1(void const * argument)
{
// Timer interrupt
const TickType_t xDelay6000ms = pdMS_TO_TICKS( 6000 );
// All Initialisation
for( ;; )
{
// Timer interrupt this task to ensure apptasks.c finishes first?
vTaskDelay( xDelay6000ms );
takesensor1data();
takesensor2data();
}
}
Task 2:
void Task2(void const * argument)
{
// Timer interrupt
const TickType_t xDelay6000ms = pdMS_TO_TICKS( 6000 );
// All Initialisation
for( ;; )
{
takesensor3data();
takesensor4data();
}
}

Init_all.c - Must run first and ONLY once at startup
Search for "Daemon Task Startup Hook" on this page: https://www.freertos.org/a00016.html
2) If nothing is placed in the infinite for(;;) loop, that task will only run once (the code outside the loop, say to initialise the peripherals once)
If you do that, then you must delete the task (call vTaskDelete(NULL)) at the end of its implementation. https://www.freertos.org/implementing-a-FreeRTOS-task.html
As to your main question: you don't show the code of the init task, so it's impossible to say. However, this kind of system is generally implemented to be event-driven, so most tasks spend most of their time in the Blocked state (using no CPU time, so that lower-priority tasks can execute).
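A minimal sketch of that run-once pattern, assuming a FreeRTOS project (the kernel headers and the init_sensorN() functions from the question are required, so this will not compile standalone):

// Sketch: a one-shot init task that deletes itself when done.
// Assumes FreeRTOS kernel headers and the question's init functions.
#include "FreeRTOS.h"
#include "task.h"

void InitAllTask(void *pvParameters)
{
    (void)pvParameters;

    init_sensor1();   /* one-shot initialisation, runs exactly once */
    init_sensor2();
    init_sensor3();
    init_sensor4();

    /* A FreeRTOS task must never simply return; delete it instead. */
    vTaskDelete(NULL);
}

Created at the highest priority, a task like this runs to completion before the lower-priority tasks get any CPU time, then disappears from the scheduler entirely.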

I'm afraid your assumptions are wrong.
If a lower-priority task does not give back control, or an interrupt routine does not wake up a higher-priority task, a task may never get control.
There is nothing like a position in a queue here. This is not Linux or Windows.

Related

Can't we use HAL_Delay() in an ISR on the STM32F407VG?

I am new to STM32. I tried to implement an interrupt using the user button of the STM32F407VG.
I added a HAL_Delay() inside the interrupt callback.
When the button is pressed, the interrupt service routine starts executing, but it never returns to the main() function.
This is the part of the code responsible for the interrupt:
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
if(GPIO_Pin==GPIO_PIN_0)
{
if(prev_val==false)
{
HAL_GPIO_WritePin(GPIOD, GPIO_PIN_12|GPIO_PIN_13|GPIO_PIN_14, 1);
prev_val=true;
}
else
{
HAL_GPIO_WritePin(GPIOD, GPIO_PIN_12|GPIO_PIN_13|GPIO_PIN_14, 0);
prev_val = false;
}
HAL_Delay(1000);
}
}
Take care: if you use the default HAL settings provided by ST, the priority for the SysTick IRQ is set to 15 when HAL_Init() is called.
So you have to change that in the stm32f4xx_hal_conf.h file or by using the HAL_InitTick(TickPriority) function.
See also the user manual, page 31:
HAL_Delay(): this function implements a delay (expressed in milliseconds) using the SysTick timer.
Care must be taken when using HAL_Delay(), since this function provides an accurate delay (expressed in
milliseconds) based on a variable incremented in the SysTick ISR. This means that if HAL_Delay() is called from
a peripheral ISR, then the SysTick interrupt must have a higher priority (numerically lower) than the
peripheral interrupt; otherwise the caller ISR is blocked.
I found the way to deal with it: my external interrupt's priority was 0 by default, and the SysTick interrupt that HAL_Delay() relies on also had priority 0.
So I reduced the priority of the external interrupt, setting it to 1.
Now it's working fine.
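A hedged sketch of that change using the HAL NVIC API (EXTI0_IRQn is an assumption here, matching the user button on PA0; this fragment belongs in your init code and is not standalone):

// Give the button's EXTI line a numerically larger (= less urgent)
// priority than SysTick, so HAL_Delay() can still count ticks while
// the callback runs.
HAL_NVIC_SetPriority(EXTI0_IRQn, 1, 0);  // preempt priority 1, sub-priority 0
HAL_NVIC_EnableIRQ(EXTI0_IRQn);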

Why won't AnyEvent::child callbacks ever run if interval timer events are always ready?

Update: this issue can be resolved using the fixes present in https://github.com/zbentley/AnyEvent-Impl-Perl-Improved/tree/io-starvation
Context:
I am integrating AnyEvent with some otherwise-synchronous code. The synchronous code needs to install some watchers (on timers, child processes, and files), wait for at least one watcher to complete, do some synchronous/blocking/legacy stuff, and repeat.
I am using the pure-perl AnyEvent::Loop-based event loop, which is good enough for my purposes at this point; most of what I need it for is signal/process/timer tracking.
The problem:
If I have a callback that can block the event loop for a moment, child-process-exit events/callbacks never fire. The simplest example I could make watches a child process and runs an interval timer. The interval timer does something blocking before it finishes:
use AnyEvent;
# Start a timer that, every 0.5 seconds, sleeps for 1 second, then prints "timer":
my $w2 = AnyEvent->timer(
after => 0,
interval => 0.5,
cb => sub {
sleep 1; # Simulated blocking operation. If this is removed, everything works.
say "timer";
},
);
# Fork off a pid that waits for 1 second and then exits:
my $pid = fork();
if ( $pid == 0 ) {
sleep 1;
exit;
}
# Print "child" when the child process exits:
my $w1 = AnyEvent->child(
pid => $pid,
cb => sub {
say "child";
},
);
AnyEvent->condvar->recv;
This code leaves the child process zombied and prints "timer" over and over, forever (I ran it for several minutes). If the sleep 1 call is removed from the timer's callback, the code works correctly and the child process watcher fires as expected.
I'd expect the child watcher to run eventually (at some point after the child exited, and any interval events in the event queue ran, blocked, and finished), but it does not.
The sleep 1 could be any blocking operation. It can be replaced with a busy-wait or any other thing that takes long enough. It doesn't even need to take a second; it appears to only need to be a) running during the child-exit event/SIGCHLD delivery, and b) result in the interval always being due to run according to the wallclock.
Questions:
Why isn't AnyEvent ever running my child-process watcher callback?
How can I multiplex child-process-exit events with interval events that may block for so long that the next interval becomes due?
What I've tried:
My theory is that timer events which become "ready" due to time spent outside of the event loop can indefinitely pre-empt other types of ready events (like child process watchers) somewhere inside AnyEvent. I've tried a few things:
Using AnyEvent::Strict doesn't surface any errors or change behavior in any way.
Partial solution: Removing the interval event at any point does make the child process watcher fire (as if there's some internal event polling/queue population done inside AnyEvent that only happens if there are no timer events already "ready" according to the wallclock). Drawbacks: in the general case that doesn't work, since I'd have to know when my child process had exited to know when to postpone my intervals, which is tautological.
Partial solution: Unlike child-process watchers, other interval timers seem to be able to multiplex with each other just fine, so I can install a manual call to waitpid in another interval timer to check for and reap children. Drawbacks: child-waiting can be artificially delayed (my use case involves lots of frequent process creation/destruction), any AnyEvent::child watchers that are installed and fire successfully will auto-reap the child and not tell my interval/waitpid timer, requiring orchestration, and it just generally feels like I'm misusing AnyEvent.
The interval is the time between the starts of successive timer callbacks, i.e. not the time between the end of one callback and the start of the next. You set up a timer with an interval of 0.5 seconds whose action is to sleep for one second. This means that once the timer fires, it will immediately fire again and again, because by the time the callback returns the interval is always already over.
Thus, depending on the implementation of the event loop, it can happen that no other events are processed because it is busy running the same timer over and over. I don't know which underlying event loop you are using (check $AnyEvent::MODEL), but if you look at the source code of AnyEvent::Loop (the loop for the pure-Perl implementation, i.e. model AnyEvent::Impl::Perl) you will find the following code:
if (@timer && $timer[0][0] <= $MNOW) {
do {
my $timer = shift @timer;
$timer->[1] && $timer->[1]($timer);
} while @timer && $timer[0][0] <= $MNOW;
}
As you can see, it stays busy executing timers as long as there are timers due to run. And with your interval (0.5 s) and the behavior of the timer (sleeping one second), there will always be a timer due to be executed.
If you instead change your timer so that there is actual room for the processing of other events by setting the interval to be larger than the blocking time (like 2 seconds instead of 0.5) everything works fine:
...
interval => 2,
cb => sub {
sleep 1; # Simulated blocking operation. Sleep less than the interval!!
say "timer";
...
timer
child
timer
timer
Update: this issue can be resolved using the fixes present in https://github.com/zbentley/AnyEvent-Impl-Perl-Improved/tree/io-starvation
@steffen-ulrich's answer is correct, but it points at a very flawed behavior in AnyEvent: since there is no underlying event queue, certain kinds of events that always report "ready" can indefinitely pre-empt others.
Here is a workaround:
For interval timers that are always "ready" due to a blocking operation that happens outside of the event loop, it is possible to prevent starvation by chaining interval invocations onto the next run of the event loop, like this:
use AnyEvent;
sub deferred_interval {
my %args = @_;
# Some silly wrangling to emulate AnyEvent's normal
# "watchers are uninstalled when they are destroyed" behavior:
${$args{reference}} = 1;
$args{oldref} //= delete($args{reference});
return unless ${$args{oldref}};
AnyEvent::postpone {
${$args{oldref}} = AnyEvent->timer(
after => delete($args{after}) // $args{interval},
cb => sub {
$args{cb}->(@_);
deferred_interval(%args);
}
);
};
return ${$args{oldref}};
}
# Start a timer that, at most once every 0.5 seconds, sleeps
# for 1 second, and then prints "timer":
my $w2; $w2 = deferred_interval(
after => 0.1,
reference => \$w2,
interval => 0.5,
cb => sub {
sleep 1; # Simulated blocking operation.
say "timer";
},
);
# Fork off a pid that waits for 1 second and then exits:
my $pid = fork();
if ( $pid == 0 ) {
sleep 1;
exit;
}
# Print "child" when the child process exits:
my $w1 = AnyEvent->child(
pid => $pid,
cb => sub {
say "child";
},
);
AnyEvent->condvar->recv;
Using that code, the child process watcher will fire more or less on time, and the interval will keep firing. The tradeoff is that each interval timer only restarts after each blocking callback finishes. Given an interval time of I and a blocking-callback runtime of B, this approach fires an interval event roughly every I + B seconds, whereas the approach from the question fires one roughly every max(I, B) seconds (at the expense of potential starvation).
I think that a lot of the headaches here could be avoided if AnyEvent had a backing queue (many common event loops take this approach to prevent situations exactly like this one), or if the implementation of AnyEvent::postpone installed a "NextTick"-like event emitter to be fired only after all other emitters had been checked for events.

Signal is always caught by parent process first

Consider the following piece of code running under Solaris 11.3 (a simplified version of system(3C)):
int main(int argc, char **argv) {
pid_t pid = fork();
pid_t w;
int status;
if (pid == 0) {
execvp(argv[1], argv + 1);
perror("Failed to exec");
exit(127);
}
if (pid > 0) {
w = waitpid(pid, &status, 0);
if (w == -1) {
perror("Wait: ");
exit(1);
}
else if (WIFEXITED(status) > 0) {
printf("\nFinish code: %d\n", WEXITSTATUS(status));
}
else {
printf("\nUnexpected termination of child process.\n");
}
}
if (pid == -1) {
perror("Failed to fork");
}
}
The problem I get is that whenever the process is finished via a signal (for instance, SIGINT) the "Unexpected termination" message is never printed.
The way I see it, the whole process group receives signals from the terminal, and in this case the parent process simply terminates before waitpid(2) returns (which happens every time, apparently).
If that is the case, I have a follow-up question: how can I retrieve information about the signal that terminated the child process from the parent, without using a signal handler? For example, I could have added another if-else block with a WIFSIGNALED check and a WTERMSIG call on the status variable (in fact, I did, but upon termination with Ctrl+C the program delivered no output whatsoever).
So what exactly and in which order is happening there?
You say, “… whenever the process is finished via a signal
(for instance, SIGINT) …”, but you aren’t specific enough
to enable anybody to answer your question definitively. 
If you are sending a signal to the child process with a kill command,
you have an odd problem. 
But if, as I suspect (and as you suggest when you say
“the whole process group receives signals from the terminal”),
you are just typing Ctrl+C, it’s simple:
When you type an INTR, QUIT, or SUSP character,
the corresponding signal (SIGINT, SIGQUIT, or SIGTSTP) is sent
simultaneously to all processes in the terminal process group.
OK, strictly speaking, it’s not simultaneous. 
It happens in a loop in the terminal driver
(specifically, I believe, the “line discipline” handler), in the kernel. 
No user process execution can occur before this loop completes.
You say “… the parent process simply terminates
before waitpid(2) returns (… every time, apparently).” 
Technically this is true. 
As described above, all processes in the process group
(including your parent and child processes) receive the signal
(essentially) simultaneously. 
Since the parent is not handling the signal, itself,
it terminates before it can possibly do any processing
triggered by the child’s receipt of the signal.
You say “Signal is always caught by parent process first”. 
No; see above. 
And the processes terminate in an unspecified order —
this may be the order in which they appear in the process table
(which is indeterminate),
or determined by some subtle (and, perhaps, undocumented) aspect
of the scheduler’s algorithm.
Related U&L questions:
What is the purpose of abstractions, session, session leader
and process groups?
What are the responsibilities of each Pseudo-Terminal (PTY) component
(software, master side, slave side)?
Does it work OK if you send signals via kill from another tty? I tried this on Linux; it seems to behave the same.
I think you're right that shell control signals are delivered to the process group, and you have a race: you need the parent to catch and delay them.
What I've done is run "./prog cat".
Doing a kill -SIGINT works fine.
Doing a Ctrl+C prints nothing.
Doing a setsid() up front has the parent terminate, but the child keeps running.

Confusion regarding usage of event.triggered

I'm trying out some code that is supposed to block until moving to a new simulation time step (similar to waiting for sys.tick_start in e).
I tried writing a function that does this:
task wait_triggered();
event e;
`uvm_info("DBG", "Waiting trig", UVM_NONE)
-> e;
$display("e.triggered = ", e.triggered);
wait (!e.triggered);
`uvm_info("DBG", "Wait trig done", UVM_NONE)
endtask
The idea behind it is that I trigger some event, meaning that its triggered field is going to be 1 when control reaches the line with wait(!e.triggered). This line should unblock in the next time slot, when triggered is going to be cleared.
To test this out I added some other thread that consumes simulation time:
fork
wait_triggered();
begin
`uvm_info("DBG", "Doing stuff", UVM_NONE)
#1;
`uvm_info("DBG", "Did stuff", UVM_NONE)
end
join
#1;
$finish(1);
I see the messages Doing stuff and Did stuff, but Wait trig done never comes. The simulation also stops before reaching the $finish(1). One simulator told me that this is because no further events have been scheduled.
All simulators exhibit the same behavior, so there must be something I'm missing. Could anyone explain what's going on?
The problem is with wait (!e.triggered); when e.triggered changes from 1 to 0. It has to change in a region where nothing can be scheduled, so whether it changes at the end of the current time slot or at the beginning of the next time slot is unobservable. So the wait hangs waiting for the end of the current time slot, which never comes.
I think the closest thing to what you are looking for is #1step, which blocks for the smallest simulation precision time step. But I've got to believe there is a better way to code what you want without having to know whether time is advancing.

Code with a potential deadlock

// down = acquire the resource
// up = release the resource
typedef int semaphore;
semaphore resource_1;
semaphore resource_2;
void process_A(void) {
down(&resource_1);
down(&resource_2);
use_both_resources();
up(&resource_2);
up(&resource_1);
}
void process_B(void) {
down(&resource_2);
down(&resource_1);
use_both_resources();
up(&resource_1);
up(&resource_2);
}
Why does this code cause a deadlock?
If we change the code of process_B where the both processes ask for the resources in the same order as:
void process_B(void) {
down(&resource_1);
down(&resource_2);
use_both_resources();
up(&resource_2);
up(&resource_1);
}
Then there is no deadlock.
Why so?
Imagine that process A runs first and tries to get resource_1, and gets it.
Now process B takes control and tries to get resource_2, and gets it. Then process B tries to get resource_1 and cannot, because it is held by process A, so process B goes to sleep.
Process A gets control again and tries to get resource_2, but it is held by process B, so process A goes to sleep too.
At this point, process A is waiting for resource_2 and process B is waiting for resource_1: a deadlock.
If you change the order, process B will never hold resource_2 unless it has already obtained resource_1, and the same goes for process A.
They will never be deadlocked.
A necessary condition for a deadlock is a cycle of resource acquisitions. The first example constructs such a cycle: 1 -> 2 -> 1. The second example acquires the resources in a fixed order, which makes a cycle, and hence a deadlock, impossible.