Dispatching tasks periodically from a list in FreeRTOS

I am using FreeRTOS to dispatch a set of 4 periodic tasks.
All tasks have the same period of 10 time units, but they differ in their release times. The release times are 10, 3, 5, and 0 time units for tasks T1, T2, T3, and T4 respectively. All 4 tasks are stored inside the linked list gll_t* pTaskList. The tasks should run like this:
t=0: T4 is released; t=3: T2 is released; t=5: T3 is released; t=10: T1 is released and T4 executes again (since it was released at t=0); and so on.
However, I have two problems with my dispatcher code:
Problem 1: At t=0, only T4 is ready, but note that T1 has its release time at 10. According to my if statement, for T1 I have 0 % (10 + 10) == 0, so T1 gets released even though it is not ready. I could introduce a boolean that tells whether a task has been released, but is there a smarter way to do it without introducing extra variables?
Problem 2: At t=26, no task is ready; however, task T2 gets released. According to my if statement, for T2 I have 26 % (3 + 10) == 0.
void prvTaskSchedulerProcess(void *pvParameters) {
    ...
    uint32_t uCurrentTickCount = 0;
    gll_t* pTaskList = (gll_t*) pvParameters;
    WorkerTask_t* pWorkerTask = NULL;

    while (true) {
        for (uint8_t uIndex = 0; uIndex < pTaskList->size; uIndex++) {
            pWorkerTask = gll_get(pTaskList, uIndex);
            // Check if the task is ready to be executed
            if ((uCurrentTickCount % (pWorkerTask->uReleaseTime + pWorkerTask->uPeriod)) == 0) {
                // Dispatch the ready task
                vTaskResume(pWorkerTask->xHandle);
            }
        }
        uCurrentTickCount++;
        // Sleep the scheduler task, so the other tasks can run
        vTaskDelay(TASK_SCHEDULER_TICK_TIME * SCHEDULER_OUTPUT_FREQUENCY_MS);
    }
}
Using extra flags seems like a simple solution. However, I was told that introducing flag variables is not the best solution because it makes the code less readable and maintainable, so I want to avoid them. How can correct task dispatching be achieved without extra flags (possibly by correcting my if condition)?

Note: using vTaskResume() in this way is inherently dangerous if there is even the tiniest chance that the task you are resuming has not finished its previous actions and suspended itself again. See the API docs for a fuller explanation: https://www.freertos.org/taskresumefromisr.html

To fix problem 2, use the following condition instead of what you have:
if ((uCurrentTickCount % pWorkerTask->uPeriod) == pWorkerTask->uReleaseTime)
To fix problem 1 (and problem 2), use the following condition:
if ((uCurrentTickCount >= pWorkerTask->uReleaseTime) &&
    ((uCurrentTickCount % pWorkerTask->uPeriod) == (pWorkerTask->uReleaseTime % pWorkerTask->uPeriod)))

Related

Allow only a fixed number of agents to pass through a queue block periodically in Anylogic

I am using a Queue and a hold block together, where the hold remains blocked until all the agents arrive at the Queue block.
How can I change this so that only a fixed number of agents (say 5) is allowed through at fixed time intervals (say every 3 minutes)? Current properties of my Queue and hold blocks:
[screenshot: queue block properties]
[screenshot: hold block properties]
Create a cyclic event with recurrence time 3 minutes.
Also create a variable of type int, which you can name count.
In the event action field write:
count = 0;
hold.unblock();
Then, in the On enter field of the hold block write the following:
count++;
if( count == 5 ) {
self.block();
}
The only question I have is whether you want exactly 5 agents to leave every 3 minutes, or whether it is okay if some arrive a bit later. In other words, if after 3 minutes there are only 3 agents in the queue, do they leave and the hold remain unblocked in case another 2 arrive before the next cycle? Or does the hold block again immediately?
In the solution I provided, if there are fewer than 5 agents at the cycle time occurrence and new agents arrive before the next cycle, they can pass.
Otherwise, create a new variable called target, for example, and write the following in the event action:
count = 0;
if( queue.size() >= 5 ) {
target = 5;
hold.unblock();
}
else if ( queue.size() > 0 ) {
target = queue.size();
hold.unblock();
}
And in the on enter of the hold, write:
count++;
if( count == target ) {
self.block();
target = 0;
}
I would advise against using a hold block for such delicate control over releasing agents. Instead, I propose a simpler solution.
Simply let the agents build up in the queue and then remove them using an event. The only action of this event is to remove the minimum of your set number of agents and the queue size from the queue and send them to an Enter block, where your process flow then continues. See the screenshot below.
Code in the event is
for (int i = 0; i < min(5, queue.size()); i++) {
    enter.take(queue.remove(0));
}
On that note, you can also use the Wait block (which is hidden in the Auxiliary section of the PML library).
Then you can ditch the Enter block and simply call the following code:
for (int i = 0; i < min(5, wait.size()); i++) {
    wait.free(wait.get(0));
}

Why is a monitor implemented in terms of semaphores this way?

I have trouble understanding the implementation of a monitor in terms of semaphores from Operating System Concepts
5.8.3 Implementing a Monitor Using Semaphores
We now consider a possible implementation of the monitor mechanism
using semaphores.
For each monitor, a semaphore mutex (initialized to 1) is provided. A
process must execute wait(mutex) before entering the monitor and must
execute signal(mutex) after leaving the monitor.
Since a signaling process must wait until the resumed process either leaves or waits, an additional semaphore, next, is introduced,
initialized to 0. The signaling processes can use next to suspend
themselves. An integer variable next_count is also provided to count
the number of processes suspended on next. Thus, each external
function F is replaced by
wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
Mutual exclusion within a monitor is ensured.
We can now describe how condition variables are implemented as well.
For each condition x, we introduce a semaphore x_sem and an
integer variable x_count, both initialized to 0. The operation x.wait() can now be implemented as
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
The operation x.signal() can be implemented as
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
What is the reason for introducing the semaphore next and the count next_count of processes suspended on next?
Why are x.wait() and x.signal() implemented the way they are?
Thanks.
------- Note -------
WAIT() and SIGNAL() denote calls on monitor methods
wait() and signal() denote calls to semaphore methods, in the explanation that follows.
------- End of Note -------
I think it is easier if you think in terms of a concrete example. But before that, let's first try to understand what a monitor is. As explained in the book, a monitor is an abstract data type, meaning it is not a real type which can be used to instantiate a variable. Rather, it is like a specification, with some rules and guidelines, based on which different languages can provide support for process synchronization.
Semaphores were introduced as a software-based solution for achieving synchronization, as opposed to hardware-based approaches like TestAndSet() or Swap(). But even with semaphores, programmers had to ensure that they invoked the wait() and signal() methods correctly and in the right order. So an abstract specification called a monitor was introduced to encapsulate everything related to synchronization as one primitive, so that any process executing inside the monitor is guaranteed that these semaphore wait and signal invocations are used accordingly.
With monitors, all shared variables, and the functions that use them, are put into the monitor structure, and when any of these functions is invoked, the monitor implementation takes care of protecting the shared resources through mutual exclusion and handling any synchronization issues.
With monitors, unlike semaphores or other synchronization techniques, we are not dealing with just one critical section but with many of them, in the form of different functions. In addition, we also have shared variables that are accessed within these functions. To ensure that only one of the monitor's functions is executed at a time, and that no other process is executing any of the others, we can use a global semaphore called mutex.
Consider the example of the solution for the dining philosophers problem using monitors below.
monitor dining_philosopher
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].WAIT();
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING))
        {
            state[i] = EATING;
            self[i].SIGNAL();
        }
    }

    initialization code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
Ideally, how a process might invoke these functions would be in the following sequence:
DiningPhilosophers.pickup(i);
...
// do somework
...
DiningPhilosophers.putdown(i);
Now, whilst one process is executing inside the pickup() method, another might try to invoke putdown() (or even pickup()). In order to ensure mutual exclusion, we must ensure that only one process is running inside the monitor at any given time. To handle these cases, we have a global semaphore mutex that wraps all the invokable methods (pickup and putdown). So these two methods will be implemented as follows:
void pickup(int i) {
    // wait(mutex);
    state[i] = HUNGRY;
    test(i);
    if (state[i] != EATING)
        self[i].WAIT();
    // signal(mutex);
}

void putdown(int i) {
    // wait(mutex);
    state[i] = THINKING;
    test((i + 4) % 5);
    test((i + 1) % 5);
    // signal(mutex);
}
Now only one process can execute inside the monitor, in any of its methods. With this setup, suppose process P1 has executed pickup() (but has yet to put down the chopsticks) and then process P2 (say, an adjacent diner) tries to pickup(): since his/her chopsticks (the shared resource) are in use, it has to wait for them to become available. Let's look at the WAIT and SIGNAL implementation of the monitor's condition variables:
WAIT() {
    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;
}

SIGNAL() {
    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }
}
The WAIT implementation for condition variables differs from the semaphore's wait because it has to provide more functionality, such as allowing other processes to invoke monitor functions (while it waits) by releasing the mutex global semaphore. So, when WAIT is invoked by P2 from the pickup() method, it calls signal(mutex), allowing other processes to invoke the monitor methods, and then calls wait(x_sem) on the semaphore specific to that condition. Now P2 is blocked here. In addition, the variable x_count keeps track of the number of processes waiting on the condition variable (self).
So when P1 invokes putdown(), this invokes SIGNAL via the test() method. Inside SIGNAL, when P1 invokes signal(x_sem) on the chopstick it holds, it must do one additional thing: it must ensure that only one process is running inside the monitor. If it only called signal(x_sem), then from that point onwards both P1 and P2 would be active inside the monitor. To prevent this, P1, after releasing its chopstick, blocks itself until P2 finishes. To block itself, it uses the semaphore next; and to notify P2 (or some other process) that somebody is blocked, it uses the counter next_count.
So now P2 gets the chopsticks, and before it exits the pickup() method it must release P1, who is waiting on P2 to finish. So we must change the pickup() method (and all functions of the monitor) as follows:
void pickup(int i) {
    // wait(mutex);
    state[i] = HUNGRY;
    test(i);
    if (state[i] != EATING)
        self[i].WAIT();
    /**************
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    **************/
}

void putdown(int i) {
    // wait(mutex);
    state[i] = THINKING;
    test((i + 4) % 5);
    test((i + 1) % 5);
    /**************
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    **************/
}
So now, before any process exits a monitor function, it checks whether any processes are waiting; if so, it releases one of them rather than releasing the global mutex semaphore. The last of those waiting processes releases the mutex, allowing new processes to enter the monitor's functions.
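To make the bookkeeping concrete, here is a minimal Java sketch of the same scheme using java.util.concurrent.Semaphore. It is a direct transcription of the book's pseudocode for a single condition variable, not production code; callers are assumed to wrap every monitor function body in enter()/leave():

import java.util.concurrent.Semaphore;

class MonitorSketch {
    private final Semaphore mutex = new Semaphore(1); // monitor entry
    private final Semaphore next = new Semaphore(0);  // signalers suspend here
    private int nextCount = 0;                        // processes suspended on next

    // one condition variable x
    private final Semaphore xSem = new Semaphore(0);
    private int xCount = 0;

    void enter() throws InterruptedException { mutex.acquire(); }

    // called before returning from every monitor function
    void leave() {
        if (nextCount > 0) next.release(); // hand the monitor to a signaler
        else mutex.release();              // or open it to new entrants
    }

    void xWait() throws InterruptedException {
        xCount++;
        leave();          // give up the monitor while waiting
        xSem.acquire();   // block until xSignal()
        xCount--;
    }

    void xSignal() throws InterruptedException {
        if (xCount > 0) {
            nextCount++;
            xSem.release(); // wake one waiter on x
            next.acquire(); // suspend until the resumed process leaves or waits
            nextCount--;
        }
    }
}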
I know it's pretty long, but it took me some time to understand, and I wanted to put it in writing. I will post it on a blog soon.
If there are any mistakes please let me know.
Best,
Shabir
I agree it's confusing.
Let's first understand the first piece of code:
// if you are the only process in the queue, just take the monitor and invoke the function F
wait(mutex);
...
body of F
...
if (next_count > 0)
    // if some process is already waiting to take the monitor, signal the "next" semaphore and let it take the monitor
    signal(next);
else
    // otherwise, signal the "mutex" semaphore so that a process requesting the monitor later can enter
    signal(mutex);
Back to your questions:
What is the reason for introducing the semaphore next and the count
next_count of processes suspended on next?
Imagine you have a process that is doing some I/O and needs to be blocked until it finishes. You let other processes waiting in the ready queue take the monitor and invoke the function F.
next_count exists only to keep track of the processes waiting in the queue.
A process suspended on the next semaphore is a signaling process: it suspends itself until the process it resumed either leaves the monitor or waits again, at which point it is woken up and resumes its work.
Why are x.wait() and x.signal() implemented the way they are?
Let's take x.wait():
semaphore x_sem; // (initially = 0)
int x_count = 0; // number of processes waiting on condition (x)

/*
 * Indicate that some process is issuing a wait on the condition x,
 * so that if some process sends x.signal() when no process is waiting
 * on condition x, the signal is lost (has no effect).
 */
x_count++;

/*
 * If some process is suspended on next (waiting to re-enter the monitor),
 * signal(next) wakes it up so it can take the monitor.
 */
if (next_count > 0)
    signal(next);
/*
 * Otherwise, no process is waiting.
 * signal(mutex) releases the mutex.
 */
else
    signal(mutex);

/*
 * Now the process that called x.wait() is blocked until some other
 * process releases (signals) the x_sem semaphore: signal(x_sem).
 */
wait(x_sem);

// The process is back from blocking.
// We are done; decrease x_count.
x_count--;
Now let's take x.signal():
// if there are processes waiting on condition x
if (x_count > 0) {
    // the signaling process is about to suspend itself on next
    // (via the wait(next) below), so count it
    next_count++;
    // release x_sem so a process waiting on condition x resumes
    signal(x_sem);
    // wait until the resumed process is done
    wait(next);
    // we are done
    next_count--;
}
Comment if you have any questions.

Incorrect implementation of Barrier Method of synchronisation

Consider the following barrier method to implement synchronization:
void barrier()
{
    P(s);
    process_arrived++;
    V(s);
    while (process_arrived != 3);
    P(s);
    process_left++;
    if (process_left == 3)
    {
        process_arrived = 0;
        process_left = 0;
    }
    V(s);
}
It is known that this code does not work because of a flaw, but I am not able to find the flaw.
The problem is with the condition if (process_left == 3): it may lead to a deadlock if two barrier invocations are used in immediate succession.
Initially, process_arrived and process_left are both 0.
When a process arrives, it increments process_arrived and waits until the maximum number of processes has arrived. After that, processes are allowed to leave.
Consider the following scenario:
P1 arrives and waits for process_arrived to become 3 (currently process_arrived = 1).
P2 arrives and waits for process_arrived to become 3 (currently process_arrived = 2).
Now P3 arrives. The condition in the while loop fails (process_arrived is now 3), so it proceeds, making process_left = 1, and immediately enters the function again.
This makes process_arrived = 4, and P3 waits in the while loop.
P2 gets a chance to execute, makes process_left = 2, and leaves.
P1 executes further and finds process_left = 3, so it resets both process_arrived and process_left to 0. (Remember that P3 has already re-entered and is waiting at the barrier, so here its count is lost.)
P1 executes again, makes process_arrived = 1, and waits.
P2 also executes again, makes process_arrived = 2, and waits.
Now every process will wait forever; a deadlock has occurred.
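One standard way to avoid this reset race is a sense-reversing barrier, where the last arriver flips a generation flag instead of the processes counting themselves out. A minimal Java sketch (using an intrinsic monitor rather than semaphores, purely for illustration):

class SenseBarrier {
    private final int parties;
    private int arrived = 0;
    private boolean sense = false; // flips once per generation

    SenseBarrier(int parties) { this.parties = parties; }

    synchronized void await() throws InterruptedException {
        boolean mySense = sense;          // remember this generation's sense
        if (++arrived == parties) {
            arrived = 0;                  // reset for the next generation
            sense = !sense;               // release everyone waiting on the old sense
            notifyAll();
        } else {
            while (sense == mySense)      // wait for the flip; immune to the
                wait();                   // re-entry race described above
        }
    }
}

Because a re-entering process waits on the new generation's sense, it can never consume or destroy the count of the previous generation.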

In Rx (or RxJava/RxScala), how to make an auto-resetting stateful latch map/filter for measuring in-stream elapsed time to touch a barrier?

Apologies if the question is poorly phrased; I'll do my best.
If I have a sequence of timestamped values as an Observable[(U,T)], where U is a value type and T is a time-like type (or anything difference-able, I suppose), how could I write an operator which is an auto-resetting one-touch barrier: it is silent while abs(u_n - u_reset) < barrier, but emits t_n - t_reset when the barrier is touched, at which point it also resets u_reset = u_n?
That is to say, the first value this operator receives becomes the baseline, and it emits nothing. Henceforth it monitors the values of the stream, and as soon as one of them moves beyond the baseline (above or below) by the barrier amount, it emits the elapsed time (measured by the timestamps of the events) and resets the baseline. These times will then be processed to form a high-frequency estimate of the volatility.
For reference, I am trying to write a volatility estimator outlined in http://www.amazon.com/Volatility-Trading-CD-ROM-Wiley/dp/0470181990 , where rather than measuring the standard deviation (deviations at regular, homogeneous times), you repeatedly measure the time taken to breach a barrier of some fixed size.
Specifically, could this be written using existing operators? I'm a bit stuck on how the state would be reset; maybe I need two nested operators, one which is one-shot and another which keeps re-creating that one-shot... I know it could be done by writing an operator by hand, but then I need to write my own publisher, etc.
Thanks!
I don't fully understand the algorithm and the variables in your example, but you can use flatMap with some heap state and return empty() or just() as needed:
int[] var1 = { 0 };
source.flatMap(v -> {
var1[0] += v;
if ((var1[0] & 1) == 0) {
return Observable.just(v);
}
return Observable.empty();
});
If you need per-subscription state because of multiple consumers, you can defer the whole thing:
Observable.defer(() -> {
int[] var1 = { 0 };
return source.flatMap(v -> {
var1[0] += v;
if ((var1[0] & 1) == 0) {
return Observable.just(v);
}
return Observable.empty();
});
}).subscribe(...);
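Applying that defer-plus-heap-state pattern to the barrier logic from the question might look roughly like the following sketch. Here Tick (a pair with value and time fields), source, and BARRIER are hypothetical stand-ins for the question's (U, T) stream and threshold:

// hypothetical value type for the question's (U, T) pairs:
// class Tick { final double value; final long time; ... }
Observable<Long> elapsed = Observable.defer(() -> {
    double[] baseline = { Double.NaN }; // u_reset; NaN means "no baseline yet"
    long[] resetTime = { 0L };          // t_reset
    return source.flatMap(tick -> {
        if (Double.isNaN(baseline[0])) {
            baseline[0] = tick.value;   // first element becomes the baseline
            resetTime[0] = tick.time;
            return Observable.<Long>empty(); // and emits nothing
        }
        if (Math.abs(tick.value - baseline[0]) < BARRIER) {
            return Observable.<Long>empty(); // inside the barrier: stay silent
        }
        long dt = tick.time - resetTime[0];  // barrier touched: emit elapsed time
        baseline[0] = tick.value;            // reset the baseline
        resetTime[0] = tick.time;
        return Observable.just(dt);
    });
});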

Which costs more while looping; assignment or an if-statement?

Consider the following 2 scenarios:
boolean b = false;
int i = 0;
while(i++ < 5) {
b = true;
}
OR
boolean b = false;
int i = 0;
while(i++ < 5) {
if(!b) {
b = true;
}
}
Which is more "costly" to do? If the answer depends on the language/compiler used, please say so; my main programming language is Java.
Please do not ask why I would want to do either; these are just bare-bones examples that highlight the relevant question: should a variable be assigned the same value over and over in a loop, or should it be tested on every iteration to see whether it needs to change?
Please do not forget the rules of Optimization Club.
The first rule of Optimization Club is, you do not Optimize.
The second rule of Optimization Club is, you do not Optimize without measuring.
If your app is running faster than the underlying transport protocol, the optimization is over.
One factor at a time.
No marketroids, no marketroid schedules.
Testing will go on as long as it has to.
If this is your first night at Optimization Club, you have to write a test case.
It seems that you have broken rule 2: you have no measurement. If you really want to know, you'll answer the question yourself by setting up a test that runs scenario A against scenario B and finds the answer. There are so many differences between environments that we can't answer for yours.
Have you tested this? Working on a Linux system, I put your first example in a file called LoopTestNoIf.java and your second in LoopTestWithIf.java, wrapped a main function and class around each, compiled them, and ran them with this bash script:
#!/bin/bash
function run_test {
    iter=0
    while [ $iter -lt 100 ]
    do
        java $1
        let iter=iter+1
    done
}
time run_test LoopTestNoIf
time run_test LoopTestWithIf
The results were:
real 0m10.358s
user 0m4.349s
sys 0m1.159s
real 0m10.339s
user 0m4.299s
sys 0m1.178s
Showing that having the if makes it slightly faster on my system.
Are you trying to find out whether doing the assignment on each iteration is faster in total run time than doing a check on each iteration and assigning only once, when the test condition is satisfied?
In the above example I would guess that the first is faster: you perform 5 assignments, while in the latter you perform 5 tests and then an assignment.
But you'll need to increase the iteration count and add some stopwatch timers to know for sure.
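If you do measure, a harness such as JMH handles the usual microbenchmark traps (JIT warm-up, dead-code elimination) better than hand-rolled stopwatch loops. A rough sketch, assuming the JMH annotations are on the classpath, with the question's while loops translated to for loops:

import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class AssignVsBranch {
    @Benchmark
    public boolean alwaysAssign() {
        boolean b = false;
        for (int i = 0; i < 5; i++) b = true;
        return b; // return the result so the JIT cannot eliminate the loop
    }

    @Benchmark
    public boolean assignIfUnset() {
        boolean b = false;
        for (int i = 0; i < 5; i++) if (!b) b = true;
        return b;
    }
}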
Actually, this is the question I was interested in... (I hoped that I'd find the answer somewhere and avoid testing it myself. Well, I didn't...)
To be sure that your (my) test is valid, you (I) have to run enough iterations to get enough data. Each iteration must be "long" enough (on the time scale) to show the true difference. I found that even one billion iterations were not enough to produce a sufficiently long time interval... So I wrote this test:
for (int k = 0; k < 1000; ++k)
{
    {
        long stopwatch = System.nanoTime();
        boolean b = false;
        int i = 0, j = 0;
        while (i++ < 1000000)
            while (j++ < 1000000)
            {
                int a = i * j; // to slow down a bit
                b = true;
                a /= 2; // to slow down a bit more
            }
        long time = System.nanoTime() - stopwatch;
        System.out.println("\tasgn\t" + time);
    }
    {
        long stopwatch = System.nanoTime();
        boolean b = false;
        int i = 0, j = 0;
        while (i++ < 1000000)
            while (j++ < 1000000)
            {
                int a = i * j; // the same thing as above
                if (!b)
                {
                    b = true;
                }
                a /= 2;
            }
        long time = System.nanoTime() - stopwatch;
        System.out.println("\tif\t" + time);
    }
}
I ran the test three times, storing the data in Excel; then I swapped the first ('asgn') and second ('if') cases and ran it three times again... And the result? The 'if' case "won" four times, and 'asgn' appeared better twice. This shows how sensitive the execution can be. In general, though, I hope this also suggests that the 'if' case is the better choice.
Thanks, anyway...
Any compiler (except, perhaps, in debug builds) will optimize both of these statements to
boolean b = true;
But in general, the relative speed of an assignment versus a branch depends on the processor architecture, not on the compiler. A modern superscalar processor performs horribly on mispredicted branches, while a simple microcontroller uses roughly the same number of cycles for any instruction.
Relative to your bare-bones example (and perhaps your real application):
boolean b = false;
// .. other stuff, might change b
int i = 0;
// .. other stuff, might change i
b |= i < 5;
while(i++ < 5) {
// .. stuff with i, possibly stuff with b, but no assignment to b
}
Problem solved?
But really, it's going to be a question of the cost of your test (generally more than just if (boolean)) and the cost of your assignment (generally more than just primitive = x). If the test/assignment is expensive, or your loop is long enough, or you have high enough performance demands, you might want to break it into two parts; but all of those criteria require that you test how things perform. Of course, if your requirements are more demanding (say, b can flip back and forth), you might require a more complex solution.
But really - it's going to be a question of the cost of your test (generally more than just if (boolean)) and the cost of your assignment (generally more than just primitive = x). If the test/assignment is expensive or your loop is long enough or you have high enough performance demands, you might want to break it into two parts - but all of those criteria require that you test how things perform. Of course, if your requirements are more demanding (say, b can flip back and forth), you might require a more complex solution.