I am new to real time programming and I am trying to practice.
In the example I am working on: task 1 must periodically change a variable called frequency, and task 2 will blink an LED at the new frequency.
Should I use a mutex on a shared variable "Freq", or use a queue to periodically send the frequency from task 1 to task 2?
If only one task is changing the variable, and all other tasks (just one in your case) are only reading it, then you don't need to protect the variable at all, provided the variable can be written in one go. So if this is a 32-bit architecture and the variable is 32 bits, then no protection is needed. If on the other hand it were an 8-bit architecture and the variable were 32 bits, then it would take 4 writes to update all 32 bits, and the variable would need protecting. This holds only while there is a single writer - if more than one task were writing to the variable, it would need protecting.
However, just updating the variable does not signal to the reading task that the variable has changed. For efficiency you might want the reading task to execute only when the variable changes, in which case you could send the updated value on a queue and have the reading task block on the queue, to be automatically unblocked when data arrives (https://www.freertos.org/Embedded-RTOS-Queues.html). Depending on the update frequency, it might be more efficient to send the updated value directly to the task as a direct-to-task notification and have the reading task block on the notification instead (https://www.freertos.org/RTOS-task-notifications.html).
Task: tasks spawn at fixed time intervals (source); each has a remaining processing time drawn uniformly at random from [0 .. x]. Each task is processed by a module (delay). Each module has a fixed processing time, which it subtracts from the task's remaining processing time. If a task's remaining processing time is depleted (less than 0), that task is complete and reaches the sink. Otherwise it goes to the next module, and the same process repeats. There are N modules, linked one after another. If the task's remaining processing time has not been depleted after processing at the N'th module, it goes back to the 1st module with the highest priority and is processed there until its remaining processing time depletes.
Model Image
The issue: I've created the model, but the maximum number of spawned/sinked agents I could get is 17 for -Xmx8G and 15 for -Xmx4G. Then CPU/RAM usage rises to the maximum and nothing happens.
Task Manager + Simulation Image
Task Manager Image
I've also checked the troubleshooting page "I got “Out Of Memory” error message. How can I fix it?".
Case: Large number of agents, or agents with considerable memory footprints
Result: My agents have 2 parameters that are unique to each agent: one is a double (remaining_processing_time), the other an integer (queue_priority). Also, all 17 spawned agents reached the sink.

Case: System Dynamics delay structures under small numeric time step settings
Result: I am not using that function anywhere besides the delay block.

Case: Datasets auto-created for dynamic variables
Result: This option is turned off.
Maybe I am missing something, but I can't really analyze anything with such a small number of agents. I'll leave the model here.
This model really had me stumped. I could not figure out where the memory was going, and why the model visually stops as you run it while the memory keeps on increasing exponentially... until I did some Java profiling and found that you have 100s of Main instances in memory...
You create a new Main() for every agent that comes from the source - so every agent is a new Main, and every Main is a completely new "simulation", if you will.
Simply change it back to the default, or in your case create your own agent type, since you want to save the remaining time and queue priority.
You will also need to change the agent type in all your other blocks.
Now if you run your app, it uses a fraction of the memory.
I am looking into using a stream buffer in FreeRTOS to transfer CAN frames from multiple tasks to an ISR, which puts them into the CAN transmit buffer as soon as it is ready. The manual here explains that a stream buffer should only be written by one task/ISR and read by one task/ISR, and that otherwise a critical section is required.
Can a mutex be used in place of a critical section for this scenario? Would it make more sense to use one?
First, if you are sending short discrete frames, you may want to consider a message buffer instead of a stream buffer.
Yes, you could use a mutex.
If sending from multiple tasks, the main thing to consider is what happens when the stream buffer becomes full. If you were using a different FreeRTOS object (other than a message buffer, message buffers being built on stream buffers), then multiple tasks attempting to write to the same instance of a full object would all block on their write attempt, and be automatically unblocked when space became available - the highest priority waiting task would be unblocked first, no matter the order in which the tasks entered the Blocked state. However, with stream/message buffers you can only have one task blocked attempting to write to a full buffer - and if the buffer were protected by a mutex, all other tasks would instead block on the mutex. That could mean a low priority task was blocked on the stream/message buffer while a higher priority task was blocked on the mutex - a kind of priority inversion.
Currently, I am reading about Schedulers and scheduling algorithms.
I am really confused with short-term scheduler and dispatcher.
In some places it is written that they are the same; in others, that their jobs are different.
From whatever I read, I concluded that "scheduling" by the scheduler is triggered by the code associated with a hardware interrupt or a system call. With this, a mode switch from user mode to kernel mode takes place. Then the short-term scheduler selects a process from the queue of available processes to give it control of the CPU. The task of the short-term scheduler ends here.
Now the dispatcher comes into play. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:
- Switching context
- Switching to user mode
- Jumping to the proper location in the user program to restart that program
Is my understanding correct?
Suppose Process A is preempted and Process B is scheduled next. What happens during the context switch? How is the context data of Process A, the scheduler, the dispatcher, and Process B saved and restored?
The division of the process-switching steps varies between systems. Operating system books like to make these steps complicated and divide them into multiple steps.
There are really only two steps:
1. Pick the new process.
2. Switch to the new process.
That last step is very simple; so simple that it is probably not worthy of being called a separate step.
Most CPUs define a structure that is usually called the Process Context Block (PCB). The PCB has a slot for every register that defines the state of the process. Switching processes can be as simple as:
SAVEPCTX pcb_address_of_current_process ; Save the state of the running process
LOADPCTX pcb_address_of_new_process ; Load the state of the other process.
REI
Some processors require more steps, like having to save floating point registers separately.
Aren't wait and signal on condition variables there to signify request and release?
This link states that semaphores do not have condition variables while monitors do.
According to the same site,
The conditional variable allows a process to wait inside the monitor
and allows a waiting process to resume immediately when the other
process releases the resources.
Isn't that the same procedure in a semaphore?
The difference here is that a semaphore is a stateful object, while a condition variable is stateless.
The idea is that sometimes you have a very complex state (one that cannot be represented by a simple counter like a semaphore's) and you want to wait for that state to change. That is the reason why condition variables are used with mutexes - the mutex is required to protect the change of that state, and it allows waiting for a change without losing notifications.
Internally, some semaphore implementations are based on condition variables - in this case the counter is the protected state that is going to change. But such implementations are not very efficient, as modern OSes have better ways to implement semaphores.
If you want to know how condition variables and semaphores can be implemented, you can read my answer here.
I have a question regarding UVM. Suppose I have a DUT with two interfaces, each with its own agent, generating transactions on the same clock. These transactions are handled with analysis imports (and write functions) on the scoreboard. My problem is that both of these transactions read/modify shared variables of the scoreboard.
My questions are:
1) Do I have to guarantee mutual exclusion explicitly through a semaphore? (I suppose yes.)
2) Is this, in general, a correct way to proceed?
3) And the main problem: can the order of execution somehow be fixed?
Depending on that order, the values of the shared variables can change, generating inconsistency. Moreover, the required order is fixed by the specifications.
Thanks in advance.
While SystemVerilog tasks and functions do run concurrently, they do not run in parallel. It is important to understand the difference between parallelism and concurrency; it has been explained well here.
So while a SystemVerilog task or function could be executing concurrently with another task or function, in reality they do not actually run at the same time (in the same run-time context). The SystemVerilog scheduler keeps a list of all the tasks and functions that need to run at the same simulation time and executes them one by one (sequentially) on the same processor (concurrency), not together on multiple processors (parallelism). As a result, mutual exclusion is implicit and you do not need semaphores on that account.
The sequence in which two such concurrent functions are executed is not deterministic, but it is repeatable. So when you execute a testbench multiple times on the same simulator, the sequence of execution will be the same. But two different simulators (or different versions of the same simulator) could execute these functions in a different order.
If the specifications require a certain order of execution, you need to ensure that order by making one of these tasks/functions wait on the other. In your scoreboard example, since you are using analysis ports, you will have two "write" functions (perhaps via the uvm_analysis_imp_decl macro) executing concurrently. To ensure an order (since functions cannot wait), you can fork join_none threads and make one thread wait on the other by introducing an event that is triggered at the conclusion of the first thread; the other thread waits for this event at its start.
This is a pretty difficult problem to address. If you get two transactions in the same time step, you have to be able to process them regardless of the order in which they are sent to your scoreboard. You can't know for sure which monitor will be triggered first. The only thing you can do is collect the transactions and, at the end of the time step, do your modeling/checking/etc.
Semaphores only help you if you have concurrent threads that consume (simulation) time while accessing a shared resource. If you get things from an analysis port, you get them in zero time, so semaphores won't help you here.
So to my understanding, the answer is: the compiler/vendor/UVM cannot ensure the order of execution. If you need to enforce an order between things that happen in the same time step, you need to use a semaphore correctly to make it work the way you want.
Another thing: only you know which one must execute after the other when they occur at the same simulation time.
This is a classical race condition, where the result depends upon the actual thread order...
First of all, you have to decide whether the write race is problematic for you and/or whether there is a priority order in this case. If you don't care, the last access wins.
If the access isn't atomic, you might need a semaphore to ensure only one access is handled at a time while the next waits until the first has finished.
You can also try to control the order by changing the structure, by introducing thread ordering (wait_order), or, if possible, by removing the timing dependency altogether (instead of operating directly on the data you receive, you store it for some time and operate on it later).