How to attach a UVM sequence to a particular sequencer?

I have 3 sequences, and 4 sequencers.
I want:
sequencer1 to run sequence1,
sequencer2 to run sequence1,
sequencer3 to run sequence2 and sequence3 in serial order,
sequencer4 to run sequence1 and sequence2 in serial order.
One method is to do it inside the test class:
task run_phase(uvm_phase phase);
  fork
    sequence1.start(sequencer1);
    sequence1.start(sequencer2);
    begin
      sequence2.start(sequencer3);
      // wait for request...
      sequence3.start(sequencer3);
    end
    begin
      sequence2.start(sequencer4);
      // wait for request...
      sequence1.start(sequencer4);
    end
  join
endtask
How can I do the same from inside each of the sequencers, rather than inside the test?

What you have written is the best method of doing what you want (after raising an objection before the fork and dropping it after the join). All other methods make it difficult to add additional sequences before the fork or after the join.
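For reference, a minimal sketch of that objection handling wrapped around your existing fork (all names as in your code):

task run_phase(uvm_phase phase);
  phase.raise_objection(this); // keep the phase from ending while sequences run
  fork
    sequence1.start(sequencer1);
    sequence1.start(sequencer2);
    // ...remaining begin/end branches exactly as in your code
  join
  phase.drop_objection(this); // every branch has joined; let run_phase end
endtask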
You can use the uvm_config_db to set the "default_sequence" of each sequencer, but you will need to create another sequence layer for sequencers 3 and 4 that starts their two sequences in the desired order. You will also need to deal with raising/lowering objections inside each default sequence.
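As an illustration, setting a default sequence from the test's build_phase could look like the line below (the "env.agent1.sequencer1" path and the sequence1_t type name are assumptions, not taken from the question):

uvm_config_db#(uvm_object_wrapper)::set(this,
    "env.agent1.sequencer1.run_phase", "default_sequence",
    sequence1_t::type_id::get());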
Another option: instead of using generic sequencers, you can define your own sequencer class and override its run_phase to start each sequence or series of sequences.
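A rough sketch of that approach for sequencer3, assuming a transaction type my_txn and sequence classes sequence2_t/sequence3_t (all of these names are hypothetical):

class serial_sequencer extends uvm_sequencer #(my_txn);
  `uvm_component_utils(serial_sequencer)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task run_phase(uvm_phase phase);
    sequence2_t seq2 = sequence2_t::type_id::create("seq2");
    sequence3_t seq3 = sequence3_t::type_id::create("seq3");
    phase.raise_objection(this);
    seq2.start(this); // start() blocks, so seq3 begins only after seq2 finishes
    seq3.start(this);
    phase.drop_objection(this);
  endtask
endclass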

How to guarantee checker runs after monitor each timestep

I have several agents, each with their own monitor and analysis ports connected to a checker. The checker is organized as shown below: it calls each check() function every cycle in a specific order. It is done this way to handle the case where we get an input and an output txn in the same cycle (the design has "bypass" logic to immediately output the txn it sees on its input in the same cycle).
If we go with Design #2 (below), there is no guarantee that we will process the input_txn first, so if we happen to process the output_txn first, the assertion could fire because it doesn't know that there was an input_txn in the same cycle. I have had success using Design #1 for the case where an input and output txn arrive in the same cycle; however, I now realize this is still not guaranteed to work correctly, because the simulator could execute the checker's run_phase() after the output_agent's run_phase() but before the input_agent's run_phase(), and I could get the same issue.
What I really want is almost a "check_phase" for each timestep, so I can guarantee all agents' monitors have finished executing in the current timestep before the checker starts executing. Is there any way to guarantee the checker executes after all other processes in the current timestep?
P.S. I'm not looking for advice on how to improve my checker, this is just a very dumbed down version of my actual testbench I made to easily convey the problem I have.
## Design 1 ##
class my_checker extends uvm_component;
  // boilerplate uvm...
  task run_phase(uvm_phase phase);
    forever begin
      check_inputs();
      check_outputs();
      @(posedge vintf.clk);
    end
  endtask
  function void check_inputs();
    input_txn_c txn;
    if (input_analysis_fifo.try_get(txn)) begin // non-blocking try_get()
      // do check
      pending_txn_cnt++;
    end
  endfunction
  function void check_outputs();
    output_txn_c txn;
    if (output_analysis_fifo.try_get(txn)) begin // non-blocking try_get()
      assert(pending_txn_cnt > 0);
      pending_txn_cnt--;
    end
  endfunction
endclass
## Design 2 ##
class my_checker extends uvm_component;
  // boilerplate uvm...
  task run_phase(uvm_phase phase);
    fork
      check_inputs();
      check_outputs();
    join_none
  endtask
  task check_inputs();
    input_txn_c txn;
    forever begin
      input_analysis_fifo.get(txn); // blocking get()
      // do check
      pending_txn_cnt++;
    end
  endtask
  task check_outputs();
    output_txn_c txn;
    forever begin
      output_analysis_fifo.get(txn); // blocking get()
      assert(pending_txn_cnt > 0);
      pending_txn_cnt--;
    end
  endtask
endclass
Since you use a FIFO for both the input and output, you should be able to use this simple design:
class my_checker extends uvm_component;
  // boilerplate uvm...
  input_txn_c txni;
  output_txn_c txno;
  task run_phase(uvm_phase phase);
    forever begin
      // Wait until there is an input transaction: txni
      input_analysis_fifo.get(txni);
      // Wait until there is an output transaction: txno
      output_analysis_fifo.get(txno);
      // Now there is a pair of transactions to compare: txni vs. txno
      // call compare function...
    end
  endtask
  // compare function...
endclass
Since the get calls are blocking, you just need to wait until you have an input transaction, then wait until you have an output transaction. It does not matter whether they arrive in the same timestep. Once you have an in/out pair, you can call your compare function.
I don't think you need to check the transaction count for every pair. If you want, you could check whether the FIFOs still have anything in them at the end of the test.
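For example, that end-of-test check could go in the checker's check_phase; a minimal sketch, assuming the two analysis FIFOs from the code above are uvm_tlm_analysis_fifo instances:

function void check_phase(uvm_phase phase);
  // Anything still sitting in a FIFO was never paired up and compared.
  if (!input_analysis_fifo.is_empty())
    `uvm_error("CHECKER", $sformatf("%0d unmatched input txns", input_analysis_fifo.used()))
  if (!output_analysis_fifo.is_empty())
    `uvm_error("CHECKER", $sformatf("%0d unmatched output txns", output_analysis_fifo.used()))
endfunction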

How can I make an atomic statement in Twincat3 PLC?

I am working with a fast loop (0.5 ms cycle time) and a slow loop (10 ms cycle time) which communicate with each other.
How can I make the inputs and outputs consistent?
Consider the example below. I want the assignments in the SlowLoop to be atomic, to be sure that both referenced inputs from the fast loop correspond to values from the same cycle.
Example
FastLoop [0.5 ms]
  FAST_CNT = some rising edge detection
  FAST_RUNIDX += 1
SlowLoop [10 ms]
  <-- Atomic Operation
  pulseCount   = FAST_CNT
  elapsedTicks = FAST_RUNIDX
  Atomic Operation -->
Any time anything needs to be atomic, you need to handle it as a single object (a STRUCT or a FUNCTION_BLOCK).
In this case, as there is no associated logic, a STRUCT should do the job nicely.
TYPE st_CommUnit :
STRUCT
  Count : UINT;
  Index : UINT;
END_STRUCT
END_TYPE
You can then have this STRUCT be presented as either an Input or Output between tasks using the %Q* and %I* addressing.
// Fast task
SourceData AT %Q* : st_CommUnit;
// Slow task
TargetData AT %I* : st_CommUnit;
Using this you end up with a linkable object such that you can link:
The entire unit
Each individual component
If you use two different tasks with different cycle times, possibly also running on different cores, you need a way to synchronize the two tasks when doing read/write operations.
To access the data in an atomic way, use the synchronization FBs that Beckhoff provides, for example FB_IecCriticalSection.
More info on the Beckhoff Infosys website:
https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_plc_intro/45844579955484184843.html&id=

Recommended way to write a monitor in UVM with different event polarity

I am trying to implement a monitor for a VDU (video display unit), and the way the VDU can be programmed means that the sync signals have controllable polarity. So, depending on the VDU settings, the monitor should react to a posedge or negedge event. Is there any way to pass the edge type (posedge or negedge) via the configuration database, or something like that, instead of writing if (cond) @(posedge sig) else @(negedge sig)? The assertions also need to be controlled this way; an assertion at least is designed to take an event expression as an argument, but I am not sure config database calls are allowed inside an interface.
One option is to conditionally trigger an event. For example, you can have the below in your interface:
event mon_clk_ev;
bit mon_polarity;
always @(posedge clk) if ( mon_polarity) ->mon_clk_ev;
always @(negedge clk) if (!mon_polarity) ->mon_clk_ev;
Then you can use mon_clk_ev as the clock event in your monitor, interface, clocking block, or assertions.
mon_polarity could be assigned by your monitor, uvm_config_db, or other logic.
Example using uvm_config_db (note: uvm_bitstream_t is used so the value can be assigned with the uvm_set_config_int plusarg):
initial begin
  start_of_simulation_ph.wait_for_state( UVM_PHASE_STARTED, UVM_GTE );
  if (!uvm_config_db#(uvm_bitstream_t)::exists(null,"","mon_polarity")) begin
    // default if not in database
    uvm_config_db#(uvm_bitstream_t)::set(null,"*","mon_polarity",1'b1);
  end
  forever begin
    void'(uvm_config_db#(uvm_bitstream_t)::get(null,"","mon_polarity",mon_polarity));
    uvm_config_db#(uvm_bitstream_t)::wait_modified(null,"","mon_polarity");
  end
end
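With that in place, the polarity can be flipped from the command line without recompiling, e.g. (wildcarding every component, matching the set above):

+uvm_set_config_int=*,mon_polarity,0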
You should write your code assuming positive polarity, but feed the signals through an XOR operator.
logic signal;           // your signal from the DUT
logic signal_corrected; // signal with positive polarity
bit   signal_polarity;  // 0 = positive; 1 = negative
assign signal_corrected = signal ^ signal_polarity;
Now you can use signal_corrected in your assertions. You can certainly call uvm_config_db#(bit)::get() from the interface if it has been set in your testbench. You might need to use uvm_config_db#(bit)::wait_modified() to wait for it to be set before you get it.
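To illustrate, a hypothetical assertion using signal_corrected (the clk and enable names and the property itself are made up; only the XOR correction comes from above):

// Made-up check: once enabled, the corrected sync must pulse within 3 clocks.
property p_sync_after_enable;
  @(posedge clk) enable |-> ##[1:3] signal_corrected;
endproperty
a_sync_after_enable: assert property (p_sync_after_enable);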

Concurrent Akka Agents in Scala

I'm working on a Scala project right now, and I've decided to use Akka's agent library over the actor model, because it allows a more functional approach to concurrency. However, I'm having a problem running many different agents at a time. It seems like I'm capped at only three or four agents running at once.
import akka.actor._
import akka.agent._
import scala.concurrent.ExecutionContext.Implicits.global

object AgentTester extends App {
  // Create the system for the actors that power the agents
  implicit val system = ActorSystem("ActorSystem")
  // Create an agent for each int between 1 and 10
  val agents = Vector.tabulate[Agent[Int]](10)(x => Agent[Int](1 + x))
  // Define a function for each agent to execute
  def printRecur(a: Agent[Int])(x: Int): Int = {
    // Print out the stored number and sleep.
    println(x)
    Thread.sleep(250)
    // Recur the agent
    a sendOff printRecur(a) _
    // Keep the agent's value the same
    x
  }
  // Start each agent
  for (a <- agents) {
    Thread.sleep(10)
    a sendOff printRecur(a) _
  }
}
The above code creates an agent holding each integer between 1 and 10. The loop at the bottom sends the printRecur function to every agent. The output should show the numbers 1 through 10 being printed every quarter of a second (although not in any particular order). However, for some reason my output only shows the numbers 1 through 4.
Is there a more canonical way to use agents in Akka that will work? I come from a clojure background and have used this pattern successfully there before, so I naively used the same pattern in Scala.
My guess is that you are running on a 4-core box, and that is part of the reason why you only ever see the numbers 1-4. The big thing at play here is that you are using the default execution context, which on your system probably uses a thread pool with only 4 threads (one for each core). Because you've coded this in a recursive manner, the first 4 agents never relinquish their threads, and they are the only ones that will ever print anything.
You can easily fix this by removing this line:
import scala.concurrent.ExecutionContext.Implicits.global
And adding this line after you create the ActorSystem
import system.dispatcher
This will use the actor system's default dispatcher, which is a fork-join dispatcher that does not seem to have the same issue as the default execution context you imported in your sample.
You could also consider using send as opposed to sendOff, as send uses the execution context that was available when the agent was constructed. sendOff is for cases where you explicitly want to use another execution context.

SystemVerilog fork confusion: statements executed between fork and begin

See the simplified example code here:
process job[num_objs];
// assume also that arr_obj1s (array of type obj1) and
// arr_obj2s (array of type obj2) are arrays of size
// num_objs, and the objects define a run() function
foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;
    arr_obj1s[j].run(); // these run forever loops
    begin
      job[j] = process::self();
      arr_obj2s[j].run(); // these run finite logic
    end
  join_none
end
foreach (job[i]) begin
  wait (job[i] != null);
  job[i].await();
end
// How do we ever reach here?
// How do we ever reach here?
My confusion is that the calls to arr_obj1s[j].run() will never return (they run forever loops), and I don't quite follow the meaning of that call's placement outside the begin/end block. Which process executes that forever run(), and how can each call to await() return if some process is running a run() that won't return?
EDIT: Here is some more information. Posting the full code would be pages and pages, but I hope this extra bit helps.
obj1's run() function looks like this:
virtual task run;
  fork
    run_a(); // different logically separated tasks
    run_b();
    run_c();
  join
endtask: run
And as an example, run_a looks basically like this (they are all similar):
virtual task run_a;
  // declare some local variables
  forever begin
    @(posedge clk);
    // ...
  end
endtask: run_a
But obj2's run() function looks basically like this:
virtual task run;
  fork
    run_d(); // different logically separated tasks
    run_e();
  join
endtask: run
And as an example run_d() looks like this:
virtual task run_d;
  while (data_que.size() > 0) begin
    // process a pre-loaded queue;
    // data will not be pushed on during the simulation
  end
endtask: run_d
This code fragment looks like it is demonstrating process control, so here's my guess as to what's going on. There is a group of processes in arr_obj1s and arr_obj2s:
Those in arr_obj1s run forever so they only need to be spawned once and forgotten about.
Those in arr_obj2s accomplish some task and return, so the parent process needs to know when this happens.
All processes have the same parent
My confusion is that the calls to arr_obj1s[j].run() will never return
(they run forever loops) and I don't quite follow the meaning of that
call's placement outside the begin/end block
So all that's needed to spawn all the processes is the short fork..join_none block below. (Note that the automatic declaration is initialized when the fork is entered, before any child processes are spawned; each statement after it becomes its own process.)
foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;  // Initialized on entry to the fork
    arr_obj1s[j].run();   // Spawns process
    arr_obj2s[j].run();   // Spawns process
  join_none
end
The join_none keyword means the parent does not wait for the spawned processes: execution continues as soon as they are forked, so the entire foreach loop executes and then the parent process continues on to the next foreach loop. Further, join_none also means that the child processes will not start until the parent process reaches a blocking statement.
However, this won't allow us to detect when the child processes complete, unless they modify some sort of shared variable. To get around having to code that, SystemVerilog allows taking a handle to a process so an event can be scheduled when the process completes. It doesn't, however, provide the ability to get the handle of a single statement. You must call process::self() inside a procedural context to get the process handle, so this won't work right if added directly to the fork-join block:
foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;
    arr_obj1s[j].run();
    job[j] = process::self(); // Not the handle of the process running run()
    arr_obj2s[j].run();
  join_none
end
To fix this, we need to create a new sequential procedural context whose process handle we can take, and run the task from there:
foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;
    arr_obj1s[j].run(); // Spawns a new process for those that don't complete
    begin // Spawns a new process for those that complete
      job[j] = process::self(); // Saves the handle of this begin..end process
      arr_obj2s[j].run(); // Process continues through here
    end
  join_none
end
The final foreach loop only waits on processes for which we have a handle. The processes that run forever are ignored.
First off, the way fork/join works in Verilog, each statement in the fork/join block executes concurrently; absent a begin/end, each line is a statement in itself. So your example forks off at least two processes for each iteration of the loop.
fork
  automatic int j = i;                        <= Statement 1 ??
  arr_obj1s[j].run(); // forever loops        <= Statement 2
  begin                                       \
    job[j] = process::self();                 | <= Statement 3
    arr_obj2s[j].run(); // finite logic       |
  end                                         /
join_none
I say at least, because I don't fully understand how the automatic int j is treated in this case.
So here is what happens.
For each iteration of the loop:
arr_obj1s[j].run() is started. (This runs a forever loop and will never end.)
arr_obj2s[j].run() is started. (This will end after running for some finite time.) The process ID for the process that started this is stored in job[j].
The code calling await() is only waiting on the processes which wrapped the calls to arr_obj2s[j].run(). They will complete, since they are running a finite task.
The forever loops will still be running, even after the await calls have all completed.
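To see the mechanism in isolation, here is a small self-contained sketch (not the poster's code; all names are made up) showing that await() returns for the finite process while the forever process keeps running:

module tb;
  process finite_h; // handle to the finite child only
  initial begin
    fork
      forever #10; // never-ending child: we deliberately keep no handle to it
      begin
        finite_h = process::self(); // handle of this begin..end process
        #50; // stand-in for finite work
      end
    join_none
    wait (finite_h != null); // children start once the parent blocks here
    finite_h.await(); // returns at time 50; the forever child is still alive
    $display("awaited finite process at %0t", $time);
    $finish; // needed, since the forever child would keep simulation running
  end
endmodule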