How to guarantee a checker runs after the monitors each timestep - SystemVerilog

I have several agents, each with its own monitor and analysis port connected to a checker. The checker is organized as shown below: it calls each check() function every cycle in a specific order. It is done this way to handle the case where we get an input and an output txn in the same cycle (the design has "bypass" logic that immediately outputs the txn it sees on its input in the same cycle).
If we go with Design #2 (below), there is no guarantee that we will process the input_txn first, so if we happen to process the output_txn first, the assertion could fire because it doesn't know there was an input_txn in the same cycle. I have had success using Design #1 to handle that same-cycle case; however, I now realize it is still not guaranteed to work correctly, because the simulator could execute the checker's run_phase() after the output_agent's run_phase() but before the input_agent's run_phase(), and I could hit the same issue.
What I really want is almost a "check phase" for each timestep, so I can guarantee all agents' monitors have finished executing in the current timestep before the checker starts executing. Is there any way to guarantee the checker executes after all other processes in the current timestep?
P.S. I'm not looking for advice on how to improve my checker; this is just a very dumbed-down version of my actual testbench that I made to easily convey the problem I have.
## Design 1 ##
class my_checker extends uvm_component;
  // boilerplate uvm...

  task run_phase(uvm_phase phase);
    forever begin
      check_inputs();
      check_outputs();
      @(posedge vintf.clk);
    end
  endtask

  function void check_inputs();
    input_txn_c txn;
    if (input_analysis_fifo.try_get(txn)) begin // non-blocking try_get()
      // do check
      pending_txn_cnt++;
    end
  endfunction

  function void check_outputs();
    output_txn_c txn;
    if (output_analysis_fifo.try_get(txn)) begin // non-blocking try_get()
      assert (pending_txn_cnt > 0);
      pending_txn_cnt--;
    end
  endfunction
endclass
## Design 2 ##
class my_checker extends uvm_component;
  // boilerplate uvm...

  task run_phase(uvm_phase phase);
    fork
      check_inputs();
      check_outputs();
    join_none
  endtask

  task check_inputs();
    input_txn_c txn;
    forever begin
      input_analysis_fifo.get(txn); // blocking get()
      // do check
      pending_txn_cnt++;
    end
  endtask

  task check_outputs();
    output_txn_c txn;
    forever begin
      output_analysis_fifo.get(txn); // blocking get()
      assert (pending_txn_cnt > 0);
      pending_txn_cnt--;
    end
  endtask
endclass

Since you use a FIFO for both the input and output, you should be able to use this simple design:
class my_checker extends uvm_component;
  // boilerplate uvm...
  input_txn_c txni;
  output_txn_c txno;

  task run_phase(uvm_phase phase);
    forever begin
      // Wait until there is an input transaction: txni
      input_analysis_fifo.get(txni);
      // Wait until there is an output transaction: txno
      output_analysis_fifo.get(txno);
      // Now there is a pair of transactions to compare: txni vs. txno
      // call compare function...
    end
  endtask

  // compare function...
endclass
Since the get calls are blocking, you just need to wait until you have an input transaction, then wait until you have an output transaction. It does not matter if they arrive in the same timestep. Once you have an in/out pair, you can call your compare function.
I don't think you need to check the transaction count for every pair. If you want, you could check whether the FIFOs still have anything in them at the end of the test.
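That end-of-test check could live in the checker's check_phase(); a minimal sketch, assuming the same two analysis FIFOs as above (used() reports how many transactions are still queued):

function void check_phase(uvm_phase phase);
  super.check_phase(phase);
  // Anything still sitting in a FIFO was never matched with a partner
  if (input_analysis_fifo.used() > 0)
    `uvm_error("CHK", $sformatf("%0d unmatched input txns", input_analysis_fifo.used()))
  if (output_analysis_fifo.used() > 0)
    `uvm_error("CHK", $sformatf("%0d unmatched output txns", output_analysis_fifo.used()))
endfunction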

Related

Recommended way to write a monitor in UVM with different event polarity

I am trying to implement a monitor for a VDU (video display unit), and the way the VDU can be programmed says that the sync signals have controllable polarity. This means that, according to the VDU settings, the monitor should react on a posedge or negedge event. Is there any way to pass the edge type (posedge or negedge) via the configuration database, or something like that, instead of writing if (truth) @(posedge ...) else @(negedge ...)? The assertion also needs to be controlled this way; an assertion at least is designed to take an event type as an argument, but I am not sure config database calls are allowed inside an interface.
One option is to conditionally trigger an event. For example, you can have the below in your interface:

event mon_clk_ev;
bit mon_polarity;

always @(posedge clk) if ( mon_polarity) ->mon_clk_ev;
always @(negedge clk) if (!mon_polarity) ->mon_clk_ev;

Then you can use mon_clk_ev as the clock event in your monitor, interface, clocking block, or assertion.
mon_polarity could be assigned by your monitor, uvm_config_db, or other logic.
Example using uvm_config_db (note: uvm_bitstream_t is used so it can be assigned with the +uvm_set_config_int plusarg):
initial begin
  start_of_simulation_ph.wait_for_state(UVM_PHASE_STARTED, UVM_GTE);
  if (!uvm_config_db#(uvm_bitstream_t)::exists(null, "", "mon_polarity")) begin
    // default if not in database
    uvm_config_db#(uvm_bitstream_t)::set(null, "*", "mon_polarity", 1'b1);
  end
  forever begin
    void'(uvm_config_db#(uvm_bitstream_t)::get(null, "", "mon_polarity", mon_polarity));
    uvm_config_db#(uvm_bitstream_t)::wait_modified(null, "", "mon_polarity");
  end
end
You should write your code assuming positive polarity, and feed the signals through an XOR operator.
logic signal; // your signal from DUT
logic signal_corrected; // signal with positive polarity
bit signal_polarity; // 0 = positive ; 1 = negative
assign signal_corrected = signal ^ signal_polarity;
Now you can use signal_corrected in your assertions. You can certainly call uvm_config_db#(bit)::get() from the interface if it has been set in your testbench. You might need to use uvm_config_db#(bit)::wait_modified() to wait for it to be set before you get it.
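As an illustration, an assertion over the corrected signal might look like this (a sketch only; the one-clock-wide-pulse property is invented for the example):

// assumes signal_corrected from above; checks that the sync pulse
// is exactly one clock wide, independent of the configured polarity
assert property (@(posedge clk) $rose(signal_corrected) |=> $fell(signal_corrected));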

Possible workaround for async negedge reset?

I'd like to have a register with an async reset signal, like the following:

always @(posedge clk or negedge rst_n)
begin
  if (!rst_n)
    out <= 1'b0;
  else
    out <= in;
end

I have tried the AsyncReset class and withReset(). However, the generated code uses a posedge reset, and a variable of type AsyncReset does not accept !.
Is there any workaround for this?
While you cannot invert the AsyncReset type directly (generally applying logic to an AsyncReset is bad because it can glitch), you can cast to a Bool and back:
val reset_n = (!reset.asBool).asAsyncReset
val reg = withReset(reset_n)(RegInit(0.U(8.W)))
Runnable example: https://scastie.scala-lang.org/ERy0qHt2Q3OvWIsp9qiiNg
I thought Jack's quick comment about avoiding glitches deserved a longer explanation.
Using an asynchronous reset creates a second timing arc in the design, from the reset to the end flop. The reset signal can be asserted at any time, but it needs to be de-asserted synchronously to the clock; otherwise the flop can become metastable.
A common technique to do this is to use a reset synchronizer.
https://scastie.scala-lang.org/hutch31/EPozcu39QBOmaB5So6fyeA/13
The synchronizer shown in the above code is coded directly in Verilog as I do not know a way to keep the FIRRTL optimizer from pruning through constant optimization. The logic downstream of the reset sync can be either sync or async reset.
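For reference, the classic two-flop reset synchronizer that the linked code implements looks roughly like this in SystemVerilog (a sketch; the port names are illustrative):

module reset_sync (
  input  logic clk,
  input  logic arst_n,     // asynchronous reset, active low
  output logic rst_n_sync  // asserts asynchronously, de-asserts synchronous to clk
);
  logic meta;
  always_ff @(posedge clk or negedge arst_n) begin
    if (!arst_n) begin
      meta       <= 1'b0;
      rst_n_sync <= 1'b0;
    end else begin
      meta       <= 1'b1; // shift a constant 1 through two flops
      rst_n_sync <= meta;
    end
  end
endmodule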

How to attach a UVM sequence to a particular sequencer?

I have 3 sequences and 4 sequencers. I want:

- sequencer1 to run sequence1,
- sequencer2 to run sequence1,
- sequencer3 to run sequence2, then sequence3, in serial order,
- sequencer4 to run sequence1, then sequence2, in serial order.
One method of doing this is inside the test class:

task run_phase(uvm_phase phase);
  fork
    sequence1.start(sequencer1);
    sequence1.start(sequencer2);
    begin
      sequence2.start(sequencer3);
      // wait for request....
      sequence3.start(sequencer3);
    end
    begin
      sequence1.start(sequencer4);
      // wait for req....
      sequence2.start(sequencer4);
    end
  join
endtask
How can I do the same inside each of the sequencers, rather than inside the test?
What you have written is the best method of doing what you want (after raising an objection before the fork and dropping it after the join). All other methods make it difficult to add additional sequences before the fork or after the join.
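A minimal sketch of that objection handling, wrapped around the existing fork:

task run_phase(uvm_phase phase);
  phase.raise_objection(this); // keep the run phase alive while sequences run
  fork
    sequence1.start(sequencer1);
    sequence1.start(sequencer2);
    // ... the sequencer3/sequencer4 branches as above ...
  join
  phase.drop_objection(this); // all forked branches have completed
endtask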
You can use the uvm_config_db to set the "default_sequence" of each sequencer, but you will need to create another sequence layer for sequencer3 and 4 that starts sequence1 and 2 in the desired order. You will also need to deal with raising/lowering objections inside each default sequence.
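That route looks roughly like this (a sketch; the sequencer path and the seq2_then_seq3_c wrapper sequence are hypothetical):

uvm_config_db#(uvm_object_wrapper)::set(this,
    "env.agent3.sequencer.run_phase", "default_sequence",
    seq2_then_seq3_c::get_type());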
Another option is instead of using generic sequencers, you can define a sequencer and override the run_phase to start each sequence or series of sequences.
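A minimal sketch of that last option, with hypothetical type names:

class serial_sequencer_c extends uvm_sequencer #(my_txn_c);
  `uvm_component_utils(serial_sequencer_c)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    sequence2_c s2 = sequence2_c::type_id::create("s2");
    sequence3_c s3 = sequence3_c::type_id::create("s3");
    phase.raise_objection(this);
    s2.start(this); // blocks until sequence2 finishes
    s3.start(this); // then sequence3 runs
    phase.drop_objection(this);
  endtask
endclass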

Convert unsigned int to time in SystemVerilog

In large parts of my SystemVerilog code, I have used parameters to define different waiting times, such as:
int unsigned HALF_SPI_CLOCK = ((SYSTEM_CLK_PERIOD/2)*DIVISION_FACTOR); //DEFINES THE TIME
Now, since I define a timescale in my files, I can directly use these params to introduce waiting cycles:

`timescale 1ns/1ns

initial begin
  #HALF_SPI_CLOCK;
end
Now I want to have time-specified delays everywhere, meaning that the simulation will still respect all the timings even if I change the timescale. I would like to keep the parameters, but wherever I have a wait statement, I need to specify the time unit. Something like:

#(HALF_SPI_CLOCK) ns;

But this is not accepted by Modelsim. Is there a way to cast a parameter or an unsigned int to a variable of type time in SystemVerilog? Is there a way to specify the time unit? I have looked around but could not find any workaround.
The reason I want control over the time and to make it independent of the timescale is that I intend to change the timescale later to try to make my simulation faster.
Other recommendations or thoughts are very welcome.
It is possible to pass time as a parameter in SystemVerilog, like:
module my_module #(time MY_TIME = 100ns);
  initial begin
    #MY_TIME;
    $display("[%t] End waiting", $time);
  end
endmodule
Or use multiplication to get the right time units, like:
module my_module2 #(longint MY_TIME = 100);
  initial begin
    #(MY_TIME * 1us);
    $display("[%t] End waiting 2", $time);
  end
endmodule
See runnable example on EDA Playground: http://www.edaplayground.com/x/m2
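The same multiplication idiom also converts an existing integer parameter into a time variable at the point of use; a minimal sketch along the same lines (the module wrapper is just for illustration):

module my_module3 #(int unsigned HALF_SPI_CLOCK = 100);
  time half_spi_clock_t = HALF_SPI_CLOCK * 1ns; // integer count scaled to a time value
  initial begin
    #half_spi_clock_t;
    $display("[%t] End waiting 3", $time);
  end
endmodule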
This will simulate and do what you want. While not the most elegant, it works.
task wait_ns(int num);
  repeat (num) #1ns;
endtask
...
wait_ns(HALF_SPI_CLOCK);
This could have a negative impact on simulation speed, depending on how the timescale, clock events, and the unit of delay relate to each other.

System Verilog fork confusion, statements executed between fork and begin

See the simplified example code here:
process job[num_objs];

// assume also, arr_obj1s (array of type obj1) and
// arr_obj2s (array of type obj2) are arrays of size
// num_objs, and the objects define a run() function

foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;
    arr_obj1s[j].run(); // these run forever loops
    begin
      job[j] = process::self();
      arr_obj2s[j].run(); // these run finite logic
    end
  join_none
end

foreach (job[i]) begin
  wait (job[i] != null);
  job[i].await();
end

// How do we ever reach here?
My confusion is that the calls to arr_obj1s[j].run() will never return (they run forever loops), and I don't quite follow the meaning of that call's placement outside the begin/end block. Which process executes that forever run(), and how can every call to await() return if some process is running a run() which won't return?
EDIT: Here is some more information. Posting the full code would be pages and pages, but I hope this extra bit helps.
obj1's run() function looks like this:
virtual task run;
  fork
    run_a(); // different logically separated tasks
    run_b();
    run_c();
  join
endtask : run
And as an example, run_a looks basically like this (they are all similar):
virtual task run_a;
  // declare some local variables
  forever begin
    @(posedge clk);
    // ...
  end
endtask : run_a
But obj2's run() function looks basically like this:
virtual task run;
  fork
    run_d(); // different logically separated tasks
    run_e();
  join
endtask : run
And as an example run_d() looks like this:
virtual task run_d;
  while (data_que.size() > 0) begin
    // process a pre-loaded queue,
    // data will not be pushed on during the simulation
  end
endtask : run_d
This code fragment looks like it is demonstrating process control, so here's my guess as to what's going on. There is a group of processes in arr_obj1s and arr_obj2s:

- Those in arr_obj1s run forever, so they only need to be spawned once and forgotten about.
- Those in arr_obj2s accomplish some task and return, so the parent process needs to know when this happens.
- All processes have the same parent.
> My confusion is that the calls to arr_obj1s[j].run() will never return (they run forever loops) and I don't quite follow the meaning of that call's placement outside the begin/end block
So all that's needed to spawn all the processes is this fork..join_none block:

foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;  // Initialized in the parent before the processes spawn
    arr_obj1s[j].run();   // Spawns process
    arr_obj2s[j].run();   // Spawns process
  join_none
end
The join_none keyword means the parent does not wait for the spawned processes to complete, so the entire foreach loop executes and the parent process then continues on to the next foreach loop. Further, join_none also means that the child processes will not start until the parent process reaches a blocking statement.
However this won't allow us to detect when the child processes complete, unless they have some sort of shared variable they modify. To get around having to code that, SystemVerilog allows a handle to a process so it can schedule an event when the process completes. It doesn't, however, provide the ability to get the handle of a single statement. You must use process::self() inside a procedural context to get the process handle. Thus this won't work right if added directly to the fork-join block.
foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;
    arr_obj1s[j].run();
    job[j] = process::self(); // Handle to this lone statement's process, which ends immediately
    arr_obj2s[j].run();
  join_none
end
To fix this we need to create a new sequential procedural context that we can get the process handle of, then run the function from there:
foreach (arr_obj1s[i]) begin
  fork
    automatic int j = i;
    arr_obj1s[j].run();         // Spawns a new process for those that don't complete
    begin                       // Spawns a new process for those that complete
      job[j] = process::self(); // Saves handle to this begin..end process
      arr_obj2s[j].run();       // Process continues through here
    end
  join_none
end
The final foreach loop only waits on the processes for which we have a handle. The processes that run forever are ignored.
First off, the way fork/join works in Verilog, each statement in the fork/join block executes concurrently. Absent a begin/end, each line is a statement in itself.
So your example is forking off at least two processes for each iteration of the loop.
fork
  automatic int j = i;                            <= Statement 1 ??
  arr_obj1s[j].run(); // these run forever loops  <= Statement 2
  begin                                           \
    job[j] = process::self();                     | <= Statement 3
    arr_obj2s[j].run(); // these run finite logic |
  end                                             /
join_none
I say at least, because I don't fully understand how the automatic int j is treated in this case.
So here is what happens.
For each iteration of the loop:
- arr_obj1s[j].run() is started. (This runs a forever loop and will never end.)
- arr_obj2s[j].run() is started. (This will end after running for some finite time.) The process ID of the process that started this call is stored in job[j].

The code which calls await() is only waiting on the processes which started the calls to arr_obj2s[j].run(). They will complete since they are running finite tasks. The forever loops will still be running, even after the await() calls have all completed.