How to prevent new threads of SVA - SystemVerilog

Let's assume I have a button in my design. I want the counter to increment within the next two clock cycles after the button has been pressed three times, and I want to check this behaviour with SVA.
I have written this:
`timescale 1ns / 1ps
module tb();
parameter NUMBER_OF_PRESSES = 10;
parameter CLK_SEMI_PERIOD = 5;
bit clk;
always #CLK_SEMI_PERIOD clk = ~clk;
bit button_n;
bit reset_n;
logic [7:0] counter;
property p;
logic[7:0] val;
disable iff(!reset_n) @(posedge clk) (($fell(button_n)[=3]),val=counter) |=> ##[0:2] (counter == val+1);
endproperty
assert property(p);
initial begin
automatic bit key_d;
automatic byte key_lat;
automatic byte key_press_count;
reset_n = 1;
button_n = 1;
counter = 0;
fork
begin
repeat(NUMBER_OF_PRESSES) begin
repeat(5)begin
@(negedge clk);
end
button_n = 0;
key_lat = $urandom_range(1,4);
repeat(key_lat) begin
@(negedge clk);
end
button_n = 1;
end
end
begin
forever begin
@(posedge clk);
if(!button_n && key_d) begin
key_press_count++;
end
if(key_press_count == 3) begin
counter++;
key_press_count = 0;
end
key_d = button_n;
end
end
join_any
end
endmodule
This works well for the first three presses, but after that it always throws an assertion error, because a new assertion thread is started at each button press. So I need to prevent it from doing this: once the repetition has started, I don't need new threads to be started.
How can I do this?

I am not confident I fully understand your question. Let me first state my understanding and where I think your problem is. Apologies if I am mistaken.
You intend to detect negedges on button_n ("presses"), and on the third one, you increment "counter".
The problem here is that your stated objective (which actually matches the SVA) and your design do different things.
Your SVA will check that the counter has the expected value 1-3 cycles after every third negedge. This holds for press 0, 1 and 2. But it must also hold for press 1, 2 and 3. And press 2, 3 and 4 etc. I suspect the assertion passes on press 2 and then fails on press 3. I.e. you check that you increment your counter on every press after the third.
Your design, on the other hand, does something different: it counts 3 negedges, increments counter, and then starts counting from scratch.
I would advise against the use of local variables in assertions unless you are certain that is what you need - I don't think that's the case here. You can have your SVA trigger on key_press_count == 3 (assuming, of course, that you define key_press_count appropriately and not as an automatic variable); see the sketch below.
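A minimal, untested sketch of that approach, assuming key_press_count is promoted to a module-level variable (the exact trigger condition depends on when your stimulus updates it relative to the clock edge):
property p_press_count;
  logic [7:0] val;
  @(posedge clk) disable iff (!reset_n)
    ((key_press_count == 3), val = counter) |=> ##[0:2] (counter == val + 1);
endproperty
assert property (p_press_count);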
If you insist on using your local SVA variable, you can slightly modify your trigger condition to include counter. For example, something along the lines of (though it may be slightly wrong, I have not tested it):
(counter == 0 || $changed(counter)) ##1 ($fell(button_n)[=3], val = counter)
IMO that's a bad idea, and having supporting RTL is the better way to go here, both to document your intention and to check exactly the behaviour you are after.
Hope this helps

Related

How does always_ff work?

I have a problem understanding how always_ff works in terms of creating a mesh of logic gates.
What do I mean? When I use always_comb like here:
module gray_koder_dekoder(i_data, i_oper, o_code);
parameter LEN = 4;
input logic [LEN-1:0] i_data;
input logic i_oper;
output logic [LEN-1:0] o_code;
int i;
always_comb
begin
o_code = '0;
i = LEN-1;
if (i_oper == 1'b1) // 1'b1 - encoding
begin // operation
o_code = i_data ^ (i_data >> 1);
end
else // for any other value,
begin // perform decoding
o_code = i_data;
for (i=LEN-1; i>0; i=i-1)
begin
o_code[i-1] = o_code[i]
^ i_data[i-1];
end
end
end
endmodule
This is how I see it.
At the beginning the program sees that the output is 0000. Now, if i_oper is equal to 1, it changes o_code to i_data ^ (i_data >> 1), so the program wants to build a combination of logic gates for this operation; but if i_oper is equal to 0, the program builds another set of logic gates to get a different o_code.
So always_comb gives the final result for every bit of i_data that ends up in o_code.
My teacher said that always_comb is "blocking" but always_ff is not "blocking", and I don't get it...
So does always_ff not give the final result of the logic gates, for the input to get a specific output?
Another example of always_comb:
module gray_dekoder (i_gray, o_data);
parameter LEN = 4;
input logic [LEN-1:0] i_gray;
output logic [LEN-1:0] o_data;
always_comb
begin
o_data = i_gray;
for (int i=LEN-1; i>0; i=i-1)
o_data[i-1] = o_data[i] ^ i_gray[i-1];
end
endmodule
So at the beginning the program sees that the output is 0000, so it will build a set of logic gates to produce 0 at the end. Then it sees the for loop that modifies the output, so the program checks every bit of the input (bit 3, then bit 2, then bit 1, etc.) and creates a specific output for every input. So the output is not 0000 anymore, but the result of the set of instructions produced by the for loop.
So always_comb gives a final result from analysing the whole body of always_comb from top to bottom and creates a set of instructions / logic gates for it, because always_comb overwrites the previous instructions: 0000 was the initial assignment, but it then got overwritten by the for loop.
But maybe I am thinking about this wrongly, because what if an instruction doesn't overwrite the 0000 assignment, like here:
module replace(i_a, i_b, o_replaced, o_error);
parameter BITS = 4;
input logic signed [BITS-1:0] i_a, i_b;
output logic signed [BITS-1:0] o_replaced;
output logic o_error;
int i;
always_comb
begin
o_replaced = '0;
if(i_b < 0 || i_b > BITS)
begin
o_error = 1;
o_replaced = 'x;
end
else
begin
i = i_b;
o_replaced = i_a;
o_replaced[i-1] = 1;
o_error = 0;
end
end
endmodule
Here I have a 0000 output that isn't overwritten in the else branch, so I don't know what happens then.
I think of always_comb as a "final result" that gives a set of instructions for how to create the logic gates. The final result comes at the end, so if something changes, the earlier result doesn't matter (it gets "overwritten"); but with the if statement this doesn't fit my mental model.
I have heard that always_ff doesn't give a final result, that it can stop at any point, unlike always_comb where the program analyses the code from top to bottom.
Verilog is designed to represent the behavior of hardware; it is not a regular programming language and it operates with different semantics.
At a very high level, hardware consists of combinational logic and flops (and latches). From another point of view, hardware is a set of parallel functions which are synchronized across the design by clocks. This means that at a clock edge a lot of hardware devices start working in parallel, and they should produce results by the next clock edge. Those results can then be used by other functions in the next clock cycle.
Roughly, combinational logic defines a function, flops provide synchronization.
In Verilog all those devices are described with always blocks. Use of edges, e.g. @(posedge clk), provides synchronization points and usually defines flops in the code. A simple function and a flop look like the following.
// combinational logic
always @* // you can use always_comb instead
val = in1 & in2; // a combinational function
// flop
always @(posedge clk) // you can use always_ff instead
out <= val; // synchronization
So, in the example, val is calculated by the combinational function, and a flop makes its synchronized version out available to other functions. You can see the progression of clocks and results in a waveform.
So, this is what always_ff is doing, just providing synchronization and expressing flops for synthesis.
In general, always, always_comb, always_ff and always_latch are identical. The last three are SystemVerilog blocks and just provide additional hints to the compiler, which can run additional checks on them (see the sketch below). I intentionally used plain always blocks in my example to show that. There are some other conditions which need to be programmed to cleanly express the intention. So, your assertion that always_ff works differently has no basis; it works the same as the other always blocks.
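As an illustration of that extra checking (a sketch, not specific to any tool): both blocks below describe the same logic, but always_comb declares combinational intent, so a tool can flag that the missing else branch implies storage, while the plain always version silently infers a latch.
logic sel, d, q_plain, q_comb;
// plain always: silently infers a latch on q_plain
always @* begin
  if (sel) q_plain = d;
end
// always_comb: same code, but the declared intent lets the tool warn about the implied latch
always_comb begin
  if (sel) q_comb = d;
end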
What I think confuses you is the use of blocking (=) and non-blocking (<=) assignments. It does not matter for synthesis which one you use, but it matters for simulation. The difference is described in numerous documents and examples; to understand it properly you need to look into the Verilog simulation scheduling semantics.
The rule of thumb is that you should use non-blocking assignments (<=) in flops and blocking assignments (=) in combinational logic. In flops, '<=' allows simulating the real behavior of flops. Remember that hardware is a massively parallel evaluation engine. Consider the following example:
always_ff @(posedge clk) begin
out1 <= in1;
out2 <= out1;
end
The above example defines at least two flops working at posedge clk. out1 and out2 must be synchronized at this clock. This means the flops have to catch the values which existed before the edge and present them after the edge. So, for out1 the value that existed before the edge is in1, evaluated by combinational logic. What would be the value of out2? Which value existed before the edge? Apparently, the value of out1 before it gets changed to the new value of in1.
clk   ___|---|___|---|___
in1    0
out1   x   0         << new value of in1
out2   x             << old value of out1

in1        1
out1           1     << new value of in1
out2           0     << old value of out1
So, after evaluation of the block, at the first edge the value of out2 will be 'x' (the previous value of out1); at the second clock edge it will finally get the value of in1 as it existed in the previous clock cycle.
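For contrast, a sketch of what not to do in a flop: with blocking assignments the second statement reads out1 after it has already been updated in the same evaluation, so in simulation out2 follows in1 in the same cycle and the pipeline stage between out1 and out2 disappears.
always_ff @(posedge clk) begin
  out1 = in1;  // blocking: out1 updates immediately
  out2 = out1; // reads the new value, not the pre-edge one
end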
I hope this makes your understanding a bit better.

How to get latch with blocking assignment?

My question is about amending an element of the struct!
struct packed {
logic word;
logic [31:0] test;
} a;
logic [32:0] a_input;
logic a_ff;
always_latch begin
if (enable) begin
a = a_input; // map the bus `a_input` to the struct `a`
a.test = a.test[1:0]; // change the `test` child
end
end
enable and a_input are flip-flops on the same clock (so they can reach the latch at different moments in the physical hardware).
a is the modified comb/latch version of a_input.
Vivado doesn't synthesize this as a latch.
I want to change only a.test; a_input isn't a struct, so I can't use a_input.test. That code therefore describes exactly what I want to do.
How can I get a latch?
Edit: I can use a mix of always_comb, always_ff and assign.
struct packed {
logic word;
logic [31:0] test;
} a, a_comb;
logic [32:0] a_input;
logic [32:0] a_ff;
always_ff @(posedge clk)
if (enable) begin
a_ff <= a_comb;
end
always_comb begin
a_comb = a_input; // map the bus `a_input` to the struct `a_comb`
a_comb.test = a_comb.test[1:0]; // change the `test` child
end
assign a = (enable)? a_comb: a_ff;
I'd like to avoid these extra lines and temporary signals; it should be possible using a simple always_latch.
Edit #2:
I really want to amend only the test element of my struct, and let all the other elements be assigned from a_input.
If it were a FF, I'd do:
always_ff @(posedge clk) begin
  if (enable) begin
    a <= a_input; // map the bus `a_input` to the struct `a`
    a.test <= a.test[1:0]; // change the `test` child
  end
end
I want to convert that logic to a latch instead of FF.
First make sure a simple latch will synthesize as a latch. If it doesn't, then that is a separate problem that you will need to figure out; check your synthesizer version and target device.
always_latch
if (enable)
a <= a_input;
I'm not sure why your latch does not work. Latches should be kept simple to prevent unintended complex feedback. Your code looks simple enough, as the only thing it does beyond a basic D-latch is mask some bits.
Assuming the basic latch synthesizes correctly, you can do combinational logic before or after the latch. Ex:
always_comb begin
a_comb = a_input;
a_comb.test = a_comb.test[1:0];
end
always_latch
if (enable)
a <= a_comb;
or:
always_latch
if (enable)
a_lat <= a_input;
always_comb begin
a = a_lat;
a.test = a.test[1:0];
end
If you are only masking bits, you should be able to use:
always_latch
if (enable)
a <= a_input & 33'h1_0000_0003;
If the above three don't work but a simple latch does work properly, then it might be optimization related. In this case make sure a_input and enable are FFs and are functionally toggleable (i.e. not optimized away due to something else upstream). Also check how a is being used downstream.
I tried this code in Vivado; it is able to synthesize it to an LDCE latch.
module latch (
input logic a_input,enable,
output logic a
);
always_latch if(enable) a = a_input;
endmodule
A latch is effectively a level-sensitive register rather than an edge-sensitive one. The driver of enable isn't shown, so that may be getting driven to 0 by Vivado. As a quick example, if you are trying to make a latch, the following should do it.
always @(enable or a_input) begin
a = enable ? a_input : a;
end
Generally, we tend to want to avoid latches, but if you are dead set on making one, you can start by getting rid of the posedge clk from the sensitivity list and changing it to *. That would mean that on any change of the signals, you would get a re-evaluation of the block. The fact that enable controls loading of the a_ff term while high, and then holds the value while low, should be seen as a latch, provided that enable is driven from some source and not optimized to a constant 0 state because it is an undriven signal.

Avoiding support code for SVA sequence to handle pipelined transaction

Let's say we have a protocol that says the following. Once the master sets req to fill, the slave will signal 4 transfers via rsp:
An SVA sequence for this entire transaction would be (assuming that the slave can insert idle cycles between the trans cycles):
req == fill ##1 (trans [->1]) [*4];
Now, assume that the master is allowed to pipeline requests. This would mean that the next fill is allowed to start before the 4 trans cycles are done:
The SVA sequence from above won't help, because for the second fill it's going to wrongly match 4 trans cycles, leaving the last trans "floating". It would need to start matching trans cycles only after the ones for the previous fill have been matched.
The sequence needs global information not available in a single evaluation. Basically it needs to know that another instance of it is running. The only way I can think of implementing this is using some RTL support code:
int num_trans_seen;
bit trans_ongoing;
bit trans_done;
bit trans_queued;
always @(posedge clk or negedge rst_n)
if (!rst_n) begin
num_trans_seen <= 0;
trans_ongoing <= 0;
trans_done <= 0;
trans_queued <= 0;
end
else begin
if (trans_ongoing)
if (num_trans_seen == 3 && req == trans) begin
trans_done <= 1;
if (req == fill || trans_queued)
trans_queued <= 0;
else
trans_ongoing <= 0;
num_trans_seen <= 0;
end
else
if (trans_queued) begin
trans_queued <= 0;
trans_ongoing <= 1;
end
if (trans_done)
trans_done <= 0;
end
The code above should raise the trans_ongoing bit while a transaction is ongoing and pulse trans_done in the clock cycle when the last trans for a fill is sent. (I say should because I didn't test it, but this isn't the point. Let's assume that it works.)
Having something like this, one could rewrite the sequence to be:
req == fill ##0 (trans_ongoing ##0 trans_done [->1]) [*0:1]
##1 (trans [->1]) [*4];
This should work, but I'm not particularly thrilled about the fact that I need the support code. There is a lot of redundancy in it, because I basically re-described a good chunk of what a transaction is and how pipelining works. It's also not as easily reusable. A sequence can be placed in a package and imported somewhere else. The support code can only be placed in some module and reused, but it's a different logical entity than the package that would store the sequence.
The question here is: is there any way to write the pipelined version of the sequence while avoiding the need for support code?
It looks like rsp is always idle before the trans starts. If rsp's idle is a constant value and it is a value that trans will never be, then you could use:
req == fill ##0 (rsp==idle)[->1] ##1 trans[*4];
The above should work when the pipeline supports 1 to 3 stages.
For a 4+ deep pipeline, I think you need some auxiliary code. The success/fail blocks of the assertion can be used to increment the count of completed trans; this saves you from having to write additional RTL. A local variable in the property can be used to sample the fill's count value. The sampled value will be used as a criterion to start sampling the expected trans pattern.
int fill_req;
int trans_rsp;
always @(posedge clk, negedge rst_n) begin
if(!rst_n) begin
fill_req <= '0;
trans_rsp <= '0;
end
else begin
if(req == fill) begin
fill_req <= fill_req + 1; // Non-blocking to prevent risk of race condition
end
end
end
property fill_trans();
int id;
@(posedge clk) disable iff(!rst_n)
(req == fill, id = fill_req) |-> (rsp==idle && id==trans_rsp)[->1] ##1 trans[*4];
endproperty
assert property (fill_trans()) begin
// SUCCESS
trans_rsp <= trans_rsp + 1; // Non-blocking to prevent risk of race condition
end
else begin
// FAIL
// trans_rsp <= trans_rsp + 1; // Optional for supporting pass after fail
$error("...");
end
FYI: I haven't had time to fully test this. It should at least get you in the right direction.
I experimented a bit more and found a solution that might be more to your liking; no support code.
The equivalent of trans[->4] is (!trans[*] ##1 trans)[*4] per IEEE Std 1800-2012 § 16.9.2 Repetition in sequences. Therefore we can use the local variables to detect new fill requests with the expanded form. For example the following sequence
sequence fill_trans;
int cnt; // local variable
@(posedge clk)
(req==FILL,cnt=4) ##1 ( // initial request set to 4
(rsp!=TRANS,cnt+=4*(req==FILL))[*] // add 4 if new request
##1 (rsp==TRANS,cnt+=4*(req==FILL)-1) // add 4 if new request, always minus 1
)[*] ##1 (cnt==0); // sequence ends when cnt is zero
endsequence
Unless there is another qualifier not mentioned, you cannot use a typical assert property(); because it will start new assertion threads each time there is a fill request. Instead use an expect statement, which allows waiting on property evaluations (IEEE Std 1800-2012 § 16.17 Expect statement).
always @(posedge clk) begin
if(req==FILL) begin
expect(fill_trans);
end
end
I tried recreating your described behavior for testing: https://www.edaplayground.com/x/5QLs
One possible solution can be achieved with 2 assertions as below.
For 1st image -
(req == fill) && (rsp == idle) |=> ((rsp == trans)[->1])[*4]
For 2nd image -
(req == fill) && (rsp == trans) |=> ((rsp == trans)[->1])[*0:4] ##1 (rsp == idle) ##1 ((rsp == trans)[->1])[*4]
One issue is that if there are continuous "fill" requests on each cycle (4 consecutive "fill" requests, without any intermediate "idle"), then the 2nd assertion will not count the "trans" cycles for each "fill" request (instead, it will only complete on the 2nd set of "trans" cycles itself).
I could not modify the assertion for the given bug, as of now.

Is this LFSR description right?

I wrote a 32-bit LFSR based on the taps from [1]. I want to ask if the following description is right for the 32-bit LFSR with the taps 32,22,2 and 1.
module lfsr (
input logic clk_i,
input logic rst_i,
output logic [31:0] rand_o
);
logic[31:0] lfsr_value;
assign rand_o = lfsr_value;
always_ff @(posedge clk_i, negedge rst_i) begin
if(~rst_i) begin
lfsr_value <= '0;
end else begin
lfsr_value[31:1] <= lfsr_value[30:0];
lfsr_value[0] <= ~(lfsr_value[31] ^ lfsr_value[21] ^ lfsr_value[1] ^ lfsr_value[0]);
end
end
endmodule
[1] http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
Looks OK. You could also use an XOR instead of an XNOR, as long as you reset to something else (the XOR version locks up at all 0's, the XNOR at all 1's); see the sketch below.
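A sketch of the XOR variant of the code above (untested); the only changes are the feedback gate and a non-zero reset value, since all 0's is the lock-up state for the XOR form:
always_ff @(posedge clk_i, negedge rst_i) begin
  if(~rst_i) begin
    lfsr_value <= 32'h1; // any non-zero seed avoids the XOR lock-up state
  end else begin
    lfsr_value[31:1] <= lfsr_value[30:0];
    lfsr_value[0] <= lfsr_value[31] ^ lfsr_value[21] ^ lfsr_value[1] ^ lfsr_value[0];
  end
end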
For many apps the pseudo-random output is the single bit you shift out (31 in your case), rather than the entire register. It's also (more?) common to shift right, and put the XOR data in the top bit, and use the bit 0 output as your PR data.
Your code is specifically SystemVerilog, and not Verilog, so I've removed the Verilog tag.

wait($time > 1000); does not work in SystemVerilog?

I use this code to wait for a specific simulation time
initial begin
$display("A");
wait($time>1000);
$display("B");
end
the simulation result is:
A
I did not see B printed.
If I use the following code, it works:
while($time <1000) #1;
Is it because VCS needs to re-evaluate the wait condition whenever any variable in the condition statement changes, and $time changes too frequently, so VCS does not allow this usage?
@Tudor's answer enlightened me. I tried @Tudor's code with some modification. It turns out that for wait(func(arglist));, VCS only retries evaluating the function when arglist changes. Because $time takes no arguments, VCS only evaluates $time the first time and never retries.
module top;
int the_time = 0;
int in_arg = 0;
function int the_time_f(int in);
return the_time;
endfunction // the_time_f
initial begin
$display("A");
// This works because 'the_time' is a variable
//wait(the_time > 10);
// This doesn't work because 'the_time_f' is a function
wait(the_time_f(in_arg) >10);
$display("B at %t", $time);
end
initial begin
#10ns;
the_time = 11;
#10ns;
in_arg = 1;
#20ns;
$finish();
end
endmodule // top
I got the following result:
A
B at 20ns
This seems to be a gray area in the standard. In section 9.4 Procedural timing controls of the IEEE Std 1800-2012 standard, it's mentioned that event control can be either implicit (a change of nets or variables) or explicit (fields of type event). $time is a system function, not a variable. I've also tried using a function for the wait and it also doesn't work:
module top;
int the_time = 0;
function int the_time_f();
return the_time;
endfunction // the_time_f
initial begin
$display("A");
// This works because 'the_time' is a variable
//wait(the_time > 10);
// This doesn't work because 'the_time_f' is a function
wait(the_time_f() > 10);
$display("B");
end
initial begin
#10ns;
the_time = 11;
#20ns;
$finish();
end
endmodule // top
Waiting on a change of a variable works fine, but waiting for a change of a function's return value doesn't work. IMHO, the compiler should have flagged this as a compile error (the same goes for using $time), since it seems to just ignore the statement.
In an event control @(expression) or wait(expression) that suspends a process, SystemVerilog scheduling semantics requires an event to evaluate the expression (called an evaluation event; see section 4.3 Event Simulation of the 1800-2012 LRM). If an expression includes a function call, only the arguments to that function are visible to cause an evaluation event. (There is an exception for class methods, in that a write to any member of the object used in the method call will cause an evaluation event.) See section 9.4.2 Event control.
In an event-driven simulation, the value of time is just an attribute of the current time slot; it is never an event. The simulator processes all events for the current time slot in a queue, and when that queue is empty, it advances time to the next time slot's queue. So it might simulate time slots 0, 5, 7, 10, skipping over the unmentioned times. Your while loop would create a time slot for every consecutive time unit between 0 and 1000 - extremely inefficient.
So just use
#(1000); // wait for 1000 relative time units
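If you really need an absolute target time rather than a relative delay, one untested alternative is to compute the remaining delay once instead of polling $time every time unit:
initial begin
  $display("A");
  if ($time < 1000) #(1000 - $time); // single delay up to the absolute target time
  $display("B");
end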