Can someone explain this strange SystemVerilog constraint behavior?

I am having a hard time understanding this behavior with randomization in a VCS simulation. Can someone please help me?
class c;
  rand bit [31:0] base;
  constraint base_c {
    base + 32'h40_0000 < 32'hFFFF_FFFF;
  }
endclass
module m;
  c c_obj = new();
  initial begin
    c_obj.randomize() with {
      base == 'hffe6_f6e2;
    };
  end
endmodule
I am not getting a constraint error. base is of type bit, which is unsigned. If I change the constraint as follows:
constraint base_c {
  base < 32'hFFFF_FFFF - 32'h40_0000;
}
I now get a constraint error. How does this work?

It's very simple to work out the math here. All you need to do is plug in the values.
In the first case, you have
32'hffe6_f6e2 + 32'h40_0000 < 32'hFFFF_FFFF
which overflows 32 bits (the full sum is 33'h1_0026_F6E2) and wraps around to
32'h0026_f6e2 < 32'hFFFF_FFFF
1'b1 // True
But when you have
32'hffe6_f6e2 < 32'hFFFF_FFFF - 32'h40_0000
This reduces down to
32'hffe6_f6e2 < 32'hffbf_ffff
1'b0 // False
It's not clear from your question what you are expecting, but you need to take underflow and overflow into account when writing constraints.
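For example (a sketch of one possible fix, not from the original post), widening the expression so the carry bit is kept makes the first constraint behave the way the second one does:
class c;
  rand bit [31:0] base;
  constraint base_c {
    // Zero-extend base to 33 bits: the sum can no longer wrap at 32 bits,
    // so base == 'hffe6_f6e2 now violates the constraint, as expected.
    {1'b0, base} + 33'h0_0040_0000 < 33'h0_FFFF_FFFF;
  }
endclass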
I recently wrote a related blog post on these kinds of math issues.

Related

system verilog 2 dimensional dynamic array randomization

I am trying to use the SystemVerilog constraint solver to solve the following problem statement:
We have N balls, each with a unique weight, and these balls need to be distributed into groups such that the weight of each group does not exceed a threshold (MAX_WEIGHT).
Now I want to find all such possible solutions. The code I wrote in SV is as follows:
`define NUM_BALLS 5
`define MAX_WEIGHT_BUCKET 100

class weight_distributor;
  int ball_weight[`NUM_BALLS];
  rand int unsigned solution_array[][];

  constraint c_solve_bucket_problem {
    foreach (solution_array[i, j]) {
      solution_array[i][j] inside {ball_weight};
      //unique{solution_array[i][j]};
      foreach (solution_array[ii, jj])
        if (!((ii == i) && (j == jj)))
          solution_array[ii][jj] != solution_array[i][j];
    }
    foreach (solution_array[i])
      solution_array[i].sum() < `MAX_WEIGHT_BUCKET;
  }

  function new();
    ball_weight = {10, 20, 30, 40, 50};
  endfunction

  function void post_randomize();
    foreach (solution_array[i, j])
      $display("solution_array[%0d][%0d] = %0d", i, j, solution_array[i][j]);
    $display("solution_array size = %0d", solution_array.size());
  endfunction
endclass

module top;
  weight_distributor weight_distributor_o;
  initial begin
    weight_distributor_o = new();
    void'(weight_distributor_o.randomize());
  end
endmodule
The issue I am facing here is that I want the sizes of both dimensions of the array to be decided randomly, based on the constraint solution_array[i].sum() < `MAX_WEIGHT_BUCKET;. From my understanding of SV constraints, I believe that the size of the array will be solved before values are assigned to its elements.
Moreover, I also wanted to know whether unique can be used for a 2-dimensional dynamic array.
You can't use the random number generator (RNG) to enumerate all possible solutions of your problem. It's not built for this. An RNG can give you one of these solutions with each call to randomize(). It's not guaranteed, though, that it gives you a different solution with each call. Say you have 3 solutions, S0, S1, S2. The solver could give you S1, then S2, then S1 again, then S1, then S0, etc. If you know how many solutions there are, you can stop once you've seen them all. Generally, though, you don't know this beforehand.
What an RNG can do, however, is check whether a solution you provide is correct. If you loop over all possible solutions, you can filter out only the ones that are correct. In your case, you have N balls and up to N groups. You can start out by putting each ball into its own group and checking whether this is a correct solution. You can then put 2 balls into one group and all the other N - 2 into groups of one, then put 2 different balls into one group and all the others into groups of one. After that, you can put 2 balls into one group, 2 other balls into another group and the remaining N - 4 into groups of one. You can continue this until you put all N balls into the same group. I'm not really sure how you can easily enumerate all solutions; combinatorics can help you here. At each step of the way you can check whether a certain ball arrangement satisfies the constraints:
// Array describing an arrangement of balls
// - the first dimension is the group
// - the second dimension is the index within the group
typedef int unsigned arrangement_t[][];

// Function that gives you the next arrangement to try out
function arrangement_t get_next_arrangement();
  // ...
endfunction

arrangement_t all_solutions[$];  // collects the arrangements that pass
arrangement_t arrangement;

arrangement = get_next_arrangement();
if (weight_distributor_o.randomize() with {
      solution_array.size() == arrangement.size();
      foreach (solution_array[i]) {
        solution_array[i].size() == arrangement[i].size();
        foreach (solution_array[i][j])
          solution_array[i][j] == arrangement[i][j];
      }
    })
  all_solutions.push_back(arrangement);
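The answer deliberately leaves get_next_arrangement() abstract, so I'll keep it that way. Purely as an illustration of the combinatorics involved (my addition, with hypothetical names), set partitions can be stepped through with a restricted growth string, where g[i] names the group that ball i goes into:
// Hypothetical helper: advance a restricted growth string g to the next
// set partition of `NUM_BALLS balls. The rule is g[0] == 0 and
// g[i] <= max(g[0..i-1]) + 1. Start from all zeros; returns 0 once every
// partition has been visited.
function automatic bit next_partition(ref int unsigned g[`NUM_BALLS]);
  for (int i = `NUM_BALLS - 1; i > 0; i--) begin
    int unsigned max_prefix = 0;
    for (int k = 0; k < i; k++)
      if (g[k] > max_prefix) max_prefix = g[k];
    if (g[i] <= max_prefix) begin
      g[i]++;                                     // move ball i one group up
      for (int k = i + 1; k < `NUM_BALLS; k++)
        g[k] = 0;                                 // reset everything after it
      return 1;
    end
  end
  return 0;  // g was the last partition
endfunction
Each such string maps to one candidate arrangement_t, which can then be handed to the randomize()-based check above.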
Now, let's look at weight_distributor. I'd recommend you write each requirement in its own constraint, as this makes the code much more readable.
You can shorten the uniqueness constraint you wrote as a double loop by using the unique operator:
class weight_distributor;
  // ...
  constraint unique_balls {
    unique { solution_array };
  }
endclass
You already had a constraint that each group must weigh less than `MAX_WEIGHT_BUCKET:
class weight_distributor;
  // ...
  constraint max_weight_per_group {
    foreach (solution_array[i])
      solution_array[i].sum() < `MAX_WEIGHT_BUCKET;
  }
endclass
Because of the way array sizes are solved (sizes are fixed before the element values are chosen), it's not possible to write constraints that guarantee a valid solution from simple calls to randomize(). You don't need that, though, if you only want to check whether a given solution is valid: the size constraints between arrangement and solution_array take care of this.
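As an aside, not part of the original answer: when sizes and contents really do need to be solved against each other, a common workaround is to randomize in two steps, shape first and contents second. A minimal sketch with hypothetical names and bounds:
class two_step_sketch;
  rand int unsigned sizes[];     // number of elements in each group
  rand int unsigned values[][];  // the grouped values themselves

  // Shape only: how many groups and how large each one is.
  constraint c_shape {
    sizes.size() inside {[1:5]};
    foreach (sizes[i]) sizes[i] inside {[1:5]};
  }

  // Contents, expressed against the already-chosen shape.
  constraint c_values {
    values.size() == sizes.size();
    foreach (values[i]) values[i].size() == sizes[i];
    foreach (values[i, j]) values[i][j] inside {[1:10]};
  }
endclass

module two_step_tb;
  initial begin
    two_step_sketch ts = new();
    // Step 1: solve the shape. The content constraints are switched off
    // first, because the still-empty values array would otherwise force
    // sizes to be empty as well.
    ts.c_values.constraint_mode(0);
    void'(ts.randomize(sizes));
    // Step 2: re-enable the content constraints and fill in the values.
    ts.c_values.constraint_mode(1);
    void'(ts.randomize(values));
    $display("values = %p", ts.values);
  end
endmodule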
Try this!
class ABC;
  rand bit [3:0] md_array[][];  // multidimensional array with unknown sizes

  constraint c_md_array {
    // First fix the size of the first dimension of md_array
    md_array.size() == 2;

    // Then, for each sub-array in the first dimension:
    foreach (md_array[i]) {
      // Randomize the size of the sub-array to a value within the range
      md_array[i].size() inside {[1:5]};

      // Iterate over the second dimension
      foreach (md_array[i][j]) {
        // Constrain the values in the second dimension
        md_array[i][j] inside {[1:10]};
      }
    }
  }
endclass
module tb;
  initial begin
    ABC abc = new;
    void'(abc.randomize());
    $display("md_array = %p", abc.md_array);
  end
endmodule
https://www.chipverify.com/systemverilog/systemverilog-foreach-constraint

Overflow in SystemVerilog constraints

Integer types in SystemVerilog, as in most languages, wrap around on overflow. I was wondering if this is also true in constraints. For example:
class Test;
  rand bit [3:0] a;
  rand bit [3:0] b;
  constraint c { a + b <= 4'h6; }
endclass
When randomizing an object of this class, is it possible to get a solution where a == 7 and b == 12, which would satisfy the constraint since 7 + 12 = 19 which wraps around to 3, and 3 is less than 6?
If so, would it help to formulate the constraint as
constraint c { a + b <= 6; }
where 6 is a 32-bit signed int and the sum is forced to be calculated with 32-bit precision? (This of course is not a solution if the random variables are of type int)
You are correct. Expression evaluation is identical whether you are inside a constraint or not. In addition to overflow, you also need to be concerned about truncation and sign conversion. Integral expressions are weakly typed in SystemVerilog.
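If the intent is a comparison that does not wrap, one option (a sketch, not taken from the answer above) is to zero-extend the operands so the sum keeps its carry:
class Test;
  rand bit [3:0] a;
  rand bit [3:0] b;
  // Zero-extend both operands to 5 bits. The sum is now computed with
  // 5-bit precision, so a == 7 and b == 12 gives 19, which correctly
  // fails the <= 6 check instead of wrapping around to 3.
  constraint c { {1'b0, a} + {1'b0, b} <= 5'd6; }
endclass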

SystemVerilog Constraint, Fixing value every nth iteration

class Base;
  rand bit b;
  // constraint c1 { every 5th randomization should have b == 0; }
endclass
I know I can make a static count variable, update it, and then in the constraint check whether count % 5 is zero and force b = 0, but is there a better way to do this? Thanks.
There's no need to make count static, just non-random.
class Base;
  rand bit b;
  int count;

  // Force b to 0 whenever count is a multiple of 5,
  // i.e. on the 1st, 6th, 11th, ... call to randomize().
  constraint c1 { count % 5 == 0 -> b == 0; }

  function void post_randomize();
    count++;
  endfunction
endclass
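A quick harness (my addition, assuming the Base class above) makes the pattern visible; with count starting at 0, b is forced to 0 on calls 1, 6 and 11:
module tb_nth;
  initial begin
    Base base_obj = new();
    repeat (12) begin
      void'(base_obj.randomize());
      // count has already been incremented in post_randomize(),
      // so here it equals the call number.
      $display("call %0d: b = %0b", base_obj.count, base_obj.b);
    end
  end
endmodule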
If a statistical distribution is good enough for you, then you can write a constraint like the following.
constraint abc {
  b dist {0 := 20, 1 := 80};
}
This gives 0 a weight of 20 and 1 a weight of 80, so b will be 0 roughly once in every 5 randomizations on average. Note that, unlike the count-based approach, this is probabilistic: it does not guarantee b == 0 on exactly every 5th call.
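To see what the dist constraint actually gives you, here is a small sanity check (my addition, wrapping the constraint above in a hypothetical class). Expect b == 0 roughly 200 times out of 1000, but with run-to-run variation and no fixed pattern:
class dist_base;
  rand bit b;
  constraint abc { b dist {0 := 20, 1 := 80}; }
endclass

module tb_dist;
  initial begin
    dist_base obj = new();
    int zeros = 0;
    repeat (1000) begin
      void'(obj.randomize());
      if (!obj.b) zeros++;
    end
    // With weights 20:80, about 20% of the calls should give b == 0.
    $display("b == 0 in %0d of 1000 calls", zeros);
  end
endmodule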

Which costs more while looping; assignment or an if-statement?

Consider the following 2 scenarios:
boolean b = false;
int i = 0;
while (i++ < 5) {
    b = true;
}

OR

boolean b = false;
int i = 0;
while (i++ < 5) {
    if (!b) {
        b = true;
    }
}
Which is more "costly" to do? If the answer depends on the language/compiler used, please say so. My main programming language is Java.
Please do not ask why I would want to do either; they're just bare-bones examples that highlight the relevant question: should a variable be assigned the same value in a loop over and over again, or should it be tested on every iteration whether it already holds the needed value?
Please do not forget the rules of Optimization Club.
1. The first rule of Optimization Club is, you do not Optimize.
2. The second rule of Optimization Club is, you do not Optimize without measuring.
3. If your app is running faster than the underlying transport protocol, the optimization is over.
4. One factor at a time.
5. No marketroids, no marketroid schedules.
6. Testing will go on as long as it has to.
7. If this is your first night at Optimization Club, you have to write a test case.
It seems that you have broken rule 2. You have no measurement. If you really want to know, you'll answer the question yourself by setting up a test that runs scenario A against scenario B and finds the answer. There are so many differences between environments that we can't answer for you.
Have you tested this? Working on a Linux system, I put your first example in a file called LoopTestNoIf.java and your second in a file called LoopTestWithIf.java, wrapped a main function and class around each of them, compiled, and then ran with this bash script:
#!/bin/bash
function run_test {
    iter=0
    while [ $iter -lt 100 ]
    do
        java $1
        let iter=iter+1
    done
}
time run_test LoopTestNoIf
time run_test LoopTestWithIf
The results were:
LoopTestNoIf:
real    0m10.358s
user    0m4.349s
sys     0m1.159s

LoopTestWithIf:
real    0m10.339s
user    0m4.299s
sys     0m1.178s
showing that having the if makes it slightly faster on my system.
Are you trying to find out if doing the assignment each loop is faster in total run time than doing a check each loop and only assigning once on satisfaction of the test condition?
In the above example I would guess that the first is faster: you perform 5 assignments, whereas in the latter you perform 5 tests and then one assignment.
But you'll need to up the iteration count and throw in some stopwatch timers to know for sure.
Actually, this is the question I was interested in... (I hoped that I'd find the answer somewhere and avoid testing it myself. Well, I didn't...)
To be sure that your (my) test is valid, you (I) have to do enough iterations to get enough data, and each iteration must be long enough (in terms of time scale) to show the true difference. I found out that even one billion iterations would not fill a long enough time interval... So I wrote this test:
for (int k = 0; k < 1000; ++k)
{
    {
        long stopwatch = System.nanoTime();
        boolean b = false;
        int i = 0, j = 0;
        while (i++ < 1000000)
            while (j++ < 1000000)
            {
                int a = i * j; // to slow down a bit
                b = true;
                a /= 2; // to slow down a bit more
            }
        long time = System.nanoTime() - stopwatch;
        System.out.println("\tasgn\t" + time);
    }
    {
        long stopwatch = System.nanoTime();
        boolean b = false;
        int i = 0, j = 0;
        while (i++ < 1000000)
            while (j++ < 1000000)
            {
                int a = i * j; // the same thing as above
                if (!b)
                {
                    b = true;
                }
                a /= 2;
            }
        long time = System.nanoTime() - stopwatch;
        System.out.println("\tif\t" + time);
    }
}
I ran the test three times, storing the data in Excel, then I swapped the first ('asgn') and second ('if') cases and ran it three times again... And the result? The 'if' case "won" four times and the 'asgn' case turned out better twice. This shows how sensitive the execution timing can be. But in general, I hope this has also shown that the 'if' case is the better choice.
Thanks, anyway...
Any compiler (except, perhaps, in debug builds) will optimize both of these snippets to
boolean b = true;
But generally, the relative speed of an assignment versus a branch depends on the processor architecture, not on the compiler. A modern, superscalar processor performs horribly on mispredicted branches, while a simple microcontroller uses roughly the same number of cycles for any instruction.
Relative to your barebones example (and perhaps your real application):
boolean b = false;
// .. other stuff, might change b
int i = 0;
// .. other stuff, might change i
b |= i < 5;
while (i++ < 5) {
    // .. stuff with i, possibly stuff with b, but no assignment to b
}
problem solved?
But really, it's going to be a question of the cost of your test (generally more than just if (boolean)) and the cost of your assignment (generally more than just primitive = x). If the test or assignment is expensive, or your loop is long enough, or you have high enough performance demands, you might want to break the loop into two parts; all of those criteria require that you measure how things perform. Of course, if your requirements are more demanding (say, b can flip back and forth), you may need a more complex solution.

In what circumstances can a compiler change the execution order of programme statements?

If this is not a real question then feel free to close ;)
Not only can the compiler reorder execution (mostly for optimization purposes); most modern processors do so, too. Read up on execution reordering and memory barriers.
The compiler can change the execution order of statements when it sees fit for optimization purposes, and when such changes wouldn't alter the observable behavior of the code.
A very simple example -
int func(int value)
{
    int result = value * 2;

    if (value > 10)
    {
        return result;
    }
    else
    {
        return 0;
    }
}
A naive compiler could generate code for this in exactly the sequence shown: first calculate result, then return it only if the original value is larger than 10 (if it isn't, result was calculated needlessly).
A sane compiler, though, would see that the calculation of result is only needed when value is larger than 10, so it may easily move the calculation value*2 inside the first braces and only perform it when value actually is larger than 10 (needless to say, the compiler doesn't really look at the C source when optimizing; it works on lower-level representations).
This is only a simple example; much more complicated ones can be constructed. With aggressive enough optimization, it is very possible for a compiled function to end up looking almost nothing like its C source.
Many compilers use something called "common subexpression elimination", along with the closely related loop-invariant code motion. For example, if you had the following code:
for (int i = 0; i < 100; i++) {
    x += y * i * 15;
}
the compiler would notice that y * 15 is loop-invariant (its value doesn't change across iterations). So it would compute y * 15 once, stick the result in a register and change the loop statement to "x += r0 * i". This is kind of a contrived example, but you often see expressions like this when working with array indexes or any other base + offset type of situation.