concatenation of arrays in system verilog

I wrote code for concatenation as below:
module p2;
int n[1:2][1:3] = {2{{3{1}}}};
initial
begin
$display("val:%d",n[2][1]);
end
endmodule
It is showing errors.
Can someone please explain why?

Unpacked arrays require the '{} assignment pattern format. See IEEE Std 1800-2012 § 5.11, or search for '{ in the LRM for many examples.
Therefore update your assignment to:
int n[1:2][1:3] = '{2{'{3{1}}}};
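Putting the fix back into the original module, a minimal sketch (kept as close to the question's code as possible) is:
module p2;
int n[1:2][1:3] = '{2{'{3{1}}}}; // assignment pattern: 2 rows of 3 ints, each element set to 1
initial
begin
$display("val:%d",n[2][1]); // expected to print 1
end
endmodule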

int n[1:2][1:3] = {2{{3{1}}}};
Just looking at {3{1}}: this is a 96-bit value, three 32-bit integers concatenated together.
It is likely that {3{1'b1}} was intended.
The main issue is that the left-hand side is an unpacked array, while the right-hand side is a packed concatenation.
{ 2 { {3{1'b1}} } } => 6'b111_111
What is required is an array of two elements, each itself holding three values, conceptually [[3'b111],[3'b111]].
From IEEE Std 1800-2009, the section on array assignment patterns is of interest here:
10.9.1 Array assignment patterns
Concatenation braces are used to construct and deconstruct simple bit vectors.
A similar syntax is used to support the construction and deconstruction of arrays. The expressions shall match element for element, and the braces shall match the array dimensions. Each expression item shall be evaluated in the context of an
assignment to the type of the corresponding element in the array. In other words, the following examples are not required to cause size warnings:
bit unpackedbits [1:0] = '{1,1}; // no size warning required as
// bit can be set to 1
int unpackedints [1:0] = '{1'b1, 1'b1}; // no size warning required as
// int can be set to 1'b1
A syntax resembling replications (see 11.4.12.1) can be used in array assignment patterns as well. Each replication shall represent an entire single dimension.
unpackedbits = '{2 {y}} ; // same as '{y, y}
int n[1:2][1:3] = '{2{'{3{y}}}}; // same as '{'{y,y,y},'{y,y,y}}

Related

Minizinc: declare explicit set in decision variable

I'm trying to implement the 'Sport Scheduling Problem' (with a Round-Robin approach to break symmetries). The actual problem is of no importance. I simply want to declare the value at x[1,1] to be the set {1,2} and base the sets in the same column upon the first set. This is modelled as in the code below. The output is included in a screenshot below it. The problem is that the first set is not printed as a set but rather as some sort of range, while the values at x[2,1] and x[3,1] are indeed printed as sets and x[4,1] again as a range. Why is this? I assume that in the declaration of x the set of 1..n is treated as an integer; if it is not, how do I declare it as integers?
EDIT: ONLY the first column of the output is of importance.
int: n = 8;
int: nw = n-1;
int: np = n div 2;
array[1..np, 1..nw] of var set of 1..n: x;
% BEGIN FIX FIRST WEEK $
constraint(
x[1,1] = {1, 2}
);
constraint(
forall(t in 2..np) (x[t,1] = {t+1, n+2-t} )
);
solve satisfy;
output[
"\(x[p,w])" ++ if w == nw then "\n" else "\t" endif | p in 1..np, w in 1..nw
]
Backend solver: Gecode
(Here's a summary of my comments above.)
The range syntax is simply a shorthand for contiguous values in a set: 1..8 is a shorthand of the set {1,2,3,4,5,6,7,8}, and 5..6 is a shorthand for the set {5,6}.
The reason for this shorthand is probably that it's often (and arguably) easier to read than the full list, especially for a long list of integers, e.g. 1..1024. It also saves space in the output of solutions.
For two-element sets, e.g. {1,2}, the explicit enumeration might be clearer to read than 1..2, though I tend to prefer the shorthand version in all cases.

systemverilog unpacked array concatenation

I'm trying to create an unpacked array like this:
logic [3:0] AAA[0:9];
I'd like to initialize this array to the following values:
AAA = '{1, 1, 1, 1, 2, 2, 2, 3, 3, 4};
For efficiency I'd like to use repetition constructs, but that's when things are falling apart.
Is this not possible, or am I not writing this correctly? Any help is appreciated.
AAA = { '{4{1}}, '{3{2}}, '{2{3}}, 4 };
Firstly, the construct you are using is actually called the replication operator. This might help you in future searches, for example in the SystemVerilog LRM.
Secondly, you are using an array concatenation and not an array assignment in your last block of code (note the missing apostrophe '). The LRM gives the following (simple) example in Section 10.10.1 (Unpacked array concatenations compared with array assignment patterns) to explain the difference:
int A3[1:3];
A3 = {1, 2, 3}; // unpacked array concatenation
A3 = '{1, 2, 3}; // array assignment pattern
The LRM says in the same section that
...unpacked array concatenations forbid replication, defaulting, and
explicit typing, but they offer the additional flexibility of
composing an array value from an arbitrary mix of elements and arrays.
int A9[1:9];
A9 = {9{1}}; // illegal, no replication in unpacked array concatenation
Let's also have a look at the alternative: array assignment. In the same section, the LRM mentions that
...items in an assignment pattern can be replicated using syntax, such as '{ n{element} }, and can be defaulted using the default: syntax. However, every element item in an array assignment pattern must be of the same type as the element type of the target array.
If transformed into an array assignment pattern (by adding the apostrophe), your code actually translates to:
AAA = '{'{1,1,1,1}, '{2,2,2}, '{3,3}, 4};
This means that the SystemVerilog interpreter will only see 4 elements and it will complain that too few elements were given in the assignment.
In Section 10.9.1 (Array assignment patterns), the LRM says the following about this:
Concatenation braces are used to construct and deconstruct simple bit vectors. A similar syntax is used to support the construction and deconstruction of arrays. The expressions shall match element for element, and the braces shall match the array dimensions. Each expression item shall be evaluated in the context of an assignment to the type of the corresponding element in the array.
[...]
A syntax resembling replications (see 11.4.12.1) can be used in array assignment patterns as well. Each replication shall represent an entire single dimension.
To help interpret the last sentence of the quote above (each replication representing an entire dimension), the LRM gives the following example:
int n[1:2][1:3] = '{2{'{3{y}}}}; // same as '{'{y,y,y},'{y,y,y}}
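As an aside, here is a minimal sketch (the names BBB and CCC are mine, purely for illustration) of the two things the quoted text does allow for a one-dimensional target like AAA: replicating an entire dimension, and using the default: syntax.
logic [3:0] BBB[0:9];
logic [3:0] CCC[0:9];
initial begin
BBB = '{10{4'h1}}; // replication spans the entire [0:9] dimension
CCC = '{default: 4'h0}; // every element defaulted to 0
end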
You can't do arbitrary replication of unpacked array elements.
If your code doesn't need to be synthesized, you can do
module top;
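// DAt is a dynamic array of 4-bit elements; DAt'{N{value}} builds a replicated
// array value, which the outer {} can then concatenate with plain elements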
typedef logic [3:0] DAt[];
logic [3:0] AAA[0:9];
initial begin
AAA = {DAt'{4{1}}, DAt'{3{2}}, DAt'{2{3}}, 4};
$display("%p",AAA);
end
endmodule
I had another solution but I'm not sure if it is synthesizable. Would a streaming operator work here? I'm essentially taking a packed array literal and streaming it into the data structure AAA. I've put it on EDA Playground
module tb;
logic [3:0] AAA[0:9];
initial begin
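// pack the replicated nibbles into one bit stream (left to right) and pour it into AAA; AAA[0] receives the first 4 bits of the stream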
AAA = { >> int {
{4{4'(1)}},
{3{4'(2)}},
{2{4'(3)}},
4'(4)
} };
$display("%p",AAA);
end
endmodule
Output:
Compiler version P-2019.06-1; Runtime version P-2019.06-1; Mar 25 11:20 2020
'{'h1, 'h1, 'h1, 'h1, 'h2, 'h2, 'h2, 'h3, 'h3, 'h4}
V C S S i m u l a t i o n R e p o r t
Time: 0 ns
CPU Time: 0.580 seconds; Data structure size: 0.0Mb
Wed Mar 25 11:20:07 2020
Done

Why does the Streaming-Operator in SystemVerilog reverse the byte order?

I simulated the following example:
shortint j;
byte unsigned data_bytes[];
j = 16'b1111_0000_1001_0000;
data_bytes = { >>{j}};
`uvm_info(get_type_name(), $sformatf("j data_bytes: %b_%b", data_bytes[1], data_bytes[0]), UVM_LOW)
Result:
UVM_INFO Initiator.sv(84) # 0: uvm_test_top.sv_initiator [Initiator] j data_bytes: 10010000_11110000
However, this seems strange to me, since the byte order is reversed: I expect the LSB to be at index 0 of data_bytes[0] and the MSB at index 7 of data_bytes[1]. Why does this happen? According to the documentation (Cadence Help), this should not be the case.
As defined in section 6.24.3 Bit-stream casting of the IEEE 1800-2017 LRM, the [0] element of an unpacked dynamic array is considered the left index, and streaming >> goes from left to right indexes. To get the results you want, write
data_bytes = { << byte {j}};
This reverses the stream on byte-sized slices: the bytes are taken from right to left, while the bits inside each byte keep their original order.
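A minimal, self-contained sketch of both orderings (plain $display instead of `uvm_info`, otherwise the same data as in the question):
module stream_order;
shortint j;
byte unsigned data_bytes[];
initial begin
j = 16'b1111_0000_1001_0000;
data_bytes = { >> {j}}; // left to right: data_bytes[0] gets the MSB byte
$display("msb first: %b_%b", data_bytes[1], data_bytes[0]);
data_bytes = { << byte {j}}; // byte slices right to left: data_bytes[0] gets the LSB byte
$display("lsb first: %b_%b", data_bytes[1], data_bytes[0]);
end
endmodule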

What is the point of writing integer in hexadecimal, octal and binary?

I am well aware that one is able to assign a value to a variable or constant in Swift and have that value represented in different formats.
For Integer: one can declare in decimal, binary, octal or hexadecimal format.
For Float or Double: one can declare in either decimal or hexadecimal format, and can make use of an exponent too.
For instance:
var decInt = 17
var binInt = 0b10001
var octInt = 0o21
var hexInt = 0x11
All of the above variables give the same result, which is 17.
But what's the catch? Why bother using anything other than decimal?
Some notations can be much easier for people to understand, even if the end result is the same. Think, for example, of cases like colour notation (hexadecimal) or file permission notation (octal).
Code is best written in the most meaningful way.
Using the number format that best matches the domain of your program is just one example. You don't want to obscure domain-specific details, and you want to minimize the mental effort for the reader of your code.
Two other examples:
Do not simplify calculations. For example: To convert a scaled integer value in 1/10000 arc minutes to a floating point in degrees, do not write the conversion factor as 600000.0, but instead write 10000.0 * 60.0.
Choose a code structure that matches the nature of your data. For example: If you have a function with two return values, determine whether it's a symmetrical or an asymmetrical situation. For a symmetrical situation, always write a full if (condition) { return A; } else { return B; }. It's a common mistake to write if (condition) { return A; } return B; (simply because 'it works').
Meaning matters!

Converting a 2D array into 3D array in systemverilog

I have a 2D dynamic array as
logic [511:0] array[];
I want to convert it into a 3D dynamic array defined as
logic [32][16]M[];
eg.
array[0]= 1110110000111000...512 bits....
M[0][0]= 1110110000111000...32 bits....
M[0][1]= next 32 bits....
and so on.
Can someone please suggest how to accomplish this task? Did I declare my 3D array properly? I know a dynamic array can only be declared as an unpacked dimension. Can I define the array as
logic [31:0] M[16][]; ?
Any suggestion or correction would be helpful.
Based on the example you gave, it seems as if you want a dynamic array of unpacked arrays of 16 32-bit packed words. That would be:
logic [31:0] M[][16];
You can use a bit-stream cast to assign one type shape to another type shape, as long as the bits in the source exactly fill the target. You need a typedef identifier for the target type (and it's good practice to use typedefs in general when declaring variables).
typedef logic [31:0] my_3d_t[][16];
my_3d_t M;
M = my_3d_t'(array);
That does the assignment as
M[0][0][31:0] = array[0][511:480];
M[0][1][31:0] = array[0][479:448];
...
M[0][15][31:0] = array[0][31:0];
M[1][0][31:0] = array[1][511:480];
...
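A minimal, self-contained sketch of the whole conversion (the array size and fill patterns here are my own, just for illustration):
module convert_2d_to_3d;
typedef logic [31:0] my_3d_t[][16];
logic [511:0] array[];
my_3d_t M;
initial begin
array = new[2];
array[0] = {16{32'hDEAD_BEEF}}; // each element is a full 512-bit word
array[1] = {16{32'h0123_4567}};
M = my_3d_t'(array); // bit-stream cast: M[i][0] takes array[i][511:480], and so on
$display("M[0][0] = %h M[1][15] = %h", M[0][0], M[1][15]);
end
endmodule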