Boolean size in Ada

In my Ada project I have two different libraries with base types. I found two different definitions for a boolean:
Library A :
type Bool_Type is new Boolean;
Library B :
type T_BOOL8 is new Boolean;
for T_BOOL8'Size use 8;
So I have a question: what is the size used for Bool_Type?

Bool_Type will inherit the 'Size of Boolean, which is required to be 1; see RM 13.3(49).

Compile with the switch -gnatR2 to see its representation information. For example:
main.adb
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is

   type Bool_Type is new Boolean;

   type T_BOOL8 is new Boolean;
   for T_BOOL8'Size use 8;

begin
   Put_Line ("Bool_Type'Object_Size = " & Integer'Image (Bool_Type'Object_Size));
   Put_Line ("Bool_Type'Value_Size = " & Integer'Image (Bool_Type'Value_Size));
   Put_Line ("Bool_Type'Size = " & Integer'Image (Bool_Type'Size));
   New_Line;
   Put_Line ("T_BOOL8'Object_Size = " & Integer'Image (T_BOOL8'Object_Size));
   Put_Line ("T_BOOL8'Value_Size = " & Integer'Image (T_BOOL8'Value_Size));
   Put_Line ("T_BOOL8'Size = " & Integer'Image (T_BOOL8'Size));
   New_Line;
end Main;
compiler output (partial):
Representation information for unit Main (body)
-----------------------------------------------
for Bool_Type'Object_Size use 8;
for Bool_Type'Value_Size use 1;
for Bool_Type'Alignment use 1;
for T_Bool8'Size use 8;
for T_Bool8'Alignment use 1;
program output:
Bool_Type'Object_Size = 8
Bool_Type'Value_Size = 1
Bool_Type'Size = 1
T_BOOL8'Object_Size = 8
T_BOOL8'Value_Size = 8
T_BOOL8'Size = 8
As can be seen, the number returned by the 'Size / 'Value_Size attribute for Bool_Type equals 1 (as required by the RM; see egilhh's answer). The 'Size / 'Value_Size attribute states the number of bits used to represent a value of the type. The 'Object_Size attribute, on the other hand, equals 8 bits (1 byte) and states the number of bits used to store a value of the given type in memory (see Simon Wright's comment).
Note that the number of bits indicated by 'Size / 'Value_Size must be sufficient to uniquely represent all possible values of the (discrete) type. For Boolean and types derived from it, at least 1 bit is required; for an enumeration type with 3 values, for example, you need at least 2 bits.
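For example (a minimal sketch; the type name Tri_State is ours, not from the question):

type Tri_State is (Off, On, Unknown);
-- GNAT: Tri_State'Value_Size = 2 (three values need at least two bits),
-- while Tri_State'Object_Size is rounded up to 8 bits for stand-alone objects.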
An effect of explicitly setting the 'Size / 'Value_Size attribute can be observed when defining a packed array (as mentioned in G_Zeus’ answer):
type Bool_Array_Type is
   array (Natural range 0 .. 7) of Bool_Type with Pack;

type T_BOOL8_ARRAY is
   array (Natural range 0 .. 7) of T_BOOL8 with Pack;
compiler output (partial):
Representation information for unit Main (body)
-------------------------------------------------
[...]
for Bool_Array_Type'Size use 8;
for Bool_Array_Type'Alignment use 1;
for Bool_Array_Type'Component_Size use 1;
[...]
for T_Bool8_Array'Size use 64;
for T_Bool8_Array'Alignment use 1;
for T_Bool8_Array'Component_Size use 8;
Because the number of bits used to represent a value of type T_BOOL8 is forced to be 8, the size of a single component of a packed array of T_BOOL8s will also be 8, and the total size of T_BOOL8_ARRAY will be 64 bits (8 bytes). Compare this to the total size of 8 bits (1 byte) for Bool_Array_Type.

You should find your answer (or enough information to find the answer to your specific question) in the Ada wikibooks entry for 'Size attribute.
Most likely Bool_Type has the same size as Boolean: 1 bit for the type (meaning you can pack Bool_Type elements in an array, for example) and 8 bits for instances (rounded up to a full byte).

Whatever size the compiler wants, unless you override it as in library B. Probably 8 bits, but on some 32-bit RISC targets 32 bits may be faster than 8; on a tiny microcontroller, 1 bit may save space.
The other answers let you find out for the specific target you compile for.
As your booleans are separate types, you need type conversions between them, providing hooks for the compiler to handle any format or size conversion without any further ado.
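For instance (a minimal sketch reusing the declarations from the question; both types are derived from Boolean, so the conversion is legal):

declare
   A : Bool_Type := True;   -- 'Size = 1, inherited from Boolean
   B : T_BOOL8;             -- 'Size = 8, as specified in library B
begin
   B := T_BOOL8 (A);     -- explicit conversions; the compiler inserts
   A := Bool_Type (B);   -- whatever representation change is needed
end;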

Related

Minizinc: declare explicit set in decision variable

I'm trying to implement the 'Sport Scheduling Problem' (with a round-robin approach to break symmetries). The actual problem is of no importance. I simply want to declare the value at x[1,1] to be the set {1,2} and base the sets in the same column upon the first set. This is modelled as in the code below; the output was included as a screenshot (not reproduced here). The problem is that the first set is not printed as a set but rather as some sort of range, while the values at x[2,1] and x[3,1] are indeed printed as sets and x[4,1] again as a range. Why is this? I assume that in the declaration of x the set of 1..n is treated as an integer, but if it is not, how do I declare it as integers?
EDIT: ONLY the first column of the output is of importance.
int: n = 8;
int: nw = n-1;
int: np = n div 2;
array[1..np, 1..nw] of var set of 1..n: x;
% BEGIN FIX FIRST WEEK %
constraint (
   x[1,1] = {1, 2}
);
constraint (
   forall(t in 2..np) ( x[t,1] = {t+1, n+2-t} )
);
solve satisfy;
output[
"\(x[p,w])" ++ if w == nw then "\n" else "\t" endif | p in 1..np, w in 1..nw
]
Backend solver: Gecode
(Here's a summary of my comments above.)
The range syntax is simply a shorthand for contiguous values in a set: 1..8 is a shorthand for the set {1,2,3,4,5,6,7,8}, and 5..6 is a shorthand for the set {5,6}.
The reason for this shorthand is probably that it's often (and arguably) easier to read than the full list, especially for a long list of integers, e.g. 1..1024. It also saves space in the output of solutions.
For two-element sets such as {1,2}, the explicit enumeration might be clearer to read than 1..2, though I tend to prefer the shorthand version in all cases.
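If you want every set printed in the explicit {..} form regardless of the solver's default, you can build the string yourself in the output item (a sketch based on the model above; join, show and fix are standard MiniZinc builtins):

output [
   "{" ++ join(",", [show(i) | i in fix(x[p,w])]) ++ "}" ++
   if w == nw then "\n" else "\t" endif
   | p in 1..np, w in 1..nw
];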

SystemVerilog inside operator operand bit lengths

How "inside" sub-expressions bit lengths are supposed to be computed in System Verilog?
It appears that the type of an expression depends on whether an operand is a numeric literal or a variable.
The following System Verilog program:
`define M( x ) $display( `"x -- %b`", x )
module top ();
   bit [3:0] a, b;

   initial begin
      // (a+b) could be evaluated either to 0 or 16,
      // depending on the bit length of the expression.
      a = 15;
      b = 1;
      `M(4'd1 inside { [(a+b):17] } ); // 0
      `M(4'd1);
      `M( b inside { [(a+b):17] } ); // 1
      `M( b);
   end
endmodule
outputs:
Chronologic VCS simulator copyright 1991-2019
Contains Synopsys proprietary information.
Compiler version P-2019.06-SP1-1_Full64; Runtime version P-2019.06-SP1-1_Full64; May 8 20:27 2020
4'd1 inside { [(a+b):17] } -- 0
4'd1 -- 0001
b inside { [(a+b):17] } -- 1
b -- 0001
V C S S i m u l a t i o n R e p o r t
Time: 0
CPU Time: 0.250 seconds; Data structure size: 0.0Mb
Fri May 8 20:27:08 2020
PS: Verific interprets the standard differently from Synopsys.
It's definitely a tool bug that it produces different results for two different operands of the same value and width.
However, the correct answer is slightly ambiguous because the inside operator was left out of Table 11-21 (Bit lengths resulting from self-determined expressions) in the IEEE 1800-2017 LRM. This is a reported issue for the LRM, and the consensus is that it should behave the same as the case inside statement, in that all operands get sized to the largest operand before doing any comparisons. In this example, the literal 17 is 32 bits wide and has the largest width, so the result of the inside operator should be 1'b0.
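Until tools converge, one way to sidestep the ambiguity is to widen the small operands explicitly with a size cast, so the comparison width no longer depends on the tool's interpretation (a sketch based on the example above; 32'(...) is the standard size-cast syntax):

// Force 32-bit arithmetic: (a+b) now evaluates to 16 instead of wrapping
// to 0 in 4-bit arithmetic, so the result is 0 (1 is not in [16:17]).
$display("%b", 32'(4'd1) inside { [(32'(a) + 32'(b)) : 17] });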

Why does the Streaming-Operator in SystemVerilog reverse the byte order?

I simulated the following example:
shortint j;
byte unsigned data_bytes[];
j = 16'b1111_0000_1001_0000;
data_bytes = { >>{j}};
`uvm_info(get_type_name(), $sformatf("j data_bytes: %b_%b", data_bytes[1], data_bytes[0]), UVM_LOW)
Result:
UVM_INFO Initiator.sv(84) # 0: uvm_test_top.sv_initiator [Initiator] j data_bytes: 10010000_11110000
However, this seems strange to me, since the byte order is reversed: I expect the LSB to be at index 0 of data_bytes[0] and the MSB at index 7 of data_bytes[1]. Why does this happen? According to the documentation (Cadence help) this should not be the case.
As defined in section 6.24.3 Bit-stream casting of the IEEE 1800-2017 LRM, the [0] element of an unpacked dynamic array is considered the left index, and the >> streaming operator goes from left to right indexes. To get the result you want, write
data_bytes = { << byte {j}};
This reverses the byte order of the stream while keeping the bits within each byte in their original order.
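To make the two directions concrete (a sketch; the values follow from the example above):

j = 16'b1111_0000_1001_0000;
data_bytes = { >> {j}};      // data_bytes[0] = 8'b1111_0000 (stream starts at the left/MSB end)
data_bytes = { << byte {j}}; // data_bytes[0] = 8'b1001_0000 (byte order reversed, bits within bytes intact)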

Need a concept to understand array declarations in SystemVerilog

I always get confused when declaring arrays and using the array querying functions in SystemVerilog. Can you explain the following examples to me in detail?
Example-1
integer matrix[7:0][0:31][15:0];
// 3-dimensional unpacked array of integers. I am confused about the
// size and dimensions of the given array; for 1 and 2 dimensions it is
// easy to understand, but 3 and 4 dimensions are a little bit confusing...
Example-2
//bit [1:5][10:16] foo [21:27][31:38];
Example-3
//module array();
bit [1:5][10:16] foo1 [21:27][31:38],foo2 [31:27][33:38];
initial
begin
$display(" dimensions of foo1 is %d foo2 is %d",$dimensions(foo1),$dimensions(foo2) );
end
Output ...
dimensions of foo1 is 4 foo2 is 4
I am not getting this either...
See Sec. 7.4.5, Multidimensional arrays, of IEEE 1800-2009:
The dimensions preceding the identifier set the packed dimensions.
The dimensions following the identifier set the unpacked dimensions.
bit [3:0] [7:0] joe [1:10]; // 10 elements of 4 8-bit bytes
In a multidimensional declaration, the dimensions declared following the type and before the name
([3:0][7:0] in the preceding declaration) vary more rapidly than the dimensions following the name
([1:10] in the preceding declaration).
When referenced, the packed dimensions ([3:0], [7:0]) follow
the unpacked dimensions ([1:10]).
i.e. In a list of dimensions, the rightmost one varies most rapidly, as in C.
However, a packed dimension varies more rapidly than an unpacked one.
bit [1:10] v1 [1:5]; // 1 to 10 varies most rapidly
bit v2 [1:5] [1:10]; // 1 to 10 varies most rapidly
bit [1:5] [1:10] v3 ; // 1 to 10 varies most rapidly
bit [1:5] [1:6] v4 [1:7] [1:8]; // 1 to 6 varies most rapidly, followed by 1 to 5, then 1 to 8 and then 1 to 7
Example 1: You can view integer matrix[7:0][0:31][15:0] as 8 x 32 x 16 unpacked elements, each of which is a 32-bit integer. (The original answer illustrated this with a diagram.)
Example 2:
bit [1:5][10:16] foo [21:27][31:38];
This is similar to example 1.
Example 3:
module array();
   bit [1:5][10:16] foo1 [21:27][31:38], foo2 [31:27][33:38];
   initial
   begin
      $display(" dimensions of foo1 is %d foo2 is %d", $dimensions(foo1), $dimensions(foo2));
   end
endmodule
The declaration in the above module is the same as
bit [1:5][10:16] foo1 [21:27][31:38];
bit [1:5][10:16] foo2 [31:27][33:38];
As Dave has mentioned, the $dimensions function gives you the total number of dimensions, packed and unpacked. Since both foo1 and foo2 are 4-dimensional, the displayed value is 4.
For more on this topic, please go through the following link; it provides a nice representation and should clear up your doubts.
http://testbench.in/SV_09_ARRAYS.html
There are a couple of things that may be confusing you.
From Verilog, the packed type is part of the data type of all the variables that follow
reg [7:0] rega, regb, regc[0:9]; // rega, regb are 8-bit variables, regc is an unpacked array of 10 8-bit variables
SystemVerilog added multiple packed dimensions, but it is still part of the basic data type
reg [7:0][0:3] rega, regb, regc[0:9]; // rega, regb are 32-bit variables, regc is an unpacked array of 10 32-bit variables
The $dimensions function gives you the total number of dimensions, packed and unpacked; $unpacked_dimensions just gives you the number of unpacked dimensions.
integer is a shortcut for reg signed [31:0], and int is a shortcut for bit signed [31:0]. So
integer matrix[7:0][0:31][15:0];
is a 4-dimensional array with 1 packed dimension (also called a vector) and 3 unpacked dimensions.
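A quick way to confirm this (a sketch; the module name probe is ours):

module probe ();
   integer matrix [7:0][0:31][15:0];
   initial begin
      // integer contributes one packed dimension ([31:0]);
      // the three unpacked dimensions follow the name
      $display("dimensions = %0d", $dimensions(matrix));                   // prints 4
      $display("unpacked_dimensions = %0d", $unpacked_dimensions(matrix)); // prints 3
   end
endmodule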

How to unpack (64-bit) unsigned long in 64-bit Perl?

I'm trying to unpack an unsigned long value that is passed from a C program to a Perl script via SysV::IPC.
It is known that the value is correct (I made a test which sends the same value into two queues, one read by Perl and the second by the C application), and all preceding values are read correctly (I used q instead of i! to work with 64-bit integers).
It is also known that PHP had a similar bug (search for "unsigned long on 64 bit machines"; this seems related:
Pack / unpack a 64-bit int on 64-bit architecture in PHP)
Arguments tested so far:
..Q ( = some value that is larger than expected)
..L ( = 0)
..L! ( = large value)
..l ( = 0)
..l! ( = large value)
..lN! ( = 0)
..N, ..N! ( = 0)
use bigint; use bignum; -- no effect.
Details:
sizeof(unsigned long) = 8;
Data::Dumper->new([$thatstring])->Useqq(1)->Dump() shows a lot of null bytes along with some meaningful ones;
byteorder='12345678';
Solution:
- x4Q, which skips four padding bytes before the value.
Unpacking using Q in the template works out of the box if you have 64-bit Perl:
The TEMPLATE is a sequence of characters that give the order
and type of values, as follows:
...
q A signed quad (64-bit) value.
Q An unsigned quad value.
(Quads are available only if your system supports 64-bit
integer values _and_ if Perl has been compiled to support those.
Causes a fatal error otherwise.)
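You can check whether your Perl was built with quad support before relying on Q (a quick sanity check using the standard Config module, not from the original answer):

use Config;
print "use64bitint: $Config{use64bitint}\n"; # "define" when 64-bit integers are available
print "ivsize: $Config{ivsize}\n";           # 8 on a 64-bit Perl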
For a more robust solution, unpack the value into an 8-byte string and use the Math::Int64 module to convert it to an integer:
use Math::Int64 qw( :native_if_available int64 );
...
# "a8" keeps the raw bytes intact ("A8" would strip trailing NULs)
$string_value = unpack("a8", $longint_from_the_C_program);
# one of these two functions will work, depending on your system's endian-ness
$int_value = Math::Int64::native_to_int64($string_value);
$int_value = Math::Int64::net_to_int64($string_value);
The solution was simple: adding x4 before Q skips the four padding bytes before the actual value. I need to think more visually about padding/alignment.
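As a concrete sketch of that fix (the simulated buffer stands in for the real IPC message, whose exact layout is not shown above):

use strict;
use warnings;

# Simulate a C payload in which 4 alignment-padding bytes precede
# the 8-byte unsigned long.
my $buf = pack("x4 Q", 42);

# Skip the 4 padding bytes with "x4", then read the quad with "Q".
my ($value) = unpack("x4 Q", $buf);
print "$value\n";   # prints 42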