I have a specific question and a request for more general guidance.
My question is: what is the cleanest way to multiply a signed number by an unsigned number in SystemVerilog?
Below is some small test code that illustrates the problem. 'a' is the unsigned number. 'b' is the signed number. In order to produce a correct signed result, SystemVerilog seems to require that both operands of the multiplication be signed. To make that work here I had to prepend an extra '0' to the unsigned number to make it a valid signed number. I'm thinking there must be a cleaner way.
More generally, I am just getting started doing fixed point math in SystemVerilog. In VHDL there is very concrete syntax and even a standard package to support signed and unsigned fixed point math, with rounding, etc. Is there something like that in the SystemVerilog world?
Thanks,
module testbench ();
  localparam int Wa = 8;
  localparam int Wb = 8;

  logic [Wa-1:0]    a;        // unsigned
  logic [Wa:0]      a_signed; // signed word with extra bit to hold unsigned number.
  logic [Wb-1:0]    b;        // signed
  logic [Wa+Wb-1:0] c;        // result

  localparam clk_period = 10;

  assign a_signed = {1'b0, a};
  assign c = $signed(a_signed) * $signed(b);

  initial begin
    a = +5;
    b = +10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));
    a = +5;
    b = -10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));
    a = +255;
    b = +10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));
    a = +255;
    b = -10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));
    $stop;
  end
endmodule
The SystemVerilog rules say:
If any operand is unsigned, the result is unsigned, regardless of the operator.
Propagate the type and size of the expression (or self-determined subexpression) back down to the context-determined operands of the expression.
When propagation reaches a simple operand as defined in 11.5, then that operand shall be converted to the propagated type and size.
So, in other words, when you multiply signed and unsigned numbers, the expression type is determined to be unsigned. This is propagated back to the operands, and all signed operands are treated as unsigned as well.
Your result will therefore be identical to the result of multiplying two unsigned numbers. So the cleanest way, if you need a signed result, is to convert all operands to signed.
You will also need an extra bit in the unsigned operand to make room for the sign bit. Otherwise 255 would be treated as -1 in an 8-bit signed conversion.
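Applied to the testbench in the question (keeping its names, and assuming you are free to declare b as signed), a minimal sketch of that approach could look like this; the intermediate a_signed signal is no longer needed:

logic        [Wa-1:0]    a;  // unsigned
logic signed [Wb-1:0]    b;  // declared signed, so no cast is needed on b
logic signed [Wa+Wb-1:0] c;  // signed result

// Zero-extend the unsigned operand by one bit, then cast the concatenation to
// signed, so the multiplication is performed as signed * signed.
assign c = signed'({1'b0, a}) * b;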
I am writing a CRC16 function in C to use in SystemVerilog.
The requirements are as follows:
The output of the CRC16 is 16 bits.
The input of the CRC16 is larger than 72 bits.
The difficulty is that I don't know whether DPI-C can map reg/wire data types from SystemVerilog to C or not,
or what the maximum length of a reg/wire that DPI-C can support is.
Can anybody help me?
Stay with compatible types across the language boundary. For the output, use shortint. For the input, use an array of byte in SystemVerilog, which maps to an array of char in C.
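For example, a sketch of an import declaration along those lines (the crc16 name and the 9-byte input size are only assumptions for illustration) might be:

// The 9-element array of byte maps to an array of char on the C side, and the
// shortint return value maps to short int. A matching C prototype would be:
//   short int crc16(const char data[9]);
import "DPI-C" function shortint crc16(input byte data[9]);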
DPI support has provision for any bit width, converting packed arrays into C arrays. The question is: what are you going to do with 72-bit data on the C side?
But svBitVecVal for two-state bits and svLogicVecVal for four-state logic can be used on the C side to retrieve the values. Look at H.7.6/7 of the LRM for more info.
Here is an example from LRM H.10.2 for 4-state data (logic):
SystemVerilog:
typedef struct {int x; int y;} pair;
import "DPI-C" function void f1(input int i1, pair i2, output logic [63:0] o3);
C:
void f1(const int i1, const pair *i2, svLogicVecVal* o3)
{
    int tab[8];
    printf("%d\n", i1);
    o3[0].aval = i2->x;
    o3[0].bval = 0;
    o3[1].aval = i2->y;
    o3[1].bval = 0;
    ...
}
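For the 72-bit input in the original question, the same 4-state mechanism could be used directly. A sketch (the compute_crc16 name is made up) might be:

// A 72-bit packed logic input arrives on the C side as an array of
// svLogicVecVal (3 elements of 32 bits each), and the shortint output maps to
// short int*. A matching C prototype would be:
//   void compute_crc16(const svLogicVecVal* data, short int* crc);
import "DPI-C" function void compute_crc16(input logic [71:0] data,
                                           output shortint crc);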
I am trying to do an arithmetic shift right (keeping the sign) on the signal in.
When I set the value in[0] to 16'hbb00, I expect in_sign_extend[0] to be 16'hf760 after it is signed right shifted. But, I notice that the actual result I see on in_sign_extend[0] is 16'h0680.
localparam CHANNELS = 8;
localparam AXI_M_DATA_WIDTH = 32;
logic signed [0:CHANNELS-1] [AXI_M_DATA_WIDTH/2-1:0] in;
logic signed [0:CHANNELS-1] [AXI_M_DATA_WIDTH/2-1:0] in_sign_extend;
assign in_sign_extend[0] = (in[0] >>> 3);
I am trying to understand whether in is actually correctly signed, or whether I am missing something here.
The part-select of a packed array is always unsigned, even when selecting the entire array. Only a reference to the variable as a whole (in) is signed. You can change your code to:
assign in_sign_extend[0] = (signed'(in[0]) >>> 3);
Or you might prefer to use an unpacked array
logic signed [AXI_M_DATA_WIDTH/2-1:0] in[CHANNELS];
logic signed [AXI_M_DATA_WIDTH/2-1:0] in_sign_extend[CHANNELS];
assign in_sign_extend[0] = (in[0] >>> 3);
While reviewing some code, I found something strange.
It seems to come from width expansion and operator precedence.
(I know that because "sig" is declared as signed, $signed is not necessary and '-sig' is the correct form anyway.)
reg signed [9:0]  sig;
reg signed [11:0] out;

initial
begin
  $monitor("%0t] sig=%0d, out=%0d", $time, sig, out);
  sig = 64;
  out = $signed(-sig);
  #1
  out = -$signed(sig);
  #1
  sig = -512;
  out = $signed(-sig);
  #1
  out = -$signed(sig);
  #1
  $finish;
end
The simulation result for the above code is:
0] sig=64, out=-64
2] sig=-512, out=-512
3] sig=-512, out=512
When sig=-512, I expected the 10-bit sig to be expanded to 12 bits before negation, but it was expanded after negation.
Therefore the negation of -512 was still -512, and after expansion out still held -512.
I guess $signed() blocks the expansion. Any idea what happens?
First of all, -512 and 512 are identical numbers in a 10-bit representation. 10 bits can actually only hold signed values from -512 to 511. In this scheme the negation of -512 works strangely; it is not mentioned in the LRM, at least I was not able to locate anything related. This is probably undefined behavior.
However, it is logical to assume that in this scheme, to represent the negated value of -512, just removing the signedness is sufficient. It seems that all commercial compilers on EDA Playground do this. So the result of the unary - operator in this case is the unsigned value 512.
So, in out = $signed(-(-512)), the negation operator returns the unsigned value 512, and it gets converted to signed by the system function. Therefore, it gets sign-extended in out.
In out = -$signed(-512), for the same reason, the outermost negation operator returns the unsigned value 512. No sign extension happens here.
You can again make it signed by enclosing it in yet another $signed, as in out = $signed(-$signed(-512)).
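As a side note beyond the original answer: if the intent is for the widening to 12 bits to happen before the negation, so that negating -512 really produces +512 in out, one possible sketch (using the declarations from the question) is an explicit size cast of sig before negating:

// Sign-extend sig to the full 12-bit width first, then negate, so the
// negation no longer wraps around within the 10-bit width of sig.
out = -(12'(sig));  // with sig = -512: 12'(sig) is -512 in 12 bits, negated to +512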
Some languages, like Dart or Java, do not have support for unsigned integers.
I have two integers int a, b that are really unsigned (basically hashes or bit fields), but have to be stored in signed data types.
A comparison function is needed. The usual a < b will not work here, as it would wrongly treat negative values as smaller, while in the desired unsigned interpretation they are actually larger. Each of the two ranges is handled correctly if considered on its own.
A working solution I came up with (in Dart, but the language shouldn't really matter) is:
int compareAsUnsigned(int a, int b) {
  final signA = a.sign;
  final signB = b.sign;
  if (signA == signB) return a.compareTo(b);
  if (signA == -1 || signB == -1) return b.compareTo(a);
  return a.compareTo(b);
}
Are there any efficient and/or elegant ways to get an unsigned compare for values stored in signed data types (a longer type is not available and all bits are used)?
I'm not sure if I'm missing something simple, but the following code fails (a and b are meant to be the same):
a=single(2147483584)
f=fopen('test','wb');
fwrite(f,a,'int32')
fclose(f);
f=fopen('test','rb');
b=fread(f,inf,'int32');
fclose(f)
a
b
with output:
a =
2.1475e+009
b =
-2.1475e+009
and the following code succeeds:
a=single(2147483583)
f=fopen('test','wb');
fwrite(f,a,'int32')
fclose(f);
f=fopen('test','rb');
b=fread(f,inf,'int32');
fclose(f)
a
b
with output:
a =
2.1475e+009
b =
2.1475e+009
Does anyone know why?
I don't know Matlab well, but it seems fairly clear what's happening here. You're converting a to a float and then storing the result of that conversion as a 32-bit signed integer. But the nearest single-precision IEEE 754 float to the integer 2147483584 is 2147483648.0, or 2**31. (Consecutive single-precision floats in that range are 128 apart, and 2147483584 lies exactly halfway between 2147483520.0 and 2147483648.0, so round-to-nearest-even picks 2147483648.0.) A 32-bit integer can only represent values in the range [-2**31, 2**31-1], so it looks as though when you write this value as an integer, it gets wrapped modulo 2**32 to give -2**31 instead of 2**31.
In contrast, the nearest single-precision float to 2147483583 is 2147483520.0, which does fit in a 32-bit integer.