I want to cast a logic packed array to longint unsigned in SystemVerilog so that I can export it through DPI-C to a C++ unsigned long. The simulator I am using is Verilator. See the example below.
logic[31:0] v1;
logic[63:0] v2;
int a = signed'(v1); //cast to signed int
int b = int'(v1); //cast to signed int
int unsigned c = unsigned'(v1); //cast to unsigned int
longint d = longint'(v2); //cast to signed long
//longint unsigned e = longint unsigned'(v2); //This doesn't work. I need to cast to unsigned long.
You need to create a SystemVerilog type name without a space in it using a typedef. Here's an example:
// ..
typedef longint unsigned uint64_t;
uint64_t e = uint64_t'(v2);
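On the C or C++ side of the DPI boundary, longint unsigned maps to a 64-bit unsigned integer (unsigned long long / uint64_t). A minimal sketch, assuming a hypothetical exported function name send_value that is not part of the original code:

#include <stdio.h>

/* Hypothetical DPI-C implementation in C. The matching SystemVerilog
   declaration would be:
     import "DPI-C" function void send_value(input longint unsigned v);
   "longint unsigned" maps to unsigned long long on the C side. */
void send_value(unsigned long long v)
{
    printf("received 0x%016llx\n", v);
}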
There's no need for any kind of cast unless sign extension is required. There is already implicit casting between 4-state and 2-state types.
You can just write:
longint d = v2;
Are there macros that provide the correct printf format specifiers for IV, UV, STRLEN, Size_t and SSize_t? None are listed in perlapi.
C provides macros for the format specifiers for the types provided by stdint.h, such as uint32_t.
#include <inttypes.h>
#include <stdint.h>
uint32_t i = ...;
printf("i = %" PRIu32 "\n", i);
Is there something similar to PRIu32 for IV, UV, STRLEN, Size_t and SSize_t?
The larger problem is that I'm trying to suggest a fix for the following compilation warnings produced when installing Sort::Key on Ubuntu on Windows Subsystem for Linux:
Key.xs: In function ‘_keysort’:
Key.xs:237:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘IV {aka long int}’ [-Wformat=]
croak("unsupported sort type %d", type);
^~~~~~~~~~~~~~~~~~~~~~~~~~
Key.xs: In function ‘_multikeysort’:
Key.xs:547:9: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘STRLEN {aka long unsigned int}’ [-Wformat=]
croak("wrong number of results returned "
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Key.xs:547:9: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘IV {aka long int}’ [-Wformat=]
For UV, the following macros exist:
UVuf (decimal)
UVof (octal)
UVxf (lc hex)
UVXf (uc hex)
For IV, the following macro exists:
IVdf (decimal)
For NV, the following macros exist:
NVef ("%e-ish")
NVff ("%f-ish")
NVgf ("%g-ish")
For Size_t and STRLEN, use the built-in z length modifier.[1]
%zu (decimal)
%zo (octal)
%zx (lc hex)
%zX (uc hex)
For SSize_t, use the built-in z length modifier.[1]
%zd (decimal)
For example,
IV iv = ...;
STRLEN len = ...;
croak("iv=%" IVdf " len=%zu", iv, len);
While Size_t and SSize_t are configurable, they're never different from size_t and ssize_t in practice, and STRLEN is a typedef for Size_t.
If Size_t is the same as size_t, then %zu is correct.
STRLEN is likely, but not certain, to be the same as size_t.
If SSize_t is the same as ssize_t, then %zd is probably correct (it's complicated).
For other types, if you don't know what predefined type they correspond to, convert to a known type. Knowing the signedness helps. For example:
some_unknown_signed_integer_type n = 42;
some_unknown_unsigned_integer_type x = 128;
printf("n = %jd\n", (intmax_t)n);
printf("x = %ju\n", (uintmax_t)x);
intmax_t and uintmax_t are defined in <stdint.h>.
You can get away with converting to long or unsigned long and using %ld or %lu, for example, if you happen to know that the type is no wider than long or unsigned long.
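For example, a minimal sketch, assuming the hypothetical type from above is known to be no wider than long:

some_unknown_signed_integer_type n = 42;
printf("n = %ld\n", (long)n);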
I have a specific question and a request for more general guidance.
My question is what is the cleanest way to multiply a signed number by an unsigned number in SystemVerilog?
Below is a little test code that illustrates the problem. 'a' is the unsigned number. 'b' is the signed number. In order to produce a correct signed result SystemVerilog seems to require that both operands of the multiplication be signed. To make that work here I had to add an extra '0' to the front of the unsigned number to make it a valid signed number. I'm thinking there must be a cleaner way.
More generally, I am just getting started doing fixed point math in SystemVerilog. In VHDL there is very concrete syntax and even a standard package to support signed and unsigned fixed point math, with rounding, etc. Is there something like that in the SystemVerilog world?
Thanks,
module testbench ();

  localparam int Wa = 8;
  localparam int Wb = 8;

  logic [Wa-1:0]    a;        // unsigned
  logic [Wa:0]      a_signed; // signed word with extra bit to hold unsigned number.
  logic [Wb-1:0]    b;        // signed
  logic [Wa+Wb-1:0] c;        // result

  localparam clk_period = 10;

  assign a_signed = {1'b0, a};
  assign c = $signed(a_signed)*$signed(b);

  initial begin
    a = +5;
    b = +10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));

    a = +5;
    b = -10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));

    a = +255;
    b = +10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));

    a = +255;
    b = -10;
    #(clk_period*1);
    $display("Hex: a=%x,b=%x, c=%x; Dec: a=%d, b=%d, c=%d", a, b, c, a, $signed(b), $signed(c));

    $stop;
  end

endmodule
The SystemVerilog rules say:
If any operand is unsigned, the result is unsigned, regardless of the operator
Propagate the type and size of the expression (or self-determined subexpression) back down to the context-determined operands of the expression
When propagation reaches a simple operand as defined in 11.5, then that operand shall be converted to the propagated type and size.
So, in other words, when you multiply signed and unsigned numbers, the expression type will be determined as unsigned. This will be propagated back to the operands, and all signed operands will be treated as unsigned as well.
So your result will be identical to that of multiplying two unsigned numbers. The cleanest way, if you need a signed result, is to convert all operands to signed.
You will also need an extra bit in your unsigned operands to leave room for the sign bit. Otherwise 255 will be treated as -1 after an 8-bit signed conversion.
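As an analogy only (this is C, not SystemVerilog, and the values are made up): C applies a similar conversion when signed and unsigned operands of the same width are mixed, which may make the effect easier to see outside a simulator.

#include <stdio.h>

int main(void)
{
    unsigned int a = 5;   /* unsigned operand */
    int b = -10;          /* signed operand */

    /* Mixing signed and unsigned: b is converted to unsigned,
       so the product is a large unsigned value, not -50. */
    printf("a * b              = %u\n", a * b);

    /* Converting both operands to a signed (and wider) type
       first gives the expected signed result. */
    printf("signed(a)*signed(b) = %lld\n", (long long)a * (long long)b);

    return 0;
}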
#include <stdio.h>

int main(void) {
    char c[8];
    *c = "hello";
    printf("%s\n", *c);
    return 0;
}
I have been learning pointers recently. The above code gives me an error: assignment makes integer from pointer without a cast [enabled by default].
I read a few posts on SO about this error but was not able to fix my code.
I declared c as an array of 8 chars, and c has the address of the first element. So if I do *c = "hello", it will store one char in one byte and use as many consecutive bytes as needed for the other characters in "hello".
Please help me identify the issue and fix it.
mark
I declared c as an array of 8 chars, and c has the address of the first element. - Yes
So if I do *c = "hello", it will store one char in one byte and use as many consecutive bytes as needed for the other characters in "hello". - No. The value of "hello" (a pointer to some static string "hello") will be assigned to *c (1 byte). The value of "hello" is a pointer to the string, not the string itself.
You need to use strcpy to copy an array of characters to another array of characters.
const char* hellostring = "hello";
char c[8];
*c = hellostring; //Cannot assign pointer to char
c[0] = hellostring; // Same as above
strcpy(c, hellostring); // OK
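Putting it together, a corrected version of the original program could look like this (keeping the 8-byte buffer from the question, which is large enough for "hello" plus its terminating '\0'):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char c[8];              /* room for "hello" plus the '\0' terminator */
    strcpy(c, "hello");     /* copy the characters into the array */
    printf("%s\n", c);      /* pass the array itself, not *c */
    return 0;
}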
#include <stdio.h>

int main(void) {
    char c[8]; // creating an array of char
    /*
      *c refers to the element at index 0, i.e. c[0].
      The next statement (*c = "hello";) is trying to assign
      a string (a pointer) to a single char. If you read *c as
      "value at c" (at index 0), it becomes clearer.
      To store "hello" through c this way, you would have to change
      the declaration from char c[8] to char *c[8], i.e. make an
      array of pointers.
    */
    *c = "hello";
    printf("%s\n", *c);
    return 0;
}
hope it'll help..:)
I'm trying to write a method which takes in a hex value such as 0xD2691E for the purpose of returning a UIColor object.
I found this macro which I want to convert into a method, but I don't know how to specify the data type other than void *.
#define UIColorFromRGB(rgbValue) [UIColor \
colorWithRed:((float)((rgbValue & 0xFF0000) >> 16))/255.0 \
green:((float)((rgbValue & 0xFF00) >> 8))/255.0 \
blue:((float)(rgbValue & 0xFF))/255.0 alpha:1.0]
//Then use any Hex value
self.view.backgroundColor = UIColorFromRGB(0xD2691E);
What is the data type of a hex value like 0xD2691E?
According to the C standard, the type of a hexadecimal constant is the first type in this list in which its value can be represented:
C11 (n1570), § 6.4.4.1 Integer constants
int
unsigned int
long int
unsigned long int
long long int
unsigned long long int
Since D2691E (base 16) is equal to 13789470 (base 10), the type of your constant depends on your implementation.
The C standard only guarantees that INT_MAX >= +32767, whereas LONG_MAX >= +2147483647.
C11 (n1570), 5.2.4.2.1 Sizes of integer types
INT_MAX +32767
LONG_MAX +2147483647
Therefore, (unsigned) long int could be a suitable choice.
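If you want a parameter type with a definite width instead of relying on the constant's implementation-defined type, a fixed-width unsigned type such as uint32_t holds 0xD2691E comfortably. Here is a plain-C sketch of the bit manipulation the macro performs (the function name rgb_components is made up):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Extract 8-bit colour components from a 24-bit 0xRRGGBB value. */
static void rgb_components(uint32_t rgb, uint32_t *r, uint32_t *g, uint32_t *b)
{
    *r = (rgb & 0xFF0000) >> 16;
    *g = (rgb & 0x00FF00) >> 8;
    *b = (rgb & 0x0000FF);
}

int main(void)
{
    uint32_t r, g, b;
    rgb_components(0xD2691E, &r, &g, &b);
    printf("r=%" PRIu32 " g=%" PRIu32 " b=%" PRIu32 "\n", r, g, b); /* r=210 g=105 b=30 */
    return 0;
}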
From what I remember, they are something like int or unsigned int.
Please try to use one of these:
unsigned long long
unsigned long int
In this method, a bitwise AND operation is performed on the value, so it must be an unsigned int or an unsigned long.
How to convert a char array to long in Objective-C
unsigned char composite[4];
composite[0]=spIndex;
composite[1]= minor;
composite[2]=shortss[0];
composite[3]=shortss[1];
I need to convert this to a long int. Can anyone please help?
If you are looking at converting what is essentially already a binary number, then a simple type cast would suffice, but you would need to reverse the indexes to get the same result as you would in Java: long value = *((long*)composite);
You might also consider this if you have many such scenarios:
union {
    unsigned char asChars[4];
    long asLong;
} value;

// Assumes a little-endian machine and a 4-byte long; on an LP64
// platform long is 8 bytes, so the upper bytes would be uninitialized here.
value.asChars[3] = 1;
value.asChars[2] = 9;
value.asChars[1] = 0;
value.asChars[0] = 10;

// Outputs 17367050
NSLog(@"Value as long %ld", value.asLong);