Perl: Decimal to 32-bit hex conversion

I want to convert the decimal number 64 into its hex representation, 0x00000040. I am using
printf("0x%X", 64);
but it gives the output 0x40. Can anyone tell me how to print the number in the 0x00000000 format?

You can specify the minimum field width between the % and the X (e.g. %8X). By default the number will be padded with spaces, but a leading zero in the width (e.g. %08X) makes printf pad with zeros instead. Therefore, the following can be used:
printf("0x%08X", 64);

Related

Discrepancy in !objsize in hex and decimal

I use the !objsize command to get the true size of an object. For example, when I run the command below, it tells me that the size of the object at address 00000003a275f218 is 0x18, which is 24 in decimal.
0:000> !ObjSize 00000003a275f218
sizeof(00000003a275f218) = 24 (0x18) bytes
So far so good. But when I run the same command on another object, its size shows a discrepancy between hex and decimal.
The size in hex is 0xafbde200. When I convert it to decimal with my calculator, I get 2948456960, whereas the output of the command shows the decimal size as -1346510336. Can someone help me understand why the sizes differ?
It's a bug in SOS. If you look at the source code, you'll find the method declared as
DECLARE_API(ObjSize)
It uses the following format as output
ExtOut("sizeof(%p) = %d (0x%x) bytes (%S)\n", SOS_PTR(obj), size, size, methodTable.GetName());
As you can see, it uses %d as the format specifier, which is for signed decimal integers. It should be %u for unsigned decimal integers instead, since obviously an object can't occupy a negative amount of memory.
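To reproduce the signed/unsigned discrepancy outside the debugger, here is a small Perl sketch (my illustration, not part of SOS): reinterpreting the same 32-bit pattern as signed yields exactly the negative number from the !objsize output.
my $bits = 0xAFBDE200;
my $signed = unpack("l", pack("L", $bits));   # reinterpret the 32 bits as signed
printf("%%d would print: %d\n", $signed);     # -1346510336
printf("%%u would print: %u\n", $bits);       # 2948456960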
If you know how to use Git, you may provide a patch.
You can use ? in WinDbg to see the unsigned value:
0:000> ? 0xafbde200
Evaluate expression: 2948456960 = 00000000`afbde200
The difference is the sign. The tool is interpreting the first bit (which is 1, since the first hex digit is A) as a negative sign. The two numbers are otherwise the same.
Paste -1346510336 into calc.exe (programmer mode) and switch to Hex:
FFFFFFFFAFBDE200
Paste 2948456960 and switch to Hex:
AFBDE200
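The same sign extension can be shown in Perl on a 64-bit build (again my illustration; converting a negative integer with %X exposes its two's-complement bit pattern):
printf("%X\n", -1346510336 & 0xFFFFFFFFFFFFFFFF);   # FFFFFFFFAFBDE200
printf("%X\n", 2948456960);                         # AFBDE200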

Matlab: convert from hex to float

I'm working with a device that sends me hex values, and I need to convert them to their real float values. Does anyone know how to convert hex values to float in MATLAB?
Thanks
Take a look at hex2dec, to convert your hex to decimal.
Hex format is inherently integer (the position of the radix point is not defined), so you will have to give more info: does the hex represent a mantissa-exponent floating-point number? Does it represent a fixed-point number?
The hex represents a mantissa-exponent floating-point number. For example, 0x44ADE000 equals 1391.0.
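In MATLAB, one route is typecast(uint32(hex2dec('44ADE000')), 'single'). As a cross-check of the interpretation, here is the same decoding sketched in Perl, assuming the device sends IEEE 754 single-precision values with the most significant byte first:
my $float = unpack("f>", pack("H*", "44ADE000"));   # treat the 4 bytes as a big-endian single
print "$float\n";                                   # prints 1391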

how to remove last zero from number in matlab

If I set a variable in Matlab, say var1 = 2.111, after running the script Matlab returns var1 = 2.1110. I want Matlab to return the original number, with no trailing zero. Does anyone know how to do this? Thanks in advance.
By default Matlab displays results in Short fixed decimal format, with 4 digits after the decimal point.
You can change that to various other formats such as:
long
Long fixed decimal format, with 15 digits after the decimal point for double values, and 7 digits after the decimal point for single values.
3.141592653589793
shortE
Short scientific notation, with 4 digits after the decimal point.
Integer-valued floating-point numbers with a maximum of 9 digits do not display in scientific notation.
3.1416e+00
longE
Long scientific notation, with 15 digits after the decimal point for double values, and 7 digits after the decimal point for single values.
Integer-valued floating-point numbers with a maximum of 9 digits do not display in scientific notation.
3.141592653589793e+00
shortG
The more compact of short fixed decimal or scientific notation, with 5 digits.
3.1416
longG
The more compact of long fixed decimal or scientific notation, with 15 digits for double values, and 7 digits for single values.
3.14159265358979
shortEng
Short engineering notation, with 4 digits after the decimal point, and an exponent that is a multiple of 3.
3.1416e+000
longEng
Long engineering notation, with 15 significant digits, and an exponent that is a multiple of 3.
3.14159265358979e+000
However, I don't think other options are available. If you absolutely want to remove those zeros, you will have to convert your result to a string, strip the trailing '0' characters, and then display the result as a string rather than a number.
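MATLAB's sprintf follows the C conversions, and %g drops trailing zeros, so sprintf('%g', var1) is one way to build such a string. The same behaviour, sketched here in Perl:
my $var1 = 2.1110;
my $s = sprintf("%g", $var1);   # %g drops trailing zeros
print "$s\n";                   # prints 2.111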

num2hex vs dec2hex in MATLAB

I don't understand the difference between hex2dec and hex2num and their opposites in MATLAB.
Say I had a hex value, 3FD3B502C055FE00. When I use hex2dec, I get 4.5992e+018. When I use hex2num, I get 0.3079. What's going on?
These functions work very differently, as you noticed. hex2dec interprets the string as a hexadecimal integer and returns its decimal value, which is what you saw. hex2num, however, treats the hexadecimal string as the raw bit pattern of an IEEE double-precision number.
The IEEE 754 double-precision standard calls for a one-bit sign, an 11-bit exponent, and a 52-bit fraction. So hex2num parses the hexadecimal in this format, yielding a very different result from hex2dec.
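For anyone without MATLAB at hand, the two readings can be sketched in Perl (hex and pack/unpack stand in for hex2dec and hex2num here; this assumes a 64-bit Perl):
my $hex = "3FD3B502C055FE00";
my $as_integer = hex($hex);                        # like hex2dec: value of the hex integer
my $as_double  = unpack("d>", pack("H*", $hex));   # like hex2num: raw IEEE 754 bits
printf("integer: %.5g\n", $as_integer);            # 4.5992e+18
printf("double:  %.4f\n", $as_double);             # 0.3079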
hex2dec -
Convert hexadecimal number string to decimal number
Description
d = hex2dec('hex_value') converts hex_value to its floating-point integer representation. The argument hex_value is a hexadecimal integer stored in a MATLAB string. The value of hex_value must be smaller than hexadecimal 10,000,000,000,000.
If hex_value is a character array, each row is interpreted as a hexadecimal string.
hex2num -
Convert hexadecimal number string to double-precision number
Description
n = hex2num(S), where S is a 16 character string representing a hexadecimal number, returns the IEEE® double-precision floating-point number n that it represents. Fewer than 16 characters are padded on the right with zeros. If S is a character array, each row is interpreted as a double-precision number.
NaNs, infinities and denorms are handled correctly.
Note that 3FD3B502C055FE00 is bigger than hexadecimal 10,000,000,000,000, so it is outside hex2dec's documented range.

Perl pack and unpack functions

I am trying to unpack a variable containing a string received from a spectrum analyzer:
#42404?û¢-+Ä¢-VÄ¢-oÆ¢-8æ¢-bÉ¢-ôÿ¢-+Ä¢-?Ö¢-sÉ¢-ÜÖ¢-¦ö¢-=Æ¢-8æ¢-uô¢-=Æ¢-\Å¢-uô¢-?ü¢-}¦¢-=Æ¢-)...
The format is Real 32, which uses four bytes to store each value. The header #42404 means the digit 4 is followed by a 4-digit byte count, 2404, so 2404/4 = 601 points were collected. The data starts after #42404. I receive this into a string variable,
$lp = ibqry($ud,":TRAC:DATA? TRACE1;*WAI;");
I am not sure how to convert this into an array of numbers :(... Should I use something like the following?
@dec = unpack("d", $lp);
I know this is not working, because I am not getting the right values and the number of data points for sure is not 601...
First, you have to strip the #42404 off and hope none of the following binary data happens to be an ASCII number.
$lp =~ s{^#\d+}{};
I'm not sure what format "Real 32" is, but I'm going to guess that it's a single-precision floating-point number, which is 32 bits long. Looking at the pack docs, d is "double-precision float", which is 64 bits, so I'd try f, which is "single precision":
@dec = unpack("f*", $lp);
Whether your data is big- or little-endian is a problem. d and f use your computer's native endianness. You may have to force endianness using the > and < modifiers, which go after the letter and before the repeat count:
@dec = unpack("f>*", $lp); # big endian
@dec = unpack("f<*", $lp); # little endian
If the first 4 encodes the number of digits in the byte count (2404) that precedes the floats, then something like this might work:
my @dec = unpack "x a/x f>*", $lp;
The x skips the leading #, the a/x reads one digit and skips that many characters after it, and the f>* parses the remaining string as a sequence of 32-bit big-endian floats. (If the output looks weird, try using f<* instead.)
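Putting it together, here is a self-contained round trip under the same assumptions (big-endian singles in a #<digits><count><data> block; the sample values are made up):
my @points  = (1391.0, -3.25, 0.5);
my $payload = pack("f>*", @points);                     # 4 bytes per value, big-endian
my $count   = length($payload);                         # 12 here, 2404 in the question
my $block   = "#" . length($count) . $count . $payload;
my @dec = unpack("x a/x f>*", $block);
print "@dec\n";                                         # prints 1391 -3.25 0.5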