Getting the first and last 32 bits of a uint64 - matlab

How do I get the first 32 bits and the last 32 bits of a uint64, and save them to two uint32 variables, using low-level operations such as bitshift, bitand, bitxor...? It seems like an easy problem, but Matlab has some limitations on bit manipulation (e.g. some functions only support up to 53 bits).

You can typecast() it to 'uint32' and convert the result to binary (on a little-endian machine, the first element holds the least-significant 32 bits):
x64 = uint64(43564);
x32 = typecast(x64,'uint32');
x32 =
43564 0
dec2bin(x32)
ans =
1010101000101100
0000000000000000
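
If you'd rather use the bit operations mentioned in the question, that also works. A minimal sketch, assuming a Matlab version whose bitand/bitshift accept uint64 inputs (recent releases do):

x64  = uint64(43564);
lo32 = uint32(bitand(x64, uint64(4294967295)));  % mask with 0xFFFFFFFF: low 32 bits
hi32 = uint32(bitshift(x64, -32));               % shift right by 32: high 32 bits

Unlike typecast, which returns the halves in memory order, this version names the low and high words explicitly and is independent of endianness.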

This is supplementary to @Oleg's correct answer, in response to @Ruofeng's comment.
By doing hex2dec you are converting to double, which doesn't have enough precision to store your hex number aaaaaaaaaaaaaaaa exactly. If you stick to uint64 you are OK.
See http://www.mathworks.com/matlabcentral/fileexchange/26005-convert-a-number-in-hex-to-uint64/content/hex2uint64.m.
Then x64 = hex2uint64('aaaaaaaaaaaaaaaa'); followed by Oleg's answer [i.e. x32 = typecast(x64,'uint32');] gives two identical parts:
x32 =
2863311530 2863311530
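
In newer Matlab versions there is also a built-in alternative; a sketch assuming R2019b or later, which added hexadecimal literals with type suffixes:

x64 = 0xAAAAAAAAAAAAAAAAu64;     % uint64 hex literal (R2019b+)
x32 = typecast(x64, 'uint32')    % both elements are 2863311530 (0xAAAAAAAA)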

How does Matlab store this large of a number with only 64 bits?

The largest number in double precision (that is, 64-bit) floating-point arithmetic is 1.111...1 x 2^(512) (where there are 52 1's after the radix point). This number is less than 2 x 2^(512) == 2^(513) == 8^(171) < 10^(171). Therefore, when I assign x = 10^(171), I expect x to be stored as Inf. However, this is not the case: displaying x in the interactive console shows 1.0000e+171. The only explanation I could think of is that Matlab uses more than 64 bits to store x. But a quick check with whos x reveals that x is stored in 8 bytes.
In fact, the largest power of 10 which will not be stored as Inf is 10^308.
Can someone please explain what is going on here?
I'm sorry, I made a simple mistake here. In 64-bit arithmetic, 11 bits are used to encode the exponent. Therefore we have 2^(11) = 2048 possible exponent values, ranging from -1023 to 1024 (with the extremes reserved for subnormals, Inf, and NaN), not from -511 to 512 like I thought. Therefore the largest number in 64-bit arithmetic is 1.111...1 x 2^(1023), which is just under 2^(1024), i.e. about 10^(308.25), corroborating my experimental results.
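
You can check these limits directly in Matlab; realmax is the largest finite double:

realmax             % 1.7977e+308, i.e. (2 - 2^-52) * 2^1023
log2(realmax)       % 1024 (realmax is just below 2^1024)
10^308 < realmax    % logical 1 (true)
10^309              % Inf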

Converting a 64-bit binary 1D vector into the corresponding signed floating-point decimal number

Please let me know how to achieve this. I have tried a lot but didn't get the desired result for the vector u=[0;1;0;0;1;1;0;0;0;0;1;1;0;1;1;1;1;0;0;0;1;0;0;1;0;1;0;0;0;0;0;1;0;1;1;0;0;0;0;0;0;0;0;0;1;1;0;1;0;1;0;1;1;0;1;1;1;1;0;0;0;0;0;0];
desired output = -108.209
First off, I think your expectation for the correct answer is off. The first bit in a double is the sign bit, so if you're expecting a negative number, the first bit should be 1. Even if you had your bits backward, it still starts with a leading 0. There are all sorts of binary-to-float calculators if you search for them. Here's an example:
http://www.binaryconvert.com/result_double.html?hexadecimal=4C378941600D5BC0
To answer your question of how to do this in Matlab: Matlab's built-in function for converting from binary is bin2dec. However, it expects a char array as input, so you'll need to convert with num2str. The other trick here is that bin2dec only supports up to 53 bits, so you'll need to break the input into two 32-bit numbers. The last piece of the puzzle is to use typecast to convert your pair of 32-bit integers into a double. Put it all together, and it looks like this:
bits = [0;1;0;0;1;1;0;0;0;0;1;1;0;1;1;1;1;0;0;0;1;0;0;1;0;1;0;0;0;0;0;1;0;1;1;0;0;0;0;0;0;0;0;0;1;1;0;1;0;1;0;1;1;0;1;1;1;1;0;0;0;0;0;0];
int1 = uint32(bin2dec(num2str(bits(1:32)')));   % most significant 32 bits
int2 = uint32(bin2dec(num2str(bits(33:64)')));  % least significant 32 bits
double_final = typecast([int2 int1],'double')   % least significant word first (little-endian)
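
As a sanity check, num2hex shows the raw bit pattern of the result, which should match the hex string in the converter link above:

num2hex(double_final)   % '4c378941600d5bc0'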

Convert from int to binary or hex in mikroc

I have an int value from 0 to 255 and I want to convert that value to hex or binary so I can use it in an 8-bit register (PIC18F uC).
How can I do this conversion?
I tried to use the IntToHex function from the Conversion Library, but the output of this function is a char value, and from there I got stuck.
I'm using mikroC for PIC.
Where should I start?
Thanks!
This is a common problem. Many don't understand that decimal 15 is the same as hex F, which is the same as octal 17, which is the same as binary 1111.
Different number systems are for humans; to the CPU, it's all binary!
When the OP says,
I got an int value from 0 to 255 and I want to convert that value to hex or binary so I can use it in an 8-bit register (PIC18F uC).
it reflects this misunderstanding, probably because the debugger is configured to show decimal values while the sample code and datasheet show hex values for register operations.
So, when you get an int value from 0 to 255, you can write that number directly to the 8-bit register. You don't have to convert it to hex; hex is just a representation that makes life easier for humans.
What you can do (and this is good practice) is:
REG_VALUE = (unsigned char) int_value;  /* the register gets the same bits no matter which base you display them in */

Discrepancy in !objsize in hex and decimal

I use the !objsize command to get the true size of an object. For example, when I run the command below, it tells me that the size of the object at address 00000003a275f218 is 0x18, which is 24 in decimal.
0:000> !ObjSize 00000003a275f218
sizeof(00000003a275f218) = 24 (0x18) bytes
So far so good. But when I run the same command on another object, there is a discrepancy between its hex and decimal sizes.
The size in hex is 0xafbde200. When I convert it to decimal with my calculator, I get 2948456960, whereas the output of the command shows the decimal size as -1346510336. Can someone help me understand why the sizes differ?
It's a bug in SOS. If you look at the source code, you'll find the method declared as
DECLARE_API(ObjSize)
It uses the following format as output
ExtOut("sizeof(%p) = %d (0x%x) bytes (%S)\n", SOS_PTR(obj), size, size, methodTable.GetName());
As you can see, it uses %d as the format specifier, which is for signed decimal integers. It should be %u for unsigned decimal integers instead, since obviously an object can't occupy a negative amount of memory. Printed through %d, the 32-bit pattern 0xafbde200 is reinterpreted as signed, giving 2948456960 - 2^32 = -1346510336, which is exactly the value you saw.
If you know how to use Git, you may provide a patch.
You can use ? in WinDbg to see the unsigned value:
0:000> ? 0xafbde200
Evaluate expression: 2948456960 = 00000000`afbde200
The difference is the sign. The output is interpreting the most significant bit (which is 1, since the leading hex digit is A) as a sign bit. The two numbers are otherwise the same bit pattern.
Paste -1346510336 into calc.exe (programmer mode) and switch to Hex:
FFFFFFFFAFBDE200
Paste 2948456960 and switch to Hex:
AFBDE200
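
For what it's worth, you can reproduce the same reinterpretation in Matlab; a sketch assuming R2019b+ for the hex literal (use uint32(2948456960) on older versions):

u = 0xAFBDE200u32;      % uint32 with value 2948456960
typecast(u, 'int32')    % ans = -1346510336, matching the SOS output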

How do I interpret two 32-bit unsigned integers as a 64-bit floating-point ("double") value?

My knowledge of Matlab is very limited, so I'll use more general terms to explain my problem:
We have a recording system that samples variables in an embedded system in real time and delivers the recorded data as Matlab files for analysis.
My problem is that if a recorded variable is a "double" (more specifically a 64-bit, IEEE 754-1985, floating point value), the result is delivered as two unsigned 32-bit integers, and I have no idea how to turn it back into a floating-point value in matlab.
For example, if I record the variable SomeFloat, which is a double, I will get the recorded data as two sets of data, SomeFloat1 and SomeFloat2. Both are unsigned, 32-bit integers. SomeFloat1 contains the 32 most significant bits of SomeFloat, and SomeFloat2 contains the 32 least significant bits.
I was hoping to find an existing function for converting it back to a double, something like:
MyDouble = MyDreamFunction(SomeFloat1, SomeFloat2)
I have not been able to find MyDreamFunction, but being new to matlab, I'm not really sure where to look...
So, does anyone know of a simple way to do this?
I think you want typecast (it converts between data types without changing the underlying bits):
>> x1 = uint32(7346427); %// example uint32 value
>> x2 = uint32(1789401); %// example uint32 value
>> typecast([x2 x1],'double')
ans =
1.4327e-306
>> typecast([x1 x2],'double')
ans =
3.7971e-308
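
Applied to the names in the question, where SomeFloat1 holds the most-significant word, that would be (for a single pair of samples, on a typical little-endian machine, where typecast treats the first element as the least significant word):

MyDouble = typecast([SomeFloat2 SomeFloat1], 'double');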