Convert from int to binary or hex in mikroC - type-conversion

I got an int value from 0 to 255 and I want to convert that value to hex or binary so I can use it in an 8-bit register (PIC18F uC).
How can I do this conversion?
I tried to use the IntToHex function from the Conversion library, but the output of that function is a char value, and from there I got stuck.
I'm using mikroC for PIC.
Where should I start?
Thanks!

This is a common problem. Many don't understand that decimal 15 is the same as hex F, which is the same as octal 17, which is the same as binary 1111.
Different number systems exist for humans; for the CPU, it's all binary!
When the OP says,
I got an int value from 0 to 255 and I want to convert that value to
hex or binary so i can use it into an 8 bit register(PIC18F uC).
it reflects this misunderstanding, probably because the debugger is configured to show decimal values while the sample code and datasheet show hex values for register operations.
So, when you get an int value from 0 to 255, you can write that number directly to the 8-bit register. You don't have to convert it to hex; hex is just a representation that makes life easier for humans.
What you can do, and this is good practice, is cast explicitly:
REG_VALUE = (unsigned char) int_value;
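The point about representations can be sketched in Python (the same idea carries over to mikroC): the literals below all denote one and the same integer, and masking with 0xFF is the moral equivalent of the (unsigned char) cast above.

```python
# Decimal 15, hex 0xF, octal 0o17, and binary 0b1111 all denote
# the same integer; the base only changes how it is written down.
assert 15 == 0xF == 0o17 == 0b1111

int_value = 300                 # wider than 8 bits, for illustration
reg_value = int_value & 0xFF    # keep only the low 8 bits, like the cast
print(reg_value, hex(reg_value), bin(reg_value))
```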

Related

How does an "int" represent a byte (8 bits) when an int is normally 32 or 64 bits?

I'm curious how, in the Dart programming language, a byte is represented by the int type. It is confusing because in Java, which Dart closely resembles, an int is 32 bits.
I ask because the leading Flutter BLE library, Flutter Blue, seems to handle List<int> while handling the BLE bytes.
However, according to the official documentation:
https://flutter.dev/docs/development/platform-integration/platform-channels#codec
Uint8List is used, which makes sense as the byte[] equivalent.
It seems as though the unsigned 8-bit integers are just converted to a 32-bit signed int going from Uint8List -> List<int>? i.e. decimal 2 is then converted from 00000010 to 00000000000000000000000000000010?
It seems this has ramifications if one would like to write a byte stream. One would need to cast the ints to Uint8s.
The Dart int type is a 64-bit two's complement number, except when compiled for the web, where it's a 64-bit floating-point number with no fractional part (a JavaScript number which has an integer value).
How those values are represented internally depends on optimizations; they can be represented as something smaller if the runtime system knows for sure that the value will fit. That's an optimization, and you won't be able to tell the difference.
A "byte" value is an integer in the range 0..255. You can obviously store a byte value in an int.
The most efficient way to store multiple bytes is a Uint8List, which implements List<int>. It stores each element as a single octet.
When you read a value out of a Uint8List, its byte value is represented by an int. When you store an int in the Uint8List, only the low 8 bits are stored.
So it does expanding reads and truncating writes to move values between an octet and a 64-bit value.
The size of an int in Dart is not completely predictable for local variables, as Irn mentions here. The size of the int may be reduced as an optimization if that is seen to be possible.
If you do an explicit conversion from Uint8List to List<int> that creates a new List object, the new object will have int elements of a larger size than 8 bits: possibly 32 bits, maybe 64 bits, maybe less. The compiler will choose based on what it sees. According to the int class, the default is a signed 64-bit integer.
If you are trying to get from ints to a Uint8List, each int will be truncated to 8 bits.
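A sketch of those read/write semantics in Python terms (Python's bytearray refuses out-of-range writes rather than truncating, so the masking that Dart's Uint8List does implicitly is written out here):

```python
# A list of octets, analogous to a Uint8List of length 4.
buf = bytearray(4)

value = 0x1FF                  # 511: does not fit in one byte
buf[0] = value & 0xFF          # truncating write: only the low 8 bits survive
x = buf[0]                     # widening read: comes back as a plain int

print(x)  # 255
```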
First, let's clear up one thing. An int is not inherently 32 bits or 64 bits universally. That's just a convention put in place by common languages, including Java. In C, for example, the size of int is an implementation detail that depends on the compiler, the architecture, and the size of a memory address, so it could be 8, 16, 32, or 64 bits (or, on more esoteric platforms, something else entirely, like 24 bits). So the notion that Dart is doing something "wrong" by not making int a 32-bit integer type is somewhat absurd.
Now that that is out of the way: an int in Dart is not a fixed-size data type. Like C, it depends on the platform it is running on.
Mobile: int is a 64-bit integer
Web: int is mapped to the JavaScript Number which is a 64-bit floating point (i.e. double)
Other: int can be defined as an implementation detail
And that's it. Dart has no concept of any other integral type. (i.e. There's no such thing as a byte, short, char, long, long long, etc. as primitive types.)
What you are seeing in Uint8List is an abstraction over a list of 64-bit integers to make it appear like a list of bytes. I'm not sure how it is represented internally (each "byte" could be its own int, bit flags could store 8 bytes' worth of information in a single int, or a native implementation could be doing something else entirely), but at the end of the day it doesn't really matter.
Uint8List derives from List<int>; passing a Uint8List where a List<int> is expected does not change the representation of the data.
When you read a single int from a Uint8List (e.g. with operator []), you'll get a copy of the octet widened to whatever int is.
When you write an int to a Uint8List (e.g. with operator []=), the stored value will be truncated to include only the lower 8-bits.
There shouldn't be any confusion as to how an int can be used to represent bytes because it all comes down to bits and how you store and manipulate them.
For the sake of simplicity, let us assume that an int is larger than a byte: an int is 32 bits and a byte is 8 bits. Since we are dealing with just bits at this point, you can see that it is possible for an int to contain 4 bytes, because 32/8 is 4; i.e., we can use an int to store bytes without losing information.
It seems as those the unsigned 8 bit integers are just then converted to a 32 bit signed int going from Uint8List -> List ? i.e. decimal 2 is is then converted from 00000010 to 00000000000000000000000000000010?
You could do it that way, but from what I said previously, you can store multiple bytes in an int.
Consider if you have the string Hello, World!, which comes to 13 bytes, and you want to store these bytes in a list of ints, with each int being 32 bits. We only need four ints to represent this string, because 13 * 8 is 104 bits, and four 32-bit ints can hold 128 bits of information.
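A sketch of that packing in Python, under the convention used in this answer (big-endian 4-byte chunks, with the last chunk left unpadded so the trailing '!' becomes just 33):

```python
data = b"Hello, World!"   # 13 bytes

ints = []
for i in range(0, len(data), 4):
    value = 0
    for byte in data[i:i + 4]:         # up to 4 bytes per 32-bit int
        value = (value << 8) | byte    # shift previous bytes up, append this one
    ints.append(value)

print(ints)  # [1214606444, 1865162839, 1869769828, 33]
```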
It seems this has ramifications if one would like to write a byte stream. Would need to cast the int's to Uint8's.
Not necessarily.
A byte stream consisting of the bytes 'H', 'e', 'l', 'l', 'o', ',', ' ', 'W', 'o', 'r', 'l', 'd', '!' can be written into a data structure known as a bit array. The bit array for the above stream would look like:
[01001000011001010110110001101100011011110010110000100000010101110110111101110010011011000110010000100001]
Or a list of 32 bit ints:
[1214606444,1865162839,1869769828,33]
If we wanted to convert this list of ints back to bytes, we just need to read the data in chunks of 8 bits to retrieve the original bytes. I will demonstrate it with this simple Dart program:
String readInts(List<int> bitArray) {
  final buffer = StringBuffer();
  for (var chunk in bitArray) {
    for (var offset = 24; offset >= 0; offset -= 8) {
      if (chunk < 1 << offset) {
        continue;
      }
      buffer.writeCharCode((chunk >> offset) & 0xff);
    }
  }
  return buffer.toString();
}

void main() {
  print(readInts([1214606444, 1865162839, 1869769828, 33]));
}
The same process can be followed to convert the bytes to integers: you just combine every 4 bytes to form a 32-bit integer.
Output
Hello, World!
Of course, you should not need to write such code yourself, because Dart already does this for you in the Uint8List class.

converting 64 bits binary 1D vector into corresponding floating and signed decimal number

Please let me know how to achieve this, as I tried a lot but didn't get the desired result, for the vector u=[0;1;0;0;1;1;0;0;0;0;1;1;0;1;1;1;1;0;0;0;1;0;0;1;0;1;0;0;0;0;0;1;0;1;1;0;0;0;0;0;0;0;0;0;1;1;0;1;0;1;0;1;1;0;1;1;1;1;0;0;0;0;0;0];
desired output=-108.209
Regards
Nitin
First off, I think your expectation for the correct answer is off. The first bit in a double is the sign, so if you're expecting a negative number, the first bit should be 1. Even if you had your bits backward, the leading bit would still be 0. There are all sorts of binary-to-float calculators if you search for them. Here's an example:
http://www.binaryconvert.com/result_double.html?hexadecimal=4C378941600D5BC0
To answer your question of how to do this in MATLAB: MATLAB's built-in function for converting from binary is bin2dec. However, it expects a char array as input, so you'll need to convert with num2str. The other trick here is that bin2dec only supports up to 53 bits, so you'll need to break the vector into two 32-bit numbers. The last piece of the puzzle is to use typecast to convert your pair of 32-bit integers into a double. Put it all together, and it looks like this:
bits = [0;1;0;0;1;1;0;0;0;0;1;1;0;1;1;1;1;0;0;0;1;0;0;1;0;1;0;0;0;0;0;1;0;1;1;0;0;0;0;0;0;0;0;0;1;1;0;1;0;1;0;1;1;0;1;1;1;1;0;0;0;0;0;0];
int1 = uint32(bin2dec(num2str(bits(1:32)')));
int2 = uint32(bin2dec(num2str(bits(33:64)')));
double_final = typecast([int2 int1],'double')
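The same reinterpretation can be cross-checked outside MATLAB, for instance with Python's struct module: grouping the 64 bits into bytes gives the pattern 4C378941600D5BC0, which decodes to a large positive double, confirming that -108.209 cannot come from these bits.

```python
import struct

bits = [0,1,0,0,1,1,0,0, 0,0,1,1,0,1,1,1, 1,0,0,0,1,0,0,1, 0,1,0,0,0,0,0,1,
        0,1,1,0,0,0,0,0, 0,0,0,0,1,1,0,1, 0,1,0,1,1,0,1,1, 1,1,0,0,0,0,0,0]

# Pack each group of 8 bits into one byte, most significant bit first.
raw = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, 64, 8))
print(raw.hex())  # 4c378941600d5bc0

# Reinterpret the 8 bytes as a big-endian IEEE 754 double.
value, = struct.unpack(">d", raw)
print(value)      # a huge positive number, since the sign bit is 0
```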

Encoding a value in gray code with floating point with negatives

My objective here is to be able to convert any number between -4.0 and 4.0 into a 5-bit binary string using Gray code. I also need to be able to convert back to decimal.
Thanks for any help you can provide.
If it helps, the bigger picture here is that I'm taking the weights from a neural network and mutating them as a binary string.
If you have only 5 bits available, you can only encode 2^5 = 32 different input values.
Gray code is useful when slowly changing input values should change only a single bit of the coded value at each step.
So maybe the most straightforward implementation is to map your input range -4.0 to 4.0 to the integer range 0 … 31, and then to represent these integers by a standard Gray code, which can easily be converted back to 0 … 31 and then to -4.0 to 4.0.
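A minimal sketch of that mapping in Python; the function names and the rounding choice are mine, and with only 5 bits a round trip recovers the value only to the nearest step of 8/31:

```python
def encode(x):
    """Map -4.0..4.0 to 0..31, then convert binary to Gray code."""
    n = round((x + 4.0) / 8.0 * 31)
    return n ^ (n >> 1)

def decode(g):
    """Convert Gray code back to binary, then map 0..31 to -4.0..4.0."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n / 31 * 8.0 - 4.0

g = encode(1.5)
print(f"{g:05b}")   # the 5-bit Gray code string
print(decode(g))    # within one quantization step of 1.5
```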

Discrepancy in !objsize in hex and decimal

I use the !objsize command to get the true size of an object. For example, when I run the command below, it tells me that the size of the object at address 00000003a275f218 is 0x18, which translates to 24 in decimal.
0:000> !ObjSize 00000003a275f218
sizeof(00000003a275f218) = 24 (0x18) bytes
So far so good. When I run the same command on another object, its size seems to show a discrepancy between hex and decimal.
So the size in hex is 0xafbde200. When I convert it to decimal using my calculator, it comes to 2948456960, whereas the output of the command shows the decimal size as -1346510336. Can someone help me understand why there is a difference in the sizes?
It's a bug in SOS. If you look at the source code, you'll find the method declared as
DECLARE_API(ObjSize)
It uses the following format as output
ExtOut("sizeof(%p) = %d (0x%x) bytes (%S)\n", SOS_PTR(obj), size, size, methodTable.GetName());
As you can see, it uses %d as the format specifier, which is for signed decimal integers. It should be %u for unsigned decimal integers instead, since obviously an object can't use a negative amount of memory.
If you know how to use Git, you may provide a patch.
You can use ? in WinDbg to see the unsigned value:
0:000> ? 0xafbde200
Evaluate expression: 2948456960 = 00000000`afbde200
The difference is the sign. It is interpreting the first bit (which is 1, since the first hex digit is "A") as a sign bit. The two numbers are otherwise the same.
Paste -1346510336 on calc.exe (programmer mode), switch to Hex:
FFFFFFFFAFBDE200
Paste 2948456960, switch to Hex:
AFBDE200
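The reinterpretation can also be reproduced with, say, Python's struct module: pack the value as an unsigned 32-bit integer, then reread the same four bytes as signed.

```python
import struct

raw = struct.pack("<I", 0xAFBDE200)  # the bit pattern, as unsigned 32-bit
signed, = struct.unpack("<i", raw)   # the same four bytes, read as signed

print(signed)  # -1346510336, the value printed via %d
```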

Perl pack and unpack functions

I am trying to unpack a variable containing a string received from a spectrum analyzer:
#42404?û¢-+Ä¢-VÄ¢-oÆ¢-8æ¢-bÉ¢-ôÿ¢-+Ä¢-?Ö¢-sÉ¢-ÜÖ¢-¦ö¢-=Æ¢-8æ¢-uô¢-=Æ¢-\Å¢-uô¢-?ü¢-}¦¢-=Æ¢-)...
The format is "real 32", which uses four bytes to store each value. The header #42404 means that 4 length digits follow (2404, the number of data bytes), and 2404/4 = 601 points were collected. The data starts after #42404. Now, when I receive this into a string variable,
$lp = ibqry($ud,":TRAC:DATA? TRACE1;*WAI;");
I am not sure how to convert this into an array of numbers :(... Should I use something like the following?
@dec = unpack("d", $lp);
I know this is not working, because I am not getting the right values, and the number of data points is certainly not 601...
First, you have to strip the #42404 off and hope none of the following binary data happens to be an ASCII number.
$lp =~ s{^#\d+}{};
I'm not sure what format "real 32" is, but I'm going to guess that it's a single-precision floating point number, which is 32 bits long. Looking at the pack docs, d is "double precision float", which is 64 bits. So I'd try f, which is "single precision".
@dec = unpack("f*", $lp);
Whether your data is big- or little-endian is a problem. d and f use your computer's native endianness. You may have to force endianness using the > and < modifiers (note that the modifier goes right after the letter, before the repeat count).
@dec = unpack("f>*", $lp); # big endian
@dec = unpack("f<*", $lp); # little endian
If the first 4 encodes the number of remaining digits (2404) before the floats, then something like this might work:
my @dec = unpack "x a/x f>*", $lp;
The x skips the leading #, the a/x reads one digit and skips that many characters after it, and the f>* parses the remaining string as a sequence of 32-bit big-endian floats. (If the output looks weird, try using f<* instead.)
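For comparison, here is a sketch of the same parsing in Python, assuming the instrument sends an IEEE 488.2 definite-length block ('#', one digit giving how many length digits follow, the byte count, then big-endian 32-bit floats). The parse_block helper and the synthetic demo data are my own, not part of the original answer.

```python
import struct

def parse_block(data):
    assert data[:1] == b"#"
    ndigits = int(data[1:2])              # e.g. 4 for "#42404..."
    nbytes = int(data[2:2 + ndigits])     # e.g. 2404 data bytes
    payload = data[2 + ndigits:2 + ndigits + nbytes]
    return struct.unpack(">%df" % (nbytes // 4), payload)  # big-endian floats

# Tiny synthetic block: "#18" followed by two big-endian floats.
demo = b"#18" + struct.pack(">2f", 1.5, -2.0)
print(parse_block(demo))  # (1.5, -2.0)
```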