Convert 16-bit signed (x2) to 32-bit unsigned - type-conversion

I've got a problem with a Modbus device:
The device sends data using the Modbus protocol.
I read 4 bytes from the Modbus communication that represent a pressure value.
I have to convert these 4 bytes into an unsigned 32-bit integer.
Here is the Modbus documentation:
COMBINING 16bit REGISTERS TO 32bit VALUE
Pressure registers 2 & 3 in SENSOR INPUT REGISTER MAP of this guide are stored as u32 (UNSIGNED 32bit INTEGER)
You can calculate pressure manually :
1) Determine what display you have - if register values are positive skip to step 3.
2) Convert negative register 2 & 3 values from Signed to Unsigned (note: 65536 = 2^16):
(reg 2 value) + 65536 = 35464 ; (reg 3 value) + 65536 = 1
3) Shift register #3 as this is the upper 16 bits: 65536 * (converted reg 3 value) = 65536
4) Put the two 16-bit numbers together: (converted reg 2 value) + (shifted reg 3 value) = 35464 + 65536 = 101000 Pa
Pressure information is then 101000 Pascal.
I don't find it very clear... For example, we aren't given the 4 bytes that produce this calculation.
So, if anybody has a formula to convert my bytes into a 32-bit unsigned int, it would be very helpful.

You should be able to read your bytes in some kind of numeric representation (hex, dec, bin, oct...).
Let's assume you're receiving the following byte frame:
in hex:
0x00, 0x06, 0x68, 0xA0
in bin:
0000 0000, 0000 0110, 0110 1000, 1010 0000
All of these are different representations of the same 4-byte value.
Another thing that you should know is the byte order (endianness):
If your frame is transmitted in big endian, you read the bytes in the order you have them (so 0x00, 0x06, 0x68, 0xA0 is correct).
If the frame is transmitted in little endian, you need to perform the following operation:
Switch the first 2 bytes with the last 2:
0x68, 0xA0, 0x00, 0x06
and then switch the position between the first and the second byte and the third and the fourth byte:
0xA0, 0x68, 0x06, 0x00
so if your frame is in little endian, the correct frame will be 0xA0, 0x68, 0x06, 0x00.
If you don't know the endianness, assume it's big endian.
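If you need to do that reordering in code, here is a minimal C sketch of the two swap steps described above (the frame values are the example ones from this answer; the buffer names are my own):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* example frame as it would arrive in little endian */
    uint8_t frame[4] = { 0xA0, 0x68, 0x06, 0x00 };

    /* step 1: switch the first 2 bytes with the last 2 */
    uint8_t tmp[4] = { frame[2], frame[3], frame[0], frame[1] };

    /* step 2: swap the bytes inside each 16-bit half
       (the two steps together reverse the 4 bytes, and the operation is
       its own inverse, so it converts in either direction) */
    uint8_t big_endian[4] = { tmp[1], tmp[0], tmp[3], tmp[2] };

    /* prints 00 06 68 A0, i.e. the big-endian order used below */
    printf("%02X %02X %02X %02X\n",
           big_endian[0], big_endian[1], big_endian[2], big_endian[3]);
    return 0;
}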
Now you simply have to 'put' your values together:
0x00, 0x06, 0x68, 0xA0 will become 0x000668A0
or
0000 0000, 0000 0110, 0110 1000, 1010 0000 will become 00000000000001100110100010100000
Once you have your hex or bin value, you can convert it to an integer.
Here you can find an interesting tool for converting HEX to float, uint32, int32 and int16 in any endianness.
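If you'd rather do the combination in code than with a converter tool, here is a minimal C sketch (assuming the big-endian byte order 0x00, 0x06, 0x68, 0xA0 from above; variable names are my own):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* the 4 received bytes, most significant byte first */
    uint8_t b[4] = { 0x00, 0x06, 0x68, 0xA0 };

    /* shift each byte into position and OR them together */
    uint32_t value = ((uint32_t)b[0] << 24) |
                     ((uint32_t)b[1] << 16) |
                     ((uint32_t)b[2] << 8)  |
                      (uint32_t)b[3];

    /* in the register terms of the Modbus documentation this is
       value = 65536 * (reg 3) + (reg 2), reg 3 being the upper 16 bits */
    printf("%u\n", (unsigned)value);  /* prints 420000 for this example frame */
    return 0;
}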
TL;DR
If you can use Python, you should use the struct module:
import struct

frame = [0x00, 0x06, 0x68, 0xA0]  # or [0, 6, 104, 160] in dec, or [0b00000000, 0b00000110, 0b01101000, 0b10100000] in bin
# '>L' means big-endian ('>') unsigned 32-bit integer ('L')
print(struct.unpack('>L', bytes(frame))[0])

Why did memset not fill the memory with zeros?

Allocated an array for 10000 bits = 1250 bytes (10000/8):
mov edi, 1250
call malloc
tested the pointer:
cmp rax, 0
jz .error ; error handling at label down the code
memory was allocated:
(gdb) p/x $rax
$3 = 0x6030c0
attempted to fill that allocated memory with zeros:
mov rdi, rax
xor esi, esi
mov edx, 1250 ; 10000 bits
call memset
checked first byte:
(gdb) p/x $rax
$2 = 0x6030c0
(gdb) x/xg $rax + 0
0x6030c0: 0x0000000000000000
checked the last byte (0 = first byte, 1249 = last byte):
(gdb) p/x $rax + 1249
$3 = 0x6035a1
(gdb) x/xg $rax + 1249
0x6035a1: 0x6100000000000000
SOLVED QUESTION
Should have typed x/1c $rax + 1249
You interpreted the memory as a 64-bit integer, but you forgot that Intel is little-endian, so the bytes appear reversed.
0x6100000000000000 is the value that the CPU reads when de-serializing the memory at this address. Since it's little endian, the 0x61 byte is last in memory (not very convenient for dumping memory in this format, unless you have a big-endian architecture).
Use x/10bx $rax + 1249 and you'll see that it's zero at the correct location. The rest is garbage (it happens to be zero for a while, then garbage):
0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x61
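To see the same effect outside of gdb, here is a small C sketch (my own illustration; the trailing 0x61 garbage byte mirrors the dump above) that prints the same 8 bytes once as a 64-bit integer and once byte by byte:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* mimic the memory at $rax + 1249: the last allocated byte is zero,
       followed by whatever garbage lies past the allocation */
    uint8_t mem[8] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x61 };

    uint64_t giant_word;
    memcpy(&giant_word, mem, sizeof giant_word);

    /* on a little-endian machine this prints 0x6100000000000000,
       just like the x/xg dump, even though the byte at offset 0 is zero */
    printf("as 64-bit integer: 0x%016llx\n", (unsigned long long)giant_word);

    printf("byte by byte:     ");
    for (size_t i = 0; i < sizeof mem; i++)
        printf(" 0x%02x", mem[i]);
    printf("\n");
    return 0;
}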

Two-byte report count for HID report descriptor

I'm trying to create an HID report descriptor for USB 3.0 with a report count of 1024 bytes.
The documentation at usb.org for HID does not seem to mention a two-byte report count. Nonetheless, I have seen some people use 0x96 (instead of 0x95) to enter a two-byte count, such as:
0x96, 0x00, 0x02, // REPORT_COUNT (512)
which was taken from here:
Custom HID device HID report descriptor
Likewise, from this same example, 0x26 is used for a two-byte logical maximum.
Where do these 0x96 and 0x26 prefixes come from? I don't see any documentation for them.
REPORT_COUNT is defined in the Device Class Definition for HID 1.11 document in section 6.2.2.7 Global Items on page 36 as:
Report Count 1001 01 nn Unsigned integer specifying the number of data
fields for the item; determines how many fields are included in the
report for this particular item (and consequently how many bits are
added to the report).
The nn in the above code is the item length indicator (bSize) and is defined earlier in section 6.2.2.2 Short Items as:
bSize Numeric expression specifying size of data:
0 = 0 bytes
1 = 1 byte
2 = 2 bytes
3 = 4 bytes
Rather confusingly, the valid values of bSize are listed in decimal. So, in binary, the bits for nn would be:
00 = 0 bytes (i.e. there is no data associated with this item)
01 = 1 byte
10 = 2 bytes
11 = 4 bytes
Putting it all together for REPORT_COUNT, which is an unsigned integer, the following alternatives could be specified:
1001 01 00 = 0x94 = REPORT_COUNT with no length (can only have value 0?)
1001 01 01 = 0x95 = 1-byte REPORT_COUNT (can have a value from 0 to 255)
1001 01 10 = 0x96 = 2-byte REPORT_COUNT (can have a value from 0 to 65535)
1001 01 11 = 0x97 = 4-byte REPORT_COUNT (can have a value from 0 to 4294967295)
Similarly, for LOGICAL_MAXIMUM, which is a signed integer (usually, there is an exception):
0010 01 00 = 0x24 = LOGICAL_MAXIMUM with no length (can only have value 0?)
0010 01 01 = 0x25 = 1-byte LOGICAL_MAXIMUM (can have a value from -128 to 127)
0010 01 10 = 0x26 = 2-byte LOGICAL_MAXIMUM (can have a value from -32768 to 32767)
0010 01 11 = 0x27 = 4-byte LOGICAL_MAXIMUM (can have a value from -2147483648 to 2147483647)
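Putting the 2-byte forms to use, a descriptor fragment for the 1024-byte report from the question could look like the sketch below. This is only an illustration (the surrounding items are assumptions, not taken from any particular device); note that the item data bytes themselves are stored little-endian:
#include <stdint.h>

/* Hypothetical fragment of a HID report descriptor (not a complete
   descriptor). It shows the 2-byte item prefixes 0x26 (LOGICAL_MAXIMUM)
   and 0x96 (REPORT_COUNT) with their little-endian data bytes. */
static const uint8_t report_desc_fragment[] = {
    0x75, 0x08,        /* REPORT_SIZE (8)       - 1-byte data (bSize = 01) */
    0x15, 0x00,        /* LOGICAL_MINIMUM (0)   - 1-byte data              */
    0x26, 0xFF, 0x00,  /* LOGICAL_MAXIMUM (255) - 2-byte data: 0x00FF      */
    0x96, 0x00, 0x04,  /* REPORT_COUNT (1024)   - 2-byte data: 0x0400      */
    0x81, 0x02,        /* INPUT (Data,Var,Abs)  - 1-byte data              */
};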
The specification is unclear on what value a zero-length item defaults to in general. It only mentions, at the end of section 6.2.2.4 Main Items, that MAIN item types and, within that type, INPUT item tags, have a default value of 0:
Remarks - The default data value for all Main items is zero (0).
- An Input item could have a data size of zero (0) bytes. In this case the value of
each data bit for the item can be assumed to be zero. This is functionally
identical to using an item tag that specifies a 4-byte data item followed by four
zero bytes.
It would be reasonable to assume 0 as the default for other item types too, but for REPORT_COUNT (a GLOBAL item) a value of 0 is not really a sensible default (IMHO). The specification doesn't really say.

SPI fails to read first 6 bytes

I'm having a lot of issues with the SPI module on my STM32F051 MCU. I've got it configured as a master to drive a slave flash memory module (that doesn't really matter).
I'm trying to read 8 bytes from the memory; this is how the 'read data' message is structured:
The first 4 bytes of the message are transmitted, the next 8 are received.
The first byte is the 'read data' opcode; the three following bytes are the data address, which is 0 in this case.
Code:
memset(out, 0x00, 256);
memset(in, 0x00, 256);
out[0] = OPCODE_READ;
out[1] = 0x00;
out[2] = 0x00;
out[3] = 0x00;
uint32_t len = 4 + size; // size == 8
spi_select(M25P80);
HAL_SPI_TransmitReceive(&hspi1, out, in, len, TIMEOUT);
delay_ms(BYTE_SPEED_MS * 5); // Needed because ^ finishes before physically
// transmitting the data. Nevermind the 5, it
// was picked experimentally
spi_deselect(M25P80);
Signal (yellow - clock, red - miso):
At 488 bits/s, transmitting 4 bytes takes 4 * 1E3 / (488 / 8) ≈ 65.6 ms. Then the reception starts. The memory starts transmitting [0xFF...0xFF] right away, but the contents of the 'in' buffer are:
[0x00 0x00 0x00 0x00] [0x00 0x00 0x00 0x00 0x00] 0xFF 0xFF 0x00...0x00
 ^ zero because this   ^ should be 0xFF           ^ correct data
   is the part where
   data was being sent
   to the memory
So first six bytes of data are just lost. Am I the only one who's having such a hard time with STM's SPI module?
EDIT:
I've gotten myself a different eval board with a slightly different MCU (STM32F030) and it gets even weirder:
[0x02 0x02 0x02 0x02]
0x00 0x02 0x00 0x00 0xFF 0x00 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0x00...0x00
Although I must mention that I'm using a different compiler with this MCU.
EDIT 2:
The way I partially got it to work was by using 16-bit mode with SPI. This fixed this particular bug, but there are more similar oddities with the STM32's SPI.
EDIT 3:
SPI initialisation code:
void MX_SPI1_Init(void)
{
    hspi1.Instance = SPI1;
    hspi1.Init.Mode = SPI_MODE_MASTER;
    hspi1.Init.Direction = SPI_DIRECTION_2LINES;
    hspi1.Init.DataSize = SPI_DATASIZE_16BIT;
    hspi1.Init.CLKPolarity = SPI_POLARITY_LOW;
    hspi1.Init.CLKPhase = SPI_PHASE_1EDGE;
    hspi1.Init.NSS = SPI_NSS_SOFT;
    hspi1.Init.BaudRatePrescaler = SPI_BAUDRATEPRESCALER_2;
    hspi1.Init.FirstBit = SPI_FIRSTBIT_MSB;
    hspi1.Init.TIMode = SPI_TIMODE_DISABLED;
    hspi1.Init.CRCCalculation = SPI_CRCCALCULATION_DISABLED;
    hspi1.Init.NSSPMode = SPI_NSS_PULSE_DISABLED;
    HAL_SPI_Init(&hspi1);
}
Are you sure that the initialization of the SPI is right?
Maybe your clock polarity or phase settings do not match between master and slave?
Take a look at the clock settings.
Please show your SPI initialization code!
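As a quick reference for that check, the CPOL/CPHA pairs map onto the usual SPI mode numbers as in the sketch below. This is only an illustration using the HAL field names from the init code above; which mode the slave expects has to come from its datasheet (for the M25P80 family it is typically mode 0 or mode 3):
#include "stm32f0xx_hal.h"  /* assumption: same STM32F0 HAL as in the question */

/* Illustrative helpers only: set the two fields that define the SPI mode. */
static void spi_use_mode0(SPI_HandleTypeDef *hspi)
{
    hspi->Init.CLKPolarity = SPI_POLARITY_LOW;   /* CPOL = 0 */
    hspi->Init.CLKPhase    = SPI_PHASE_1EDGE;    /* CPHA = 0 -> SPI mode 0 */
}

static void spi_use_mode3(SPI_HandleTypeDef *hspi)
{
    hspi->Init.CLKPolarity = SPI_POLARITY_HIGH;  /* CPOL = 1 */
    hspi->Init.CLKPhase    = SPI_PHASE_2EDGE;    /* CPHA = 1 -> SPI mode 3 */
}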

How to set sockaddr_in6::sin6_addr byte order to network byte order?

I'm developing a network application using the socket APIs.
I want to set the byte order of the sin6_addr field of the sockaddr_in6 structure.
For 16-bit or 32-bit fields it is simple: use htons or htonl:
// IPv4
sockaddr_in addr;
addr.sin_port = htons(123);
addr.sin_addr.s_addr = htonl(123456);
But for this 128-bit field, I don't know how to set the byte order to network byte order:
// IPv6
sockaddr_in6 addr;
addr.sin6_port = htons(123);
addr.sin6_addr.s6_addr = ??? // 16 bytes with network byte order but how to set?
Some answers suggest using htons 8 times (2 * 8 = 16 bytes) or htonl 4 times (4 * 4 = 16 bytes), but I don't know which way is correct.
Thanks.
The s6_addr member of struct in6_addr is defined as:
uint8_t s6_addr[16];
Since it is an array of uint8_t, rather than being a single 128-bit integer type, the issue of endianness does not arise: you simply copy from your source uint8_t [16] array to the destination. For example, to copy in the address 2001:888:0:2:0:0:0:2 you would use:
static const uint8_t myaddr[16] = { 0x20, 0x01, 0x08, 0x88, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 };
memcpy(addr.sin6_addr.s6_addr, myaddr, sizeof myaddr);
The usual thing would be to use one of the hostname lookup routines and use the result of that, which is already in network byte order. How come you're dealing with hardcoded numeric IP addresses at all?
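For example, if the address arrives as text, inet_pton fills in s6_addr already in network byte order. A minimal sketch (the address literal is just the example one from above):
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void)
{
    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof addr);
    addr.sin6_family = AF_INET6;
    addr.sin6_port = htons(123);

    /* inet_pton parses the textual address and writes the 16 bytes of
       sin6_addr in network byte order; no htonl/htons juggling needed */
    if (inet_pton(AF_INET6, "2001:888:0:2::2", &addr.sin6_addr) != 1) {
        fprintf(stderr, "invalid address\n");
        return 1;
    }
    return 0;
}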

UTF-8 & Unicode, what's with 0xC0 and 0x80?

I've been reading about Unicode and UTF-8 in the last couple of days, and I often come across a bitwise comparison similar to this:
int strlen_utf8(char *s)
{
    int i = 0, j = 0;
    while (s[i])
    {
        if ((s[i] & 0xc0) != 0x80) j++;
        i++;
    }
    return j;
}
Can someone clarify the comparison with 0xc0 and checking if it's the most significant bit?
Thank you!
EDIT: ANDed, not comparison, used the wrong word ;)
It's not a comparison with 0xc0, it's a bitwise AND operation with 0xc0.
The bit mask 0xc0 is 11 00 00 00 so what the AND is doing is extracting only the top two bits:
ab cd ef gh
AND 11 00 00 00
-- -- -- --
= ab 00 00 00
This is then compared to 0x80 (binary 10 00 00 00). In other words, the if statement is checking to see if the top two bits of the value are not equal to 10.
"Why?", I hear you ask. Well, that's a good question. The answer is that, in UTF-8, all bytes that begin with the bit pattern 10 are subsequent bytes of a multi-byte sequence:
UTF-8
Range              Encoding  Binary value
-----------------  --------  --------------------------
U+000000-U+00007f  0xxxxxxx  0xxxxxxx
U+000080-U+0007ff  110yyyxx  00000yyy xxxxxxxx
                   10xxxxxx
U+000800-U+00ffff  1110yyyy  yyyyyyyy xxxxxxxx
                   10yyyyxx
                   10xxxxxx
U+010000-U+10ffff  11110zzz  000zzzzz yyyyyyyy xxxxxxxx
                   10zzyyyy
                   10yyyyxx
                   10xxxxxx
So, what this little snippet is doing is going through every byte of your UTF-8 string and counting up all the bytes that aren't continuation bytes (i.e., it's getting the length of the string, as advertised). See this wikipedia link for more detail and Joel Spolsky's excellent article for a primer.
An interesting aside by the way. You can classify bytes in a UTF-8 stream as follows:
With the high bit set to 0, it's a single byte value.
With the two high bits set to 10, it's a continuation byte.
Otherwise, it's the first byte of a multi-byte sequence and the number of leading 1 bits indicates how many bytes there are in total for this sequence (110... means two bytes, 1110... means three bytes, etc).
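That classification can be written down directly; here is a small C sketch of it (the sample bytes encode "Aé€" and are my own example, not from the question):
#include <stdio.h>

/* Classify one byte of a UTF-8 stream according to the three rules above. */
static const char *classify_utf8_byte(unsigned char b)
{
    if ((b & 0x80) == 0x00) return "single-byte value (ASCII)";
    if ((b & 0xc0) == 0x80) return "continuation byte";
    if ((b & 0xe0) == 0xc0) return "first byte of a 2-byte sequence";
    if ((b & 0xf0) == 0xe0) return "first byte of a 3-byte sequence";
    if ((b & 0xf8) == 0xf0) return "first byte of a 4-byte sequence";
    return "invalid in UTF-8";
}

int main(void)
{
    /* UTF-8 encoding of "Aé€": one single byte, a 2-byte and a 3-byte sequence */
    const unsigned char sample[] = { 0x41, 0xC3, 0xA9, 0xE2, 0x82, 0xAC };

    for (size_t i = 0; i < sizeof sample; i++)
        printf("0x%02X: %s\n", sample[i], classify_utf8_byte(sample[i]));
    return 0;
}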