When reading data from a socket it is important either to use a message terminator symbol or to put the packet size information at the beginning of the message.
If a terminator symbol is used and a binary message is sent, there is no guarantee that the terminator symbol will not appear in the middle of the message (unless some special encoding is used).
On the other hand, if size information is attached: the size is unsigned, and if one byte is used for it, messages longer than 255 bytes cannot be transferred. If a 4-byte integer is used, it is not even guaranteed that the 4 bytes will arrive as a whole; just 2 bytes of the size information may have arrived, and code that assumes the whole size has arrived may use those 2 bytes and discard the rest of the integer. Conversely, waiting for 4 bytes to become available in the read buffer may block forever if only 3 bytes ever arrive (e.g. if the total stream is 7 bytes or 4077 bytes long).
Here are two possible approaches:
1. sizeInfo separator chunk: read until the separator is found; once found, read until sizeInfo bytes have passed.
2. Keep an unreadyBytes counter initialized to 4; upon receiving the sizeInfo, change it accordingly.
Which one of these two is safer to use? Please criticize.
Edit: My central question is how to make sure that the size bytes have arrived properly, assuming messages are of variable size.
If you have a 4-byte length descriptor you should always read at least 4 bytes, because the sender should have written these bytes at the start of every message your server receives. If you can't get them, there has probably been a problem in the transmission. I really can't understand your problem.
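That said, if the worry is that the 4 length bytes may arrive split across several reads, here is a minimal sketch of a helper that loops until exactly n bytes have been read (assuming a blocking POSIX TCP socket; the name read_exact is only illustrative):

#include <stddef.h>
#include <unistd.h>

/* Read exactly n bytes from fd, looping over short reads.
   Returns 0 on success, -1 on error or if the peer closes early. */
static int read_exact(int fd, void *buf, size_t n)
{
    char *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0)
            return -1;   /* error, or connection closed before n bytes arrived */
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

With such a helper you first read the 4 size bytes, then read exactly that many payload bytes; a partially arrived length field simply makes the first call keep reading until the rest shows up (or fail if the peer disconnects).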
Anyway, I suggest not using any separator chunk.
Put a header on the data blocks you are transmitting and use a buffer to reconstruct the packet flow.
You must at least read the header of a packet to determine its length.
You can define a basic structure for a packet:
#include <stdint.h>

typedef uint32_t uint32; /* fixed-width type for the length field */

struct packet{
    uint32 id;                      /* length descriptor */
    char payload[MAX_PAYLOAD_SIZE];
};
Then you declare a buffer to store the incoming data:
struct packet buffer;
Then you can read the data from the socket:
int n;
n = read(newsocket, &buffer, sizeof(uint32) + MAX_PAYLOAD_SIZE);
read returns the number of bytes read. If you read exactly one packet from the sender, then n equals sizeof(uint32) plus the payload length stored in id. Otherwise you may have read more data (e.g. the sender sent you more packets). If you are receiving a stream of data split into units (represented by packet structures), then you may use an array of packet to store the complete packets received and a temporary buffer to manage incoming fragments.
Something like:
struct packet buffer[MAX_PACKET_STORED];
char temp_buffer[MAX_PAYLOAD_SIZE + 4];
int n;
n = read(newsocket, temp_buffer, sizeof(temp_buffer));
// here suppose we have received a packet of 100 bytes of payload + 32 bits of length,
// plus a 100-byte fragment of the next packet.
// then:
int first_pack_len, second_pack_len, second_pack_data_available_in_buffer;
first_pack_len = *((uint32 *)&temp_buffer[0]); // retrieve the first packet's payload length
memcpy(&buffer[0], temp_buffer, first_pack_len + sizeof(uint32)); // store the first packet into the array
second_pack_data_available_in_buffer = n - (first_pack_len + sizeof(uint32)); // total bytes read minus the length of the first packet
second_pack_len = *((uint32 *)&temp_buffer[first_pack_len + sizeof(uint32)]);
I hope I have been clear enough. But maybe I'm misunderstanding your question.
Also pay attention to the fact that the two communicating end-systems could have different endianness, so it's a better idea to use the htonl/ntohl functions on the length value when sending/receiving it. (But this is another issue.)
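A small sketch of that idea, assuming the 4-byte length header discussed above (the helper names encode_length/decode_length are only illustrative):

#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <string.h>

/* Sender side: store the payload length in network byte order. */
static void encode_length(uint32_t payload_len, unsigned char header[4])
{
    uint32_t net = htonl(payload_len);
    memcpy(header, &net, sizeof(net));
}

/* Receiver side: convert the 4 header bytes back to host byte order. */
static uint32_t decode_length(const unsigned char header[4])
{
    uint32_t net;
    memcpy(&net, header, sizeof(net));
    return ntohl(net);
}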
I found a strange mismatch in the reference manual for the OTP memory size:
According to the manual (RM0385, page 78), the OTP memory starts at address 0x1FF0 F000 and ends at address 0x1FF0 F41F (in AXIM mode), and it is declared as a memory area with a size of 1024 bytes.
But if I calculate 0x1FF0F41F - 0x1FF0F000 + 1 I get a total of 1056 bytes of OTP memory?!
Same addresses are set in the latest CMSIS header V1.17.0 (https://raw.githubusercontent.com/STMicroelectronics/STM32CubeF7/master/Drivers/CMSIS/Device/ST/STM32F7xx/Include/stm32f746xx.h):
#define FLASH_OTP_BASE 0x1FF0F000UL
#define FLASH_OTP_END 0x1FF0F41FUL
Is this a typo in the manual or is my calculation wrong?
I don't think it's a typo, and your calculation is correct.
My explanation is as follows:
Chapter 3.6 of the same Reference Manual shows the organization of the one-time programmable (OTP) part of the OTP area, and explains:
The OTP area is divided into 16 OTP data blocks of 64 bytes and one lock OTP block of 16 bytes. The OTP data and lock blocks cannot be erased. The lock block contains 16 bytes LOCKBi (0 ≤ i ≤ 15) to lock the corresponding OTP data block (blocks 0 to 15). (...)
So it consists of 1024 data bytes and 16 lock bytes. I guess the lock bytes are implemented as special "registers" and are not considered part of the sector. It's more a matter of definition and does not really matter that much.
There probably is a 1024-byte sector to write the (one-time programmable) data to, but the address range is slightly larger because the lock bits have to be addressed as well.
I'm trying to write a driver for the MPU-6050 and I'm stuck on how to proceed regarding reading the raw accelerometer/gyroscope/temperature readings. For instance, the MPU-6050 has the accelerometer X readings in 2 registers: ACCEL_XOUT[15:8] at address 0x3B and ACCEL_XOUT[7:0] at address 0x3C. Of course to read the raw value I need to read both registers and put them together.
BUT
In the description of the registers (in the register map and description sheet, https://invensense.tdk.com/wp-content/uploads/2015/02/MPU-6000-Register-Map1.pdf) it says that to guarantee readings from the same sampling instant I must use burst reads, because as soon as an idle I2C bus is detected, the sensor registers are refreshed with new data from a new sampling instant. The datasheet shows the simple I2C burst read sequence.
However, this approach (to the best of my understanding) would only read the ACCEL_X registers from the same sampling instant if auto-increment were supported (such that the first DATA in that sequence would be from ACCEL_XOUT[15:8] at address 0x3B and the second DATA would be from ACCEL_XOUT[7:0] at address 0x3C). But the datasheet (https://invensense.tdk.com/wp-content/uploads/2015/02/MPU-6000-Datasheet1.pdf) only mentions that I2C burst writes support the auto-increment feature. Without auto-increment on the I2C read side, how would I go about reading two different registers whilst maintaining the same sampling instant?
I also recognize that I could use the sensor's FIFO feature or the interrupt to accomplish what I'm after, but (for my own curiosity) I would like a solution that didn't rely on either.
I also have the same problem; it looks like the documentation on this topic is incomplete.
Reading a single sample
I think you can burst read the ACCEL_*OUT_*, TEMP_OUT_* and GYRO_*OUT_* registers. In fact I tried reading the data one register at a time, but I got frequent data corruption.
Then, just to try, I requested 6 bytes from ACCEL_XOUT_H, 6 bytes from GYRO_XOUT_H and 2 bytes from TEMP_OUT_H and... it worked! No more data corruption!
I think they simply forgot to mention this in the register map.
How to
Here is some example code that works in the Arduino environment.
These are the functions I use; they are not very safe, but they work for my project:
////////////////////////////////////////////////////////////////
inline void requestBytes(byte SUB, byte nVals)
{
    Wire.beginTransmission(SAD);
    Wire.write(SUB);
    Wire.endTransmission(false);
    Wire.requestFrom(SAD, nVals);
    while (Wire.available() == 0);
}
////////////////////////////////////////////////////////////////
inline byte getByte(void)
{
    return Wire.read();
}
////////////////////////////////////////////////////////////////
inline void stopRead(void)
{
    Wire.endTransmission(true);
}
////////////////////////////////////////////////////////////////
byte readByte(byte SUB)
{
    requestBytes(SUB, 1);
    byte result = getByte();
    stopRead();
    return result;
}
////////////////////////////////////////////////////////////////
void readBytes(byte SUB, byte* buff, byte count)
{
    requestBytes(SUB, count);
    for (int i = 0; i < count; i++)
        buff[i] = getByte();
    stopRead();
}
At this point, you can simply read the values in this way:
// ACCEL_XOUT_H
// burst read the registers using auto-increment:
byte data[6];
readBytes(ACCEL_XOUT_H, data, 6);
// convert the data:
acc_x = (int16_t)((data[0] << 8) | data[1]); // combine high and low byte into a signed 16-bit value
// ...
Warning!
Looks like this cannot be done for other registers. For example, to read the FIFO_COUNT_* I have to do this (otherwise I get incorrect results):
uint16_t FIFO_size(void)
{
    byte bytes[2];
    // this does not work:
    //readBytes(FIFO_COUNT_H, bytes, 2);
    bytes[0] = readByte(FIFO_COUNT_H);
    bytes[1] = readByte(FIFO_COUNT_L);
    return unisci_bytes(bytes[0], bytes[1]); // helper that joins the high and low byte
}
Reading the FIFO
Looks like the FIFO works differently: you can burst read by simply requesting multiple bytes from the FIFO_R_W register and the MPU6050 will give you the bytes in the FIFO without incrementing the register.
I found this example where they use I2Cdev::readByte(SAD, FIFO_R_W, buffer) to read a given number of bytes from the FIFO and if you look at I2Cdev::readByte() (here) it simply requests N bytes from the FIFO register:
// ... send FIFO_R_W and request N bytes ...
for(...; ...; count++)
data[count] = Wire.receive();
// ...
How to
This is simple since the FIFO_R_W does not auto-increment:
byte data[12];

void loop() {
    // ...
    readBytes(FIFO_R_W, data, 12); // <- replace 12 with your burst size
    // ...
}
Warning!
Using FIFO_size() is very slow!
Also, my advice is to use a 400 kHz I2C frequency, which is the MPU6050's maximum speed.
Hope it helps ;)
As Luca says, the burst read semantic seems to be different depending on the register the read operation starts at.
Reading consistent samples
To read a consistent set of raw data values, you can use the method I2C.readRegister(int, ByteBuffer, int) with register number 59 (ACCEL_XOUT[15:8]) and a length of 14 to read all the sensor data (ACCEL, TEMP, and GYRO) in one operation and get consistent data.
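For comparison only, since the answer above refers to a different API (I2C.readRegister): a sketch of the same 14-byte consistent read using the Linux userspace i2c-dev interface. The I2C_RDWR ioctl sends the register address and performs the read in a single transaction with a repeated start, so no STOP (and hence no register refresh) occurs in between. The bus path /dev/i2c-1 and the slave address 0x68 are assumptions.

#include <fcntl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define MPU6050_ADDR  0x68  /* assumption: AD0 low; use 0x69 if AD0 is high */
#define ACCEL_XOUT_H  0x3B  /* register 59: start of the ACCEL/TEMP/GYRO block */

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);   /* bus number is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    uint8_t reg = ACCEL_XOUT_H;
    uint8_t data[14];

    struct i2c_msg msgs[2] = {
        { .addr = MPU6050_ADDR, .flags = 0,        .len = 1,           .buf = &reg },
        { .addr = MPU6050_ADDR, .flags = I2C_M_RD, .len = sizeof data, .buf = data },
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

    /* Both messages run in one transaction (repeated start, no STOP),
       so all 14 bytes belong to the same sampling instant. */
    if (ioctl(fd, I2C_RDWR, &xfer) < 0) { perror("I2C_RDWR"); close(fd); return 1; }

    int16_t acc_x = (int16_t)((data[0] << 8) | data[1]);
    printf("ACCEL_XOUT = %d\n", acc_x);

    close(fd);
    return 0;
}

The Arduino code above achieves the same effect with Wire.endTransmission(false) followed by Wire.requestFrom(), which likewise keeps the transaction open via a repeated start.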
Burst read of FIFO data
However, if you use the FIFO buffer of the chip, you can start the burst read with the same method signature on register 116 (FIFO_R_W) to read the given amount of data from the chip-internal FIFO buffer. Doing so, you must keep in mind that there is a limit on the number of bytes that can be read in one burst operation. If I'm interpreting https://github.com/joan2937/pigpio/blob/c33738a320a3e28824af7807edafda440952c05d/pigpio.c#L3914 right, a maximum of 31 bytes can be read in a single burst operation.
I have a fixed message protocol to work with for a COM device. How do I specifically declare that I do not have a termination character when I write to the serial port?
If I declare that I do not have any termination character from the serial port, is it necessary to specify that in the Serial.readline() as well?
import serial
ser = serial.Serial(
    port='COM4',
    baudrate=115200,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=None,
    xonxoff=False,
    rtscts=False,
    write_timeout=None,
    dsrdtr=False,
    inter_byte_timeout=None,
    exclusive=None)
If you don't have a line termination character and your message is of fixed length, it makes more sense to read a fixed number of bytes from the port.
If, for instance, your message is 100 bytes you can do:
ser.read(100)  # with a timeout set, reads up to 100 bytes; with timeout=None, blocks until 100 bytes arrive
Note that if a timeout is set, this call may return fewer than 100 bytes (whatever arrived before the timeout expired: that might be 99, 5, or none), whereas with timeout=None it blocks until all 100 bytes have been received.
With this in mind, it is recommended that you check that you received the complete message by comparing the number of bytes received with the expected message length.
You can also define a timeout with timeout=1 and do something like this:
import time

data = b''
timeout = time.time() + 3.0
while ser.inWaiting() or time.time() - timeout < 0.0:
    if ser.inWaiting() > 0:
        data += ser.read(ser.inWaiting())
        timeout = time.time() + 3.0
print(data)
This will make sure you read the whole message and that nothing else has arrived in the buffer, by waiting for 3 seconds after the last data was read.
I am using C_Decrypt with the CKM_AES_CBC_PAD mechanism. I know that my ciphertext which is 272 bytes long should actually decrypt to 256 bytes, which means a full block of padding was added.
I know that according to the standard when invoking C_Decrypt with a NULL output buffer the function may return an output length which is somewhat longer than the actual required length, in particular when padding is used this is understandable, as the function can't know how many padding bytes are in the final block without carrying out the actual decryption.
So the question is: if I know that I should get exactly 256 bytes back, as in the scenario explained above, does it make sense that I still get a CKR_BUFFER_TOO_SMALL error despite passing a 256-byte buffer? (To make it clear: I am indicating this as the length of the output buffer in the appropriate output-buffer-length parameter; see the parameters of C_Decrypt.)
I am encountering this behavior with a Safenet Luna device and am not sure what to make of it. Is it my code's fault for not querying for the length first by passing NULL in the output buffer, or is this a bug on the HSM/PKCS11 library side?
One more thing I should perhaps mention: when I provide a 272 (256+16) byte output buffer, the call succeeds and I get back my expected plaintext, but also the padding block, i.e. 16 final bytes with the value 0x10. However, the output length is updated correctly to 256, not 272. This also proves that I am not accidentally using CKM_AES_CBC instead of CKM_AES_CBC_PAD, which I suspected for a moment as well :)
I have used the CKM.AES_CBC_PAD padding mechanism with C_Decrypt in the past. You have to make 2 calls to C_Decrypt (1st: to get the size of the plain text, 2nd: the actual decryption). See the documentation here, which talks about determining the length of the buffer needed to hold the plain text.
Below is the step-by-step code to show the behavior of decryption:
//Defining the decryption mechanism
CK_MECHANISM mechanism = new CK_MECHANISM(CKM.AES_CBC_PAD);
//Initialize to zero -> variable to hold size of plain text
LongRef lRefDec = new LongRef();
// Get ready to decrypt
CryptokiEx.C_DecryptInit(session_1, mechanism, key_handleId_in_hsm);
// Get the size of the plain text -> 1st call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, null, lRefDec);
// Allocate space to the buffer to store plain text.
byte[] clearText = new byte[(int)lRefDec.value];
// Actual decryption -> 2nd call to decrypt
CryptokiEx.C_Decrypt(session_1, your_cipher, your_cipher.length, clearText, lRefDec);
Sometimes decryption fails because the input data given to encryption was misleading (the encryption succeeds, but the corresponding decryption fails). So it is important not to send raw bytes directly to the encryption algorithm; encoding the input data with a UTF-8/16 scheme prevents the data from being misinterpreted as network control bytes.
Quoted from MSDN entry for TransmitFile:
The maximum number of bytes that can be transmitted using a single call to the TransmitFile function is 2,147,483,646, the maximum value for a 32-bit integer minus 1. The maximum number of bytes to send in a single call includes any data sent before or after the file data pointed to by the lpTransmitBuffers parameter plus the value specified in the nNumberOfBytesToWrite parameter for the length of file data to send. If an application needs to transmit a file larger than 2,147,483,646 bytes, then multiple calls to the TransmitFile function can be used with each call transferring no more than 2,147,483,646 bytes. Setting the nNumberOfBytesToWrite parameter to zero for a file larger than 2,147,483,646 bytes will also fail since in this case the TransmitFile function will use the size of the file as the value for the number of bytes to transmit.
Alright. Sending a file of size 2*2,147,483,646 bytes (~ 4 GiB) with TransmitFile would then have to be divided into two parts at minimum (e.g. 2 GiB + 2 GiB in two calls to TransmitFile). But how exactly would one go about doing that, while preferably also keeping the underlying TCP connection alive in between?
When the file is indeed <=2,147,483,646 bytes in size, one could just write:
HANDLE fh = CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN, NULL);
TransmitFile(SOCK_STREAM_socket, fh, 0, 0, NULL, NULL, TF_DISCONNECT);
to let Windows handle all the lower-level stuff (caching, chunking the data up into pieces for efficient transmission, etc.). However, unlike the comparable Linux sendfile() syscall, there is no immediately obvious offset argument in the call (although the fifth argument, LPOVERLAPPED lpOverlapped, is probably exactly what I'm looking for). I suppose I could hack something together, but I'm also looking for a graceful, good-practice Win32 solution from someone who actually knows about this stuff.
You can use the lpOverlapped parameter to specify a 64-bit offset within the file at which to start the file data transfer by setting the Offset and OffsetHigh member of the OVERLAPPED structure. If lpOverlapped is a NULL pointer, the transmission of data always starts at the current byte offset in the file.
So, for lack of a minimal example readily available on the net, which calls are necessary to accomplish such a task?
Managed to figure it out based on the comments.
So, if LPOVERLAPPED lpOverlapped is a null pointer, the call starts transmission at the current file offset of the file (much like the Linux sendfile() syscall and its off_t *offset parameter). This offset (pointer) can be manipulated with SetFilePointerEx, so one could write:
#define TRANSMITFILE_MAX 2147483646  /* documented per-call limit: max 32-bit integer minus 1 */

LARGE_INTEGER filesize;
GetFileSizeEx(fh, &filesize);

LARGE_INTEGER total_bytes;
total_bytes.QuadPart = 0;

while (total_bytes.QuadPart < filesize.QuadPart) {
    LONGLONG remaining = filesize.QuadPart - total_bytes.QuadPart;
    DWORD bytes = (DWORD)(remaining < TRANSMITFILE_MAX ? remaining : TRANSMITFILE_MAX);
    if (!TransmitFile(SOCK_STREAM_socket, fh, bytes, 0, NULL, NULL, 0))
    { /* error handling */ }
    total_bytes.QuadPart += bytes;
    SetFilePointerEx(fh, total_bytes, NULL, FILE_BEGIN);
}
closesocket(SOCK_STREAM_socket);
to accomplish the task.
Not very elegant imo, but it works.
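For completeness, a sketch of the alternative the quoted documentation hints at: instead of moving the file pointer with SetFilePointerEx, each chunk's 64-bit start offset is passed in the Offset/OffsetHigh fields of an OVERLAPPED structure. This is only a sketch under the assumption that the socket allows overlapped I/O (the Winsock default); most error handling is omitted, and it links against ws2_32/mswsock.

#include <winsock2.h>
#include <mswsock.h>    /* TransmitFile */
#include <windows.h>

#define TRANSMITFILE_MAX 2147483646  /* documented per-call limit */

/* Send filesize bytes of fh over sock in chunks, using OVERLAPPED offsets. */
static BOOL send_file_overlapped(SOCKET sock, HANDLE fh, LONGLONG filesize)
{
    OVERLAPPED ov = {0};
    LONGLONG offset = 0;

    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    if (ov.hEvent == NULL)
        return FALSE;

    while (offset < filesize) {
        LONGLONG remaining = filesize - offset;
        DWORD chunk = (DWORD)(remaining < TRANSMITFILE_MAX ? remaining : TRANSMITFILE_MAX);

        ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);   /* low 32 bits of the file offset */
        ov.OffsetHigh = (DWORD)(offset >> 32);          /* high 32 bits */
        ResetEvent(ov.hEvent);

        if (!TransmitFile(sock, fh, chunk, 0, &ov, NULL, 0)) {
            DWORD sent = 0, flags = 0;
            /* asynchronous completion is fine; anything else is a real error */
            if (WSAGetLastError() != WSA_IO_PENDING ||
                !WSAGetOverlappedResult(sock, (WSAOVERLAPPED *)&ov, &sent, TRUE, &flags)) {
                CloseHandle(ov.hEvent);
                return FALSE;
            }
        }
        offset += chunk;
    }
    CloseHandle(ov.hEvent);
    return TRUE;
}

Called as send_file_overlapped(SOCK_STREAM_socket, fh, filesize.QuadPart), this keeps the connection open between chunks just like the SetFilePointerEx variant above.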