Problem receiving data from Bluetooth at very high speeds with Bluetooth Serial - Flutter

I'm using the flutter_bluetooth_serial 0.4.0 package. Its listen function takes a callback that receives the incoming data as a Uint8List. In my case the communication has to run at a very high speed, and when that happens the code can't tell where one string ends: chunks get joined together, up to around 230 bytes, and show up as if they were a single string. I've tried to solve this in several ways, but I only receive a complete string (18 bytes) when I slow the transmission down. I need to read 40 bytes at a time regardless of '\n' or anything else, so for this case I would perhaps have to read byte by byte to validate the message rather than handle whole Uint8List chunks. The input is a Stream<Uint8List>, and I would have to transform it into a stream (or something similar) that delivers 40 bytes at a time regardless of the speed. Most of the time one message arrives joined to another, and that cannot happen in this transmission.
Here is the code snippet where I receive the data:
_dataSubscription = connection.input!.listen(_onDataReceived);

void _onDataReceived(Uint8List data) {
  print('Data incoming: ${ascii.decode(data)}');
  // Allocate buffer for parsed data
  var backspacesCounter = 0;
  for (var byte in data) {
    if (byte == 8 || byte == 127) backspacesCounter++;
  }
  var buffer = Uint8List(data.length - backspacesCounter);
  var bufferIndex = buffer.length;
  // Apply backspace control character
  backspacesCounter = 0;
  for (int i = data.length - 1; i >= 0; i--) {
    if (data[i] == 8 || data[i] == 127) {
      backspacesCounter++;
    } else {
      if (backspacesCounter > 0) {
        backspacesCounter--;
      } else {
        buffer[--bufferIndex] = data[i];
      }
    }
    // print(backspacesCounter);
    // print(buffer);
    // print(bufferIndex);
  }
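One way to get fixed 40-byte reads regardless of how the chunks arrive is to accumulate every incoming Uint8List in a buffer and only emit data once 40 bytes are available. Below is a minimal sketch of that idea, assuming a frame size of 40 as in the question; the class and stream names are made up for illustration and are not part of the package:

import 'dart:async';
import 'dart:typed_data';

// Accumulates arbitrary-sized chunks and re-emits them as fixed-size frames.
class FixedFrameSplitter {
  FixedFrameSplitter(this.frameSize);

  final int frameSize; // 40 in this case
  final List<int> _pending = <int>[];
  final StreamController<Uint8List> _frames = StreamController<Uint8List>();

  Stream<Uint8List> get frames => _frames.stream;

  void addChunk(Uint8List chunk) {
    _pending.addAll(chunk);
    // Emit every complete frame currently sitting in the buffer;
    // leftover bytes stay around until the next chunk arrives.
    while (_pending.length >= frameSize) {
      _frames.add(Uint8List.fromList(_pending.sublist(0, frameSize)));
      _pending.removeRange(0, frameSize);
    }
  }
}

Hooking it up would then look something like:

final splitter = FixedFrameSplitter(40);
_dataSubscription = connection.input!.listen(splitter.addChunk);
splitter.frames.listen((frame) => print('Frame: $frame'));

The same buffering logic could also be wrapped in a StreamTransformer so that connection.input is transformed directly.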

Related

Problem receiving data from Bluetooth at very high speeds

I'm using the flutter_bluetooth_serial 0.4.0 package. Its listen function takes a callback that receives the incoming data as a Uint8List. In my case the communication has to run at a very high speed, and when that happens the code can't tell where one string ends: the data gets joined together until it overflows at around 230 bytes and shows up as if it were a single string. I've tried to solve this in several ways, but I can only receive the complete string (18 bytes) when I reduce the transmission speed. I've also tried using characters like '\n' so it can tell where one string ends and the next begins, but that wasn't successful either. Being able to read character by character would also solve it for me, because the messages follow a fixed sending pattern.
Do you have an idea how I could solve this? Is there a package that works better than this one for this purpose, or some way to determine where the end of the string is? Thanks!
Here is the code snippet where I receive the data:
_dataSubscription = connection.input!.listen(_onDataReceived);

void _onDataReceived(Uint8List data) {
  print('Data incoming: ${ascii.decode(data)}');
  // Allocate buffer for parsed data
  var backspacesCounter = 0;
  for (var byte in data) {
    if (byte == 8 || byte == 127) backspacesCounter++;
  }
  var buffer = Uint8List(data.length - backspacesCounter);
  var bufferIndex = buffer.length;
  // Apply backspace control character
  backspacesCounter = 0;
  for (int i = data.length - 1; i >= 0; i--) {
    if (data[i] == 8 || data[i] == 127) {
      backspacesCounter++;
    } else {
      if (backspacesCounter > 0) {
        backspacesCounter--;
      } else {
        buffer[--bufferIndex] = data[i];
      }
    }
    // print(backspacesCounter);
    // print(buffer);
    // print(bufferIndex);
  }
I've tried using characters like '\n' to see if it can tell where one string ends and the next begins, and tried to read character by character, but there is no function to do this.
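If the sender really does terminate each message with '\n', the usual fix is to buffer across calls instead of inspecting each chunk in isolation, since a single chunk can carry half a message or several messages at once. A minimal sketch of that idea (the terminator byte 10 and the buffer variable are assumptions for illustration, not something the package provides):

import 'dart:convert';
import 'dart:typed_data';

final List<int> _rxBuffer = <int>[];

void _onDataReceived(Uint8List data) {
  _rxBuffer.addAll(data);
  // Pull out every complete message terminated by '\n' (byte 10);
  // whatever is left stays in the buffer for the next chunk.
  while (true) {
    final terminator = _rxBuffer.indexOf(10);
    if (terminator == -1) break;
    final message = ascii.decode(_rxBuffer.sublist(0, terminator));
    _rxBuffer.removeRange(0, terminator + 1);
    print('Complete message: $message');
  }
}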

TCP/.NET: Reading length prefixed buffer or fixed buffer size

I'm currently in the process of writing my own TCP network library in C# and I want to make sure I'm doing things right.
Currently, I have every packet that gets sent through a socket prefixed with the length. The receiving end will read 4 bytes, convert that to an int, and read that many bytes afterward.
I'm not sure if that's the best approach. Would it be better to just read a fixed amount of bytes and process from there?
In the end, I would just extract 4 bytes from the fixed length buffer and read ahead however much I need.
Here is the code that I currently have to demonstrate my thought process.
internal void BeginReceive()
{
    ReceiveImpl();
}

protected virtual void ReceiveImpl()
{
    // Should we rather receive a bigger buffer (e.g. 8KB) immediately and then grab the length from that?
    // It would require an internal buffer to store whatever we've already read ahead.
    var recvState = new ReceiveState(new byte[4]);
    Socket.BeginReceive(recvState.Buffer, 0, 4, SocketFlags.None, OnReceiveLength, recvState);
}

private void OnReceiveLength(IAsyncResult ar)
{
    var recvState = (ar.AsyncState as ReceiveState)!;
    var bytesRead = Socket.EndReceive(ar, out var errorCode);
    if (errorCode != SocketError.Success)
    {
        // we ain't good fam!
        return;
    }
    if (bytesRead > 0) // should we rather check if we read the 4 bytes we wanted?
    {
        var length = LengthFromBuffer(recvState.Buffer);
        recvState.Buffer = new byte[length];
#if DEBUG
        Console.WriteLine($"Receiving a packet with length of {length}.");
#endif
        // what if the packet is absolutely massive? should we limit the buffer size?
        Socket.BeginReceive(recvState.Buffer, 0, length, SocketFlags.None, OnReceive, recvState);
    }
}

private void OnReceive(IAsyncResult ar)
{
    var recvState = (ar.AsyncState as ReceiveState)!;
    var bytesRead = Socket.EndReceive(ar, out var errorCode);
    if (errorCode != SocketError.Success)
    {
        // we ain't good fam!
        return;
    }
    if (bytesRead > 0)
    {
        recvState.BytesReceived += bytesRead;
        if (recvState.BytesReceived < recvState.Buffer.Length)
        {
            Socket.BeginReceive(recvState.Buffer, recvState.BytesReceived,
                recvState.Buffer.Length - recvState.BytesReceived, SocketFlags.None, OnReceive, recvState);
            return;
        }
        OnDataReceived(recvState.Buffer); // this will call BeginReceive again.
    }
}
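This length-prefix approach is also one answer to the Bluetooth framing question at the top of the page. For comparison, here is a minimal Dart sketch of the same idea applied to a chunked byte stream; the 4-byte big-endian header and the framePackets name are assumptions for illustration:

import 'dart:async';
import 'dart:typed_data';

// Re-frames an arbitrarily chunked stream into length-prefixed packets.
// Each packet is assumed to start with a 4-byte big-endian length header.
Stream<Uint8List> framePackets(Stream<Uint8List> input) async* {
  final pending = <int>[];
  await for (final chunk in input) {
    pending.addAll(chunk);
    while (pending.length >= 4) {
      final length =
          ByteData.sublistView(Uint8List.fromList(pending)).getUint32(0, Endian.big);
      if (pending.length < 4 + length) break; // body not complete yet
      yield Uint8List.fromList(pending.sublist(4, 4 + length));
      pending.removeRange(0, 4 + length);
    }
  }
}

The caveats raised in the comments above (checking that the whole header has arrived, capping the maximum length) apply here in exactly the same way.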

Sandisk SD-Card freeze after CMD41

I'm using an FTDI C232HM USB->SPI cable on Windows 10.
The issue is: once I send CMD41 to a SanDisk 4GB microSD card, it replies with 0x01 and then doesn't reply to any other command.
The init commands are the following:
Set CS pin HIGH (deselects the slave)
Toggle clock 80 times
Set CS pin LOW (selects the slave)
Send CMD0: 0x40,0x00,0x00,0x00,0x00,0x95 (response: 0x01)
Send CMD8: 0x48,0x00,0x00,0x01,0xAA,0x87 (response is 0x01; 0x000001AA)
Send CMD55: 0x77,0x00,0x00,0x00,0x00,0x00 (response: 0x01)
Send ACMD41:0x69,0x40,0x00,0x00,0x00,0x00 (response: 0x01)
Send CMD55: 0x77,0x00,0x00,0x00,0x00,0x00 (response: 0xFF) loop forever
Any ideas ?
Thanks,
Roy
Added a dummy 0xFF before any SPI command send.
Added dummy 0xFF + 0xFF before any SPI command send.
SD_PowerOn();
Sleep(100);
SD_SendCmd(CMD0, 0);
if (SD_SendCmd(CMD0, 0) == 1)
{
    Timer1 = 100;
    if (SD_SendCmd(CMD8, 0x1AA) == 1)
    {
        do {
            if (SD_SendCmd(CMD55, 0) <= 1)
            {
                n = SD_SendCmd(CMD41, 1UL << 30);
                if (n == 0)
                    break; /* ACMD41 with HCS bit */
            }
        } while (Timer1--);
        if (Timer1 && SD_SendCmd(CMD58, 0) == 0)
        {
            for (n = 0; n < 4; n++)
            {
                ocr[n] = SPI_RxByte();
            }
            type = (ocr[0] & 0x40) ? 6 : 2;
        }
    }

read from socket part of data but only once

I would like to get data from a serial port on uClinux. How it works: I have a peripheral device that I want to put into bootloader mode. To do this I have to send the data i2500$, and the reply means that it is in bootloader mode. Unfortunately I can only read >i2, and if I use my method it does not return data any more, even if I reset the device and repeat the jump to the bootloader. I read with the following function:
int TEnforaUpdate::ReadFromComport (unsigned long timeouta, unsigned long size)
{
    FD_SET(fdCom, &read_fds);
    int retValue = 0;
    // Set timeout to x microseconds
    struct timeval timeout;
    timeout.tv_sec = 0;
    timeout.tv_usec = 1000 * timeouta;
    // Wait for input to become ready or until the time out; the first parameter is
    // 1 more than the largest file descriptor in any of the sets
    if ((select (fdCom + 1, &read_fds, &write_fds, &except_fds, &timeout) == 1)
        && (FD_ISSET(fdCom, &read_fds)))
    {
        //read max
        retValue = read (fdCom, RxBuffer, RX_BUFFER_SIZE);
        printf ("Read %d bytes: ", retValue);
        int i;
        for (i = 0; i < retValue; i++)
            printf ("[%02x]", RxBuffer[i]);
        printf ("\n");
    }
    else
        return 1;
    if (retValue > 2)
    {
        strcpy (answer, (char*) RxBuffer);
        //remove trashes from buffer
        memset (RxBuffer, 0x00, RX_BUFFER_SIZE);
        printf ("Comport answers: %s \n", answer);
    }
    FD_CLR(fdCom, &read_fds);
    tcflush (fdCom, TCIFLUSH);
    return retValue;
}

Transmission of float values over TCP/IP and data corruption

I have an extremely strange bug.
I have two applications that communicate over TCP/IP.
Application A is the server, and application B is the client.
Application A sends a bunch of float values to application B every 100 milliseconds.
The bug is the following: sometimes some of the float values received by application B are not the same as the values transmitted by application A.
Initially, I thought there was a problem with the Ethernet or TCP/IP drivers (some sort of data corruption). I then tested the code in other Windows machines, but the problem persisted.
I then tested the code on Linux (Ubuntu 10.04.1 LTS) and the problem is still there!!!
The values are logged just before they are sent and just after they are received.
The code is pretty straightforward: the message protocol has a 4 byte header like this:
//message header
struct MESSAGE_HEADER {
    unsigned short type;
    unsigned short length;
};

//orientation message
struct ORIENTATION_MESSAGE : MESSAGE_HEADER
{
    float azimuth;
    float elevation;
    float speed_az;
    float speed_elev;
};

//any message
struct MESSAGE : MESSAGE_HEADER {
    char buffer[512];
};

//receive specific size of bytes from the socket
static int receive(SOCKET socket, void *buffer, size_t size) {
    int r;
    do {
        r = recv(socket, (char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

//send specific size of bytes to a socket
static int send(SOCKET socket, const void *buffer, size_t size) {
    int r;
    do {
        r = send(socket, (const char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

//get message from socket
static bool receive(SOCKET socket, MESSAGE &msg) {
    int r = receive(socket, &msg, sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    if (ntohs(msg.length) == 0) return true;
    r = receive(socket, msg.buffer, ntohs(msg.length));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}

//send message
static bool send(SOCKET socket, const MESSAGE &msg) {
    int r = send(socket, &msg, ntohs(msg.length) + sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}
When I receive the message 'orientation', sometimes the 'azimuth' value is different from the one sent by the server!
Shouldn't the data be the same all the time? Doesn't TCP/IP guarantee delivery of the data uncorrupted? Could it be that an exception in the math co-processor affects the TCP/IP stack? Is it a problem that I receive a small number of bytes first (the 4-byte header) and then the message body?
EDIT:
The problem is in the endianness swapping routine. The following code swaps the endianness of a specific float, swaps it again, and prints the bytes of the original and the round-tripped value:
#include <cstdio>
#include <iostream>
using namespace std;

float ntohf(float f)
{
    float r;
    unsigned char *s = (unsigned char *)&f;
    unsigned char *d = (unsigned char *)&r;
    d[0] = s[3];
    d[1] = s[2];
    d[2] = s[1];
    d[3] = s[0];
    return r;
}

int main() {
    unsigned long l = 3206974079;
    float f1 = (float &)l;
    float f2 = ntohf(ntohf(f1));
    unsigned char *c1 = (unsigned char *)&f1;
    unsigned char *c2 = (unsigned char *)&f2;
    printf("%02X %02X %02X %02X\n", c1[0], c1[1], c1[2], c1[3]);
    printf("%02X %02X %02X %02X\n", c2[0], c2[1], c2[2], c2[3]);
    getchar();
    return 0;
}
The output is:
7F 8A 26 BF
7F CA 26 BF
I.e. the float assignment probably normalizes the value, producing a different value from the original.
Any input on this is welcomed.
EDIT2:
Thank you all for your replies. It seems the problem is that the swapped float, when returned via the 'return' statement, is pushed onto the CPU's floating-point stack. The caller then pops the value from the stack and the value gets rounded, but it is the swapped float, and therefore the rounding corrupts it.
TCP tries to deliver unaltered bytes, but unless the machines have similar CPUs and operating systems, there's no guarantee that the floating-point representation on one system is identical to that on the other. You need a mechanism for ensuring this, such as XDR or Google's protobuf.
You're sending binary data over the network, using implementation-defined padding for the struct layout, so this will only work if you're using the same hardware, OS and compiler for both application A and application B.
If that's OK, though, I can't see anything wrong with your code. One potential issue is that you're using ntohs to extract the length of the message, and that length is the total length minus the header length, so you need to make sure you're setting it properly. It needs to be done as
msg.length = htons(sizeof(ORIENTATION_MESSAGE) - sizeof(MESSAGE_HEADER));
but you don't show the code that sets up the message...
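The portable fix is to swap at the byte level and never route a half-swapped value through a float variable or register. For the Flutter/Bluetooth case at the top of the page, the equivalent in Dart would be a sketch like this, assuming the sender writes 32-bit IEEE-754 floats in big-endian (network) order:

import 'dart:typed_data';

// Decode a 32-bit float sent in big-endian byte order. The byte handling
// happens on the raw buffer, so the value only becomes a float once it is
// already in the correct order.
double readNetworkFloat(Uint8List bytes, int offset) =>
    ByteData.sublistView(bytes).getFloat32(offset, Endian.big);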