Cancelling retransmissions on an L2CAP socket - sockets

I was wondering if anyone can assist me with a problem that I have with C Bluetooth programming (Linux Bluez).
I am using Ubuntu 10.04, BlueZ 4.60.
My goal is to have an L2CAP socket with minimal delay for sending data between 2 computers.
So far I have managed to open an L2CAP socket, but this socket has endless retransmissions and I'm trying to change that. I want no retransmissions at all, because I need the data to be transferred quickly with minimal delay; the reliability of the data is not important.
I found an example online that changes the flush timeout for the socket, so that if a packet is not acknowledged within a certain period it is dropped and the next data in the buffer is sent.
The problem is that this example doesn't work :-(
Here is my code, this method is called after the bind command:
int set_flush_timeout(bdaddr_t *ba, int timeout)
{
    int err = 0, dd, dev_id;
    struct hci_conn_info_req *cr = 0;
    struct hci_request rq = { 0 };
    struct {
        uint16_t handle;
        uint16_t flush_timeout;
    } cmd_param;
    struct {
        uint8_t  status;
        uint16_t handle;
    } cmd_response;

    // find the connection handle to the specified bluetooth device
    cr = (struct hci_conn_info_req*) malloc(
            sizeof(struct hci_conn_info_req) +
            sizeof(struct hci_conn_info));
    bacpy( &cr->bdaddr, ba );
    cr->type = ACL_LINK;
    dev_id = hci_get_route( NULL );
    dd = hci_open_dev( dev_id );
    if( dd < 0 ) {
        err = dd;
        goto cleanup;
    }
    err = ioctl(dd, HCIGETCONNINFO, (unsigned long) cr );
    if( err ) goto cleanup;

    // build a command packet to send to the bluetooth microcontroller
    cmd_param.handle = cr->conn_info->handle;
    cmd_param.flush_timeout = htobs(timeout);
    rq.ogf    = OGF_HOST_CTL;
    rq.ocf    = 0x28;                  // Write_Automatic_Flush_Timeout
    rq.cparam = &cmd_param;
    rq.clen   = sizeof(cmd_param);
    rq.rparam = &cmd_response;
    rq.rlen   = sizeof(cmd_response);
    rq.event  = EVT_CMD_COMPLETE;

    // send the command and wait for the response
    err = hci_send_req( dd, &rq, 1 );
    if( err ) goto cleanup;
    if( cmd_response.status ) {
        err = -1;
        errno = bt_error(cmd_response.status);
    }

cleanup:
    free(cr);
    if( dd >= 0) close(dd);
    return err;
}
What is my mistake?
Does anyone know another option that will solve my problem?
Code examples would also be great!
Thanks!

This code to set the automatic flush timeout seems to be correct.
You can make sure by checking that you get "Success" in the command complete event for this command.
I suspect that the issue is in your packet sending code. For the automatic flush timeout to take effect, the individual packets must be marked as automatically flushable: the HCI ACL data packet has a Packet_Boundary_Flag which you can set to indicate whether individual packets are flushable.
Also note that the flush timeout has to be large enough to give each packet at least one transmission attempt. By definition, the flush timeout starts when the packet is queued for transmission, so a timeout that is too small can flush a packet before it has been transmitted even once; you need to tune it.
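If patching the boundary flag of every outgoing packet is impractical, newer kernels expose this per socket. A minimal sketch, assuming a kernel recent enough to support the BT_FLUSHABLE socket option (it may not exist on a stock Ubuntu 10.04 kernel), where s is your connected L2CAP socket:
#include <bluetooth/bluetooth.h>
#include <sys/socket.h>

// mark all traffic on this L2CAP socket as automatically flushable,
// so the Write_Automatic_Flush_Timeout setting above applies to it
int flushable = BT_FLUSHABLE_ON;
if (setsockopt(s, SOL_BLUETOOTH, BT_FLUSHABLE,
               &flushable, sizeof(flushable)) < 0) {
    perror("setsockopt(BT_FLUSHABLE)");  // kernel too old, or not permitted
}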

Related

Is recv(bufsize) guaranteed to receive all the data if the sent data is smaller than bufsize?

For example:
Client Side
...
socket.connect(server_address)
data = some_message_less_than_100_bytes
socket.sendall(data)
...
Server Side
...
socket.accept()
socket.recv(1024)
...
Is the server side guaranteed to receive the data in one recv()?
If not, how does the standard solution of using a header to specify the message length even work?
The header itself could be split, and we would have to check whether the header has been received correctly.
Or is the header fixed-length, so that the receiver can always interpret the first few bytes the same way no matter in how many pieces the data arrives?
Actually I'm trying to do something like this:
Client
while():
send()
recv()
Server
recv()
while():
send() # Acknowledge to client
recv()
which is suggested by ravi in Linux socket: How to make send() wait for recv()
but then I ran into the problem described above.
Is ravi's answer assuming that both client and server will receive what the other sent in a single recv()?
Update
I would very much like to post the image, but I can't because of low reputation...
The following link is the HTTP/2 frame format:
https://datatracker.ietf.org/doc/html/rfc7540#section-4
It indeed uses a fixed-length solution, so that no matter in how many pieces the header is split, it can be parsed the same way.
So I guess some sort of 'fixed' length is the only solution? Even if the header size itself is variable, there are probably some agreed-upon bits that indicate how long the header will be. Am I right?
Is the server side guaranteed to receive the data in one recv()?
For UDP, yes. recv() will return either 1 whole datagram or an error. Though if the buffer size is smaller than the datagram, the data will be truncated and you can't recover it.
For TCP, no. The only guarantee you have is that if no error occurs, recv() will return at least 1 byte but no more than the specified buffer size; it can return any number of bytes in between.
If not, how does the standard solution of using a header to specify the message length even work? The header itself could be split and we have to check whether the header has been received correctly. Or is the header fixed-length?
It can go either way, depending on the particular format of the header. Many protocols use fixed-length headers, and many protocols use variable-length headers.
Either way, you may have to call send() multiple times to ensure you send all the relevant bytes, and call recv() multiple times to ensure you receive all of them. There is no 1:1 relationship between sends and reads in TCP.
Is ravi's answer assuming that both client and server will receive what the other sent in a single recv()?
Ravi's answer makes no assumptions whatsoever about the number of bytes sent by send() and received by recv(). His answer is presented from a higher-level perspective. But it is trivial to force the required behavior, e.g.:
#include <errno.h>
#include <sys/socket.h>

// send exactly len bytes, looping until done; returns 0 on success, -1 on error
int sendAll(int sckt, void *data, int len)
{
    char *pdata = (char*) data;
    while (len > 0) {
        int res = send(sckt, pdata, len, 0);
        if (res > 0) {
            pdata += res;
            len -= res;
        }
        else if (errno != EINTR) {
            if ((errno != EWOULDBLOCK) && (errno != EAGAIN)) {
                return -1;
            }
            /*
            optional: use select() or (e)poll to
            wait for the socket to be writable ...
            */
        }
    }
    return 0;
}

// receive exactly len bytes; returns 1 on success, 0 if the peer
// closed the connection, -1 on error
int recvAll(int sckt, void *data, int len)
{
    char *pdata = (char*) data;
    while (len > 0) {
        int res = recv(sckt, pdata, len, 0);
        if (res > 0) {
            pdata += res;
            len -= res;
        }
        else if (res == 0) {
            return 0;
        }
        else if (errno != EINTR) {
            if ((errno != EWOULDBLOCK) && (errno != EAGAIN)) {
                return -1;
            }
            /*
            optional: use select() or (e)poll to
            wait for the socket to be readable ...
            */
        }
    }
    return 1;
}
This way, you can use sendAll() to send the message header followed by the message data, and recvAll() to receive the message header followed by the message data.
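For instance, here is a minimal sketch of length-prefixed framing built on these helpers, using a fixed 4-byte big-endian length header (the sendMsg/recvMsg names are hypothetical, not from Ravi's answer):
#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>

// frame = 4-byte big-endian length prefix, then the payload
int sendMsg(int sckt, void *data, uint32_t len)
{
    uint32_t hdr = htonl(len);                  // fixed-size header
    if (sendAll(sckt, &hdr, sizeof(hdr)) < 0)
        return -1;
    return sendAll(sckt, data, (int)len);
}

// on success (return 1) the caller owns and must free *data
int recvMsg(int sckt, void **data, uint32_t *len)
{
    uint32_t hdr;
    int res = recvAll(sckt, &hdr, sizeof(hdr)); // the whole header first
    if (res <= 0)
        return res;                             // error or peer closed
    *len = ntohl(hdr);
    *data = malloc(*len);
    if (*data == NULL)
        return -1;
    res = recvAll(sckt, *data, (int)*len);      // then the whole payload
    if (res <= 0) {
        free(*data);
        *data = NULL;
    }
    return res;
}
Because the header is fixed-length, it does not matter in how many pieces it arrives; recvAll() keeps reading until all 4 bytes are in.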
Is the server side guaranteed to receive the data in one recv()?
No.
TCP is a byte stream, not a message protocol. While it will likely work with small messages and an empty send buffer in most cases, it will start to fail once the data sent gets larger than the MTU of the underlying data link. TCP does not guarantee an atomic send-recv pair for anything but a single octet, so don't count on it even for small data.

Asynchronous sending data using kqueue

I have a server written in plain-old C accepting TCP connections using kqueue on FreeBSD.
Incoming connections are accepted and added to a simple connection pool to keep track of the file handle.
When data is received (on EVFILT_READ), I call recv() and then I put the payload in a message queue for a different thread to process it.
Receiving and processing data this way works perfectly.
When the processing thread is done, it may need to send something back to the client. Since the processing thread has access to the connection pool and can easily get the file handle, I'm simply calling send() from the processing thread.
This works 99% of the time, but every now and then kqueue gives me an EV_EOF flag, and the connection is dropped.
There is a clear correlation between the frequency of the calls to send() and the number of EV_EOF errors, so I have a feeling the EV_EOF is due to some race condition between my kqueue thread and the processing thread.
The calls to send() always return the expected byte count, so I'm not filling up the tx buffer.
So my question: is it acceptable to call send() from a separate thread as described here? If not, what would be the right way to send data back to the clients asynchronously?
All the examples I find call send() in the same context as the kqueue loop, but my processing threads may need to send back data at any time, even minutes after the last received data from the client, so obviously I can't block the kqueue loop for that long.
Relevant code snippets:
void *tcp_srvthread(void *arg)
{
    [[...Bunch of declarations...]]

    tcp_serversocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    ...
    setsockopt(tcp_serversocket, SOL_SOCKET, SO_REUSEADDR, &i, sizeof(int));
    ...
    err = bind(tcp_serversocket, (const struct sockaddr*)&sa, sizeof(sa));
    ...
    err = listen(tcp_serversocket, 10);
    ...
    kq = kqueue();
    EV_SET(&evSet, tcp_serversocket, EVFILT_READ | EV_CLEAR, EV_ADD, 0, 0, NULL);
    ...
    while(!fTerminated) {
        timeout.tv_sec = 2; timeout.tv_nsec = 0;
        nev = kevent(kq, &evSet, 0, evList, NLIST, &timeout);
        for (i=0; i<nev; i++) {
            if (evList[i].ident == tcp_serversocket) {                // new connection?
                socklen = sizeof(addr);
                fd = accept(evList[i].ident, &addr, &socklen);        // accept it
                if(fd > 0) {                                          // accept ok?
                    uidx = conn_add(fd, (struct sockaddr_in *)&addr); // Add it to connected controllers
                    if(uidx >= 0) {                                   // add ok?
                        EV_SET(&evSet, fd, EVFILT_READ | EV_CLEAR, EV_ADD, 0, 0, (void*)(uint64_t)(0x00E20000 | uidx)); // monitor events from it
                        if (kevent(kq, &evSet, 1, NULL, 0, NULL) == -1) { // monitor ok?
                            conn_delete(uidx);                        // ..no, so delete it from my list also
                        }
                    } else {                                          // no room on server?
                        close(fd);
                    }
                }
                else Log(0, "ERR: accept fd=%d", fd);
            }
            else if (evList[i].flags & EV_EOF) {
                [[ ** THIS IS CALLED SOMETIMES AFTER CALLING SEND - WHY?? ** ]]
                uidx = (uint32_t)evList[i].udata;
                conn_delete( uidx );
            }
            else if (evList[i].filter == EVFILT_READ) {
                if((nr = recv(evList[i].ident, buf, sizeof(buf)-2, 0)) > 0) {
                    uidx = (uint32_t)evList[i].udata;
                    recv_data(uidx, buf, nr); // This will queue the message for the processing thread
                }
            }
            else {
                // should not get here.
            }
        }
    }
}
The processing thread looks something like this (obviously there's a lot of data manipulation going on in addition to what's shown):
void *parsethread(void *arg)
{
    int len;
    tmsg_Queue mq;
    char is_ok;

    while(!fTerminated) {
        if((len = msgrcv(msgRxQ, &mq, sizeof(tmsg_Queue), 0, 0)) > 0) {
            if( process_message(mq) ) {
                [[ processing will find the uidx of the client and build the return data ]]
                send( ctl[uidx].fd, replydata, replydataLen, 0 );
            }
        }
    }
}
Appreciate any ideas or nudges in the right direction. Thanks.
EV_EOF
If you write to a socket after the peer has closed its reading side, you will receive an RST, which triggers EVFILT_READ with EV_EOF set.
Async
You should try aio_read and aio_write.
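If you would rather keep plain send() but move all writes onto the kqueue thread, one common pattern is to wake the kqueue loop through a pipe. A minimal sketch, not from the original answer; reply_queue_push/reply_queue_pop are hypothetical helpers, and kq, evSet, ctl, uidx come from the code above:
#include <unistd.h>

int wakeup[2];
pipe(wakeup);                                      // wakeup[0] is watched by kqueue

EV_SET(&evSet, wakeup[0], EVFILT_READ, EV_ADD, 0, 0, NULL);
kevent(kq, &evSet, 1, NULL, 0, NULL);

/* processing thread: queue the reply, then nudge the kqueue loop */
reply_queue_push(uidx, replydata, replydataLen);   // hypothetical queue
write(wakeup[1], "x", 1);

/* kqueue loop, when evList[i].ident == wakeup[0]: drain and send */
char drain[64];
read(wakeup[0], drain, sizeof(drain));
while (reply_queue_pop(&uidx, &data, &len))        // hypothetical
    send(ctl[uidx].fd, data, len, 0);
This way only one thread ever touches the sockets, which removes the race between the kqueue thread tearing down a connection and the processing thread writing to it.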

UDP port arduino increment after packet sent

I'm working on code to make two Arduinos communicate, one with an Ethernet shield and another with an ENC28J60 ethernet module. I'm not a newbie with Arduino, but not an expert yet either; I am, however, a complete newbie in UDP communication.
Here is the question: my code works fine, it sends and receives UDP packets from one to the other and vice versa. But after every packet is sent, the Udp.remotePort value (as seen from the "udp-reader" side) increments by one. It starts from 1024, goes up to ~32000, and starts over after reaching the highest value. I have researched UDP and I understand that ports 0-1023 are reserved for specific services, e.g. 80 for HTTP, 21 for FTP. But I don't think the port should be incremented after every send. Or should it?
I don't paste the code because, as I said, it works OK. I just would like to know what could be wrong, from your experience.
The statement I'm using to write the packets is:
udp.beginPacket(IPAddress([ip address]), [port no]);
The libraries i'm using:
UIPEthernet.h https://github.com/UIPEthernet/UIPEthernet for ENC28J60
Ethernet.h for ethernet shield
EDIT: This is the code of the UDP sender (ENC28J60). Basically it is the example code of the library; as I said, it works correctly in terms of communication. I only changed the IPs: 192.168.1.51 is the sender's own address and 192.168.1.50 is the UDP destination.
#include <UIPEthernet.h>

EthernetUDP udp;
unsigned long next;

void setup() {
    Serial.begin(115200);
    uint8_t mac[6] = {0x00,0x01,0x02,0x03,0x04,0x05};
    Ethernet.begin(mac,IPAddress(192,168,1,51));
    // Also I used: Ethernet.begin(mac,IPAddress(192,168,1,51), 5000);
    // with the same result
    next = millis()+2000;
}

void loop() {
    int success;
    int len = 0;
    if (((signed long)(millis()-next))>0)
    {
        do
        {
            success = udp.beginPacket(IPAddress(192,168,1,50),5000);
            Serial.print("beginPacket: ");
            Serial.println(success ? "success" : "failed");
            //beginPacket fails if remote ethaddr is unknown. In this case an
            //arp-request is sent out first and beginPacket succeeds as soon
            //as the arp-response is received.
        }
        while (!success && ((signed long)(millis()-next))<0);
        if (!success)
            goto stop;
        success = udp.write("hello world&from&arduino");
        Serial.print("bytes written: ");
        Serial.println(success);
        success = udp.endPacket();
        Serial.print("endPacket: ");
        Serial.println(success ? "success" : "failed");
        do
        {
            //check for new udp-packet:
            success = udp.parsePacket();
        }
        while (!success && ((signed long)(millis()-next))<0);
        if (!success)
            goto stop;
        Serial.print("received: '");
        do
        {
            int c = udp.read();
            Serial.write(c);
            len++;
        }
        while ((success = udp.available())>0);
        Serial.print("', ");
        Serial.print(len);
        Serial.println(" bytes");
        //finish reading this packet:
        udp.flush();
stop:
        udp.stop();
        next = millis()+2000;
    }
}
EDIT 2: This is a capture of testing with SocketTest listening on port 5000; after each packet is received, the next one arrives with the remote port incremented by 1.
You must be creating a new UDP socket per sent datagram. Don't do that. Use the same one for the life of the application. In the code above, udp.stop() closes the socket after every exchange, so the next beginPacket() opens a fresh socket with a new ephemeral source port; that is exactly the incrementing remote port you see on the receiving side.
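A minimal sketch of that change, assuming the standard EthernetUDP/UIPEthernet API shown above: bind once with udp.begin() in setup() and drop the udp.stop() call.
void setup() {
    uint8_t mac[6] = {0x00,0x01,0x02,0x03,0x04,0x05};
    Ethernet.begin(mac, IPAddress(192,168,1,51));
    udp.begin(5000);                 // bind the local port once, keep the socket
}

void loop() {
    if (udp.beginPacket(IPAddress(192,168,1,50), 5000)) {
        udp.write("hello world&from&arduino");
        udp.endPacket();             // no udp.stop(): socket and source port persist
    }
    // ... poll udp.parsePacket()/udp.read() as before ...
    delay(2000);
}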

WSAConnect returns WSAEINVAL on WindowsXP

I use sockets in non-blocking mode, and sometimes the WSAConnect function returns the WSAEINVAL error.
I investigated the problem and found that it occurs when there is no pause (or only a very small one) between WSAConnect calls.
Does anyone know how to avoid this situation?
Below you can find source code that reproduces the problem. If I increase the value of the parameter to Sleep to 50 or greater, the problem disappears.
P.S. The problem reproduces only on Windows XP; on Win7 it works well.
#undef UNICODE
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <iostream>
#include <windows.h>

#pragma comment(lib, "Ws2_32.lib")

static int getError(SOCKET sock)
{
    DWORD error = WSAGetLastError();
    return error;
}

void main()
{
    SOCKET sock;
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) {
        fprintf(stderr, "Socket Initialization Error. Program aborted\n");
        return;
    }
    for (int i = 0; i < 1000; ++i) {
        struct addrinfo hints;
        struct addrinfo *res = NULL;
        memset(&hints, 0, sizeof(hints));
        hints.ai_flags = AI_PASSIVE;
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_family = AF_INET;
        hints.ai_protocol = IPPROTO_TCP;
        if (0 != getaddrinfo("172.20.1.59", "8091", &hints, &res)) {
            fprintf(stderr, "GetAddrInfo Error. Program aborted\n");
            closesocket(sock);
            WSACleanup();
            return;
        }
        struct addrinfo *ptr = 0;
        for (ptr=res; ptr != NULL; ptr=ptr->ai_next) {
            sock = WSASocket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol, NULL, 0, NULL);
            if (sock == INVALID_SOCKET)
                int err = getError(sock);
            else {
                u_long noblock = 1;
                if (ioctlsocket(sock, FIONBIO, &noblock) == SOCKET_ERROR) {
                    int err = getError(sock);
                    closesocket(sock);
                    sock = INVALID_SOCKET;
                }
                break;
            }
        }
        int ret;
        do {
            ret = WSAConnect(sock, ptr->ai_addr, (int)ptr->ai_addrlen, NULL, NULL, NULL, NULL);
            if (ret == SOCKET_ERROR) {
                int error = getError(sock);
                if (error == WSAEWOULDBLOCK) {
                    Sleep(5);
                    continue;
                }
                else if (error == WSAEISCONN) {
                    fprintf(stderr, "+");
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
                else if (error == 10037) {
                    fprintf(stderr, "-");
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
                else {
                    fprintf(stderr, "Connect Error. [%d]\n", error);
                    closesocket(sock);
                    sock = SOCKET_ERROR;
                    break;
                }
            }
            else {
                int one = 1;
                setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char*)&one, sizeof(one));
                fprintf(stderr, "OK\n");
                break;
            }
        }
        while (1);
    }
    std::cout << "end";
    char ch;
    std::cin >> ch;
}
You've got a whole basketful of errors and questionable design and coding decisions here. I'm going to have to break them up into two groups:
Outright Errors
I expect if you fix all of the items in this section, your symptom will disappear, but I wouldn't want to speculate about which one is the critical fix:
Calling connect() in a loop on a single socket is simply wrong.
If you mean to establish a connection, drop it, and reestablish it 1000 times, you need to call closesocket() at the end of each loop, then call socket() again to get a fresh socket. You can't keep re-connecting the same socket. Think of it like a power plug: if you want to plug it in twice, you have to unplug (closesocket()) between times.
If instead you mean to establish 1000 simultaneous connections, you need to allocate a new socket with socket() on each iteration, connect() it, then go back around again to get another socket. It's basically the same loop as for the previous case, except without the closesocket() call.
Beware that since XP is a client version of Windows, it's not optimized for handling thousands of simultaneous sockets.
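To make the first case concrete, here is a minimal sketch of the connect/drop/reconnect loop, under the simplifying assumptions of a blocking socket and an already-filled sockaddr_in named sa:
for (int i = 0; i < 1000; ++i) {
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);  // fresh socket each time
    if (s == INVALID_SOCKET)
        break;
    if (connect(s, (struct sockaddr*)&sa, sizeof(sa)) == SOCKET_ERROR)
        fprintf(stderr, "Connect Error. [%d]\n", WSAGetLastError());
    closesocket(s);   // "unplug" before plugging in again
}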
Calling connect() again is not the correct response to WSAEWOULDBLOCK:
if (error == WSAEWOULDBLOCK) {
    Sleep(5);
    continue; /// WRONG!
}
That continue effectively commits the same error as above, but worse: if you fix only the previous error and leave this one, this usage will make your code start leaking sockets.
WSAEWOULDBLOCK is not an error. All it means after a connect() on a nonblocking socket is that the connection didn't get established immediately. The stack will notify your program when it does.
You get that notification by calling one of select(), WSAEventSelect(), or WSAAsyncSelect(). If you use select(), the socket will be marked writable when the connection gets established. With the other two, you will get an FD_CONNECT event when the connection gets established.
Which of these three APIs to call depends on why you want nonblocking sockets in the first place, and what the rest of the program will look like. What I see so far doesn't need nonblocking sockets at all, but I suppose you have some future plan that will inform your decision. I've written an article, Which I/O Strategy Should I Use (part of the Winsock Programmers' FAQ) which will help you decide which of these options to use; it may instead guide you to another option entirely.
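For example, with select() the completion check might look like this minimal sketch (note that on Winsock a failed nonblocking connect is reported via the except set, and the pending error is fetched with getsockopt(SO_ERROR)):
fd_set wfds, efds;
struct timeval tv = { 5, 0 };      // give the connect 5 seconds

FD_ZERO(&wfds);  FD_SET(sock, &wfds);
FD_ZERO(&efds);  FD_SET(sock, &efds);
if (select(0, NULL, &wfds, &efds, &tv) > 0) {
    if (FD_ISSET(sock, &wfds)) {
        // connection established
    }
    else {
        int err = 0;
        int len = sizeof(err);
        getsockopt(sock, SOL_SOCKET, SO_ERROR, (char*)&err, &len);
        fprintf(stderr, "Connect Error. [%d]\n", err);
    }
}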
You shouldn't use AI_PASSIVE and connect() on the same socket. Your use of AI_PASSIVE with getaddrinfo() tells the stack you intend to use this socket to accept incoming connections. Then you go and use that socket to make an outgoing connection.
You've basically lied to the stack here. Computers find ways to get revenge when you lie to them.
Sleep() is never the right way to fix problems with Winsock. There are built-in delays within the stack that your program can see, such as TIME_WAIT and the Nagle algorithm, but Sleep() is not the right way to cope with these, either.
Questionable Coding/Design Decisions
This section is for things I don't expect to make your symptom go away, but you should consider fixing them anyway:
The main reason to use getaddrinfo() — as opposed to older, simpler functions like inet_addr() — is if you have to support IPv6. That kind of conflicts with your wish to support XP, since XP's IPv6 stack wasn't nearly as heavily tested during the time XP was the current version of Windows as its IPv4 stack. I would expect XP's IPv6 stack to still have bugs as a result, even if you've got all the patches installed.
If you don't really need IPv6 support, doing it the old way might make your symptoms disappear. You might end up needing an IPv4-only build for XP.
This code:
for (int i = 0; i < 1000; ++i) {
    // ...
    if (0 != getaddrinfo("172.20.1.59", "8091", &hints, &res)) {
...is inefficient. There is no reason you need to keep reinitializing res on each loop.
Even if there is some reason I'm not seeing, you're leaking memory by not calling freeaddrinfo() on res.
You should initialize this data structure once before you enter the loop, then reuse it on each iteration.
else if (error == 10037) {
Why aren't you using WSAEALREADY here?
You don't need to use WSAConnect() here. You're using the 3-argument subset that Winsock shares with BSD sockets. You might as well use connect() here instead.
There's no sense making your code any more complex than it has to be.
Why aren't you using a switch statement for this?
if (error == WSAEWOULDBLOCK) {
    // ...
}
else if (error == WSAEISCONN) {
    // ...
}
// etc.
You shouldn't disable the Nagle algorithm:
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, ...);
Nagle exists to coalesce many small sends into fewer packets; disabling it is only justified for protocols that genuinely need minimum latency on small writes, and it does nothing for the connection-establishment problem you're debugging here.

Data is getting discarded in TCP/IP with boost::asio::read_some?

I have implemented a TCP server using boost::asio. This server uses the basic_stream_socket::read_some function to read data. I know that read_some does not guarantee that the supplied buffer will be full before it returns.
In my project I am sending strings separated by a delimiter (if that matters). On the client side I am using the WinSock send() function to send data. Now my problem is that on the server side I am not able to get all the strings that were sent from the client side. My suspicion is that read_some receives some data, discards the leftover for some reason, and then in the next call receives another string.
Is that really possible in TCP/IP?
I tried to use async_receive, but that eats up all my CPU; also, since the buffer has to be cleaned up by the callback function, it causes a serious memory leak in my program. (I am using io_service::poll() to call the handler. That handler is getting called at a very slow rate compared to the calling rate of async_read().)
I also tried the free function read, but that does not serve my purpose, as it blocks for too long with the buffer size I am supplying.
My previous implementation of the server was with the WinSock API, where I was able to receive all data using recv().
Please give me some leads so that I can receive complete data using boost::asio.
Here is my server-side thread loop:
void
TCPObject::receive()
{
    if (!_asyncModeEnabled)
    {
        std::string recvString;
        if ( !_tcpSocket->receiveData( _maxBufferSize, recvString ) )
        {
            LOG_ERROR("Error Occurred while receiving data on socket.");
        }
        else
            _parseAndPopulateQueue ( recvString );
    }
    else
    {
        if ( !_tcpSocket->receiveDataAsync( _maxBufferSize ) )
        {
            LOG_ERROR("Error Occurred while receiving data on socket.");
        }
    }
}
receiveData() in TCPSocket
bool
TCPSocket::receiveData( unsigned int bufferSize, std::string& dataString )
{
    boost::system::error_code error;
    char *buf = new char[bufferSize + 1];
    size_t len = _tcpSocket->read_some( boost::asio::buffer((void*)buf, bufferSize), error);
    if(error)
    {
        LOG_ERROR("Error in receiving data.");
        LOG_ERROR( error.message() );
        _tcpSocket->close();
        delete [] buf;
        return false;
    }
    buf[len] = '\0';
    dataString.insert( 0, buf );
    delete [] buf;
    return true;
}
receiveDataAsync in TCP Socket
bool
TCPSocket::receiveDataAsync( unsigned int bufferSize )
{
    char *buf = new char[bufferSize + 1];
    try
    {
        _tcpSocket->async_read_some( boost::asio::buffer( (void*)buf, bufferSize ),
                                     boost::bind(&TCPSocket::_handleAsyncReceive,
                                                 this,
                                                 buf,
                                                 boost::asio::placeholders::error,
                                                 boost::asio::placeholders::bytes_transferred) );
        //! Asks io_service to execute callback
        _ioService->poll();
    }
    catch (std::exception& e)
    {
        LOG_ERROR("Error Receiving Data Asynchronously");
        LOG_ERROR( e.what() );
        delete [] buf;
        return false;
    }
    // we don't delete buf here as it will be deleted by the callback _handleAsyncReceive
    return true;
}
Asynch Receive handler
void
TCPSocket::_handleAsyncReceive(char *buf, const boost::system::error_code& ec, size_t size)
{
    if(ec)
    {
        LOG_ERROR ("Error occurred while receiving data asynchronously.");
        LOG_ERROR ( ec.message() );
    }
    else if ( size > 0 )
    {
        buf[size] = '\0';
        emit _asyncDataReceivedSignal( QString::fromLocal8Bit( buf ) );
    }
    delete [] buf;
}
Client-side sendData function:
void sendData(std::string data)
{
    if(!_connected)
    {
        return;
    }
    const char *pBuffer = data.c_str();
    int bytes = data.length() + 1;
    int i = 0, j;
    while (i < bytes)
    {
        j = send(_connectSocket, pBuffer+i, bytes-i, 0);
        if(j == SOCKET_ERROR)
        {
            _connected = false;
            if(!_bNetworkErrNotified)
            {
                _bNetworkErrNotified = true;
                emit networkErrorSignal(j);
            }
            LOG_ERROR( "Unable to send Network Packet" );
            break;
        }
        i += j;
    }
}
Boost.Asio's TCP capabilities are pretty well used, so I would be hesitant to suspect it is the source of the problem. In most cases of data loss, the problem is the result of application code.
In this case, there is a problem in the receiver code. The sender is delimiting strings with \0. However, the receiver fails to properly handle the delimiter in cases where multiple strings are read in a single read operation, as string::insert() will truncate the char* when it reaches the first delimiter.
For example, the sender writes two strings "Test string\0" and "Another test string\0". In TCPSocket::receiveData(), the receiver reads "Test string\0Another test string\0" into buf. dataString is then populated with dataString.insert(0, buf). This particular overload will copy up to the delimiter, so dataString will contain "Test string". To resolve this, consider using the string::insert() overload that takes the number of characters to insert: dataString.insert(0, buf, len).
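As a concrete illustration of the two overloads, using the values from that example:
// buf contains "Test string\0Another test string\0", len == 32
dataString.insert(0, buf);        // stops at the first '\0' -> "Test string"
dataString.insert(0, buf, len);   // copies all len chars, keeping both strings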
I have not used the poll function before. What I did was create a worker thread dedicated to processing ASIO handlers with the run function, which blocks. The Boost documentation says that each thread that is to be made available to process async event handlers must first call the io_service::run or io_service::poll method. I'm not sure what else you are doing with the thread that calls poll.
So, I would suggest dedicating at least one worker thread to the async ASIO event handlers and using run instead of poll. If you want that worker thread to continue processing all async messages without returning and exiting, then add a work object to the io_service object. See this link for an example.
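A minimal sketch of that setup, assuming the io_service, work, and thread types named above:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>

// free function avoids ambiguity between the overloads of io_service::run
static void run_service(boost::asio::io_service* io)
{
    io->run();   // blocks, dispatching async handlers until stop()
}

boost::asio::io_service io;
boost::asio::io_service::work work(io);  // keeps run() from returning
                                         // when the handler queue is empty
boost::thread worker(boost::bind(&run_service, &io));

// ... start async_read_some() operations as before; their completion
// handlers now execute on the worker thread ...

// at shutdown:
io.stop();
worker.join();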