How to get NSOutputStream to send or flush packets immediately - iphone

I am having an issue with latency when connecting to a Bluetooth accessory using the External Accessory Framework. When sending data, I run the following code, which produces custom output in the console:
if( [stream hasSpaceAvailable] )
{
    NSLog( @"Space avail" );
}
else
{
    NSLog( @"No space" );
}

while( [stream hasSpaceAvailable] && ( [_outputBuffer length] > 0 ) )
{
    /* write as many bytes as possible */
    NSInteger written = [stream write:[_outputBuffer bytes] maxLength:[_outputBuffer length]];
    NSLog( @"wrote %ld out of %lu bytes to the stream", (long)written, (unsigned long)[_outputBuffer length] );
    if( written == -1 )
    {
        /* error, bad */
        NSLog( @"Error writing bytes" );
        break;
    }
    else if( written > 0 )
    {
        /* remove the bytes from the buffer that were written */
        NSLog( @"erasing %ld bytes", (long)written );
        [_outputBuffer replaceBytesInRange:NSMakeRange( 0, written ) withBytes:NULL length:0];
    }
}
This results in the following output, where "immediate pack buffer" is the payload:
immediate pack buffer-> 040040008
Space avail
wrote 10 out of 10 bytes to the stream
immediate pack buffer-> 040010005
No space
immediate pack buffer-> 030040007
No space
wrote 20 out of 20 bytes to the stream
immediate pack buffer-> 030010004
No space
immediate pack buffer-> 040000004
Space avail
wrote 20 out of 20 bytes to the stream
immediate pack buffer-> 030000003
Space avail
wrote 10 out of 10 bytes to the stream
immediate pack buffer-> 040040008
Space avail
wrote 10 out of 10 bytes to the stream
Notice how it repeatedly logs "No space", which means that hasSpaceAvailable is returning NO and forcing the data to be buffered until it returns YES.
1) What I need to know is: why is this happening? Is it waiting for an ACK from the BT hardware? If so, how do you remove this blocking?
2) How do you make it send immediately, so that we basically stream the data in real time without buffering?
3) Is there a hidden API method that will disable this blocking?
This is a real problem because there cannot be any delay/latency in sending the data to the device; it must be sent immediately for the hardware to stay in sync with the iPhone commands. Please help.

What you're asking for is impossible with most hardware (which will finish sending the current packet before starting the next one), and impossible with the usual "stream" paradigm (which requires that data is received in order, so is bandwidth-limited).
It is also physically impossible to have zero latency unless the source and destination are coincident.
The actual problem seems to be that the underlying stream only queues one packet at a time, even if the packet is only 10 bytes long. I don't know why; possibly because it's intended as a very simple protocol.
The usual way of dealing with such a queue is to register for the appropriate delegate callbacks and send as much data as you can when the stream has space available, instead of waiting for the next time you attempt to send data (which appears to be what you're doing).

The problem is that the HandleEvent delegate function is an asynchronous call, so it is not hit every time.
What you can do is collect all the commands in an array at once, open the session, and call the writeData function. What happens here is that once writeData is called, you don't need the HandleEvent function to be hit for every command.
Have a count incremented in the writeData function for the number of array items; until count == arrayItems, the delegate is not hit.
So all the commands in the list are sent one by one.

I am facing the same issue, but in a different scenario.
Scenario: The iPhone app is able to communicate with the PED when it connects for the first time. But when the PED battery dies, or it is switched off and then on again, the app is no longer able to communicate with the PED in spite of an active session and a valid output stream. The output stream says it does not have space to write anything.
Solution: When the PED gets switched on, the app is notified, and at that moment I make the app kill the EASession and create it again when the PED connects. I am not sure whether this is the best solution. Please suggest another solution if there is one.


Using GSocketClient, how do I read incoming data without knowing how many incoming bytes there will be?

I am still struggling to be able to read incoming response messages from a piece of hardware my program is communicating with.
I am using a GSocketClient and am able to connect and successfully send messages using g_output_stream_write(). I then want to read the response sent back from the device, but I have no way of knowing how many bytes the reply will be in order to use g_input_stream_read(). I have also tried using g_input_stream_read_all(), but this seems to block the application and never return. I don't know how g_input_stream_read_all() determines that it has reached the end of a stream, but I assume the problem is somewhere there?
I know that there is incoming data because I can use g_input_stream_read() with a made-up byte size like 5 and I then see the first 5 incoming bytes, but the response size will always be different.
So my question is: is there a way to determine how much data is waiting to be read, so that I can plug that into g_input_stream_read() as the size to read? And if not, what is the correct usage of g_input_stream_read_all() so that it does not block the way I am seeing?
Does something like the following work?
#define BUF_SIZE 1024

guint8 buffer[BUF_SIZE];
GByteArray *array = g_byte_array_new();
gsize bytes_read;
GError *error = NULL;

while (g_input_stream_read_all(istream, buffer, BUF_SIZE, &bytes_read, NULL, &error))
{
    g_byte_array_append(array, buffer, bytes_read);
    if (bytes_read < BUF_SIZE)
    {
        /* We've reached the end of the stream */
        break;
    }
}

if (error != NULL)
{
    /* error handling code */
}
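One caveat with the snippet above: g_input_stream_read_all() blocks until the requested count is read or the stream hits end-of-file, so against a device that keeps the connection open it may never return. The more robust fix is framing at the protocol level: either a length prefix or a terminator byte. As an illustration of the terminator approach, here is a plain POSIX sketch (not GIO; the helper name read_until and the newline terminator are assumptions for the example):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Read from fd until 'terminator' is seen, EOF occurs, or the buffer fills.
 * Returns the number of payload bytes (terminator excluded), or -1 on error. */
static ssize_t read_until(int fd, char terminator, char *out, size_t cap)
{
    size_t used = 0;
    while (used < cap) {
        char c;
        ssize_t n = read(fd, &c, 1);   /* one byte at a time: simple, not fast */
        if (n < 0)
            return -1;                 /* read error */
        if (n == 0 || c == terminator)
            break;                     /* EOF or end of message */
        out[used++] = c;
    }
    return (ssize_t)used;
}
```

With GIO you would do the same thing with repeated g_input_stream_read() calls, appending to a GByteArray until the terminator appears.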

High CPU and Memory Consumption on using boost::asio async_read_some

I have made a server that reads data from clients, using boost::asio async_read_some. I have one handler function, and _ioService->poll() runs the event-processing loop to execute ready handlers. In the handler _handleAsyncReceive I deallocate the buf that was allocated in receiveDataAsync. bufferSize is 500.
Code is as follows:
bool
TCPSocket::receiveDataAsync( unsigned int bufferSize )
{
    char *buf = new char[bufferSize + 1];
    try
    {
        _tcpSocket->async_read_some( boost::asio::buffer( (void*)buf, bufferSize ),
                                     boost::bind( &TCPSocket::_handleAsyncReceive,
                                                  this,
                                                  buf,
                                                  boost::asio::placeholders::error,
                                                  boost::asio::placeholders::bytes_transferred ) );
        _ioService->poll();
    }
    catch( std::exception& e )
    {
        LOG_ERROR( "Error Receiving Data Asynchronously" );
        LOG_ERROR( e.what() );
        delete [] buf;
        return false;
    }
    // we don't delete buf here as it will be deleted by the callback _handleAsyncReceive
    return true;
}

void
TCPSocket::_handleAsyncReceive( char *buf, const boost::system::error_code& ec, size_t size )
{
    if( ec )
    {
        LOG_ERROR( "Error occurred while receiving data asynchronously." );
        LOG_ERROR( ec.message() );
    }
    else if( size > 0 )
    {
        buf[size] = '\0';
        LOG_DEBUG( "Deleting Buffer" );
        emit _asyncDataReceivedSignal( QString::fromLocal8Bit( buf ) );
    }
    delete [] buf;
}
Here the problem is that the buffer is allocated at a much faster rate than it is deallocated, so memory usage grows at an exponential rate; at some point it consumes all the memory and the system gets stuck. CPU usage is also around 90%. How can I reduce the memory and CPU consumption?
You have a memory leak. io_service::poll() does not guarantee that it will dispatch your _handleAsyncReceive. It can dispatch another event (e.g. an accept), so the memory at char *buf is lost. My guess is that you are calling receiveDataAsync from a loop, but that's not necessary - the leak will exist in any case (just at a different speed).
It's better if you follow the asio examples and work with the suggested patterns rather than making your own.
You might consider using a wrap around buffer, which is also called a circular buffer. Boost has a template circular buffer version available. You can read about it here. The idea behind it is that when it becomes full, it circles around to the beginning where it will store things. You can do the same thing with other structures or arrays as well. For example, I currently use a byte array for this purpose in my application.
The advantage of using a dedicated large circular buffer to hold your messages is that you don't have to worry about creating and deleting memory for each new message that comes in. This avoids fragmentation of memory, which could become a problem.
To determine an appropriate size for the circular buffer, think about the maximum number of messages that can come in and be in some stage of processing simultaneously; multiply that number by the average size of the messages, and then multiply by a fudge factor of perhaps 1.5. The average message size for my application is under 100 bytes. My buffer size is 1 megabyte, which allows at least 10,000 messages to accumulate before it affects the wrap-around buffer. But if more than 10,000 messages did accumulate without being completely processed, then the circular buffer would be unusable and the program would have to be restarted. I have been thinking about reducing the size of the buffer because the system would probably be dead long before it hit the 10,000 message mark.
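The wrap-around behavior described above can be sketched as a minimal byte ring buffer in C (the names, the tiny RING_CAP, and the one-byte-at-a-time API are illustrative, not taken from Boost's circular_buffer):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A minimal byte ring buffer; capacity kept tiny to make wrap-around visible. */
#define RING_CAP 8

typedef struct {
    unsigned char data[RING_CAP];
    size_t head;   /* next slot to read  */
    size_t tail;   /* next slot to write */
    size_t count;  /* bytes currently stored */
} ring_t;

/* Append one byte; returns 0 on success, -1 if the ring is full. */
static int ring_put(ring_t *r, unsigned char b)
{
    if (r->count == RING_CAP)
        return -1;                      /* full: caller must drain first */
    r->data[r->tail] = b;
    r->tail = (r->tail + 1) % RING_CAP; /* circle around to the beginning */
    r->count++;
    return 0;
}

/* Remove one byte; returns 0 on success, -1 if the ring is empty. */
static int ring_get(ring_t *r, unsigned char *out)
{
    if (r->count == 0)
        return -1;
    *out = r->data[r->head];
    r->head = (r->head + 1) % RING_CAP;
    r->count--;
    return 0;
}
```

boost::circular_buffer packages the same idea with iterators and bulk operations; the sketch just shows the wrap-around index arithmetic.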
As PSIAlt suggests, consider following the Boost.Asio examples and building upon their patterns for asynchronous programming.
Nevertheless, I would suggest considering whether multiple read calls need to be queued onto the same socket. If the application only allows for a single read operation to be pending on the socket, then resources are reduced:
There is no longer the scenario where there are an excessive amount of handlers pending in the io_service.
A single buffer can be preallocated and reused for each read operation. For example, the following asynchronous call chain only requires a single buffer, and allows for the concurrent execution of starting an asynchronous read operation while the previous data is being emitted on the Qt signal, as QString performs deep-copies.
TCPSocket::start()
{
receiveDataAsync(...) --.
} |
.---------------'
| .-----------------------------------.
v v |
TCPSocket::receiveDataAsync(...) |
{ |
_tcpSocket->async_read_some(_buffer); --. |
} | |
.-------------------------------' |
v |
TCPSocket::_handleAsyncReceive(...) |
{ |
QString data = QString::fromLocal8Bit(_buffer); |
receiveDataAsync(...); --------------------------'
emit _asyncDataReceivedSignal(data);
}
...
tcp_socket.start();
io_service.run();
It is important to identify when and where the io_service's event loop will be serviced. Generally, applications are designed so that the io_service does not run out of work, and the processing thread is simply waiting for events to occur. Thus, it is fairly common to start setting up asynchronous chains, then process the io_service event loop at a much higher scope.
On the other hand, if it is determined that TCPSocket::receiveDataAsync() should process the event loop in a blocking manner, then consider using synchronous operations.

RedPark Cable readBytesAvailable read twice every time

I have not been able to find this information anywhere. How long can a string sent with the TTL version of the Redpark cable be?
The following delegate method is called twice when I print something through serial from my Arduino; an example string is: 144;480,42;532,40;20e
- (void) readBytesAvailable:(UInt32)length{
When I use the new convenience method for retrieving available data, getStringFromBytesAvailable, I only get 144;480,42;532,40; and then the whole method is called again, with the string now containing the rest: 20e
The following method works for appending the two strings, but only if the rate of data transmission is 'slow' (once a second; I would prefer at least 10 times a second).
- (void) readBytesAvailable:(UInt32)length
{
    if( string && [string rangeOfString:@"e"].location == NSNotFound )
    {
        string = [string stringByAppendingString:[rscMgr getStringFromBytesAvailable]];
        NSLog( @"%@", string );
        finishedReading = YES;
    }
    else
    {
        string = [rscMgr getStringFromBytesAvailable];
    }
    if( finishedReading == YES )
    {
        // do stuff
    }
    finishedReading = NO;
    string = nil;
}
But can you tell me why the method is called twice when I write a "long" string, and how to avoid this issue?
Since your program fragment runs faster than the time it takes to send a string, you need to capture the bytes and append them to a string.
If the serial data is terminated with a carriage return you can test for it to know when you have received the entire string.
Then you can allow your Arduino to send 10 times a second.
That is just how serial ports work. You can't and don't need to avoid those issues. There is no attempt at any level of the SW/HW to keep your serial data stream intact, so making any assumptions about that in your code is just wrong. Serial data is just a stream of bytes, with no concept of packetization. So you have to deal with the fact that you might have to read partial data and read the rest later.
The serialPortConfig within the redparkSerial header file provided by RedPark does, in fact, give you more configuration control than you may realize. The readBytesAvailable:length method is abstracted, and is only called when one of two conditions is met: rxForwardingTimeout value is exceeded with data in the primary buffer (default set to 100 ms) or rxForwardCount is reached (default set to 16 characters).
So, in your case it looks like you've still got data in your buffer after your initial read, which means that the readBytesAvailable:length method will be called again (from the main run loop) to retrieve the remaining data. I would propose playing around with the rxForwardingTimeout and rxForwardCount until it performs as you'd expect.
As already mentioned, though, I'd recommend adding a flag (doesn't have to be carriage return) to at least the end of your packet, for identification.
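The accumulate-and-split approach described above can be sketched in plain C. The example uses the trailing 'e' from the question's strings as the frame terminator; the helper name feed, the static buffer, and the sizes are made up for illustration:

```c
#include <assert.h>
#include <string.h>

/* Accumulate serial chunks and extract one complete frame per call.
 * A frame is everything up to the 'e' terminator used in the question's
 * protocol. */
#define ACC_CAP 256

static char acc[ACC_CAP];
static size_t acc_len = 0;

/* Feed a chunk of received bytes; if a full frame is now available, copy it
 * (without the terminator) into out and return its length, else return 0. */
static size_t feed(const char *chunk, size_t n, char *out)
{
    memcpy(acc + acc_len, chunk, n);
    acc_len += n;
    for (size_t i = 0; i < acc_len; i++) {
        if (acc[i] == 'e') {
            size_t len = i;
            memcpy(out, acc, len);
            /* shift any bytes after the terminator to the front */
            memmove(acc, acc + i + 1, acc_len - i - 1);
            acc_len -= i + 1;
            return len;
        }
    }
    return 0;   /* frame still incomplete: wait for the next callback */
}
```

Fed the two fragments from the question, the first call returns nothing and the second returns the reassembled frame, regardless of how the serial layer split it.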
Also, some good advice here: How do you design a serial command protocol for an embedded system?
Good luck!

Weird Winsock recv() slowdown

I'm writing a little VOIP app like Skype, which works quite good right now, but I've run into a very strange problem.
In one thread, I call the Winsock recv() function twice per iteration of a while(true) loop to get data from a socket.
The first call gets 2 bytes, which are cast to a short, while the second call gets the rest of the message, which looks like:
Complete Message: [2 Byte Header | Message, length determined by the 2 Byte Header]
These packets arrive at roughly 49/sec, which comes to roughly 3000 bytes/sec.
The content of these packets is audio data that gets converted into wave.
With ioctlsocket() I determine whether there is data on the socket after each "message" I receive (2 bytes + data). If there is something on the socket right after I have received a message within the thread's while(true) loop, that message is received but thrown away, to work against stacking up latency.
This concept works very well, but here's the problem:
While my VOIP program is running and I download a file in parallel (e.g. via browser), too much data always accumulates on the socket, because the recv() loop actually seems to slow down while downloading. This happens in every download/upload situation besides the actual VOIP up/download.
I don't know where this behaviour comes from, but when I cancel every up/download besides my application's VOIP traffic, the app works perfectly again.
If the program runs perfectly, the ioctlsocket() function writes 0 into the bytesLeft variable, which is defined in the class the receive function belongs to.
Does somebody know where this comes from? I'll attach my receive function down below:
std::string D_SOCKETS::receive_message()
{
    recv(ClientSocket, (char*)&val, sizeof(val), MSG_WAITALL);
    receivedBytes = recv(ClientSocket, buffer, val, MSG_WAITALL);
    if (receivedBytes != val)
    {
        printf("SHORT: %d PAKET: %d ERROR: %d", val, receivedBytes, WSAGetLastError());
        exit(128);
    }
    ioctlsocket(ClientSocket, FIONREAD, &bytesLeft);
    cout << "Bytes left on the Socket:" << bytesLeft << endl;
    if (bytesLeft > 20)
    {
        // message gets received, but ignored/thrown away
        return std::string();
    }
    else
        return std::string(buffer, receivedBytes);
}
There is no need to use ioctlsocket() to discard data. That would indicate a bug in your protocol design. Assuming you are using TCP (you did not say), there should not be any left over data if your 2byte header is always accurate. After reading the 2byte header and then reading the specified number of bytes, the next bytes you receive after that constitute your next message and should not be discarded simply because it exists.
The fact that ioctlsocket() reports more bytes available means that you are receiving messages faster than you are reading them from the socket. Make your reading code run faster, don't throw away good data due to your slowness.
Your reading model is not efficient. Instead of reading 2 bytes, then X bytes, then 2 bytes, and so on, you should instead use a larger buffer to read more raw data from the socket at one time (use ioctlsocket() to know how many bytes are available, and then read at least that many bytes at one time and append them to the end of your buffer), and then parse as many complete messages are in the buffer before then reading more raw data from the socket again. The more data you can read at a time, the faster you can receive data.
To help speed up the code even more, don't process the messages inside the loop directly, either. Do the processing in another thread instead. Have the reading loop put complete messages in a queue and go back to reading, and then have a processing thread pull from the queue whenever messages are available for processing.
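The parse-complete-messages step suggested above can be sketched in C for the question's [2-byte header | payload] framing (assuming a little-endian length; the function name and signature are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Count how many complete [2-byte little-endian length][payload] messages
 * are present in buf, and return the number of bytes they occupy, so the
 * caller can process them and shift the partial remainder to the front. */
static size_t parse_messages(const uint8_t *buf, size_t len, size_t *n_msgs)
{
    size_t pos = 0;
    *n_msgs = 0;
    while (len - pos >= 2) {
        uint16_t msg_len = (uint16_t)(buf[pos] | (buf[pos + 1] << 8));
        if (len - pos - 2 < msg_len)
            break;                 /* header seen, payload incomplete: wait */
        /* a real implementation would hand buf + pos + 2 .. msg_len
         * to the processing thread here */
        (*n_msgs)++;
        pos += 2 + (size_t)msg_len;
    }
    return pos;                    /* bytes consumed; remainder is partial */
}
```

The reading loop appends raw bytes to the buffer, calls this, and then moves the unconsumed tail to the front of the buffer before the next recv().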

unix sockets: how to send really big data with one "send" call?

I'm using a unix socket for data transfer (SOCK_STREAM mode).
I need to send a string of more than 100k chars. First, I send the length of the string - it's sizeof(int) bytes.
length = strlen(s)
send(sd, length, sizeof(int))
Then I send the whole string
bytesSend = send(sd, s, length)
but to my surprise, "bytesSend" is less than "length".
Note that this works fine when I send smaller strings.
Maybe there are some limitations on the "send" system call that I've been missing ...
The send system call is supposed to be fast, because the program may have other useful things to do. Certainly you do not want to wait for the data to be sent out and the other computer to send a reply - that would lead to terrible throughput.
So, all send really does is queues some data for sending and returns control to the program. The kernel could copy the entire message into kernel memory, but this would consume a lot of kernel memory (not good).
Instead, the kernel only queues as much of the message as is reasonable. It is the program's responsibility to re-attempt sending of the remaining data.
In your case, use a loop to send the data that did not get sent the first time.
while (length > 0)
{
    bytesSent = send(sd, s, length);
    if (bytesSent == 0)
        break; // socket probably closed
    else if (bytesSent < 0)
        break; // handle errors appropriately
    s += bytesSent;
    length -= bytesSent;
}
At the receiving end you will likely need to do the same thing.
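The receive-side loop mirrors the send loop: keep calling until the expected number of bytes has arrived, since a stream socket may deliver the data in pieces. A minimal POSIX sketch (shown with read() on a file descriptor; the same loop shape applies to recv() on the socket, and read_exactly is an assumed helper name):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Keep reading until exactly len bytes have arrived, EOF occurs, or an
 * error happens. Returns the number of bytes actually read. */
static ssize_t read_exactly(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            break;        /* EOF (0) or error (-1): stop */
        got += n;
    }
    return (ssize_t)got;
}
```

You would call it once for the sizeof(int) length prefix and once more for the string body.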
Your initial send() call is wrong. You need to pass send() the address of the data, i.e.:
bytesSend = send(sd, &length, sizeof(int))
Also, this runs into some classic risks with endianness, the size of int on various platforms, et cetera.