CocoaAsyncSocket and reading data from a socket - iPhone

On my TCP-socket-based server, I send packets over the stream, where each packet consists of a header specifying the number of bytes in the packet, followed by that number of bytes. For those familiar with Erlang, I'm simply setting the {packet, 4} option. On the iOS side, I have code that looks like this, assuming I want to figure out the size of the stream for this message:
[asyncSocket readDataToLength:4 withTimeout:-1 tag:HEADER_TAG];
That works fine and the following delegate method callback is invoked:
onSocket:didReadData:withTag:
I figure the next logical step is to figure out the size of the stream, and I do that with:
UInt32 readLength;
[data getBytes:&readLength length:4];
readLength = ntohl(readLength);
After hard coding a string of 12 bytes on the server-side, readLength does indeed read 12 on the client also, so all is good so far. I proceed with the following:
[sock readDataToLength:readLength withTimeout:1 tag:MESSAGE_TAG];
At this point, though, the callback onSocket:didReadData:withTag: is no longer invoked. Instead, the read times out (probably because I'm not handling it properly) and this delegate method gets invoked:
- (NSTimeInterval)onSocket:(AsyncSocket *)sock shouldTimeoutReadWithTag:(long)tag elapsed:(NSTimeInterval)elapsed bytesDone:(NSUInteger)length
So in total, the server is sending 16 bytes: a 4-byte header and a 12-byte binary payload.
I'm confident that the error is in how I'm using CocoaAsyncSocket. What's the right way to go about reading the rest of the stream after I figure out its size?
** UPDATE **
I changed my client and it seems to be working now. The problem is, I don't understand the point of readDataToLength with the new solution. Here's what I changed my initial read to:
[socket readDataWithTimeout:-1 tag:HEADER_TAG];
Now in my callback, I just do the following:
- (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    if (tag == HEADER_TAG) {
        UInt32 readLength;
        [data getBytes:&readLength length:4];
        readLength = ntohl(readLength);
        int offset = 4;
        NSRange range = NSMakeRange(offset, readLength);
        char buffer[readLength + 1];   // +1 so the buffer can be NUL-terminated for %s
        [data getBytes:buffer range:range];
        buffer[readLength] = '\0';
        NSLog(@"buffer %s", buffer);
        //[sock readDataToLength:readLength withTimeout:1 tag:MESSAGE_TAG];
    } else if (tag == MESSAGE_TAG) {
        //[sock readDataToLength:4 withTimeout:1 tag:HEADER_TAG];
    }
}
So everything is coming in as one atomic payload. Perhaps this is because of the way Erlang's {packet, 4} works; I hope it is. Otherwise, what's the point of readDataToLength:? There's no way to know the length of a message in advance on the client, so what is a good use case for that method?

It depends on how you send from the Erlang side, I suppose. The option {packet, 4} will send each data packet with a 4-byte length prefixed to it. Each send operation in Erlang will result in one packet being sent with its length prefixed (the maximum size for a 4-byte length prefix is 2 GB). The relevant part of the Erlang documentation is the section on setting socket options with inet:setopts/2.
I'm guessing the data is the total accumulated data read from the socket so far. If that data contains your whole packet, that's fine. If not, you may want to keep reading from the socket with readDataToLength: for the remaining bytes.
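For what it's worth, the usual pattern with a length-prefixed protocol is to alternate the two reads from inside the delegate callback, so every readDataToLength: call knows exactly how many bytes it is waiting for. A rough, untested sketch along those lines (HEADER_TAG and MESSAGE_TAG as in the question; handleMessage: is a hypothetical handler):
- (void)onSocket:(AsyncSocket *)sock didReadData:(NSData *)data withTag:(long)tag {
    if (tag == HEADER_TAG) {
        // The 4-byte header tells us how long the body is.
        UInt32 bodyLength;
        [data getBytes:&bodyLength length:4];
        bodyLength = ntohl(bodyLength);
        [sock readDataToLength:bodyLength withTimeout:-1 tag:MESSAGE_TAG];
    } else if (tag == MESSAGE_TAG) {
        // `data` is exactly one complete message; hand it off, then
        // queue the read for the next header.
        [self handleMessage:data];   // hypothetical handler
        [sock readDataToLength:4 withTimeout:-1 tag:HEADER_TAG];
    }
}
With that ping-pong in place, readDataToLength: is exactly the tool for the job: the header read always wants 4 bytes, and the body read uses the length it just decoded.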

Related

Using GSocketClient, how do I read incoming data without knowing how many incoming bytes there will be?

I am still struggling to be able to read incoming response messages from a piece of hardware my program is communicating with.
I am using a GSocketClient and am able to connect and successfully send messages using g_output_stream_write(). I then want to read the response sent back from the device, but I have no way of knowing how many bytes the reply will be in order to use g_input_stream_read(). I have also tried g_input_stream_read_all(), but that seems to block the application and never return. I don't know how g_input_stream_read_all() determines that it has reached the end of a stream, but I assume the problem lies somewhere around there.
I know that there is incoming data because I can use g_input_stream_read() with a made-up byte size like 5, and I then see the first 5 incoming bytes, but the response size will always be different.
So my question is: is there a way to determine how much data is waiting to be read, so that I can pass that to g_input_stream_read() as the size to read? And if not, what is the correct usage of g_input_stream_read_all() so that it does not block the way I am seeing it do?
Does something like the following work?
#define BUF_SIZE 1024
guint8 buffer[BUF_SIZE];
GByteArray *array = g_byte_array_new();
gsize bytes_read;
GError *error = NULL;
while (g_input_stream_read_all(istream, buffer, BUF_SIZE, &bytes_read, NULL, &error))
{
    g_byte_array_append(array, buffer, bytes_read);
    if (bytes_read < BUF_SIZE)
    {
        /* We've reached the end of the stream */
        break;
    }
}
if (error)
{
    /* error handling code */
}
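One caveat with the snippet above: if the device holds the connection open, g_input_stream_read_all() keeps blocking until it has read the full BUF_SIZE or the stream ends, which matches the hang described in the question. A possible alternative, sketched below under the assumption that each reply ends with a known terminator byte such as '\n', is to loop over g_input_stream_read(), which returns as soon as some data has arrived:
#define CHUNK 256

guint8 chunk[CHUNK];
GByteArray *reply = g_byte_array_new();
GError *error = NULL;
gboolean done = FALSE;

while (!done) {
    /* Blocks only until *some* data is available (or EOF/error), so a reply
     * shorter than CHUNK does not hang the loop the way read_all() can. */
    gssize n = g_input_stream_read(istream, chunk, CHUNK, NULL, &error);
    if (n <= 0)
        break;                      /* 0 = stream closed, -1 = error */
    g_byte_array_append(reply, chunk, (guint)n);
    if (chunk[n - 1] == '\n')       /* assumed terminator -- adjust for your device */
        done = TRUE;
}

if (error != NULL) {
    g_warning("read failed: %s", error->message);
    g_error_free(error);
}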

RedPark Cable readBytesAvailable read twice every time

I have not been able to find this information anywhere: how long can a string sent over the TTL version of the Redpark cable be?
The following delegate method is called twice when I print something over serial from my Arduino; an example of such a string is: 144;480,42;532,40;20e
- (void) readBytesAvailable:(UInt32)length{
When I use the newer methods of retrieving the available data ([rscMgr getStringFromBytesAvailable]) I only get 144;480,42;532,40; and then the whole method is called again and the string now contains the rest: 20e
The following method works for appending the two strings, but only if the rate of data transmission is slow (once per second; I would prefer at least 10 times a second).
- (void)readBytesAvailable:(UInt32)length {
    if (string && [string rangeOfString:@"e"].location == NSNotFound) {
        string = [string stringByAppendingString:[rscMgr getStringFromBytesAvailable]];
        NSLog(@"%@", string);
        finishedReading = YES;
    }
    else {
        string = [rscMgr getStringFromBytesAvailable];
    }
    if (finishedReading == YES)
    {
        //do stuff
    }
    finishedReading = NO;
    string = nil;
}
But can you tell me why the method is called twice if I write a "long" string, and how to avoid this issue?
Since your program fragment runs faster than the time it takes to send a string, you need to capture the bytes and append them to a string.
If the serial data is terminated with a carriage return you can test for it to know when you have received the entire string.
Then you can allow your Arduino to send 10 times a second.
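As a rough illustration of that idea, assuming the rscMgr object from the question, a carriage-return terminator on each Arduino message, and a hypothetical rxBuffer property used as the accumulator:
// Accumulate whatever arrives and split on the terminator, so partial reads
// don't matter. rxBuffer is a hypothetical NSMutableString property.
- (void)readBytesAvailable:(UInt32)length {
    if (self.rxBuffer == nil) {
        self.rxBuffer = [NSMutableString string];
    }
    [self.rxBuffer appendString:[rscMgr getStringFromBytesAvailable]];

    // Process every complete message currently in the buffer.
    NSRange end;
    while ((end = [self.rxBuffer rangeOfString:@"\r"]).location != NSNotFound) {
        NSString *message = [self.rxBuffer substringToIndex:end.location];
        [self.rxBuffer deleteCharactersInRange:NSMakeRange(0, NSMaxRange(end))];
        // do stuff with `message` -- it is one complete line from the Arduino
    }
}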
That is just how serial ports work. You can't, and don't need to, avoid those issues. There is no attempt at any level of the software or hardware to keep your serial data stream intact, so making any assumptions about that in your code is just wrong. Serial data is just a stream of bytes, with no concept of packetization, so you have to deal with the fact that you might read partial data and have to read the rest later.
The serialPortConfig struct within the redparkSerial header file provided by RedPark does, in fact, give you more configuration control than you may realize. The readBytesAvailable: method is abstracted and is only called when one of two conditions is met: the rxForwardingTimeout value is exceeded while data is in the primary buffer (default 100 ms), or rxForwardCount is reached (default 16 characters).
So, in your case, it looks like you've still got data in your buffer after your initial read, which means that readBytesAvailable: will be called again (from the main run loop) to retrieve the remaining data. I would suggest experimenting with rxForwardingTimeout and rxForwardCount until it performs as you'd expect.
As already mentioned, though, I'd recommend adding a flag (doesn't have to be carriage return) to at least the end of your packet, for identification.
Also, some good advice here: How do you design a serial command protocol for an embedded system?
Good luck!

How to get NSOutputStream to send or flush packets immediately

I am having an issue with latency when connecting to a Bluetooth accessory using the External Accessory framework. This is the code I use to send data, with some custom logging to the console:
if ([stream hasSpaceAvailable])
{
    NSLog(@"Space avail");
}
else
{
    NSLog(@"No space");
}

while ([stream hasSpaceAvailable] && ([_outputBuffer length] > 0))
{
    /* write as many bytes as possible */
    NSInteger written = [stream write:[_outputBuffer bytes] maxLength:[_outputBuffer length]];
    NSLog(@"wrote %ld out of %lu bytes to the stream", (long)written, (unsigned long)[_outputBuffer length]);
    if (written == -1)
    {
        /* error, bad */
        Log(@"Error writing bytes");
        break;
    }
    else if (written > 0)
    {
        /* remove the bytes from the buffer that were written */
        Log(@"erasing %ld bytes", (long)written);
        [_outputBuffer replaceBytesInRange:NSMakeRange(0, written) withBytes:NULL length:0];
    }
}
This results in the following output, where "immediate pack buffer" is the payload:
immediate pack buffer-> 040040008
Space avail
wrote 10 out of 10 bytes to the stream
immediate pack buffer-> 040010005
No space
immediate pack buffer-> 030040007
No space
wrote 20 out of 20 bytes to the stream
immediate pack buffer-> 030010004
No space
immediate pack buffer-> 040000004
Space avail
wrote 20 out of 20 bytes to the stream
immediate pack buffer-> 030000003
Space avail
wrote 10 out of 10 bytes to the stream
immediate pack buffer-> 040040008
Space avail
wrote 10 out of 10 bytes to the stream
Notice how "No space" keeps being logged, which means hasSpaceAvailable is returning NO and the data is being buffered until it returns YES.
1) What I need to know is: why is this happening? Is it waiting for an ACK from the BT hardware? If so, how do you remove this blocking?
2) How do you do this so it sends immediately and we basically stream the data in real time without buffering?
3) Is there a hidden API method that will disable this blocking?
This is a real problem because there cannot be any delay/latency in sending the data to the device, it must be sent immediately in order for the hardware to be in sync with the iPhone commands. Please help.
What you're asking for is impossible with most hardware (which will finish sending the current packet before starting the next one), and impossible with the usual "stream" paradigm (which requires that data is received in order, so is bandwidth-limited).
It is also physically impossible to have zero latency unless the source and destination are coincident.
The actual problem seems to be that the underlying stream only queues one packet at a time, even if the packet is only 10 bytes long. I don't know why; possibly because it's intended as a very simple protocol.
The usual way of dealing with such a queue is to register for the appropriate delegate callbacks and send as much data as you can when the stream has space available, instead of waiting for the next time you attempt to send data (which appears to be what you're doing).
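A bare-bones sketch of that event-driven approach, reusing _outputBuffer from the question (queueData: and drainBuffer: are illustrative names, not External Accessory API):
// Data is queued here; the actual writes happen whenever the stream reports
// NSStreamEventHasSpaceAvailable, rather than only at send time.
- (void)queueData:(NSData *)data forStream:(NSOutputStream *)stream {
    [_outputBuffer appendData:data];
    [self drainBuffer:stream];   // try immediately in case space is already available
}

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    if (eventCode == NSStreamEventHasSpaceAvailable) {
        [self drainBuffer:(NSOutputStream *)aStream];
    }
}

- (void)drainBuffer:(NSOutputStream *)stream {
    while ([stream hasSpaceAvailable] && [_outputBuffer length] > 0) {
        NSInteger written = [stream write:[_outputBuffer bytes]
                                maxLength:[_outputBuffer length]];
        if (written <= 0) break;   // -1 = error, 0 = no capacity right now
        [_outputBuffer replaceBytesInRange:NSMakeRange(0, written)
                                 withBytes:NULL
                                    length:0];
    }
}
The point is that writes are driven by NSStreamEventHasSpaceAvailable rather than attempted at send time, so nothing sits in your own buffer any longer than the accessory link allows.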
The problem is that the handleEvent delegate method is called asynchronously, so it is not necessarily hit for every write.
What you can do is collect the commands in an array up front, open the session, and call your writeData function. Once writeData has been called, you don't need the handleEvent method to fire for every command.
Keep a count in the writeData function of how many array items have been sent; until count == arrayItems, the delegate is not hit.
That way all the commands in the list are sent one by one.
I am facing the same issue, but in a different scenario.
Scenario: the iPhone app is able to communicate with the PED when it connects for the first time. But when the PED battery dies, or it is switched off and then on again, the app is no longer able to communicate with the PED in spite of an active session and a valid output stream. The output stream says it does not have space to write anything.
Solution: when the PED gets switched off, the app is notified, and at that moment I make the app kill the EASession and create it again when the PED reconnects. I'm not sure whether this is the best solution; please suggest another if there is one.
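If it helps, here is a rough sketch of that kill-and-recreate idea, assuming you have registered for EAAccessoryDidConnectNotification and EAAccessoryDidDisconnectNotification (via EAAccessoryManager's registerForLocalNotifications); the accessory/session properties and protocol string are placeholders:
// Tear the session down when the accessory drops off.
- (void)accessoryDidDisconnect:(NSNotification *)note {
    [[self.session inputStream] close];
    [[self.session outputStream] close];
    self.session = nil;
}

// Rebuild the session when the accessory comes back.
- (void)accessoryDidConnect:(NSNotification *)note {
    self.accessory = note.userInfo[EAAccessoryKey];
    self.session = [[EASession alloc] initWithAccessory:self.accessory
                                            forProtocol:@"com.example.protocol"];   // placeholder protocol
    [[self.session outputStream] scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                           forMode:NSDefaultRunLoopMode];
    [[self.session outputStream] open];
}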

Streaming JPEGs, detect end of JPEG

I have created a Java server which takes screenshots, resizes them, and sends them over TCP/IP to my iPhone application. The application then uses NSInputStream to collect the incoming image data, creates an NSMutableData instance with the byte buffer, and then creates a UIImage object to display on the iPhone. Screen sharing, essentially. My iPhone code to collect the image data is currently as follows:
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent {
    if (streamEvent == NSStreamEventHasBytesAvailable && [iStream hasBytesAvailable]) {
        uint8_t buffer[1024];
        while ([iStream hasBytesAvailable]) {
            NSLog(@"New Data");
            int len = [iStream read:buffer maxLength:sizeof(buffer)];
            [imgdata appendBytes:buffer length:len];
            fullen = fullen + len;
            /* Here is where the problem lies. What should be in this
               if statement in order to make it test the last byte of
               the incoming buffer, to tell if it is the End of Image marker
               for the end of the incoming JPEG file? */
            if (buffer[len] == 'FFD9') {
                UIImage *img = [[UIImage alloc] initWithData:imgdata];
                NSLog(@"NUMBER OF BYTES: %d", len);
                image.image = img;
            }
        }
    }
}
My problem, as indicated by the in-code comment, is figuring out when to stop collecting data in the NSMutableData object and use the data to create a UIImage. It seems to make sense to look for the JPEG End of Image (EOI) marker, FFD9, in the incoming bytes, as the image will be ready for display once this has been received. How can I test for this? I'm either missing something about how the data is stored, or about the marker within the JPEG file, but any help in testing for this would be greatly appreciated!
James
You obviously don't want to close the stream because that would kill performance.
Since you control the client server connection, send down the # of bytes in the image before sending the image data. Better yet, send down # of bytes in the image, the image data, and an easily identified serial # at the end so you can quickly verify that the data has actually arrived.
Much easier and more efficient than actually checking for the end of file marker. Though, of course, you could also just check for that after the # of bytes have been received, too. Easy enough.
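A sketch of that length-prefix scheme on the receiving side, assuming the server writes a 4-byte big-endian byte count before each JPEG; expectedLength is a hypothetical ivar, imgdata is the NSMutableData from the question, and ntohl() comes from <arpa/inet.h>:
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent {
    if (streamEvent != NSStreamEventHasBytesAvailable) return;

    uint8_t buffer[1024];
    while ([iStream hasBytesAvailable]) {
        NSInteger len = [iStream read:buffer maxLength:sizeof(buffer)];
        if (len <= 0) return;
        [imgdata appendBytes:buffer length:len];

        // Once at least 4 bytes have arrived, decode the payload length.
        if (expectedLength == 0 && [imgdata length] >= 4) {
            uint32_t netLength;
            [imgdata getBytes:&netLength length:4];
            expectedLength = ntohl(netLength);
        }
        // A complete frame is the 4-byte header plus the payload.
        if (expectedLength > 0 && [imgdata length] >= 4 + expectedLength) {
            NSData *jpeg = [imgdata subdataWithRange:NSMakeRange(4, expectedLength)];
            image.image = [[UIImage alloc] initWithData:jpeg];
            [imgdata replaceBytesInRange:NSMakeRange(0, 4 + expectedLength)
                               withBytes:NULL
                                  length:0];
            expectedLength = 0;   // ready for the next frame's header
        }
    }
}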
Of course, all of this is going to be grossly inefficient for screensharing style purposes in all but the unusual cases. In most cases, only a small part of the screen to be mirrored actually changes with each frame. If you try to send the whole screen with every frame, you'll quickly saturate your connection and the client side will be horribly laggy and unresponsive.
Given that this is an extremely mature market, there are tons of solutions and quite a few open source bits from which you can derive a solution to fit your needs (see VNC, for example).

unix sockets: how to send really big data with one "send" call?

I'm using a unix socket for data transfer (SOCK_STREAM mode).
I need to send a string of more than 100k chars. First, I send the length of the string; it's sizeof(int) bytes.
length = strlen(s)
send(sd, length, sizeof(int))
Then I send the whole string
bytesSend = send(sd, s, length)
but to my surprise, "bytesSend" is less than "length".
Note that this works fine when I send smaller strings.
Maybe there are some limitations on the "send" system call that I've been missing ...
The send system call is supposed to be fast, because the program may have other useful things to do. Certainly you do not want to wait for the data to be sent out and for the other computer to send a reply; that would lead to terrible throughput.
So, all send really does is queues some data for sending and returns control to the program. The kernel could copy the entire message into kernel memory, but this would consume a lot of kernel memory (not good).
Instead, the kernel only queues as much of the message as is reasonable. It is the program's responsibility to re-attempt sending of the remaining data.
In your case, use a loop to send the data that did not get sent the first time.
while (length > 0) {
    bytesSent = send(sd, s, length, 0);
    if (bytesSent == 0)
        break;          /* socket probably closed */
    else if (bytesSent < 0)
        break;          /* handle errors appropriately */
    s += bytesSent;
    length -= bytesSent;
}
At the receiving end you will likely need to do the same thing.
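For the receive side, a small sketch of that same loop-until-complete idea, assuming the sender transmits the length first and then exactly that many bytes:
/* Read exactly `length` bytes, looping over recv() until everything arrives. */
#include <sys/types.h>
#include <sys/socket.h>

ssize_t recv_all(int sd, char *buf, size_t length)
{
    size_t received = 0;
    while (received < length) {
        ssize_t n = recv(sd, buf + received, length - received, 0);
        if (n == 0)
            break;      /* peer closed the connection */
        if (n < 0)
            return -1;  /* real code should also handle EINTR and friends */
        received += (size_t)n;
    }
    return (ssize_t)received;
}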
Your initial send() call is wrong. You need to pass send() the address of the data, i.e.:
bytesSend = send(sd, &length, sizeof(int), 0)
Also, this runs into some classic risks: endianness, the size of int varying across platforms, et cetera.
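To sidestep those risks, one common approach (sketched below, not taken from the question's code) is to send the length as a fixed-width integer in network byte order:
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

/* Send the length prefix as a fixed 4-byte, big-endian value so both ends
 * agree on its size and byte order regardless of platform. */
int send_length_prefix(int sd, size_t length)
{
    uint32_t netlen = htonl((uint32_t)length);
    if (send(sd, &netlen, sizeof(netlen), 0) != (ssize_t)sizeof(netlen))
        return -1;
    return 0;
}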