iPhone Socket fails after a large number of data transfers

I've got an interesting issue with my socket test application.
I've set up a listening socket with an AcceptCallback function. I've connected to the listening socket using:
CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                   (CFStringRef) self.clientService.hostName,
                                   self.clientService.port,
                                   &myReadStream,
                                   &myWriteStream);
and I've sent data back to the listening socket using myReadStream and myWriteStream, both of which I've cast to their NSStream equivalents.
The problem occurs after sending many separate packets of data. The size of the packets does not matter; it's the number of packets (or the number of CFStreamCreatePairWithSocketToHost creations) that seems to introduce the error.
After I send a large number of packets (somewhere around 100 or 200), when I try to send data over the NSOutputStream I get an error in the NSStreamEvent callback:
Operation could not be completed. (NSUnknownErrorDomain error 8.)
Then, if I try to create a new service and publish it on the network, I get an error when I try to resolve the new address: error code 10 in the netService:didNotResolve: delegate method (and the error description is blank there).
It's almost as if the listening socket is "full", yet it seems to think it's functioning fine, because CFSocketIsValid returns true when I check it.
I'm stumped and have spent several hours trying to debug the situation. Any thoughts, anybody? Thanks.

Alright, I figured out the issue.
When connecting to a socket and initializing a read and write stream, as with the following:
CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, (CFStringRef) self.clientService.hostName, self.clientService.port, &myReadStream, &myWriteStream);
you need to make sure you set the following property so that the underlying BSD socket closes when you close the CFStream or NSStream (in my case I cast the CFStream to an NSStream type):
CFReadStreamSetProperty(myReadStream, kCFStreamPropertyShouldCloseNativeSocket, kCFBooleanTrue);
CFWriteStreamSetProperty(myWriteStream, kCFStreamPropertyShouldCloseNativeSocket, kCFBooleanTrue);
If you don't set this property, the underlying BSD socket never actually closes, and you eventually hit some maximum number of open sockets (presumably the per-process file descriptor limit) - I'm not sure exactly where the cap is.
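For reference, here is a minimal sketch of the whole lifecycle with the property set - written in Swift rather than the question's Objective-C, with a placeholder host and port:

import Foundation

// Minimal sketch: create the stream pair, opt in to closing the native
// socket, use the streams, then close them. Host and port are placeholders.
var readStream: Unmanaged<CFReadStream>?
var writeStream: Unmanaged<CFWriteStream>?

CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                   "example.local" as CFString,
                                   5000,
                                   &readStream,
                                   &writeStream)

let input = readStream!.takeRetainedValue()
let output = writeStream!.takeRetainedValue()

// Without these two calls the native socket outlives the streams and the
// process slowly runs out of descriptors.
CFReadStreamSetProperty(input,
                        CFStreamPropertyKey(rawValue: kCFStreamPropertyShouldCloseNativeSocket),
                        kCFBooleanTrue)
CFWriteStreamSetProperty(output,
                         CFStreamPropertyKey(rawValue: kCFStreamPropertyShouldCloseNativeSocket),
                         kCFBooleanTrue)

CFReadStreamOpen(input)
CFWriteStreamOpen(output)

// ... send and receive as usual ...

CFReadStreamClose(input)
CFWriteStreamClose(output)   // the native socket now closes with the streams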

Related

Bidirectional communication over Unix sockets

I'm trying to create a server that sets up a Unix socket and listens for clients that send/receive data. I've made a small repository to reproduce the problem.
The server runs and can receive data from the clients that connect, but I can't get the client to read the server's response without triggering an error on the server.
I have commented out the offending code on the client and server; uncomment both to reproduce the problem.
When the code to respond to the client is uncommented, I get this error on the server:
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/main.rs:77:42
MRE Link
Your code calls set_read_timeout to set the timeout on the socket. Its documentation states that on Unix it results in a WouldBlock error in case of timeout, which is precisely what happens to you.
As to why the read on the server times out: the likely reason is that the server calls stream.read_to_string(&mut response), which reads the stream until end-of-file. On the other hand, your client calls write_all() followed by flush(), and (after uncommenting the offending code) attempts to read the response. But the attempt to read the response means that the client's stream is not closed, so the server will wait for EOF, and you have a deadlock on your hands. Note that none of this is specific to Rust; you would have the exact same issue in C++ or Python.
To fix the issue, you need to use a protocol in your communication. A very simple protocol could consist of first sending the message size (in a fixed format, perhaps 4 bytes in length) and only then the actual message. The code that reads from the stream would do the same: first read the message size and then the message itself. Even better than inventing your own protocol would be to use an existing one, e.g. to exchange messages using serde.
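As an illustration of that simple protocol, here is a sketch of length-prefixed framing - written in Swift to match the other examples on this page (the 4-byte big-endian prefix is my assumption); the same structure translates directly to Rust:

import Foundation

// Frame a message by prefixing its length as a 4-byte big-endian integer.
func frame(_ message: Data) -> Data {
    var length = UInt32(message.count).bigEndian
    var framed = Data(bytes: &length, count: 4)
    framed.append(message)
    return framed
}

// Read exactly `count` bytes, or return nil on EOF/error.
func readExactly(_ stream: InputStream, count: Int) -> Data? {
    var data = Data()
    var chunk = [UInt8](repeating: 0, count: count)
    while data.count < count {
        let n = stream.read(&chunk, maxLength: count - data.count)
        if n <= 0 { return nil }   // EOF or error mid-message
        data.append(chunk, count: n)
    }
    return data
}

// Read one framed message: the length prefix first, then the payload.
// Neither side has to close its stream to delimit a message, which
// removes the deadlock described above.
func readMessage(_ stream: InputStream) -> Data? {
    guard let header = readExactly(stream, count: 4) else { return nil }
    let length = header.reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }   // big-endian
    return readExactly(stream, count: Int(length))
}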

Monitor TCP/IP stream

I am interested in learning Vapor, so I decided to work on a website that displays government-issued weather alerts. Alert distribution is done via a TCP/IP data stream (streaming1.naad-adna.pelmorex.com, port 8080).
What I have in mind is to use IBM's BlueSocket (https://github.com/IBM-Swift/BlueSocket) to create a socket, though past that point I gave it some thought but was unable to come to a conclusion on what the next steps would be.
Alerts are streamed over the data stream, so I am aware the socket would need to be opened and listened on, but I wasn't able to get much past that.
A few notes on the data stream: the start and end of an alert are detected using the start and end tags of the XML document (<alert> and </alert>). There are no special or proprietary headers added to the data; it's only raw XML. I know some alerts also include an XML declaration, so I assume the encoding should be taken into account when the declaration is available.
I was then thinking of using XMLParser to parse the XML and use the data I am interested in from the alert.
So really, the main thing I am struggling with is, when the socket is open, what would be the method to listen to it, determine the start and end of the alert and then pass that XML alert for processing.
I would appreciate any input. I am also not restricted to BlueSocket, so if there is a better option for what I am trying to achieve, I would be more than open to it.
So really, the main thing I am struggling with is, when the socket is open, what would be the method to listen to it, determine the start and end of the alert and then pass that XML alert for processing.
The method that you should use is read(into data: inout Data). It stores any available data that the server has sent into data. There are a few reasons for this method to fail, such as the connection disconnecting.
Here's an example of how to use it:
import Foundation
import Socket
let s = try Socket.create()
try s.connect(to: "streaming1.naad-adna.pelmorex.com", port: 8080)
while true {
    if try Socket.wait(for: [s], timeout: 0, waitForever: true) != nil {
        var alert = Data()
        try s.read(into: &alert)
        if let message = String(data: alert, encoding: .ascii) {
            print(message)
        }
    }
}
s.close()
First, create the socket. The default is what we want: an IPv4 TCP stream.
Second, connect() to the server using the hostname and port. Without this step, the socket isn't connected and cannot receive or send any data.
wait() until the server has sent us some data. It returns the subset of the waited-on sockets that have data available to read.
read() the data, decode it and print it. By default this call will block if there is no data available on the socket.
close() the socket. This is good practice.
You might also like to consider thinking about:
non blocking sockets
error handling
streaming (a single call to read() might not give a complete alert) - see the sketch after this list.
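On that last point, here is a minimal sketch of one way to frame alerts: accumulate whatever read() returns and split on the closing tag (the tag string and the UTF-8 assumption are mine):

import Foundation

// Accumulates raw bytes and emits one complete alert per </alert> tag.
// Assumes an ASCII-compatible encoding such as UTF-8 on the wire.
final class AlertFramer {
    private var buffer = Data()
    private let closingTag = Data("</alert>".utf8)

    // Feed freshly read bytes; returns any complete alerts found so far.
    func append(_ chunk: Data) -> [Data] {
        buffer.append(chunk)
        var alerts: [Data] = []
        while let range = buffer.range(of: closingTag) {
            alerts.append(buffer.subdata(in: buffer.startIndex..<range.upperBound))
            buffer.removeSubrange(buffer.startIndex..<range.upperBound)
        }
        return alerts
    }
}

Each complete alert can then be handed to XMLParser; any bytes after the last closing tag simply stay in the buffer until the next read() fills in the rest.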
I hope this answers your question.

lwip - what's the reason a TCP socket blocks in send()?

I am making an application based on lwip; the application just sends data to the server.
After my app works for some time (about 5 hours), I found that the send thread hangs in the send() function, and after about 30 minutes send() returns 0 and my thread runs again.
On the server side I have keepalive set to 5 minutes, so when my app hangs, the server closes the socket 5 minutes later. But my app doesn't notice this and still hangs in send() until it gets the 0 return 30 minutes later. Why does this happen?
1: If the upload speed is not enough to send the data, will it hang in send()?
2: Maybe the server has not read the data in time, which makes the send buffer full, so send() hangs?
How can I avoid these problems in my code? I have tried setting TCP_NODELAY and SO_SNDTIMEO, and calling select() before send(), but the problem remains.
send() blocks when the receiver is too far behind the sender - that is, when the peer isn't draining data fast enough and the local send buffer fills up. recv() returns zero when the peer has closed the connection, which means you must close the socket and stop reading.
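As one mitigation, a send timeout makes a full send buffer surface as an error instead of an indefinite block. Here is a sketch in Swift against the BSD sockets API; note that lwip honors SO_SNDTIMEO only when LWIP_SO_SNDTIMEO is enabled in lwipopts.h, which may be why setting it appeared to have no effect:

import Darwin   // BSD sockets; lwip's lwip_setsockopt takes the same arguments

// Give send() a deadline (10 seconds here, a placeholder): once the send
// buffer has been full for that long, send() fails with EWOULDBLOCK
// instead of blocking forever, and the caller can close and reconnect.
func setSendTimeout(fd: Int32, seconds: Int) -> Bool {
    var tv = timeval(tv_sec: seconds, tv_usec: 0)
    return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO,
                      &tv, socklen_t(MemoryLayout<timeval>.size)) == 0
}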

How can I defend against attackers who send junk data packets?

I wrote a TCP socket program and defined a text protocol with a format like "length|content".
To make it simple, the "length" is always 1 byte long and it defines the number of bytes of "content".
My problem is:
when an attacker sends a packet like "1|a51", the extra bytes stay in TCP's receive buffer,
the program parses them wrongly, so the next packet appears to start like "5|1XXXX",
and the rest of the packets remaining in the buffer are then all parsed wrongly.
How can I solve this problem?
If you get garbage, just close the connection. It's not your problem to figure out what they meant, if anything.
Instead of length|content only, you should also include a checksum; if the checksum is not correct, drop the connection so you don't keep parsing from a mis-framed buffer.
This is a typical problem with TCP, since TCP is stream-based. But just like HTTP, which is an application protocol on top of TCP, you can give your protocol a request/response structure so that each end of the connection knows when the data has been fully transferred.
Your scenario is a little tricky, though: an attacker can only affect his own connection. He cannot change the data on other connections, unless he controls a router or switch between your application and its users.
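A minimal sketch of that idea in Swift (the 1-byte length field is from the question; the 1-byte XOR checksum and the length|checksum|content layout are my assumptions - a real protocol would use CRC-32 or a MAC):

import Foundation

enum FrameError: Error { case badChecksum }

// Frame layout: [1-byte length][1-byte XOR checksum][content]
func frame(_ content: Data) -> Data? {
    guard content.count <= Int(UInt8.max) else { return nil }
    return Data([UInt8(content.count), content.reduce(0, ^)]) + content
}

// Parse one frame from the front of `buffer`. Returns the content, or nil
// if the frame is still incomplete; throws if the checksum fails, in which
// case the caller should drop the connection rather than try to resync.
func parseFrame(_ buffer: inout Data) throws -> Data? {
    guard buffer.count >= 2 else { return nil }
    let length = Int(buffer[buffer.startIndex])
    let checksum = buffer[buffer.index(buffer.startIndex, offsetBy: 1)]
    guard buffer.count >= 2 + length else { return nil }   // wait for more bytes
    let start = buffer.index(buffer.startIndex, offsetBy: 2)
    let end = buffer.index(start, offsetBy: length)
    let content = buffer.subdata(in: start..<end)
    guard content.reduce(0, ^) == checksum else { throw FrameError.badChecksum }
    buffer.removeSubrange(buffer.startIndex..<end)
    return content
}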

GCDAsyncSocket write timeout does not work

I am trying to set a timeout on write operations when using GCDAsyncSocket. The code is pretty simple:
[iAsyncSocket writeData:bytesToSend withTimeout:3.0 tag:0];
Then I disable the Internet connection on my Mac and wait for write timeout to occur, but nothing happens. I don't get a disconnection with a GCDAsyncSocketWriteTimeoutError error as I should.
I have also validated that my server stops, as expected, receiving the messages after I turn off the Internet connection.
I have looked inside the source code and found that the writeTimer, which is responsible for firing the write timeout event, is always cancelled (the function endCurrentWrite is called). Tracing back to where the timer is cancelled, I ended up at the following line of code.
ssize_t result = write(socketFD, buffer, (size_t)bytesToWrite);
The write system call always returns the total number of bytes that I am sending, as if the socket manages to send the data although there is no Internet connection. Is this logical?
Has anyone run into the same problem or seen similar behaviour? Or has anyone managed to set a write timeout with GCDAsyncSocket?
Thanks a lot.