In TCP/IP sockets, how would the server know that a client is busy and not receiving data? - sockets

In TCP/IP sockets, how would the server know that a client is busy and not receiving data?
My solution: use connect(), but I am not sure.
Thanks

In TCP/IP sockets, how would the server know that a client is busy and not receiving data?
If a TCP is constantly pushing data that the peer doesn't acknowledge, eventually the send window will fill up. At that point the TCP will buffer the data to "send later". Eventually the send buffer will fill up as well and send(2) will block (something it doesn't usually do).
If send(2) starts blocking, it means the peer TCP isn't acknowledging data.
Obviously, even if the peer TCP accepts data it doesn't mean the peer application actually uses it. You could implement your own ACK mechanism on top of TCP, and it's not as unreasonable as it sounds. It would involve having the client send a "send me more" message once in a while.
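A minimal sketch of the client side of such a scheme, in Go, assuming a newline-delimited protocol; the "MORE" token and the function names are made up for illustration:

package example

import (
    "bufio"
    "net"
)

// readWithCredit asks the server for one message at a time, so the server
// never sends data the client isn't ready to process. handle is whatever
// the application does with each message.
func readWithCredit(conn net.Conn, handle func(string)) error {
    r := bufio.NewReader(conn)
    w := bufio.NewWriter(conn)
    for {
        // Tell the server we are ready for the next message.
        if _, err := w.WriteString("MORE\n"); err != nil {
            return err
        }
        if err := w.Flush(); err != nil {
            return err
        }
        line, err := r.ReadString('\n')
        if err != nil {
            return err
        }
        handle(line)
    }
}

The server-side counterpart simply blocks waiting for the "MORE" line before writing the next message.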

A client will almost always receive your data, by which I mean the OS will accept the packets and queue them up for reading. If that queue fills up, the sender will block (with TCP, anyway). You can't actually know what the client code is doing; pretty much your only option is to use timeouts.
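If you go the timeout route, one way to express it (shown here in Go; the 30-second value is an arbitrary choice) is a write deadline, so a client whose receive queue stays full eventually turns into an error on the server:

package example

import (
    "net"
    "time"
)

// writeWithTimeout fails if the peer's receive queue stays full for too long,
// which is usually the only hint you get that the client has stopped reading.
func writeWithTimeout(conn net.Conn, data []byte) error {
    if err := conn.SetWriteDeadline(time.Now().Add(30 * time.Second)); err != nil {
        return err
    }
    _, err := conn.Write(data)
    return err // a timeout here suggests the client is "busy" or gone
}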

Related

Network packet loss causes client code to act strange

I am facing an issue and need some help coming up with the best way to resolve it.
Here is the problem:
I have server code running which has a socket that is listening to accept new incoming connections.
I then attempt to start a client, which also has a socket that is listening to accept new incoming connections.
The client code begins with accepting a new connection on the listening socket file descriptor and gets a new socket file descriptor for I/O.
The server does the same thing and gets a new socket file descriptor for I/O.
Note: the client is not completely up yet. It needs to receive some bytes from the server and send some before it can start.
I then introduce some packet loss on the TCP/IP network connection. This causes certain errors (for example, the recv() system call in the client process sees no received bytes, so the client closes the socket connection on its side and the associated socket file descriptor). However, this leaves the client process hanging, since there are other descriptors in the FD_SET but none of them will ever be ready for I/O, so pselect() keeps returning 0 file descriptors ready for I/O. The client needs to send and receive certain bytes over the connection before it can start up.
My question is: what should I do here?
I researched the SO_KEEPALIVE option, set when creating the new socket connection during the accept() system call, but I do not think it would resolve my problem here, especially if the network packet loss is ongoing.
Should I kill the client process once I realize there are no file descriptors ready for I/O and never will be? Is there a better way to approach this?
If I'm reading the question correctly, the core of the question is: "what should your client program do when a TCP connection that is central to its functionality has been broken?"
The answer to that question is really a matter of preference -- what would you like your client program to do in that case? Or to put it another way, what behavior would your users find most useful?
In many of my own client programs, I have logic included such that if the TCP connection to the server is ever broken, the client will automatically try to create a new TCP connection to the server and thereby recover its connectivity and useful functionality as soon as possible.
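A sketch of that automatic-reconnect approach in Go; handleSession is a hypothetical placeholder for the protocol logic, and the 5-second retry delay is arbitrary:

package example

import (
    "log"
    "net"
    "time"
)

// runClient re-establishes the TCP connection whenever it is lost, so the
// client recovers its connectivity as soon as possible.
func runClient(addr string, handleSession func(net.Conn) error) {
    for {
        conn, err := net.Dial("tcp", addr)
        if err != nil {
            log.Printf("connect to %s failed: %v; retrying", addr, err)
            time.Sleep(5 * time.Second) // simple fixed backoff
            continue
        }
        // handleSession should return once the connection is broken.
        if err := handleSession(conn); err != nil {
            log.Printf("session ended: %v; reconnecting", err)
        }
        conn.Close()
    }
}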
The other obvious option would be to just have the client quit when the connection is broken; perhaps with some sort of error indication so that the user will know why the client went away. (perhaps an error dialog that asks if the user would like to try to reconnect?)
SO_KEEPALIVE is probably not going to help you much in this scenario, by the way -- despite its name, its purpose is to help a program discover in a more timely manner that TCP connectivity has been lost, not to try harder to keep a TCP connection from being lost. (And it doesn't even serve that purpose particularly well, since in many TCP stacks only one keepalive packet is sent per hour or so, which means that even with SO_KEEPALIVE enabled it can be a very long time before your program starts receiving errors reflecting the loss of network connectivity.)

How to find out that the client is reading from the TCP buffer in Go

I've been using golang for quite some time on a project. In my project I have to implement a TCP server which responds to TCP clients. The server has to send a number of messages to a client.
The problem is that when the server writes a message to a client connection, it has to wait until the client has read that message from the buffer before sending the next one (the server has to wait until the client calls the reader.ReadString('\n') method).
In my server code I wrote:
for {
    data := <-client.outgoing
    client.writer.WriteString(data + "\n")
    client.writer.Flush()
}
but the server sends all the messages to the client without waiting for the ReadString call in the client.
How can I make the server wait until the client has read a message before it sends the next one?
I think that either the assignment is ambiguous or you're misinterpreting it and solving the XY problem.
The short answer is that you can never know whether the client has read a message just by looking at the TCP conversation. You have to implement this "protocol" in your application.
Here are a few problems:
From your application you don't really have access to what TCP is doing. You get a stream on which you can perform I/O.
The fact that a write to your stream "succeeds" only means that TCP has agreed to try to transport your stuff and has an independent copy. It doesn't say anything about whether the data has been received, and it doesn't even mean the data has been sent yet.
You may find mechanisms to peer into TCP's inner workings (such as the SIOCINQ and SIOCOUTQ ioctls or various setsockopt options): these won't help.
Even if you find out what your local TCP is doing, that at best tells you what the remote TCP is doing. So even if you had full control over your TCP and could see the acknowledgments from the peer, you still wouldn't know what the remote application is doing. It's very possible the application hasn't read the data yet (it might not have requested the data, the remote TCP might be holding it in a buffer, the scheduler might not have scheduled the remote process, etc.).
Going back to your question, a way to really know whether the remote application has received your message is to have the remote application tell you. This means you have to restructure your protocol to:
1. Send stuff from the server.
2. Wait for a message from the application telling you it received your stuff.
3. Send more stuff (because you know from step 2 that it's safe to do so).
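A minimal sketch of that exchange on the server side, in Go, assuming a newline-delimited protocol in which the client answers every message with an acknowledgement line (the acknowledgement itself is part of this sketch's assumed protocol, not something TCP provides):

package example

import (
    "bufio"
    "net"
)

// serveClient sends one message at a time and blocks until the client's
// application-level acknowledgement arrives before sending the next one.
func serveClient(conn net.Conn, outgoing <-chan string) error {
    r := bufio.NewReader(conn)
    w := bufio.NewWriter(conn)
    for msg := range outgoing {
        if _, err := w.WriteString(msg + "\n"); err != nil {
            return err
        }
        if err := w.Flush(); err != nil {
            return err
        }
        // Wait for the client to confirm it has read the message,
        // e.g. by sending a line such as "OK\n".
        if _, err := r.ReadString('\n'); err != nil {
            return err
        }
    }
    return nil
}

On the client side, the matching loop reads a message with ReadString('\n'), processes it, and then writes the acknowledgement line back.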

Will a TCP RST cause a host to drop the receive buffer?

Upon receiving a TCP RST packet, will the host drop all the remaining data in the receive buffer that has already been ACKed to the remote host but not yet read by the application process using the socket?
I'm wondering if it's dangerous to close a socket as soon as I'm no longer interested in what the other host has to say (e.g. to conserve resources); specifically, whether that could cause the other party to lose data I've already sent but it has not yet read.
Should RSTs generally be avoided, since they indicate a complete, bidirectional failure of communication, or are they a relatively safe way to force a connection teardown from one side, as in the example above?
I've found some nice explanations of the topic; they indicate that data loss is quite possible in that case:
http://blog.olivierlanglois.net/index.php/2010/02/06/tcp_rst_flag_subtleties
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable also gives some more information on the topic, and offers a solution that I've used in my code. So far, I've not seen any RSTs sent by my server application.
An application-level close(2) on a socket normally produces not an RST but a FIN packet sent to the other side, which results in the normal four-way connection teardown. RSTs are generated by the network stack in response to packets targeting a nonexistent TCP connection.
On the other hand, if you close the socket while the other side still has data to write, its subsequent send(2) calls will eventually fail with EPIPE.
With all of the above in mind, you are much better off designing your own protocol on top of TCP that includes an explicit "logout" or "disconnect" message.
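A sketch of such an explicit disconnect in Go, using a half-close so the peer still receives everything already sent; the "BYE" message is an invented token for illustration:

package example

import (
    "io"
    "net"
)

// disconnect announces that we are done, sends a FIN with CloseWrite, then
// drains whatever the peer still has to say before closing, so nothing the
// peer sent is discarded unread.
func disconnect(conn *net.TCPConn) error {
    if _, err := conn.Write([]byte("BYE\n")); err != nil {
        return err
    }
    if err := conn.CloseWrite(); err != nil { // half-close: FIN, keep reading
        return err
    }
    // Read and discard anything still in flight from the peer.
    if _, err := io.Copy(io.Discard, conn); err != nil {
        return err
    }
    return conn.Close()
}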

UNIX domain socket: is there such a thing as a "busy" signal?

Can a client pushing data through a UNIX domain socket (AF_UNIX type) be signaled "busy" if the receiving end cannot cope with the load?
Or must there be a client-server protocol on top of the socket to handle flow control?
You can definitely do a blocking send to a UNIX Domain socket. If the receiving side's receive buffer is full, or if the number of outstanding (undelivered) send socket buffers is too high, the sender will block.
SOCK_STREAM UNIX Domain Sockets work like TCP sockets. SOCK_DGRAM UNIX Domain Sockets work like UDP, except that UNIX Domain datagrams have guaranteed, in-order delivery, whereas UDP datagrams can be reordered or dropped. (Also, UNIX Domain Sockets can be used to send file descriptors and pass user credentials between processes, neither of which can be done with TCP, UDP, or pipes.)
So, because in-order delivery is guaranteed by all types of UNIX Domain Sockets, the receiver can just stop receiving when it is busy doing other things, and the sender will be automatically blocked when there's no more buffer space available (or will be notified that there's no more buffer space, if they requested non-blocking operation on their socket). Then, when the receiver starts receiving again, the sender will be allowed to send more.
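A sketch in Go of a sender that treats a persistently full socket buffer as "receiver busy"; the one-second limit is an arbitrary choice:

package example

import (
    "errors"
    "net"
    "os"
    "time"
)

// sendOrBusy writes to a SOCK_STREAM UNIX domain socket. If the receiver has
// stopped reading and the buffers fill up, the write blocks; the deadline
// turns that into a timeout we can interpret as "the peer is busy".
func sendOrBusy(conn *net.UnixConn, data []byte) (busy bool, err error) {
    if err := conn.SetWriteDeadline(time.Now().Add(time.Second)); err != nil {
        return false, err
    }
    _, err = conn.Write(data)
    if errors.Is(err, os.ErrDeadlineExceeded) {
        return true, nil // buffers stayed full for a whole second
    }
    return false, err
}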
Unless you include this in the protocol, there is no way for the server to tell the client to pause sending data, other than the server having some knowledge of when it is "busy" and sending a specific message back (e.g. HTTP's 503 Service Unavailable). You can also set up the client side of the connection to time out after a certain length of time and, if you get a timeout event, interpret that as the server being busy.

UDP Response

UDP does not send any ACK back, but will it send any response?
I have set up a client-server UDP program. If I have the client send data to a non-existent server, will the client receive any response?
My assumption is as follows:
Client --> Broadcast server address (ARP)
Server --> Reply to client with its MAC address (ARP)
Client --> Send data to server (UDP)
In any case the client will only receive an ARP response. Whether or not the server exists, it will not get any UDP response?
The client is using the sendto function to send data. We can get error information after the sendto call.
So my question is how this info is available when the client doesn't get any response.
The error code can be obtained from WSAGetLastError.
I tried to send data to a non-existent host and the sendto call succeeded. As per the documentation it should fail with the return value SOCKET_ERROR.
Any thoughts?
You can never receive an error or notification for a UDP packet that did not reach its destination.
The sendto call didn't fail. The datagram was sent to the destination.
The recipient of the datagram, or some router on the way to it, might return an error response (host unreachable, port unreachable, TTL exceeded), but the sendto call will be history by the time your system receives it. Some operating systems do provide a way to find out that this occurred, often with a getsockopt call. But since you can't rely on getting an error reply anyway -- it depends on network conditions you have no control over -- it's generally best to ignore it.
Sensible protocols layered on top of UDP use replies. If you don't get a reply, then either the other end didn't get your datagram or the reply didn't make it back to you.
"UDP is a simpler message-based connectionless protocol. In connectionless protocols, there is no effort made to set up a dedicated end-to-end connection. Communication is achieved by transmitting information in one direction, from source to destination without checking to see if the destination is still there, or if it is prepared to receive the information."
The machine to which you're sending packets may reply with an ICMP UDP port unreachable message.
The UDP protocol is implemented on top of IP. You send UDP packets to hosts identified by IP addresses, not MAC addresses.
And as pointed out, UDP itself will not send a reply; you will have to add code to do that yourself. Then you will have to add code to expect the reply, and take the proper action if the response is lost (typically resend on a timer, until you decide the other end is "dead"), and so on.
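A sketch of that reply-plus-retry pattern in Go; the buffer size, timeout, and retry count are arbitrary choices:

package example

import (
    "net"
    "time"
)

// request sends a datagram and waits for a reply, resending a few times on a
// timer. If every attempt times out, we decide the other end is "dead".
func request(addr string, payload []byte) ([]byte, error) {
    conn, err := net.Dial("udp", addr) // connected UDP socket
    if err != nil {
        return nil, err
    }
    defer conn.Close()

    buf := make([]byte, 1500)
    var lastErr error
    for attempt := 0; attempt < 3; attempt++ {
        if _, err := conn.Write(payload); err != nil {
            return nil, err
        }
        conn.SetReadDeadline(time.Now().Add(2 * time.Second))
        n, err := conn.Read(buf)
        if err == nil {
            return buf[:n], nil // got a reply
        }
        // On a connected UDP socket an ICMP port-unreachable may also surface
        // here as an error; either way, resend and try again.
        lastErr = err
    }
    return nil, lastErr
}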
If you need reliable UDP, with the ordering or verification that TCP/IP would give you, take a look at RUDP (Reliable UDP). Sometimes you do need verification, but a mixture of UDP and TCP can be held up by the TCP reliability, causing a bottleneck.
For most large-scale MMOs, for instance, UDP and Reliable UDP are the means of communication and reliability. All RUDP does is add a small portion of TCP/IP's behavior to validate and order certain messages, but not all of them.
A common game-development networking library is RakNet, which has this built in.
RUDP
http://www.javvin.com/protocolRUDP.html
An example of RUDP using Raknet and Python
http://pyraknet.slowchop.com/