Recently, a local port shortage occurred on my server.
So I turned on the socket linger option so that an RST packet would be sent.
Afterwards, I found that turning on the tcp_tw_reuse option was a better way.
Anyway, if the socket linger option is turned on, an RST packet is sent to the other end, but if the receiving end receives a lot of them, what kind of effect does that have?
Is it safe to send a lot of RST packets?
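For reference, here is a minimal sketch in C (with an assumed already-connected TCP socket fd and a made-up helper name) of how the linger option above is typically enabled so that close() aborts the connection with an RST instead of the normal FIN handshake:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Sketch: make a later close(fd) abort the connection with an RST
     * instead of the normal FIN teardown, by enabling SO_LINGER with a
     * zero timeout. fd is assumed to be a connected TCP socket. */
    static int enable_abortive_close(int fd)
    {
        struct linger lin;
        lin.l_onoff  = 1;   /* lingering on                 */
        lin.l_linger = 0;   /* 0 seconds => abortive close  */
        if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) < 0) {
            perror("setsockopt(SO_LINGER)");
            return -1;
        }
        return 0;
    }

The tcp_tw_reuse alternative mentioned above is a Linux kernel setting (net.ipv4.tcp_tw_reuse) rather than a per-socket option, so it is configured with sysctl instead of setsockopt().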
I have a UDP implementation with a facility to get acknowledgements back from the server. The client re-sends packets for which an acknowledgement is not received from the server within a specified time. Clients send around 10 packets while waiting for the acknowledgement of the first packet, then repeat sending the packets for which no acknowledgement was received. This works fine in a normal scenario with only minor network delay.
The real issue shows up on a low-bandwidth connection where the round-trip delay is significant. Clients keep adding packets to the send queue based on acknowledgement timeouts, which results in many duplicate packets being added to the queue.
I have tried to find an elegant solution to avoid duplicate packets in the send queue, with no luck. Any help will be appreciated.
If I could mark/set a property on a packet such that it is removed from the queue if not sent within NN ms, I could build an algorithm around it.
UDP has no built-in duplicate detection, as TCP has. This means any such detection has to be done by the application itself. Since the only way an application can interact with the send queue is to send datagrams, any duplicate detection on the sender side has to happen before the packet is put into the send queue.
How you figure out at this stage whether this is really a duplicate of a previous packet which should not be sent, or a duplicate which should be sent because the original got lost, is fully up to the application. And any "...not sent within NN ms..." behavior has to be implemented in the application too, with timers or similar. You might additionally try to get more control of the queue by reducing the size of the send queue with SO_SNDBUF.
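As an illustration of doing that duplicate detection before handing a datagram to the kernel, here is a hypothetical sketch in C; the sequence-number scheme, the table size and the 500 ms retransmit deadline are all assumptions made up for this example:

    #include <stdio.h>
    #include <time.h>
    #include <sys/socket.h>

    /* Hypothetical sketch: remember when each sequence number was last
     * handed to sendto() and refuse to queue another copy before its
     * retransmit deadline has expired. */
    #define MAX_SEQ       1024
    #define RETRANSMIT_MS 500

    static long long last_sent_ms[MAX_SEQ];      /* 0 = never sent */

    static long long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    /* Returns 1 if the datagram was queued, 0 if it was suppressed
     * because an identical copy is still considered "in flight". */
    static int send_once(int fd, unsigned seq, const void *buf, size_t len,
                         const struct sockaddr *to, socklen_t tolen)
    {
        long long now  = now_ms();
        unsigned  slot = seq % MAX_SEQ;

        if (last_sent_ms[slot] != 0 && now - last_sent_ms[slot] < RETRANSMIT_MS)
            return 0;                            /* duplicate, don't queue */
        if (sendto(fd, buf, len, 0, to, tolen) < 0)
            perror("sendto");
        last_sent_ms[slot] = now;
        return 1;
    }

Reducing the kernel's send queue with setsockopt(fd, SOL_SOCKET, SO_SNDBUF, ...) as suggested above then limits how much data can pile up outside the application's control.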
There are a number of ways that a TCP connection can end. This is my understanding:
rst: is immediate. The connection is done, and immediately closes. All further communication is a "new" connection.
fin: is a nice request from one side. It must be acknowledged, and the other side then sends a fin to say they are done talking. This too must be acknowledged. These stages CAN happen simultaneously, but each fin and the ack that accompanies it must be exchanged.
Are there any others? I'm looking at a tcp stream in Wireshark that just has a fin, psh, and ack bit sent. This is acknowledged and the connection is over. What other ways of closing a TCP connection are there?
If a fin is acked, can more data be sent? If it is, does the original fin need to be resent (does the state of the side that sent the fin reset at some point)?
Are there any others?
No.
I'm looking at a tcp stream in Wireshark that just has a fin, psh, and ack bit sent. This is acknowledged and the connection is over.
No. It is shut down in one direction only. It isn't over until both peers have sent a FIN.
What other ways of closing a TCP connection are there?
None.
If a fin is acked, can more data be sent?
It can't be sent in the direction the FIN was sent, even if the FIN wasn't ACKed. It can be sent in the other direction.
If it is, does the original fin need to be resent (does the state of the side that sent the fin reset at some point)?
Can't happen.
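To make the "one direction only" point concrete, here is a minimal sketch in C of a half-close: the side that sends its FIN first can no longer send, but it can keep receiving until the peer sends its own FIN (the function name is made up for this example):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Sketch: half-close a connected TCP socket. shutdown(SHUT_WR) sends
     * our FIN; the peer may keep sending, so we drain until recv()
     * returns 0, which means the peer has sent its FIN too. */
    static void half_close_and_drain(int fd)
    {
        char    buf[4096];
        ssize_t n;

        if (shutdown(fd, SHUT_WR) < 0)             /* send our FIN        */
            perror("shutdown");
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
            ;                                      /* peer can still send */
        /* n == 0: the peer's FIN arrived; only now is the connection
         * closed in both directions. */
        close(fd);
    }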
There are indeed only two ways to close a TCP connection:
The FIN 4 way handshake
The RST
The FIN mechanism is the normal way of closing a TCP connection. You must understand that a TCP socket has a state machine underneath, and this state machine is checked for every operation that happens on the socket. Closing your end of the socket is a half-close; doing so is like saying you have nothing more to send. The state machine underneath the socket really does enforce this: if you still try to send, you will get error return codes (a small sketch of this follows below).
The state machine I am talking about is described very well, with drawings, here: http://www.tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm
The TCP/IP Guide is an online book; the RST scenarios that can happen are also described there, as are the sliding window mechanism, byte acking, Nagle's algorithm, and many other things, and not just for TCP but for other protocols as well.
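As a small illustration of those error return codes, a sketch in C (Linux-specific: MSG_NOSIGNAL suppresses SIGPIPE so the errno is visible; the function name is made up):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Sketch: once our side has been half-closed, further sends fail. */
    static void send_after_half_close(int fd)
    {
        shutdown(fd, SHUT_WR);                       /* our FIN is sent */
        if (send(fd, "late", 4, MSG_NOSIGNAL) < 0) {
            /* typically EPIPE ("Broken pipe") */
            fprintf(stderr, "send after half-close: %s\n", strerror(errno));
        }
    }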
Suppose I have a UDP socket and send a message to the server. How can I figure out whether the message has been received by the server?
A coded acknowledgement from the server is the trivial option; is there any other way to find out? I guess not.
You can never be sure that a message arrives, not even with TCP. That is a general limitation of communication networks (it also applies to snail mail, for example). You can start with the Byzantine Generals Problem if you want to know more.
What you can do is increase the likelihood of detecting a message loss. Usually that is done by sending an acknowledgement to the sender. But that may get lost too, so for 100% reliability you would need to send an acknowledgement for the acknowledgement, and then an acknowledgement for the acknowledgement's acknowledgement, and so on.
My advice: use TCP if reliability is your main concern. It has been around for some time and probably won't have some of the flaws a custom solution would have. If you don't need the reliability of TCP but need low latency or something else UDP is good at, use UDP. In that case, make sure it is not a problem if some packets get lost.
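For completeness, a hypothetical sketch in C of the "coded acknowledgement" idea from the question, using a connected UDP socket: send the datagram, then wait up to a second for any reply and treat it as the ACK. The retry count, timeout and function name are assumptions made up for this example:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Hypothetical sketch: send a datagram over a connected UDP socket
     * and retransmit up to 3 times until some reply (the "ACK") arrives. */
    static int send_with_ack(int fd, const void *msg, size_t len)
    {
        struct timeval tv = { 1, 0 };             /* 1 s receive timeout */
        char ack[64];
        int  tries;

        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
        for (tries = 0; tries < 3; tries++) {
            if (send(fd, msg, len, 0) < 0) {
                perror("send");
                return -1;
            }
            if (recv(fd, ack, sizeof(ack), 0) >= 0)
                return 0;                         /* server acknowledged */
            /* timed out: fall through and retransmit */
        }
        return -1;                                /* no ACK after 3 tries */
    }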
Upon receiving a TCP RST packet, will the host drop all the remaining data in its receive buffer that has already been ACKed at the TCP level but not yet been read by the application process using the socket?
I'm wondering whether it's dangerous to close a socket as soon as I'm no longer interested in what the other host has to say (e.g. to conserve resources), i.e. whether that could cause the other party to lose data I've already sent but it has not yet read.
Should RSTs generally be avoided, since they indicate a complete, bidirectional failure of communication, or are they a relatively safe way to unidirectionally force a connection teardown, as in the example above?
I've found some nice explanations of the topic; they indicate that data loss is quite possible in that case:
http://blog.olivierlanglois.net/index.php/2010/02/06/tcp_rst_flag_subtleties
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable also gives some more information on the topic, and offers a solution that I've used in my code. So far, I've not seen any RSTs sent by my server application.
An application-level close(2) on a socket does not produce an RST but a FIN packet sent to the other side, which results in a normal four-way connection tear-down. RSTs are generated by the network stack in response to packets targeting a non-existent TCP connection.
On the other hand, if you close the socket but the other side still has some data to write, its next send(2) will result in EPIPE.
With all of the above in mind, you are much better off designing your own protocol on top of TCP that includes an explicit "logout" or "disconnect" message.
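A hypothetical sketch in C of such an explicit disconnect message: tell the peer we are done, wait for it to finish and close its side, and only then close ours, so nothing it sent is discarded. The "BYE\n" message and the function name are made up for this example, not part of any real protocol:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Hypothetical sketch: application-level "disconnect" message over a
     * connected TCP socket. After announcing the logout, wait until the
     * peer closes its side (recv() returns 0) before closing ours. */
    static void polite_disconnect(int fd)
    {
        char    buf[512];
        ssize_t n;

        if (send(fd, "BYE\n", 4, 0) < 0)
            perror("send");
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
            ;                          /* let the peer flush its data */
        close(fd);                     /* peer is done: no RST needed */
    }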
In TCP/IP sockets, how would the server know that a client is busy and not receiving data?
My solution: use connect(), but I am not sure.
Thanks
In TCP/IP sockets, how would the server know that a client is busy and not receiving data?
If a TCP is constantly pushing data that the peer doesn't acknowledge, eventually the send window will fill up. At that point the TCP is going to buffer data to "send later". Eventually the buffer limit will be reached and send(2) will hang (something it doesn't usually do).
If send(2) starts hanging, it means the peer's TCP isn't acknowledging data.
Obviously, even if the peer's TCP accepts the data, it doesn't mean the peer application actually uses it. You could implement your own ACK mechanism on top of TCP, and it's not as unreasonable as it sounds. It would involve having the client send a "send me more" message once in a while.
A client will almost always receive your data, by which I mean the OS will accept the packets and queue them up for reading. If that queue fills up, then the sender will block (TCP, anyways). You can't actually know the activity of the client code. Pretty much your only option is to use timeouts.
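If a blocking send is not acceptable, one sketch of the timeout idea (in C, with an arbitrary 5-second deadline and a made-up function name) is to set SO_SNDTIMEO so a peer that has stopped reading shows up as an error instead of a send that blocks indefinitely:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    /* Sketch: send with a deadline. If the peer's receive queue and our
     * send buffer are full, send(2) times out with EAGAIN/EWOULDBLOCK
     * instead of blocking forever. */
    static int send_with_deadline(int fd, const void *buf, size_t len)
    {
        struct timeval tv = { 5, 0 };             /* 5 s send timeout */

        setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
        if (send(fd, buf, len, 0) < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                fprintf(stderr, "peer not reading: send timed out\n");
            else
                perror("send");
            return -1;
        }
        return 0;
    }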