There are a number of ways that a TCP connection can end. This is my understanding:
RST: immediate. The connection is done and closes right away; all further communication is a "new" connection.
FIN: a polite request from one side. It must be acknowledged, and the other side then sends its own FIN to say it is done talking, which must also be acknowledged. These stages CAN happen simultaneously, but each FIN and the ACK that accompanies it must be exchanged.
Are there any others? I'm looking at a TCP stream in Wireshark that ends with a segment with the FIN, PSH, and ACK bits set. This is acknowledged and the connection is over. What other ways of closing a TCP connection are there?
If a FIN is ACKed, can more data be sent? If it is, does the original FIN need to be resent (does the state of the side that sent the FIN reset at some point)?
Are there any others?
No.
I'm looking at a TCP stream in Wireshark that ends with a segment with the FIN, PSH, and ACK bits set. This is acknowledged and the connection is over.
No. It is shut down in one direction only. It isn't over until both peers have sent a FIN.
What other ways of closing a TCP connection are there?
None.
If a FIN is ACKed, can more data be sent?
No, not in the direction the FIN was sent, even if the FIN wasn't ACKed. Data can still be sent in the other direction.
If it is, does the original FIN need to be resent (does the state of the side that sent the FIN reset at some point)?
Can't happen.
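To make the direction point concrete, here is a minimal C sketch (loopback connection, most error handling omitted, names are mine): after shutdown(SHUT_WR) sends the FIN, a send() in that direction fails, while the other direction keeps working.

```c
/* Half-close sketch: after shutdown(SHUT_WR) the shut-down direction is
 * finished, but the other direction keeps working. Loopback only; most
 * error handling omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    signal(SIGPIPE, SIG_IGN);            /* get EPIPE instead of a fatal signal */

    int lsn = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    a.sin_port = 0;                      /* let the kernel pick a port */
    bind(lsn, (struct sockaddr *)&a, sizeof a);
    listen(lsn, 1);
    socklen_t alen = sizeof a;
    getsockname(lsn, (struct sockaddr *)&a, &alen);

    int cli = socket(AF_INET, SOCK_STREAM, 0);
    connect(cli, (struct sockaddr *)&a, sizeof a);
    int srv = accept(lsn, NULL, NULL);

    shutdown(cli, SHUT_WR);              /* client's FIN: "nothing more from me" */

    /* Sending in the direction the FIN went now fails... */
    if (send(cli, "x", 1, 0) < 0)
        perror("send after shutdown");   /* EPIPE */

    /* ...but the other direction is still open. */
    send(srv, "reply", 5, 0);
    char buf[16];
    ssize_t n = recv(cli, buf, sizeof buf, 0);
    printf("client still received %zd bytes after sending its FIN\n", n);

    close(cli); close(srv); close(lsn);
    return 0;
}
```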
There are indeed only two ways to close a TCP connection:
The FIN four-way handshake
The RST
The FIN mechanism is the normal way of closing a TCP connection. You must understand that a TCP socket has a state machine underneath, and this state machine is checked for every operation on the socket. Closing your side of the connection (the sending direction) is a half-close: it tells the peer you have nothing more to send. The state machine underneath the socket really enforces this; if you were still to try to send, you would get error return codes.
The state machine I am talking about is very well described over here, with drawings ;-) http://www.tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm
The TCP/IP Guide is an online book; the RST scenarios that can happen are also described in it, as are the sliding window mechanism, byte acking, Nagle's algorithm and many other things. And not just for TCP, but for other protocols as well.
I'm investigating resetting a TCP connection as a solution to the TIME_WAIT issue.
Let's use the following request-reply protocol as an example:
The client opens a connection to the server.
The client sends a request.
The server replies.
The server closes.
The client closes as well.
This causes a TIME_WAIT state at the server. As a variation, the client could close first. Then, the TIME_WAIT lands on the client.
Can we not replace steps 4 and 5 by the following?
The client resets.
The server resets in response to the incoming reset.
This seems to be a way to avoid the TIME_WAIT issue. The server has proven that it received and processed the request by sending its reply. Once the client has the reply the connection is expendable and can just go away.
Is this a good idea?
I would say: No it's not a good idea. Every possible solution ends up with the same "problem" that TIME_WAIT ultimately addresses: how does party A, acknowledging the ending of the connection (or acknowledging the other side's final acknowledgment of the ending of the connection), know that party B got the acknowledgment? And the answer is always: it can't ever know that for sure.
You say:
the server has proven that it received and processed the request by sending its reply
... but what if that reply gets lost? The server has now cleaned up its side of the session, but the client will be waiting forever for that reply.
The TCP state machine may seem overly complicated at first glance but it's all done that way for good reason.
The only problem is that the server doesn't know whether the client received everything. The situation is ambiguous: did the client connection reset because the client received the whole reply, or was it reset for some other reason?
Adding an application-level acknowledgement doesn't reliably fix the problem. If the client acknowledges and then immediately closes abortively, the client can't be sure that the server received that acknowledgement, because the abortive close discards untransmitted data. Moreover, even if the data is transmitted, it can be lost, since the connection is unreliable; and once the connection is aborted, the TCP stack will no longer retransmit that data.
The regular, non-abortive situation addresses the problem by having the client and server TCP stacks take care of the final rites independently of application execution.
So, in summary, the aborts are okay if all we care about is that the client receives its reply, and the server doesn't care whether or not that succeeded: not an unreasonable assumption in many circumstances.
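For reference, the abort described in the question is normally produced by enabling SO_LINGER with a zero timeout before close(). A minimal sketch, assuming sock is an already connected TCP socket:

```c
/* Abortive close sketch: with SO_LINGER set to {on, 0}, close() discards any
 * unsent data and sends an RST instead of a FIN, so neither end goes through
 * TIME_WAIT. `sock` is assumed to be a connected TCP socket. */
#include <sys/socket.h>
#include <unistd.h>

static int abortive_close(int sock) {
    struct linger lg;
    lg.l_onoff  = 1;   /* enable lingering behaviour */
    lg.l_linger = 0;   /* zero timeout: abort instead of graceful shutdown */
    if (setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof lg) < 0)
        return -1;
    return close(sock);   /* an RST goes out; unsent data is dropped */
}
```

That is exactly the trade-off discussed above: cheap to do, but anything still in flight or unread on either side is simply lost.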
I suspect you are wrong about the TIME_WAIT being on the server.
If you follow the following sequence for a single TCP-based client-server transaction, then the TIME_WAIT is on the client side:
client initiates active connection to server
client sends request to server.
client half-closes the connection (i.e. sends FIN)
server reads client request until EOF (FIN segment)
server sends reply and closes (generating FIN)
client reads response to EOF
client closes.
Since the client was the first to send a FIN, it goes into TIME_WAIT.
The trick is that the client must close the sending direction first, and the server synchronizes on it by reading the entire request. In other words, you use the stream boundaries as your message boundaries.
What you're trying to do is do the request framing purely inside the application protocol and not use the TCP framing at all. That is to say, the server recognizes the end of the client message without the client having closed, and likewise the client parses the server response without caring about reading until the end.
Even if your protocol is like this, you can still go through the motions of the half-close dance routine. The server, after having retrieved the client request, can nevertheless keep reading from its socket and discarding bytes until EOF, even though no further bytes are expected.
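A rough client-side rendering of that sequence, with the server address, port and request as placeholder assumptions:

```c
/* Client side of the half-close sequence described above. The server
 * address, port and request are placeholder assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5001);                       /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);   /* assumed server address */
    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }

    const char *request = "placeholder request\n";    /* assumed protocol */
    send(fd, request, strlen(request), 0);

    shutdown(fd, SHUT_WR);   /* half-close: the FIN tells the server the request is complete */

    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)    /* read the reply until EOF (server's FIN) */
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);   /* since the client sent the first FIN, it is the one left in TIME_WAIT */
    return 0;
}
```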
I am writing an application on Linux (Client and Server) with socket programming. I came across the scenario, where my server application never responds to the initial SYN packet of the other end.
I am still debugging the issue.
My server is listening on a port, but it never generates the accept event. Is the accept event generated after the TCP handshake is done, or is it generated when the initial SYN packet is received?
Some useful links would be helpful.
Is the accept event generated after the TCP handshake is done
Yes.
Or is the accept event generated when the initial SYN packet is received?
No. The handshake has already happened. accept() just delivers you a socket from a queue of already accepted connections. While the queue is empty, it blocks.
This means that a client can connect even if the server has never called accept().
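A quick way to see this: in the sketch below the listening side never calls accept(), yet the client's connect() still succeeds, because the kernel completes the three-way handshake on its own and parks the connection in the accept queue.

```c
/* Sketch: connect() succeeds even though accept() is never called, because
 * the kernel completes the TCP handshake and queues the connection.
 * Loopback only; error handling omitted for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int lsn = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lsn, (struct sockaddr *)&a, sizeof a);
    listen(lsn, 8);
    socklen_t len = sizeof a;
    getsockname(lsn, (struct sockaddr *)&a, &len);   /* learn the chosen port */

    int cli = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(cli, (struct sockaddr *)&a, sizeof a) == 0)
        printf("connected, even though accept() was never called\n");

    /* accept() would now return this connection immediately, without any
     * further packets on the wire. */
    close(cli);
    close(lsn);
    return 0;
}
```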
accept() is not exactly an event, but a function that hands the server the result of the TCP handshake. It is typically called beforehand (waiting for a client connection) and it returns after the handshake is over (the final ACK from the client has been received).
Some detailed explanations here:
http://lwn.net/Articles/508865/
http://www.ibm.com/developerworks/aix/library/au-tcpsystemcalls/
What kind of error do you get? Make sure your server is reachable from the client.
The TCP handshake is handled by the kernel; the server process is not involved. The kernel maintains two queues, one for incomplete connections (initial SYN received) and one for complete connections (3-way handshake complete).
The accept call retrieves the first entry from the completed queue; if the queue is empty and the socket is blocking, the call blocks until a connection is made. If the socket is non-blocking, the call fails with EAGAIN or EWOULDBLOCK.
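As a small illustration of the non-blocking case, accept() on a listener whose completed-connection queue is empty returns immediately with EAGAIN/EWOULDBLOCK instead of blocking (loopback sketch, minimal error handling):

```c
/* Sketch: accept() on a non-blocking listener with no completed connections
 * fails with EAGAIN/EWOULDBLOCK instead of blocking. */
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int lsn = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lsn, (struct sockaddr *)&a, sizeof a);
    listen(lsn, 8);

    fcntl(lsn, F_SETFL, O_NONBLOCK);        /* make the listener non-blocking */

    int c = accept(lsn, NULL, NULL);        /* queue is empty: no client has connected */
    if (c < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no completed connection yet: accept() did not block\n");

    close(lsn);
    return 0;
}
```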
refs:
https://books.google.com/books?id=ptSC4LpwGA0C&lpg=PP1&pg=PA104#v=onepage&q&f=false
https://man7.org/linux/man-pages/man2/accept.2.html
Upon receiving a TCP RST packet, will the host drop all the remaining data in the receive buffer that has already been ACKed by the remote host but not read by the application process using the socket?
I'm wondering if it's dangerous to close a socket as soon as I'm no longer interested in what the other host has to say (e.g. to conserve resources); e.g. whether that could cause the other party to lose data I've already sent but that it has not yet read.
Should RSTs generally be avoided and indicate a complete, bidirectional failure of communication, or are they a relatively safe way to unidirectionally force a connection teardown as in the example above?
I've found some nice explanations of the topic; they indicate that data loss is quite possible in that case:
http://blog.olivierlanglois.net/index.php/2010/02/06/tcp_rst_flag_subtleties
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable also gives some more information on the topic, and offers a solution that I've used in my code. So far, I've not seen any RSTs sent by my server application.
An application-level close(2) on a socket does not produce an RST but a FIN packet sent to the other side, which results in the normal four-way connection tear-down. RSTs are generated by the network stack in response to packets targeting a non-existent TCP connection.
On the other hand, if you close the socket but the other side still has some data to write, its next send(2) will result in EPIPE.
With all of the above in mind, you are much better off designing your own protocol on top of TCP that includes an explicit "logout" or "disconnect" message.
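The non-abortive alternative that matches this advice, and the one the linked SO_LINGER article works towards, can be sketched like this, assuming fd is the connected socket: shut down your sending side, drain whatever the peer still sends until EOF, and only then close, so nothing already sent is thrown away and no RST is generated.

```c
/* Graceful close sketch, assuming `fd` is a connected TCP socket: send a FIN,
 * then drain the peer's remaining data until it closes too, then close.
 * This avoids the RST (and the associated data loss) discussed above. */
#include <sys/socket.h>
#include <unistd.h>

static void graceful_close(int fd) {
    char buf[4096];

    shutdown(fd, SHUT_WR);                       /* FIN: we will send nothing more */
    while (recv(fd, buf, sizeof buf, 0) > 0)     /* discard whatever the peer still sends */
        ;                                        /* loop ends on EOF (peer's FIN) or error */
    close(fd);                                   /* normal four-way tear-down completes */
}
```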
In TCP/IP sockets, how would the server know that a client is busy and not receiving data?
My solution:
Use connect().
I am not sure.
In TCP/IP sockets, how would the server know that a client is busy and not receiving data?
If a TCP is constantly pushing data that the peer doesn't acknowledge, eventually the send window will fill up. At that point the TCP starts buffering data to "send later". Eventually the send buffer fills up too and send(2) will block (something it doesn't usually do).
If send(2) starts blocking, it means the peer TCP isn't acknowledging data.
Obviously, even if the peer TCP accepts data it doesn't mean the peer application actually uses it. You could implement your own ACK mechanism on top of TCP, and it's not as unreasonable as it sounds. It would involve having the client send a "send me more" message once in a while.
A client will almost always receive your data, by which I mean the OS will accept the packets and queue them up for reading. If that queue fills up, then the sender will block (with TCP, anyway). You can't actually know the activity of the client code. Pretty much your only option is to use timeouts.
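One concrete form of "use timeouts" is a send timeout on the socket: a send() that would otherwise block indefinitely because the peer has stopped reading fails after a bounded wait instead. A sketch, assuming fd is a connected TCP socket; the 10-second value is arbitrary:

```c
/* Sketch: a send timeout turns an indefinitely blocking send() (full send
 * buffer, peer not reading) into an error after a bounded wait.
 * Assumes `fd` is a connected TCP socket; the 10 s value is arbitrary. */
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

static int send_with_timeout(int fd, const void *data, size_t len) {
    struct timeval tv = { .tv_sec = 10, .tv_usec = 0 };
    setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);

    ssize_t n = send(fd, data, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* Nothing could be queued for 10 s: the peer is almost certainly
         * not reading (its receive window and our send buffer are full). */
        fprintf(stderr, "peer appears stalled\n");
        return -1;
    }
    return (int)n;
}
```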
I have a socket server that listens for connections on port 5001; when a connection is accepted and data is received, I ask my database to create a packet of data in a particular format and write it back to the client.
To make the data transmission more reliable I have to implement a TCP retry in PHP. How do I go about this? My current implementation uses a thread class that fires a thread to check whether an ACK has been received for that packet before a timeout, and otherwise retries 3 times until timeout, but I haven't had any success with it.
Is there a better way to implement this?
To make the data transmission more reliable I have to implement a TCP retry in PHP
No, you don't. TCP is already reliable and it already implements retry. And you don't have any way of knowing whether a TCP-level ACK has been received, so you can't implement what you described anyway. Unless you are talking about application-level ACKs, in which case you need to clarify your question.
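If application-level ACKs are what you mean, the structure is roughly the following (shown in C rather than PHP; the one-byte '+' acknowledgement, the 5-second timeout and the 3 attempts are assumptions for illustration, and the peer would have to implement the same convention):

```c
/* Application-level ACK sketch (the only kind of ACK an application can see):
 * send a message, wait up to 5 s for a 1-byte acknowledgement from the peer,
 * retry a few times. The framing ('+' byte), timeout and retry count are
 * assumptions for illustration; the peer must implement the same convention. */
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

static int send_with_app_ack(int fd, const void *msg, size_t len) {
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    for (int attempt = 0; attempt < 3; attempt++) {
        if (send(fd, msg, len, 0) < 0)
            return -1;                   /* connection-level failure */

        char ack;
        ssize_t n = recv(fd, &ack, 1, 0);
        if (n == 1 && ack == '+')        /* peer confirmed it processed the message */
            return 0;
        if (n == 0)
            return -1;                   /* peer closed the connection */
        /* timeout (EAGAIN/EWOULDBLOCK) or unexpected byte: try again */
    }
    return -1;                           /* gave up after 3 attempts */
}
```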