What is the recommended way to "gracefully" close a TCP socket?
I've learned that close()ing before read()ing all the remaining data in my receive buffer can cause problems for the remote host (it could lose all the data in its receive buffer that has been ACKed but not yet read by its application). Is that correct?
What would be a good approach to avoid that situation? Is there some way to tell the API that I don't really care about data being lost due to my ignoring any remaining buffered data and closing the socket?
Or do I have to consider the problem at the application protocol level and use some kind of implicit or explicit "end of transmission" signal to let the other party know that it's safe to close a socket for reading?
1) Call shutdown to indicate that you will not write any more data to the socket.
2) Continue to read from the socket until you get either an error or the connection is closed.
3) Now close the socket.
If you don't do this, you may wind up closing the connection while there's still data to be read. This will result in an ungraceful close.
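In POSIX terms, a minimal sketch of that sequence (blocking socket assumed, error handling left out):

    #include <sys/socket.h>
    #include <unistd.h>

    /* Gracefully close a connected, blocking TCP socket. */
    static void graceful_close(int sock)
    {
        char buf[4096];

        /* 1) Tell the peer we will not send any more data (this sends a FIN). */
        shutdown(sock, SHUT_WR);

        /* 2) Drain whatever the peer still sends until it closes its side
         *    (recv() returns 0) or an error occurs (recv() returns -1).    */
        while (recv(sock, buf, sizeof buf, 0) > 0)
            ;   /* discard, or hand the bytes to the application */

        /* 3) Only now release the descriptor. */
        close(sock);
    }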
I'm writing a TCP client using asynchronous calls. If the server is active when the app starts then it connects and talks OK. However if the first connect fails, then every subsequent call to connect() fails with WSAENOTCONN(10057) without producing any network traffic (checked with Wireshark).
Currently the code does not close the socket when it gets the error. The TCP state diagram does not seem to require it. It simply waits for 30 seconds and tries the connect() again.
That subsequent connect() and the following read() both return the WSAENOTCONN error.
Do I need to close the socket and open a new one? If so, which errors require me to close the socket, since there are a lot of different errors, some of which I will probably never see on the test bench.
You can assume this is MS Winsock2, although it is actually Interval Zero RTX 2009, which is subtly different in some places.
Do I need to close the socket and open a new one?
Yes.
If so, which errors require me to close the socket, since there are a lot of different errors, some of which I will probably never see on the test bench.
Almost all errors are fatal to the connection and should result in you closing the socket. EAGAIN/EWOULDBLOCK is a prominent exception, as is EINTR, but I can't think of any others offhand.
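As a sketch of that rule of thumb (my own helper, not an exhaustive list), the check after a failed recv() or send() might look like this:

    #include <errno.h>

    /* Returns 1 if the error is transient and the socket can be kept,
     * 0 if the connection should be treated as dead and the socket closed. */
    static int error_is_transient(int err)
    {
        switch (err) {
        case EAGAIN:             /* nothing to read / send buffer full (non-blocking) */
    #if defined(EWOULDBLOCK) && EWOULDBLOCK != EAGAIN
        case EWOULDBLOCK:
    #endif
        case EINTR:              /* interrupted by a signal, just retry */
            return 1;
        default:                 /* ECONNRESET, EPIPE, ETIMEDOUT, ...   */
            return 0;
        }
    }

So after recv() or send() returns -1, you would do something like: if (!error_is_transient(errno)) { close(sock); } and then reconnect or give up.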
Do I need to close the socket and open a new one?
Yes.
You should close the socket under all error conditions that result in the connection being gone for good (say, when the peer has closed the connection).
start a listening client with netcat -l
the Go program opens a conn to said client with net.DialTCP
kill the netcat
in the Go program, do a conn.Write() with a []byte -> it runs fine, without error!
it takes another conn.Write() to get the error: broken pipe
The first write is the one where the data loss happens, and the one I want to avoid: if I got an error there, I would know I can just keep the data and try again later.
I've seen https://stackoverflow.com/a/15071574/2757887, which is a very similar case and the explanation seems to apply here, but it still doesn't explain how to deal with the issue if the TCP protocol I need to implement only does one-way communication.
I've sniffed the traffic with Wireshark, and when I kill the netcat, I can see that it sends a FIN to the Go program, to which the Go program replies with an ACK. For some reason the Go program doesn't immediately reply with its own FIN; I'm curious why that is, as it might help with my problem, but there's probably a good reason for it.
Either way, from the "connection termination" section at http://en.wikipedia.org/wiki/Transmission_Control_Protocol, I conclude that the socket is in the CLOSE_WAIT state at this point, which I also confirmed with "netstat -np": it shows the socket going from ESTABLISHED to CLOSE_WAIT after I kill the netcat.
Looking at Wireshark, the first conn.Write() results in a packet with the PSH and ACK flags set, and of course my payload. This is the write that succeeds without complaint in Go.
Then the old socket that used to belong to netcat sends an RST,
which makes sure that as soon as I try to write from Go again (the second write), it fails.
So my question is:
A) Why don't I get an error on the first write? If the socket has received the FIN and is in CLOSE_WAIT, why does Go let me write to it and tell me all is fine?
B) Is there any way to check in Go whether the socket is in CLOSE_WAIT? If there is, I could consider the socket closed for this purpose and skip the write.
thanks,
Dieter
Fundamentally, a successful write only tells you that data has been queued to be sent to the other end. If you need to make sure the other end gets that data, even if the connection closes or errors, you must store a copy of the data until the other end provides you with an application-level acknowledgment.
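If it helps, here is a rough sketch of that bookkeeping (in C, to match the rest of this page; a Go version would be the same idea). The struct, the id scheme and the function names are all invented for illustration; the only point is that a successful write removes nothing from the queue, only an application-level ACK from the peer does:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* One message that has been written to the socket but not yet
     * acknowledged by the peer at the application level. */
    struct pending {
        uint64_t        id;      /* application-level sequence number          */
        size_t          len;
        unsigned char  *data;    /* our own copy of the bytes handed to write  */
        struct pending *next;
    };

    static struct pending *unacked;   /* everything still awaiting a peer ACK */

    /* Call right after a successful write(): success only means the local
     * kernel queued the bytes, so we keep our own copy. */
    static void remember(uint64_t id, const void *buf, size_t len)
    {
        struct pending *p = malloc(sizeof *p);
        p->id   = id;
        p->len  = len;
        p->data = malloc(len);
        memcpy(p->data, buf, len);
        p->next = unacked;
        unacked = p;
    }

    /* Call when the peer's application-level ACK for `id` arrives: only now
     * is the message known to be delivered, so the copy can be freed.
     * Anything still on the list when the connection dies must be resent. */
    static void acked(uint64_t id)
    {
        struct pending **pp = &unacked;
        while (*pp) {
            if ((*pp)->id == id) {
                struct pending *dead = *pp;
                *pp = dead->next;
                free(dead->data);
                free(dead);
                return;
            }
            pp = &(*pp)->next;
        }
    }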
My goal is to send a TCP packet with an empty data field, in order to test the connection to the remote machine.
I am using the OutputStream class's write(byte[] b) method.
My attempt:
outClient = ClientSocket.getOutputStream();
outClient.write(("").getBytes());
Doing so, the packet never shows up on the wire. It works fine if "" is replaced by " " or any other non-empty string.
I tried jpcap, which worked for me, but it didn't serve my goal.
I thought of extending the OutputStream class and implementing my own OutputStream.write method, but this is beyond my knowledge. Please give me advice if someone has already done such a thing.
If you just want to quickly detect silently dropped connections, you can use Socket.setKeepAlive(true). This method asks TCP/IP to handle heartbeat probing without any data packets or application programming. If you want more control over the frequency of the heartbeats, however, you should implement them with application-level packets.
See here for more details: http://mindprod.com/jgloss/socket.html#DISCONNECT
There is no such thing as an empty TCP packet, because there is no such thing as a TCP packet. TCP is a byte-stream protocol. If you send zero bytes, nothing happens.
To detect broken connections, short of keepalive which only helps you after two hours, you must rely on read timeouts and write exceptions.
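For reference, the same idea at the C socket level: the Java setKeepAlive(true) above corresponds to SO_KEEPALIVE, and the idle/interval/count knobs below are Linux-specific extras, which are the only way to get probes sooner than the traditional two-hour default. A sketch:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable keepalive probing on a connected TCP socket `sock`. */
    static void enable_keepalive(int sock)
    {
        int on = 1, idle = 60, interval = 10, count = 5;

        setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
    #ifdef TCP_KEEPIDLE            /* Linux: tune the probe timing */
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof idle);
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof count);
    #endif
    }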
Is it wise/safe to close() a socket directly after the last send()?
I know that TCP is supposed to try to deliver all remaining data in the send buffer even after closing the socket, but can I really count on that?
I'm making sure that there is no remaining data in my receive buffer so that no RST will be sent following my close.
In my case, the close is actually the very last statement of code before calling exit().
Will the TCP stack really continue to try and transmit the data even after the process sending it has terminated? Is that as reliable as waiting for an arbitrary timeout myself before calling close(), or as setting SO_LINGER?
That is, do the same TCP timeouts apply, or are they shorter? With a big send buffer and a slow connection, the time to actually transfer all the buffered data could be substantial, after all.
I'm not interested at all in being notified of the last byte sent; I just want them to eventually arrive at the remote host as reliably as possible.
Application layer acknowledgements are not an option (the protocol is HTTP, and I'm writing a small server).
I've been reading the blog post The ultimate SO_LINGER page, or: why is my tcp not reliable a lot. I recommend you read it too. It discusses edge cases of large data transfers with regard to TCP sockets.
I'm not the expert at SO_LINGER, but on my server code (still in active development) I do the following:
After the last byte is sent via send(), I call shutdown(sock, SHUT_WR) to trigger a FIN to be sent.
Then wait for a subsequent recv() call on that socket to return 0 (or for recv() to return -1 with errno set to anything other than EAGAIN/EWOULDBLOCK).
Then the server does a close() on the socket.
The assumption is that the client will close his socket first after it has received all the bytes of the response.
But I do have a timeout enforced between the final send() and when recv() indicates EOF. If the client never closes his end of the connection, the server will give up waiting and close the connection anyway. I'm at 45-90 seconds for this timeout.
All of my sockets are non-blocking and I use poll/epoll to be notified of connection events as a hint to see if it's time to try calling recv() or send() again.
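A sketch of just that waiting step, assuming a non-blocking socket, with poll() used to bound the wait (the helper name and the timeout handling are arbitrary):

    #include <errno.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* After shutdown(sock, SHUT_WR): wait for the peer to close its end
     * (recv() == 0), give up after `timeout_ms` of silence, then close(). */
    static void await_peer_close(int sock, int timeout_ms)
    {
        struct pollfd pfd = { .fd = sock, .events = POLLIN };
        char buf[4096];

        for (;;) {
            int rc = poll(&pfd, 1, timeout_ms);
            if (rc <= 0)                    /* timed out or poll() failed: give up */
                break;

            ssize_t n = recv(sock, buf, sizeof buf, 0);
            if (n == 0)                     /* peer sent its FIN: clean EOF        */
                break;
            if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
                break;                      /* connection is gone                  */
            /* n > 0: late data from the peer; discard it and keep waiting */
        }
        close(sock);
    }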
Application layer acknowledgements are not an option (the protocol is HTTP, and I'm writing a small server).
The HTTP protocol doesn't suffer from this problem. An HTTP server is not supposed to close the connection in any normal operation. The client closes it after recv(), and it knows exactly how many bytes it expects.
And just to be clear, the answer is "no".
Yes, it is safe to send() and then close() immediately.
The kernel will send out all the data in the buffer and wait for the ACKs, then FIN the socket gracefully.
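If you would rather have close() itself block until the buffered data has gone out (or a timeout expires), SO_LINGER is the relevant option. A sketch with an arbitrary 30-second limit; note that if the timer runs out, the remaining data can still be lost:

    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Make close() block until the send buffer has been transmitted and
     * acknowledged, or until 30 seconds have passed, whichever comes first. */
    static void close_lingering(int sock)
    {
        struct linger lg;

        memset(&lg, 0, sizeof lg);
        lg.l_onoff  = 1;     /* enable lingering on close      */
        lg.l_linger = 30;    /* block close() for at most 30 s */
        setsockopt(sock, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
        close(sock);
    }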
I'm currently maintaining some web server software and I need to perform a lot of I/O operations. The read(), write(), close() and shutdown() calls, when used on a socket, may sometimes raise an ENOTCONN error. What exactly does this error mean? What are the conditions that would trigger it? I can never seem to reproduce it locally but there are users who can.
Right now I just ignore ENOTCONN when raised by close() and shutdown() because it seems harmless, but I'm not entirely sure.
EDIT:
I am absolutely sure that the connect() call succeeded. I check for its return value.
ENOTCONN is most often raised by close() and shutdown(). I've only very rarely seen a read() and write() raising ENOTCONN.
If you are sure that nothing on your side of the TCP connection is closing the connection, then it sounds to me like the remote side is closing the connection.
ENOTCONN, as others have pointed out, simply means that the socket is not connected. This doesn't necessarily mean that connect failed. The socket may well have been connected previously, it just wasn't at the time of the call that resulted in ENOTCONN.
This differs from:
ECONNRESET: the other end of the connection sent a TCP reset packet. This can happen if the other end is refusing a connection, or doesn't acknowledge that it is already connected, among other things.
ETIMEDOUT: this generally applies only to connect. This can happen if the connection attempt is not successful within a system-dependent amount of time.
EPIPE can sometimes be returned by some socket-related system calls under conditions that are more or less the same as ENOTCONN. For example, on some systems, EPIPE and ENOTCONN are synonymous when returned by send.
While it's not unusual for shutdown to return ENOTCONN, since this function is supposed to tear down the TCP connection, I would be surprised to see close return ENOTCONN. It really should never do that.
Finally, as dwc mentioned, EBADF shouldn't apply in your scenario unless you are attempting some operation on a file descriptor that has already been closed. Having a socket get disconnected (i.e. the TCP connection has broken) is not the same as closing the file descriptor associated with that socket.
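To make those distinctions concrete, here is a small diagnostic sketch (Linux flavour: MSG_NOSIGNAL keeps the EPIPE case as an errno instead of a SIGPIPE):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Illustration only: name the condition a failed send() ran into. */
    static void explain_send_error(int sock, const void *buf, size_t len)
    {
        if (send(sock, buf, len, MSG_NOSIGNAL) >= 0)
            return;

        switch (errno) {
        case ENOTCONN:
            fprintf(stderr, "socket is not connected (any more)\n");
            break;
        case ECONNRESET:
            fprintf(stderr, "peer sent a TCP reset\n");
            break;
        case EPIPE:
            fprintf(stderr, "local end shut down for writing, or writing after the connection broke\n");
            break;
        case ETIMEDOUT:
            fprintf(stderr, "retransmissions gave up; the connection timed out\n");
            break;
        default:
            perror("send");
            break;
        }
    }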
It's because, at the moment you shutdown() the socket, you have data in the socket's buffer waiting to be delivered to the remote party, which has close()d or shut down its receiving socket.
I don't fully understand how sockets work; I am rather a noob, and I've failed to even find the files where this "shutdown" function is implemented. But seeing that there's practically no user manual for the whole sockets thing, I started trying all the possibilities until I got the error in a "controlled" environment. It could be something else, but after much trying these are the explanations I settled on:
If you sent data after the remote side closed the connection, when you shutdown(), you get the error.
If you sent data before the remote side closed the connection, but it didn't get recv()'d on the other end, you can shutdown() once; the next time you try to shutdown(), you get the error.
If you didn't send any data, you can shutdown() as many times as you want, as long as the remote side doesn't shutdown(); once the remote side has shutdown(), if you try to shutdown() a socket that was already shut down, you get the error.
I believe ENOTCONN is returned because shutdown() is not supposed to return ECONNRESET or other more accurate errors.
It is wrong to assume that the other side "just" closed the connection. At the TCP level, the other side can only half-close a connection (or abort it). The connection is ordinarily fully closed when both sides do a shutdown() (or close()). If both sides do that, shutdown() actually succeeds for both of them!
The problem is that shutdown() did not succeed in performing an ordinary (half-)close of the connection, neither as the first side to close it nor as the second. From the errors listed in the POSIX docs for shutdown(), ENOTCONN is the least inappropriate, because the others indicate problems with the arguments passed to shutdown() (or local resource problems handling the request).
So what happened? These days, a NAT device somewhere between the two parties involved might have dropped the association and sent out RST packets in response. Reset connections are so common on IPv4 that you will see them anywhere in your code, even masked as ENOTCONN in shutdown().
A coding bug might also be the reason. On a non-blocking socket, for example, connect() normally returns -1 with errno set to EINPROGRESS; the connection is not established yet at that point, and code that treats this as a completed connection and never checks SO_ERROR once the socket becomes writable can end up using a socket that was never actually connected.
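For that last case, the usual completion check after a non-blocking connect() has returned -1 with EINPROGRESS looks roughly like this (sketch, error handling abbreviated):

    #include <poll.h>
    #include <sys/socket.h>

    /* Wait for a non-blocking connect() to finish and report how it went.
     * Returns 0 if the socket is really connected, -1 otherwise. */
    static int finish_connect(int sock, int timeout_ms)
    {
        struct pollfd pfd = { .fd = sock, .events = POLLOUT };
        int err = 0;
        socklen_t len = sizeof err;

        if (poll(&pfd, 1, timeout_ms) != 1)
            return -1;                      /* timed out, or poll() failed      */
        if (getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len) < 0 || err != 0)
            return -1;                      /* the connect() actually failed    */
        return 0;                           /* connection established, use sock */
    }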
Transport endpoint is not connected
The socket is associated with a connection-oriented protocol and has not been connected. This is usually a programming flaw.
From: http://www.wlug.org.nz/ENOTCONN
If you're sure you've connected properly in the first place, ENOTCONN is most likely to be caused by either the fd being closed on your end (perhaps in another thread?) while you're in the middle of a request, or by the connection dropping while you're in the middle of the request.
In any case, it means that the socket is not connected. Go ahead and clean up that socket. It's dead. No problem calling close() or shutdown() on it.