TCP backlog exhaustion causes incoming connections not to be signaled

I am doing the following:
1. Open a listening TCP socket.
2. Set the listen() backlog to 10.
3. Open 50 connecting sockets (non-blocking connect is used).
4. poll() on the listening socket and accept the connections.
5. Close each connection once it has been able to transfer data.
What I see is that all 50 connects succeed; however, POLLIN on the listening socket is signaled only ~30 times, which means only ~30 connections are accepted (see the sketch below).
When I run netstat in this condition I see no hanging ESTABLISHED connections. There are a couple of connections hanging in the TIME_WAIT state, but that doesn't seem relevant.
The above was observed on Linux; however, similar behaviour seems to happen on FreeBSD and NetBSD as well.
Does anyone have any experience with this kind of thing?
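To make the steps concrete, here is a minimal repro sketch, assuming POSIX sockets on loopback; port 5555 is arbitrary and error handling is omitted for brevity:

    /* minimal repro of the scenario described above */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(5555);

        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
        listen(lsock, 10);                          /* backlog of 10 */

        int clients[50];
        for (int i = 0; i < 50; i++) {              /* 50 non-blocking connects */
            clients[i] = socket(AF_INET, SOCK_STREAM, 0);
            fcntl(clients[i], F_SETFL, O_NONBLOCK);
            connect(clients[i], (struct sockaddr *)&addr, sizeof(addr));
        }

        int accepted = 0;
        struct pollfd pfd = { .fd = lsock, .events = POLLIN };
        while (poll(&pfd, 1, 1000) > 0) {           /* one quiet second ends the test */
            int c = accept(lsock, NULL, NULL);
            if (c >= 0) { accepted++; close(c); }   /* close immediately, as in step 5 */
        }
        printf("accepted %d of 50\n", accepted);    /* reportedly ~30, not 50 */
        return 0;
    }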

I've got the explanation out-of-band. Those interested in it can read about it here:
http://www.evanjones.ca/tcp-stuck-connection-mystery.html

Related

Why is the same socket in TIME_WAIT many times?

I have read other threads regarding sockets in TIME_WAIT, but I am clearly still missing something.
Below are a few lines from a "netstat -an". How could it get into this situation? If I understood the descriptions I found, we should not have more than one instance of the socket 63444 ... but after the one listed as "LISTEN" there are about 50 individual socket connections with one end at 63444, all in "TIME_WAIT". How could this happen, and how can I fix it?
tcp 0 0 0.0.0.0:63444 0.0.0.0:* LISTEN
tcp 0 0 169.254.7.228:63444 169.254.66.84:35391 TIME_WAIT
tcp 0 0 169.254.7.228:63444 169.254.66.84:35283 TIME_WAIT
tcp 0 0 169.254.7.228:63444 169.254.66.84:35352 TIME_WAIT
tcp 0 0 169.254.7.228:63444 169.254.66.84:35431 TIME_WAIT
I'm not sure what descriptions you've found, but that's nonsense. A web server may have dozens of connections to port 80 active at once and many others in the process of shutting down. They all have the same local endpoint.
Each of these TIME_WAIT lines represents a different connection to port 63444 that is in the process of closing. The machine at 169.254.66.84 made a bunch of connections to this machine, and several of them are now in the TIME_WAIT state. There's nothing unusual about that.
Connections are (generally) uniquely identified by the source address, source port, destination address, and destination port; if any of those differs, it's a different connection. In each of the lines you show there is a different port on the "other" side, so each is a different connection.
Why is the same socket in TIME_WAIT many times?
It isn't the same socket. Look at the remote end: the local IP address and port are the same every time, but the remote port differs in each line, so each line is a distinct connection.
I have read other threads regarding sockets in TIME_WAIT, but I am clearly still missing something. Below are a few lines from a "netstat -an". How could it get into this situation?
The server accepted some connections and later closed them.
If I understood the descriptions I found, we should not have more than one instance of the socket 63444 ...
That's nonsense, wherever you read it. Otherwise TCP servers couldn't work at all.
but after the one listed as "LISTEN" there are about 50 individual socket connections with one end at 63444, all in "TIME_WAIT". How could this happen, and how can I fix it?
This is perfectly normal. There is nothing here that needs fixing.
When a connection is accepted, a new socket is created with the same local IP address and port, and with the remote IP address:port set to those of the client. When the server closes this socket it transitions through various states as the close handshake proceeds, ending in TIME_WAIT for twice the maximum segment lifetime (commonly one to four minutes in total), and then it disappears.
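To see the connection 4-tuple directly, here is a small sketch (the helper name is made up; IPv4 assumed) that can be called on each socket returned by accept(). Every socket accepted from the same listener reports the identical local address:port but a distinct peer address:port:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void show_endpoints(int conn) {
        struct sockaddr_in local, peer;
        socklen_t len = sizeof(local);
        getsockname(conn, (struct sockaddr *)&local, &len);   /* our end */
        len = sizeof(peer);
        getpeername(conn, (struct sockaddr *)&peer, &len);    /* client's end */
        /* inet_ntoa() reuses a static buffer, hence two separate printf calls */
        printf("local %s:%u <-> ", inet_ntoa(local.sin_addr), ntohs(local.sin_port));
        printf("peer %s:%u\n", inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
    }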

How to handle TCP keepalive in the application

I have a TCP application running on VxWorks. I have the SO_KEEPALIVE option set for my TCP connections. My application keeps track of all TCP connections and puts them into a linked list.
If the client is idle for a long time, we see that the connection is closing down. The connection is not listed in the netstat output.
As the connection is closed by the TCP stack, the resources allocated for that connection are not cleaned up. Can you please help me figure out how the application gets notified when a connection is closed due to keepalive failures?
TCP keepalive is intended primarily to prevent network routers from shutting the TCP connection down during long periods of inactivity, not to prevent your OS or application from shutting down the connection when it deems appropriate.
In most TCP/IP implementations, you can determine if a connection has been closed by attempting to read from it.
From this reference: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I quote:
This procedure is useful because if the other peers lose their connection (for example by rebooting) you will notice that the connection is broken, even if you don't have traffic on it. If the keepalive probes are not replied to by your peer, you can assert that the connection cannot be considered valid and then take the correct action.
If you have a server for instance and a lot of clients can connect to it, without sending regularly, you might end up in a situation with clients that are no longer there. A client may have rebooted and this goes undetected because a FIN is never sent in that case.
For cases like this the keepalive exists.
From TCP's point of view there is nothing special about a keepalive. Hence, if the peer fails to acknowledge a keepalive, the connection is dropped and your next read on the socket fails (or returns 0 bytes); you'll have to close your end of the socket, which is the only corrective action you can take at that moment.
As the connection is closed by the TCP stack, the resources allocated for that connection are not cleaned up.
Only if you never use the connection again.
If the client is idle for a long time, we see that the connection is closing down. The connection is not listed in the netstat output.
Make up your mind. Either you see it or you don't. What you will see is the port in CLOSE_WAIT in netstat.
Can you please help me figure out how the application gets notified when a connection is closed due to keepalive failures?
Next time you use the connection for read or write you will get an ECONNRESET.
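As a concrete illustration, here is what this pattern typically looks like in C. The TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs and the timing values below are Linux-specific assumptions; VxWorks tunes keepalive through its own parameters, so treat this as a sketch of the idea, not VxWorks code:

    #include <errno.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void enable_keepalive(int fd) {
        int on = 1, idle = 60, intvl = 10, cnt = 3;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));    /* first probe after 60 s idle */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)); /* 10 s between probes */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));       /* declare dead after 3 failed probes */
    }

    /* Returns 0 while the connection is alive; -1 once it should be
     * unlinked from the application's connection list and closed.
     * Assumes a non-blocking socket. */
    int check_connection(int fd) {
        char buf[256];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n > 0) return 0;                               /* normal data */
        if (n == 0) return -1;                             /* orderly FIN from the peer */
        if (errno == EAGAIN || errno == EWOULDBLOCK) return 0;
        /* a keepalive failure surfaces here, typically ETIMEDOUT or ECONNRESET */
        fprintf(stderr, "connection dead: %s\n", strerror(errno));
        return -1;
    }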

TCP connection between client and server gone wrong

I establish a TCP connection between my server and a client which run on the same host. We gather and read from the server (the "source" in our case) continuously.
We read data on, say, 3 different ports.
Once the source stops publishing data or gets restarted, the server/source is not able to publish data again on the same port: the bind fails because the port is already bound. The reason given is that the client still has established connections on those ports.
I wanted to know what the probable reasons for this could be. Can the issue be that the client is already listening on these ports and is trying to reconnect again and again, since we use such a reconnection mechanism? I am looking more for the reason on the source side, because the same client code works perfectly fine for us when the source and client are on different hosts rather than on the same host.
Edit:
I found this while going through various articles.
On the question of using SO_LINGER to send a RST on close to avoid the TIME_WAIT state: I've been having some problems with router access servers (names withheld to protect the guilty) that have problems dealing with back-to-back connections on a modem dedicated to a specific channel. What they do is let go of the connection, accept another call, attempt to connect to a well-known socket on a host, and the host refuses the connection because there is a connection in TIME_WAIT state involving the well-known socket. (Stevens' book TCP Illustrated, Vol 1 discusses this problem in more detail.) In order to avoid the connection-refused problem, I've had to install an option to do reset-on-close in the server when the server initiates the disconnection.
Link to source: http://developerweb.net/viewtopic.php?id=2941
I guess I am facing the same problem: "attempt to connect to a well-known socket on a host, and the host refuses the connection". The probable fix mentioned is an "option to do reset-on-close in the server when the server initiates the disconnection". Now how do I do that?
Set the SO_REUSEADDR option on the server socket before you bind it and call listen().
EDIT: The suggestion to fiddle around with the SO_LINGER option is worthless and dangerous to your data in flight. Just use SO_REUSEADDR.
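A minimal sketch of that order of operations (the helper name and port number are made up; error handling omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int make_listener(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        /* must be set before bind(), or rebinding still fails with EADDRINUSE
           while old connections linger in TIME_WAIT */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);               /* arbitrary port for the sketch */
        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(fd, SOMAXCONN);
        return fd;
    }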
You need to close the socket bound to that port before you restart or shut down the server!
http://www.gnu.org/software/libc/manual/html_node/Closing-a-Socket.html
Also, there's a timeout, which I think is 4 minutes, so after you create a TCP socket and close it, you may still have to wait up to 4 minutes until it fully goes away.
You can use netstat to see all the bound ports on your system. If you shut down your server, or close your server after forking on connect, you may have zombie processes which are bound to certain ports and do not close, remaining active, and thus you can't rebind to the same port. Show some code.

Akka TCP IO: ConnectionClosed is not called when the Internet is down

I have implemented a socket-client interaction using Akka's TCP module. I am trying to make the application detect when the socket is closed and release the resources assigned to that client's socket.
Akka has a case _: ConnectionClosed match to handle this kind of situation, but I have realized that it is not triggered when the internet connection is down.
I have not been able to find anything to detect that the socket's client side has been disconnected from the internet.
Are there any specifics that I am missing?
The network connection going down doesn't necessarily close any sockets; the OS is free to leave them open in case the network connection recovers. I believe this is really an issue with your OS, and not with Akka. TCP connections will eventually time out, but this can take tens of minutes. See "TCP Socket no connection timeout".
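If tens of minutes is too long, the wait can be bounded at the OS level. This is not part of Akka's API, so the sketch below is plain C using the Linux-specific TCP_USER_TIMEOUT option on the underlying socket (the function name and 30-second value are made up); an application-level heartbeat remains the portable alternative:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void bound_dead_peer_detection(int fd) {
        /* fail the connection if transmitted data stays unacknowledged
           for 30 s, instead of the default retransmission cycle that
           can run for many minutes */
        unsigned int timeout_ms = 30000;
        setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &timeout_ms, sizeof(timeout_ms));
    }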

How can we remove the CLOSE_WAIT state of a socket without restarting the server?

We have written an application in which client-server communication uses the IOCP concept.
Clients connect to the server through wireless access points.
When a temporary disconnection happens in the network, this can lead to a CLOSE_WAIT state. This indicates that the client properly closed the connection, but the server still has its socket open.
If too many instances of the port (on which the server and client were talking) are in the CLOSE_WAIT state, then at the highest peak the server stops functioning and rejects connections, which is totally frustrating. In this case the user has to restart the server to wipe out all the CLOSE_WAIT sockets and reclaim the memory. When the server restarts, the clients try to connect to it again and the server calls accept again. But before accepting a new connection, the previous connection should be closed on the server side. How can we do that?
How can we remove the CLOSE_WAIT state of a socket without restarting the server?
Is there any alternative way to avoid a server restart?
We also came to know that if all of the available ephemeral ports are allocated to client applications, the client experiences a condition known as TCP/IP port exhaustion. When TCP/IP port exhaustion occurs, client port reservations cannot be made and errors occur in client applications that attempt to connect to a server via TCP/IP sockets. If this is happening, we need to increase the upper range of ephemeral ports that are dynamically allocated to client TCP/IP socket connections.
Reference: http://msdn.microsoft.com/en-us/library/aa560610%28v=bts.10%29.aspx
Let us know whether this alternative way is useful or not.
Thanks in advance.
Regards,
Amey
Fix the server code.
The server should be reading with a timeout, and if the timeout expires it should close the socket.
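The question is about IOCP on Windows, where the analogous approach is a timer on each outstanding read; the sketch below shows the idea with POSIX poll() (the helper name and five-minute timeout are illustrative). Closing the socket is what moves it out of CLOSE_WAIT and lets the per-connection state be freed:

    #include <poll.h>
    #include <unistd.h>

    #define IDLE_TIMEOUT_MS (5 * 60 * 1000)   /* five idle minutes, illustrative */

    /* Read with a timeout; on timeout or error the socket is closed so the
     * caller can drop its per-connection state. Returns bytes read, or -1. */
    ssize_t read_with_timeout(int fd, char *buf, size_t len) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int ready = poll(&pfd, 1, IDLE_TIMEOUT_MS);
        if (ready <= 0) {                     /* timed out (0) or poll failed (-1) */
            close(fd);
            return -1;
        }
        ssize_t n = read(fd, buf, len);
        if (n <= 0)                           /* peer closed (0) or read error (-1) */
            close(fd);
        return n;
    }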