Why is the same socket in TIME_WAIT many times? - sockets

I have read other threads regarding sockets in TIME_WAIT, but I am clearly still missing something.
Below are a few lines from "netstat -an" output. How could it get into this situation? If I understood the descriptions I found correctly, we should not have more than one instance of the socket on port 63444 ... but after the one listed as "LISTEN" there are about 50 individual socket connections with one end at 63444, all in "TIME_WAIT". How could this happen, and how can I fix it?
tcp 0 0 0.0.0.0:63444 0.0.0.0:* LISTEN
tcp 0 0 169.254.7.228:63444 169.254.66.84:35391 TIME_WAIT
tcp 0 0 169.254.7.228:63444 169.254.66.84:35283 TIME_WAIT
tcp 0 0 169.254.7.228:63444 169.254.66.84:35352 TIME_WAIT
tcp 0 0 169.254.7.228:63444 169.254.66.84:35431 TIME_WAIT

I'm not sure what descriptions you've found, but that's nonsense. A web server may have dozens of connections to port 80 active at once and many others in the process of shutting down. They all have the same local endpoint.
Each of these TIME_WAIT lines represents a different connection to port 63444 that is in the process of closing. The machine at 169.254.66.84 made a bunch of connections to this machine, and several of them are now in TIME_WAIT state. There's nothing unusual about that.

Connections are uniquely identified by the 4-tuple of local address, local port, remote address, and remote port (plus, strictly speaking, the protocol). If any of those differs, it's a different connection. In each of the lines you show there is a different port on the "other" side, so each is a different connection.

Why is the same socket in TIME_WAIT many times?
It isn't the same socket. Look at the remote endpoint. It's the same local IP address and port every time, but the remote ports are all different.
I have read other threads regarding sockets in TIME_WAIT, but I am clearly still missing something. Below are a few lines from "netstat -an" output. How could it get into this situation?
The server accepted some connections and later closed them.
If I understood the descriptions I found correctly, we should not have more than one instance of the socket on port 63444 ...
That's nonsense, wherever you read it. Otherwise TCP servers couldn't work at all.
but after the one listed as "LISTEN" there are about 50 individual socket connections with one end at 63444, all in "TIME_WAIT". How could this happen, and how can I fix it?
This is perfectly normal. There is nothing here that needs fixing.
When a connection is accepted, a new socket is created with the same local IP address and port, and the remote IP address and port set to those of the client. When the server closes this socket it transitions through various states as the close handshake proceeds, ending in TIME_WAIT for twice the maximum segment lifetime (typically a minute or two, depending on the stack), and then it disappears.
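To make that concrete, here is a minimal Python sketch (port 63444 is taken from the netstat output above; everything else is illustrative). The accepted socket shares the listener's local port, and only the remote half of the 4-tuple distinguishes the connections:

import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 63444))
listener.listen(5)

conn, peer = listener.accept()                    # blocks until a client connects
print("listener local:", listener.getsockname())  # ('0.0.0.0', 63444)
print("accepted local:", conn.getsockname())      # (local_ip, 63444) -- same port
print("accepted peer: ", peer)                    # the client's (ip, ephemeral_port)
conn.close()  # begins the close handshake; this end passes through TIME_WAIT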

Related

Can a socket with the same local address be in two states at once, 'LISTEN' and 'ESTABLISHED'?

I have a question about sockets where a socket gets into a peculiar situation: it is both listening and established. I must add that I am working with SSL sockets. Is it possible that the TLS handshake timed out and the socket somehow got into this state?
netstat output
tcp 0 0 XX.XX.xx.83.9999 XX.XX.xx.10.42146 ESTABLISHED --> socket in ESTABLISHED state
tcp 0 0 XX.XX.xx.83.9999 . LISTEN --> socket also in LISTEN state
How can it be in both LISTEN and ESTABLISHED?
A single socket cannot both listen and be established. And this is not what you are seeing.
These are two separate sockets, not the same socket. Both sockets are bound to the same local IP and port. But one is connected to a peer (i.e. connection established) while the other is listening for new connections.
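A self-contained Python sketch (the loopback address and OS-assigned port are assumptions for the demo) shows the two distinct sockets sharing one local address:

import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))    # OS picks a free port
listener.listen(1)
addr = listener.getsockname()

client = socket.create_connection(addr)   # connect to ourselves
conn, _ = listener.accept()

# Two distinct sockets (different file descriptors), same local IP:port:
print(listener.fileno(), listener.getsockname())  # this one shows up as LISTEN
print(conn.fileno(), conn.getsockname())          # this one as ESTABLISHED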

How does a TCP server handle multiple sockets listening on the same port? [duplicate]

This might be a very basic question but it confuses me.
Can two different connected sockets share a port? I'm writing an application server that should be able to handle more than 100k concurrent connections, and we know that the number of ports available on a system is around 60k (16-bit). A connected socket is assigned to a new (dedicated) port, so it seems that the number of concurrent connections is limited by the number of ports, unless multiple sockets can share the same port. Hence the question.
TCP / HTTP Listening On Ports: How Can Many Users Share the Same Port
So, what happens when a server listens for incoming connections on a TCP port? For example, let's say you have a web-server on port 80. Let's assume that your computer has the public IP address of 24.14.181.229 and the person that tries to connect to you has IP address 10.1.2.3. This person can connect to you by opening a TCP socket to 24.14.181.229:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer | Remote Computer
--------------------------------
<local_ip>:80 | <foreign_ip>:80
^^ not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive because, from the standpoint of the client, he has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, his port must be 80 too? This is a sensible thing to think, but it's not what actually happens. If that were correct, we could only serve one user per foreign IP address: once a remote computer connected, he would hog the port-80-to-port-80 connection, and no one else could connect.
Three things must be understood:
1.) On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2.) Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.) When a client connects to a server, it picks a random, unused high-numbered (ephemeral) source port. This way, a single client can have up to ~64k connections to the server for the same destination port.
So, this is really what gets created when a client connects to a server:
Local Computer | Remote Computer | Role
-----------------------------------------------------------
0.0.0.0:80 | <none> | LISTENING
24.14.181.229:80 | 10.1.2.3:<random_port> | ESTABLISHED
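To make points 1 to 3 concrete, here is a minimal threaded-server sketch in Python (the port and the trivial echo handler are illustrative assumptions): the listening socket only accepts, and each accepted connection, still on the same local port, is handed off to a worker thread:

import socket
import threading

def handle(conn, peer):
    # The worker thread uses the accepted socket; the listener is untouched.
    data = conn.recv(4096)
    conn.sendall(data)   # trivial echo, purely illustrative
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))   # hypothetical port
listener.listen(128)

while True:
    conn, peer = listener.accept()   # the listener goes right back to accepting
    threading.Thread(target=handle, args=(conn, peer), daemon=True).start()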
Looking at What Actually Happens
First, let's use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let's start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
So now there is one process that is actively listening (State: LISTEN) on port 500. The local address is 0.0.0.0, which means "listening on all IP addresses". An easy mistake to make is to bind only to 127.0.0.1, which will only accept connections from the current computer. This is not a connection; it just means that a process asked to bind() to that IP and port, and that process is responsible for handling all connections to it. This hints at the limitation that there can only be one process per computer listening on a given IP and port (there are ways to get around that using multiplexing, but this is a much more complicated topic). If a web-server is listening on port 80, it cannot share that port with other web-servers.
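Before connecting a client, a quick Python sketch of that one-listener-per-port rule (same port 500 as above; run with privileges for ports below 1024, and assuming no SO_REUSEPORT trickery):

import socket

a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("0.0.0.0", 500))   # first listener claims the port
a.listen(5)

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("0.0.0.0", 500))   # second bind to the same IP:port
except OSError as e:
    print("second bind failed:", e)   # typically EADDRINUSE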
So now, let's connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/quickweb) that opens a TCP socket, sends the payload ("Test payload." in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:54240 ESTABLISHED -
If you connect with another client and do netstat again, you will see the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:26813 ESTABLISHED -
... that is, the client used another random source port for the second connection. The 4-tuples differ, so the server never confuses the two connections, even though its own IP address and port are the same in both.
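You can watch the client-side ephemeral ports directly from Python (example.com:80 is just an illustrative target):

import socket

# Two connections from the same client to the same server IP:port.
a = socket.create_connection(("example.com", 80))
b = socket.create_connection(("example.com", 80))
print(a.getsockname(), "->", a.getpeername())  # e.g. ('192.168.1.13', 54240) -> ...
print(b.getsockname(), "->", b.getpeername())  # different local port, same peer
a.close()
b.close()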
A server socket listens on a single port. All established client connections on that server are associated with that same listening port on the server side of the connection. An established connection is uniquely identified by the combination of client-side and server-side IP/Port pairs. Multiple connections on the same server can share the same server-side IP/Port pair as long as they are associated with different client-side IP/Port pairs, and the server can handle as many clients as available system resources allow.
On the client-side, it is common practice for new outbound connections to use a random client-side port, in which case it is possible to run out of available ports if you make a lot of connections in a short amount of time.
A connected socket is assigned to a new (dedicated) port
That's a common intuition, but it's incorrect. A connected socket is not assigned to a new/dedicated port. The only actual constraint that the TCP stack must satisfy is that the tuple of (local_address, local_port, remote_address, remote_port) must be unique for each socket connection. Thus the server can have many TCP sockets using the same local port, as long as each of the sockets on the port is connected to a different remote location.
See the "Socket Pair" paragraph in the book "UNIX Network Programming: The sockets networking API" by
W. Richard Stevens, Bill Fenner, Andrew M. Rudoff at: http://books.google.com/books?id=ptSC4LpwGA0C&lpg=PA52&dq=socket%20pair%20tuple&pg=PA52#v=onepage&q=socket%20pair%20tuple&f=false
Theoretically, yes; in practice, mostly not. Most kernels (including Linux) do not allow a second bind() to a port that is already bound (the SO_REUSEPORT option, where available, is the exception). It would not be a very big patch to allow it.
Conceptually, we should differentiate between a socket and a port. Sockets are bidirectional communication endpoints, i.e. "things" where we can send and receive bytes. A socket is a conceptual thing; there is no field in a packet header named "socket".
A port is an identifier that can identify a socket. In the case of TCP, a port is a 16-bit integer, but there are other protocols as well (for example, with Unix domain sockets, a "port" is essentially a string, namely a filesystem path).
The main problem is the following: when an incoming packet arrives, the kernel has to identify which socket it belongs to. The destination port number is the most common key, but it is not the only possibility:
Sockets can be identified by the destination IP of the incoming packets. This is the case, for example, if a server uses two IP addresses simultaneously; it can then run different webservers on the same port, one per IP.
Sockets can be identified by their source IP and port as well. This is the case in many load-balancing configurations.
Because you are writing an application server, it will be able to do this: many connected sockets can share the listening port, distinguished by their remote endpoints.
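A small Python sketch of the different-IPs case (the second loopback address is an assumption; on Linux the whole 127.0.0.0/8 block is local, so this runs as-is):

import socket

a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 8080))
a.listen(5)

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.2", 8080))   # same port, different local IP: allowed
b.listen(5)

print(a.getsockname(), b.getsockname())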
I guess none of the answers covers every detail of the process, so here goes:
Consider an HTTP server:
It asks the OS to bind port 80 to one or more IP addresses (if you choose 127.0.0.1, only local connections are accepted; you can choose 0.0.0.0 to bind to all IP addresses: localhost, local network, wide area network, both IP versions).
When a client connects to that port, the new connection waits in the listening socket's backlog queue until the server calls accept() (that's why the socket has a backlog: handshakes and accepts are not instantaneous, so pending connections are queued).
accept() then returns a new socket dedicated to that connection. Contrary to a common belief, the new socket keeps local port 80; it is distinguished from other connections only by the remote IP:port half of its 4-tuple. No new local port is chosen.
The listening socket is immediately ready to accept the next pending connection from the backlog.
When client or server disconnects, the side that closes first passes through TIME_WAIT (while the side that has not yet closed sits in CLOSE_WAIT). TIME_WAIT allows stray packets from the old connection to drain from the network. The default time for that state is 2 * MSL seconds, and the socket does consume some memory while it waits.
After that wait, the connection's 4-tuple is free to be reused.
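A side note on that TIME_WAIT interval: a server restarted while old connections still linger can rebind its port with SO_REUSEADDR, as in this Python sketch (port 8080 is an illustrative assumption):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Without SO_REUSEADDR, bind() may fail with EADDRINUSE while connections
# from a previous run of the server are still in TIME_WAIT on this port.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("0.0.0.0", 8080))
s.listen(128)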
So, TCP cannot even share a port amongst two IPs!
It is not possible for two listening sockets to bind the same IP:port at the same instant (absent SO_REUSEPORT), but you can design your application so that different sockets use the port at different times; and, as described above, established connections sharing the local port are not a problem at all.
Absolutely not: multiple connections may share the same ports, but they will differ in remote IP address or port, so each 4-tuple is unique.

How to handle TCP keepalive in an application

I have a TCP application running on VxWorks. I have the SO_KEEPALIVE option set on my TCP connections. My application keeps track of all TCP connections and puts them into a linked list.
If a client is idle for a long time, we see that the connection is closing down. The connection is not listed in netstat output.
Since the connection is closed by the TCP stack, the resources allocated for it are not cleaned up. Can you please help me figure out how the application gets notified when a connection is closed due to keepalive failures?
TCP keepalive is intended primarily to keep middleboxes (NAT devices and stateful firewalls) from dropping the connection's state during long periods of inactivity, and to detect dead peers; it is not meant to prevent your OS or application from shutting down the connection when it deems appropriate.
In most TCP/IP implementations, you can determine if a connection has been closed by attempting to read from it.
From this reference: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I quote:
This procedure is useful because if the other peers lose their connection (for example by rebooting) you will notice that the connection is broken, even if you don't have traffic on it. If the keepalive probes are not replied to by your peer, you can assert that the connection cannot be considered valid and then take the correct action.
If, for instance, you have a server to which many clients connect without sending data regularly, you can end up in a situation where clients are no longer there. A client may have rebooted, and this goes undetected because a FIN is never sent in that case.
For cases like this the keepalive exists.
From TCP's point of view there is nothing special about a keepalive probe. If the peer fails to acknowledge keepalives, the stack resets the connection: your next read on the socket will fail (typically with ECONNRESET), and closing your end of the socket is the only corrective action you can take at that moment.
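For illustration, here is a Python sketch of enabling and tuning keepalive on a connected socket. The tuning constants (TCP_KEEPIDLE and friends) are the Linux names; on VxWorks the knobs and defaults differ, so treat this as a sketch of the idea only:

import socket

sock = socket.create_connection(("example.com", 80))   # illustrative target
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):   # Linux-specific tuning, guarded
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # unanswered probes before reset
try:
    data = sock.recv(4096)
    if not data:
        print("peer closed cleanly (FIN)")   # orderly shutdown, not a keepalive failure
except ConnectionResetError:
    print("connection reset, e.g. keepalive probes went unanswered")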
Since the connection is closed by the TCP stack, the resources allocated for it are not cleaned up.
Only if you never use the connection again.
If a client is idle for a long time, we see that the connection is closing down. The connection is not listed in netstat output.
Make up your mind. Either you see it or you don't. What you will see is the port in CLOSE_WAIT in netstat.
Can you please help me figure out how the application gets notified when a connection is closed due to keepalive failures?
Next time you use the connection for read or write you will get an ECONNRESET.

Handling multiple TCP connections on only one socket on the server

Please assume that we can distinguish packets of different TCP connections from each other; if so, can we accept multiple TCP connections on only one socket on the server side? I know that the server binds a listening socket and, when accepting a new connection, assigns a new socket to it. Would I have to override the accept() system call?
Please assume that we can distinguish packets of different TCP connections from each other
You can't assume that. There are no 'packets' visible to the application over a TCP connection; a TCP connection provides a byte stream. You can't guarantee that the next thing you read will be, say, a message header telling you which client the message is from.
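Because of that byte-stream property, an application multiplexing messages has to add its own framing. A hypothetical length-prefix scheme in Python (the 4-byte header and the helper names are illustrative, not from the question):

import struct

def recv_exact(sock, n):
    # Read exactly n bytes; recv() may legally return fewer per call.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def send_msg(sock, payload):
    # 4-byte big-endian length prefix, then the payload itself.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)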

TCP backlog exhaustion causes incoming connections not to be signaled

I am doing following:
Open a listening TCP socket.
Set BACKLOG to 10
Open 50 connecting sockets (non-blocking connect is used)
poll on the listening socket and accept the connections
Connections that are able to transfer any data are closed
What I see is that all 50 connects succeed; however, POLLIN on the listening socket is signaled only ~30 times, which means only ~30 connections are accepted.
When I run netstat in such a condition I see no hanging ESTABLISHED connections. There are a couple of connections hanging in the TIME_WAIT state, but that doesn't seem relevant.
The above was observed on Linux; however, similar behaviour seems to happen on FreeBSD and NetBSD as well.
Anyone any experience with this kind of thing?
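For reference, a minimal Python repro of the setup described above (loopback and an OS-assigned port are assumptions to keep it self-contained):

import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 0))       # OS picks a free port
listener.listen(10)                   # deliberately small backlog
port = listener.getsockname()[1]

clients = []
for _ in range(50):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.setblocking(False)
    c.connect_ex(("127.0.0.1", port))  # non-blocking connect
    clients.append(c)

accepted = 0
poller = select.poll()
poller.register(listener, select.POLLIN)
while poller.poll(1000):               # stop once POLLIN stays quiet for a second
    conn, _ = listener.accept()
    conn.close()
    accepted += 1
print("accepted", accepted, "of 50")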
I've got the explanation out-of-band. Those interested in it can read about it here:
http://www.evanjones.ca/tcp-stuck-connection-mystery.html