TCP connection between client and server gone wrong - sockets

I establish a TCP connection between my server and client, both of which run on the same host. The client continuously gathers and reads data from the server (the "source" in our case).
We read data on, say, 3 different ports.
Once the source stops publishing data or gets restarted, the server/source is not able to publish data again on the same port: it reports that the port is already bound. The reason given is that the client still has established connections on those ports.
I wanted to know what the probable reasons for this could be. Could the issue be that the client is already listening on these ports and trying to reconnect again and again (we have such a reconnection mechanism)? I am mainly looking for a reason on the source side, because the same client code works perfectly fine for us when the source and client are on different hosts.
Edit:
I found this while going through various articles.
On the question of using SO_LINGER to send a RST on close to avoid the TIME_WAIT state: I've been having some problems with router access servers (names withheld to protect the guilty) that have problems dealing with back-to-back connections on a modem dedicated to a specific channel. What they do is let go of the connection, accept another call, attempt to connect to a well-known socket on a host, and the host refuses the connection because there is a connection in TIME_WAIT state involving the well-known socket. (Stevens' book TCP Illustrated, Vol 1 discusses this problem in more detail.) In order to avoid the connection-refused problem, I've had to install an option to do reset-on-close in the server when the server initiates the disconnection.
Link to source: http://developerweb.net/viewtopic.php?id=2941
I guess I am facing the same problem: 'attempt to connect to a well-known socket on a host, and the host refuses the connection'. The probable fix mentioned is an 'option to do reset-on-close in the server when the server initiates the disconnection'. Now how do I do that?

Set the SO_REUSEADDR option on the server socket before you bind it and call listen().
EDIT: The suggestion to fiddle around with the SO_LINGER option is worthless and dangerous to your data in flight. Just use SO_REUSEADDR.
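For illustration, a minimal sketch in C (plain BSD sockets; the port number is hypothetical and error handling is trimmed):

    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        /* Must be set before bind(): lets the server re-bind its port
           even while old connections are still sitting in TIME_WAIT. */
        if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0)
            perror("setsockopt(SO_REUSEADDR)");

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(6002);            /* hypothetical port */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind");
        listen(fd, 16);
        /* ... accept() loop as usual ... */
        return 0;
    }

Unlike the SO_LINGER trick, SO_REUSEADDR does not touch data in flight; it only relaxes the bind-time check.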

You need to close the socket bound to that port before you shut down or restart the server!
http://www.gnu.org/software/libc/manual/html_node/Closing-a-Socket.html
Also, there's a timeout: a closed TCP connection lingers in the TIME_WAIT state for roughly twice the maximum segment lifetime (commonly between one and four minutes), so after you close a TCP socket you may still have to wait that long before the port can be bound again.
You can use netstat to see all the bound ports on your system. If you shut down your server, or if you close the server after forking on connect, you may have zombie processes that remain bound to certain ports and never close, and thus you can't rebind to the same port. Show some code.
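As a rough sketch of releasing the listening socket on shutdown (POSIX; the global variable and the choice of SIGINT are illustrative, not from the question's code):

    #include <signal.h>
    #include <unistd.h>

    static int listen_fd = -1;      /* hypothetical global listening socket */

    static void on_sigint(int sig) {
        (void)sig;
        if (listen_fd >= 0)
            close(listen_fd);       /* release the bound port before exiting */
        _exit(0);
    }

    /* in main(), after creating listen_fd: signal(SIGINT, on_sigint); */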

Related

Keep TCP connection on permanently with ESP8266 TCP client

I am using the ESP8266 WiFi chip with the SMING framework.
I am able to establish a TCP connection as a client to a remote server. The code for initiating the client connection to the server is simple.
tcpClient.connect(SERVER_HOST, SERVER_PORT);
Unfortunately, the connection closes after idling for some time. I would like to keep this connection open permanently. How can this be done?
You will actually need to monitor the connection state and reconnect if it fails. Your protocol on top of it will need to keep track of what was actually received by the other side and retransmit whatever was lost.
In any wireless network your link may go down for one reason or another, so if you need to maintain a long-term connection you will have to implement it in a layer above TCP itself.
TCP will stay connected as long as both sides allow it (neither has disconnected) and there are no errors on the link. In that case, sending keepalives may actually cause disconnects: a keepalive may fail at a moment when the link could still have recovered, whereas without the keepalive the connection would have stayed up.
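As a rough sketch of that monitor-and-reconnect pattern (written against plain BSD sockets rather than the SMING API; the host and port are placeholders):

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define SERVER_HOST "example.com"   /* placeholder */
    #define SERVER_PORT "6002"          /* placeholder */

    /* Block until a connection succeeds, backing off between attempts. */
    static int connect_with_retry(void) {
        for (;;) {
            struct addrinfo hints, *res;
            memset(&hints, 0, sizeof(hints));
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(SERVER_HOST, SERVER_PORT, &hints, &res) == 0) {
                int fd = socket(res->ai_family, res->ai_socktype,
                                res->ai_protocol);
                if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
                    freeaddrinfo(res);
                    return fd;          /* connected */
                }
                if (fd >= 0)
                    close(fd);
                freeaddrinfo(res);
            }
            sleep(5);                   /* back off, then try again */
        }
    }

The caller then treats any read or write failure as a cue to close the socket, call connect_with_retry() again, and replay whatever the application protocol says the other side never received.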

tcp connection issue for unreachable server after connection

I am facing an issue with a TCP connection.
I have a number of clients connected to a remote server over TCP.
Now, if for some reason I am unable to reach my server after the TCP connection has been successfully established, I do not receive any error on the client side.
If I run netstat on the client end, it shows that the clients are connected to the remote server, even though I am not able to ping the server.
So now I am in a situation where the server shows it is not connected to any client, while on the other end the client shows it is connected to the server.
I have also tested this with WebSockets in node.js, and the same behavior occurs there.
I have tried googling around, but no luck.
Is there any standard solution for this?
This is by design.
If two endpoints have a successful socket (TCP) connection between each other, but aren't sending any data, then the TCP state machines on both endpoints remain in the ESTABLISHED (connected) state.
Imagine if you had a shell connection open in a terminal window on your PC at work to a remote Unix machine across the Internet. You leave work that evening with the terminal window still logged in and at the shell prompt on the remote server.
Overnight, some router in between your PC and the remote computer goes out. Hours later, the router is fixed. You come into work the next day and start typing at the shell prompt. It's like the loss of connectivity never happened. How is this possible? Because neither socket on either endpoint had anything to send during the outage. Given that, there was no way that the TCP state machine was going to detect a connectivity failure - because no traffic was actually occurring. Now if you had tried to type something at the prompt during the outage, then the socket connection would eventually time out within a minute or two, and the terminal session would end.
One workaround is to enable the SO_KEEPALIVE option on your socket. YMMV with this socket option, as this mode of TCP does not always send keep-alive messages at a rate you control.
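On Linux, for example, the probe timing can be tuned per socket; a sketch (TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific, and the values here are arbitrary):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* fd is an already-connected TCP socket. */
    void enable_keepalive(int fd) {
        int on = 1, idle = 60, intvl = 10, cnt = 3;
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
        /* First probe after 60 s of idle time, then every 10 s;
           declare the peer dead after 3 unanswered probes. */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
    }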
A more common approach is to just have your socket send data periodically. Some protocols on top of TCP that I've worked with have their own notion of a "ping" message for this very purpose. That is, the client sends a "ping" message over the TCP socket every minute and the server responds back with "pong" or some equivalent. If neither side gets the expected ping/pong message within N minutes, then the connection, regardless of socket error state, is assumed to be dead. This approach of sending periodic messages also helps with NATs that tend to drop TCP connections for very quiet protocols when it doesn't observe traffic over a period of time.
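A sketch of that application-level ping idea (the "PING" framing and the interval handling are illustrative, not from any particular protocol):

    #include <sys/socket.h>
    #include <time.h>

    /* Send "PING" once per interval; the peer is expected to answer
       with "PONG". Returns -1 if the send itself fails. */
    int send_ping_if_due(int fd, time_t *last_ping, int interval_sec) {
        time_t now = time(NULL);
        if (now - *last_ping < interval_sec)
            return 0;                   /* not due yet */
        *last_ping = now;
        /* MSG_NOSIGNAL (Linux) avoids SIGPIPE on a dead peer;
           on other systems, ignore SIGPIPE instead. */
        if (send(fd, "PING", 4, MSG_NOSIGNAL) != 4)
            return -1;                  /* connection is likely dead */
        return 1;
    }

If neither a pong nor any other data arrives within the agreed window, close the socket and treat the connection as dead, regardless of what netstat says.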

How to handle TCP keepalive in application

I have a TCP application running on VxWorks. I have the SO_KEEPALIVE option set on my TCP connections. My application keeps track of all TCP connections and puts them into a linked list.
If a client is idle for a long time, we see that the connection is closing down; the connection is not listed in the netstat output.
Since the connection is closed by the TCP stack, the resources allocated for that connection are not cleaned up. Can you please help me figure out how the application gets notified when a connection is closed due to keep-alive failures?
TCP keepalive is intended primarily to prevent network routers from shutting the TCP connection down during long periods of inactivity, not to prevent your OS or application from shutting down the connection when it deems appropriate.
In most TCP/IP implementations, you can determine if a connection has been closed by attempting to read from it.
From this reference: http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
I quote:
This procedure is useful because if the other peers lose their connection (for example by rebooting) you will notice that the connection is broken, even if you don't have traffic on it. If the keepalive probes are not replied to by your peer, you can assert that the connection cannot be considered valid and then take the correct action.
If you have a server for instance and a lot of clients can connect to it, without sending regularly, you might end up in a situation with clients that are no longer there. A client may have rebooted and this goes undetected because a FIN is never sent in that case.
For cases like this the keepalive exists.
From TCP's point of view there is nothing special about a keepalive. Hence, if the peer fails to ack a keepalive, the next read on your socket will fail with an error (rather than returning 0 bytes, which is what an orderly close produces), and you'll have to close your end of the socket. That is the only corrective action you can take at that moment.
"Since the connection is closed by the TCP stack, the resources allocated for that connection are not cleaned up."
Only if you never use the connection again.
"If a client is idle for a long time, we see that the connection is closing down; the connection is not listed in the netstat output."
Make up your mind. Either you see it or you don't. What you will see in netstat is the port in the CLOSE_WAIT state.
"Can you please help me figure out how the application gets notified when a connection is closed due to keep-alive failures?"
The next time you use the connection for a read or write, you will get an ECONNRESET.
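In code, the notification therefore arrives on the next I/O call; a sketch (POSIX flavor; exact error codes can differ between stacks, including VxWorks):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Returns 0 while the connection is alive, -1 once it should be
       closed and unlinked from the application's connection list. */
    int check_connection(int fd) {
        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            return 0;                   /* normal data */
        if (n == 0)
            return -1;                  /* orderly shutdown: peer sent FIN */
        if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;                   /* transient; not a closure */
        perror("read");                 /* keepalive failure or reset lands
                                           here, e.g. ETIMEDOUT or ECONNRESET */
        return -1;
    }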

How can we remove close_wait state of the socket without restarting the server?

We have written an application in which client-server communication is used, built on IOCP (I/O completion ports).
Clients connect to the server through wireless access points.
When a temporary disconnection happens in the network, this can leave a socket in the CLOSE_WAIT state. This indicates that the client properly closed the connection, but the server still has its socket open.
If too many sockets on the port (over which the server and client were talking) are in the CLOSE_WAIT state, then at peak load the server stops functioning and rejects connections, which is totally frustrating. In that case the user has to restart the server to wipe out all the CLOSE_WAIT sockets. When the server restarts, clients try to connect again and the server calls accept again. But before accepting a new connection, the previous connection should be closed on the server side. How can we do that?
How can we remove the CLOSE_WAIT state of a socket without restarting the server?
Is there any alternative way to avoid the server restart?
We also came to know that if all of the available ephemeral ports are allocated to client applications, the client experiences a condition known as TCP/IP port exhaustion. When TCP/IP port exhaustion occurs, client port reservations cannot be made, and errors occur in client applications that attempt to connect to a server via TCP/IP sockets.
If this is what is happening, we would need to increase the upper range of ephemeral ports that are dynamically allocated to client TCP/IP socket connections.
Reference: http://msdn.microsoft.com/en-us/library/aa560610%28v=bts.10%29.aspx
Please let us know whether this alternative would be useful.
Thanks in advance.
Regards,
Amey
Fix the server code.
The server should be reading with a timeout, and if the timeout expires it should close the socket.
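One way to sketch that, using a per-socket receive timeout (shown in POSIX form with SO_RCVTIMEO; on Windows the option value is a DWORD of milliseconds, and the 30-second timeout here is arbitrary):

    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Give each accepted socket a receive timeout; if a read times out,
       treat the peer as gone and close our end, so sockets don't pile
       up in CLOSE_WAIT. */
    void serve_client(int fd) {
        struct timeval tv = { 30, 0 };          /* 30 s, illustrative */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        char buf[1024];
        for (;;) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);
            if (n > 0)
                continue;                       /* process the data here */
            /* n == 0: peer closed; n < 0 with EAGAIN/EWOULDBLOCK:
               the timeout expired. Either way, close our side. */
            close(fd);
            return;
        }
    }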

Connectivity issues with SSL Socket Server

A socket server using SSLStream sometimes refuses new connections from clients.
I tried telnet hostname port, and it says:
Connecting To host... Could not open connection to the host, on port 6002: Connect failed
I used netstat -a, and I see the TCP status as:
TCP 0.0.0.0:6002 host:0 LISTENING
I also see the service listening in TCPView.
The error I see on the client side is 'connection refused' with error code 10061.
The same socket server was accepting new connections and running fine without any issues, but after some time the above issue happens. It's random.
When I restart the socket server it works fine and accepts connections again, but I don't want to do that frequently, because it disconnects clients that are already connected.
Could somebody help me troubleshoot this?
Thanks.
Where are you running netstat? On the server?
Try connecting to the socket from localhost (from the server itself) using the destination IP address 127.0.0.1.
Do the same test with the network IP of the server.
My guess is that the firewall is preventing external access or a router in between is preventing the connection.
It works for a while and then stops. A few options I can think of:
1. Some firewall on the way does some kind of throttling.
2. You open and close too many connections too quickly. In this case you exhaust the ephemeral ports on the client (usually) and/or on the server. If you run netstat -a you will see a lot of sockets in the TIME_WAIT state; try this on both client and server. The solution here is to reuse connections (best), or to increase the number of ephemeral ports (a registry setting), but that will only take you so far.
3. You have a bug in your server and it stops accepting new connections after a while (see the accept-loop sketch below).
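As an illustration of the third point, a hedged sketch of an accept loop that survives transient errors instead of quietly stopping (plain C; the same idea carries over to an SSLStream server):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Keep accepting even when individual accept() calls fail; a loop
       that bails out on the first error is a common reason a server
       "stops listening" after running fine for a while. */
    void accept_loop(int listen_fd) {
        for (;;) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0) {
                if (errno == EINTR || errno == ECONNABORTED)
                    continue;           /* transient: keep going */
                if (errno == EMFILE || errno == ENFILE) {
                    perror("accept");   /* out of descriptors: shed load */
                    sleep(1);
                    continue;
                }
                perror("accept");       /* log, but keep trying */
                continue;
            }
            /* hand the client off to a worker here, and make sure every
               code path eventually close()s it */
            close(client);              /* placeholder hand-off */
        }
    }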