TCP server with five clients, one of which has closed abnormally - tcpserver

A TCP server has connections to 5 clients and is waiting in a select() call for clients to read/write.
But one of the clients was closed abnormally.
How will the server come to know that the client was closed?

Is your server designed to handle multiple clients at the same time?
For example, does it create a sub-process every time a new client connects?
If it is indeed a multi-client TCP server, you can have the sub-process notify the main process with a signal when its client disconnects (signal() installs the handler in the main process): http://www.tutorialspoint.com/unix_system_calls/signal.htm

Related

What should the server do when the process of a connected client, sharing a TCP socket with the server, was force-killed?

Using net and stream sockets: after a client connects to the server, what should the server do when the client process that owns the TCP socket is force-killed?
Does the server know when a connected client's process was force-killed?
The server knows when a client socket gets closed, which happens implicitly when the process owning the socket gets killed. The server does not learn the reason why the socket was closed, though.
So there is no way for the server to react specifically to a socket closed because the process was killed. The server can only react to a socket being closed at a time when it does not expect the close. How the server should react to this depends on the specific use case, i.e. there is no universal behavior.

Windows server 2008 send [RST, ACK] packets while several clients ask for tcp connections at the same time(less than 5ms)

I have a Java Socket Server running on a Windows Server 2008.
When using a multi-threaded client to open several TCP connections at the same time, the client always gets the "Errno 111 connection refused" error after the establishment of the first connection.
Here's the Wireshark capture (10.1.3.136 is the server, 10.34.10.132 is the client): [trace screenshots omitted; the relevant reset packets were highlighted in red]
So, what's the issue?
If I delay launching the threads by more than 5 ms, or use a CentOS machine as the server, the errors disappear. No exceptions appear in the server's trace file.
The issue is that you have filled the backlog queue, whereupon Windows starts issuing resets to further incoming connection requests.
This could be because you specified a small backlog value, but the more likely cause is that your server is simply not accepting connections fast enough: your accept loop is fiddling around doing other things, such as DNS lookups or even I/O with the client, all of which should be done in the client's thread. All the accept loop should do is accept sockets and start threads.

tcp connection issue for unreachable server after connection

I am facing an issue with a TCP connection.
I have a number of clients connected to a remote server over TCP.
Now, if for any reason I can no longer reach the server after the TCP connection was successfully established, I do not receive any error on the client side.
If I run netstat on the client, it shows the clients still connected to the remote server, even though I cannot ping the server.
So I am in a state where the server shows it is not connected to any client, while the client shows it is connected to the server.
I have tested this with WebSockets and node.js as well, and the same behavior persists there too.
I have tried to google around, but no luck.
Is there any standard solution for this?
This is by design.
If two endpoints have a successful socket (TCP) connection between each other, but aren't sending any data, then the TCP state machines on both endpoints remain in the ESTABLISHED state.
Imagine if you had a shell connection open in a terminal window on your PC at work to a remote Unix machine across the Internet. You leave work that evening with the terminal window still logged in and at the shell prompt on the remote server.
Overnight, some router in between your PC and the remote computer goes out. Hours later, the router is fixed. You come into work the next day and start typing at the shell prompt. It's like the loss of connectivity never happened. How is this possible? Because neither socket on either endpoint had anything to send during the outage. Given that, there was no way that the TCP state machine was going to detect a connectivity failure - because no traffic was actually occurring. Now if you had tried to type something at the prompt during the outage, then the socket connection would eventually time out within a minute or two, and the terminal session would end.
One workaround is to enable the SO_KEEPALIVE option on your socket. YMMV with this socket option, as this mode of TCP does not always send keep-alive messages at a rate you control.
A more common approach is to just have your socket send data periodically. Some protocols on top of TCP that I've worked with have their own notion of a "ping" message for this very purpose. That is, the client sends a "ping" message over the TCP socket every minute and the server responds back with "pong" or some equivalent. If neither side gets the expected ping/pong message within N minutes, then the connection, regardless of socket error state, is assumed to be dead. This approach of sending periodic messages also helps with NATs that tend to drop TCP connections for very quiet protocols when it doesn't observe traffic over a period of time.

Is TCP Reset (RST) two way?

I have a client-server (Java) application using persistent TCP connections, but sometimes the server gets a java.io.IOException: Connection reset by peer when trying to write to the socket; however, I don't see any error in the client log.
This RST is probably caused by an intermediate proxy/router, but if that's the case, should this be seen on the client as well?
If the RST is sent by the client, it can be seen there with a packet sniffer such as Wireshark. However, it won't show up at the user-level socket API, since it's sent by the OS in response to various erroneous inputs (such as connection attempts to a closed port).
If the RST is injected by the network, then something in the middle is pretending to be the client in order to sever the connection. It can do so in one direction, or in both. In that case, the client might not see anything, except for an RST sent by the actual server when the client continues to send data over a connection it perceives as open while the server sees it as closed.
Try capturing the traffic on both the server and the client, see where the resets are coming from.

Creating client and server sockets in the same file

I have to design a job scheduling system which works like this: users (clients) deposit jobs (executables) on a server. I have three files - client.c, jobQ.c and server.c. The jobQ takes client requests and sends them to the server at specific timestamps (if a user wants to run job X on server Y at 07-29-2010 3:34 AM, then the jobQ stores it in a stack, and when the time comes and the server is free, it sends the job to the server).
jobQ.c would act as a server for client.c and as a client for server.c.
I have used TCP/IP sockets to program these, and the problem I am facing is while creating multiple sockets in jobQ.c. Is it possible for the same file to have client and server sockets? The error points to this line in jobQ.c:
sockSer = socket(AF_INET, SOCK_STREAM, 0);
error: lvalue required as decrement operand
...when I am opening a second socket to talk to the server.
My idea is that jobQ would open different ports to listen to clients and connect to the server.
Thanks,
Sayan