client/server socket reconnection

I developed a client/server application based on sockets.
The client side is in Delphi. The server side is on an IBM i (AS/400).
Sometimes, the client and the server get disconnected. I'm not really sure why, but I think it's because of a machine between them (a proxy, a router, a firewall) sending an RST packet.
Anyway, I'm trying to reconnect the client to the same process on the server (not another one, the same one; that's important).
To do that, I create a new connection from the client. So, I have two processes on the server. I'll call them the "LostProcess" and the "HelperProcess".
The LostProcess is waiting for data in a data queue.
The client tells the HelperProcess that it was connected to the LostProcess.
The HelperProcess sends data to the LostProcess (via the data queue).
The HelperProcess calls givedescriptor(), and the LostProcess calls takedescriptor().
Then the HelperProcess stops and the LostProcess sends data to the client (to say “I'm back”).
So far, it works, but when the client sends data, the LostProcess (we can call it the RebornProcess now) never receives it (I tried not stopping the HelperProcess, and it is the HelperProcess that receives the data).
With Wireshark, I could see that the client sends data from a different local port, so I guess that's why the RebornProcess does not receive it.
I tried to force the local port of the new client socket to be the same as the first one, but then the new client socket cannot connect for a while, and if I wait long enough, I have the same problem as before.
Does somebody have an idea how to make the reconnection work?

What you are doing is generally not possible. Once a TCP connection has been lost, it is gone forever. Both apps must close their respective sockets for the lost connection, and the client app must create a new socket connection to continue exchanging data with the server.
If the client app wants to reuse the same local port via bind() (which is generally not advisable), but does not want to wait for the OS to release the port first, then the client can enable the SO_REUSEADDR option via setsockopt() on the new socket before calling bind() and connect().
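For illustration, here is what that looks like in Python (the same option exists in Delphi's socket API and on the IBM i); the address and port numbers are placeholders:

    import socket

    # Sketch: reconnect from the same local port without waiting for the
    # OS to release it. SERVER_HOST, SERVER_PORT, LOCAL_PORT are placeholders.
    SERVER_HOST, SERVER_PORT = "example.com", 9000
    LOCAL_PORT = 50123  # the local port used by the lost connection

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Must be set before bind(); otherwise bind() fails with EADDRINUSE
    # while the old connection lingers in TIME_WAIT.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", LOCAL_PORT))
    s.connect((SERVER_HOST, SERVER_PORT))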

Pretty sure the answer is you can't.
There'd be all kinds of security issues if TCP/IP allowed a new connection to reattach to an existing process's connection.
You should have the lost process terminate and just use the new process instead.

Related

Network packet loss causes client code to act strange

I am facing an issue and need some help coming up with the best way to resolve it.
Here is the problem:
I have server code running which has a socket that is listening to accept new incoming connections.
I then attempt to start a client, which also has a socket that is listening to accept new incoming connections.
The client code begins with accepting a new connection on the listening socket file descriptor and gets a new socket file descriptor for I/O.
The server does the same thing and gets a new socket file descriptor for I/O.
Note: the client is not completely up yet. It needs to receive some bytes from the server and send some before it can start.
I then introduce some packet loss on the TCP/IP network connection. This causes certain errors (for example, the recv() system call in the client process sees no received bytes, so the client closes the socket connection on its side and the associated socket file descriptor). However, this leaves the client process hanging, since there are other descriptors in the FD_SET but none of them will ever be ready for I/O, so pselect() keeps returning 0 file descriptors ready. The client needs to send and receive certain bytes over the connection before it can start up.
My question is: what should I do here?
I did some research on the SO_KEEPALIVE option, which I could set when creating the new socket during the accept() system call. But I do not think it would resolve my problem, especially if the network packet loss is ongoing.
Should I kill the client process when I realize there are no file descriptors ready for I/O and never will be? Is there a better way to approach this?
If I'm reading the question correctly, the core of the question is: "what should your client program do when a TCP connection that is central to its functionality has been broken?"
The answer to that question is really a matter of preference -- what would you like your client program to do in that case? Or to put it another way, what behavior would your users find most useful?
In many of my own client programs, I have logic included such that if the TCP connection to the server is ever broken, the client will automatically try to create a new TCP connection to the server and thereby recover its connectivity and useful functionality as soon as possible.
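As a sketch, such a reconnect loop might look like this in Python; handle() and the backoff numbers are placeholders, not part of any real program of mine:

    import socket
    import time

    def run_client(host, port):
        # Reconnect forever; the delay values are illustrative.
        delay = 1
        while True:
            try:
                with socket.create_connection((host, port), timeout=10) as s:
                    s.settimeout(None)  # the timeout above only guards connect()
                    delay = 1           # connected: reset the backoff
                    while True:
                        data = s.recv(4096)
                        if not data:    # orderly close by the server
                            break
                        handle(data)    # hypothetical application callback
            except OSError:
                pass  # refused, reset, unreachable, timed out, ...
            time.sleep(delay)
            delay = min(delay * 2, 60)  # back off, capped at one minute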
The other obvious option would be to just have the client quit when the connection is broken; perhaps with some sort of error indication so that the user will know why the client went away. (perhaps an error dialog that asks if the user would like to try to reconnect?)
SO_KEEPALIVE is probably not going to help you much in this scenario, by the way -- despite its name, its purpose is to help a program discover in a more timely manner that TCP connectivity has been lost, not to try harder to keep a TCP connection from being lost. (And it doesn't even serve that purpose particularly well, since in many TCP stacks only one keepalive packet is sent per hour or so, which means that even with SO_KEEPALIVE enabled it can be a very long time before your program starts receiving errors reflecting the loss of network connectivity.)
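That said, on Linux (and some other stacks) the keepalive timers can be shortened per socket; a sketch, assuming the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options exist on your platform:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning; not available on every platform.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # idle seconds before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before the connection is dropped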

websocket close error handling

When establishing a WebSocket connection between a client and a server, the connection can close unexpectedly for several reasons:
Inactivity on the TCP channel.
A server issue that causes the connection to close.
A client crash or reload/refresh.
I am looking for a way to deal with such situations or, at least, to know that they occurred.
When reading about WebSocket close, I understood that the WebSocket protocol supports server-initiated pings/pongs, which the server can use to know whether a client has crashed (client-initiated ping/pong is not supported). Is that the best way to deal with a client crash?
Also, I see in the spec that on the client side we can listen for the onclose event, and that there are several codes for understanding why the connection was closed.
When the server crashes, is the onclose event always called?
Both servers and clients can send pings; however, there is no method in the JavaScript API to send such control frames.
A common approach is to send protocol-level pings from the server to the client regularly. If the server does not get a pong frame within a given time, it disconnects the client.
However, clients should also know if the server is unreachable or the connection is half open, so having an application-level ping/pong (i.e., some user data representing a ping or pong) would allow both server and client to figure out if the other end is no longer reachable. As before, the server can send pings and expect pongs within a given time, but the client can also expect pings within a given time or consider itself disconnected, and then try to reconnect.
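If your server happens to be in Python, the websockets library is one place where this pattern is built in; a sketch (the port and intervals are arbitrary, and older versions of the library pass an extra path argument to the handler):

    import asyncio
    import websockets

    async def handler(ws):
        # Echo handler; the library answers protocol-level pings for us
        # and raises ConnectionClosed when the peer stops responding.
        try:
            async for message in ws:
                await ws.send(message)
        except websockets.ConnectionClosed:
            pass  # client crashed, lost its network, or closed normally

    async def main():
        # Ping every 20 s; drop the client if no pong arrives within 20 s.
        async with websockets.serve(handler, "localhost", 8765,
                                    ping_interval=20, ping_timeout=20):
            await asyncio.Future()  # run forever

    asyncio.run(main())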
About closing reasons, it is worth checking: getting the reason why websockets closed
If the server crashes and the connection remains half open, you will have to detect this situation yourself and call .close() on the WebSocket object, and then the onclose event will be called.
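A sketch of that detection on the client side, again in Python for illustration (in browser JavaScript the shape is the same: a timer you reset on every message); the 30-second budget and handle() are assumptions:

    import asyncio
    import websockets

    async def run():
        async with websockets.connect("ws://localhost:8765") as ws:
            while True:
                try:
                    # If the server goes silent for 30 s, assume the
                    # connection is half open and close it ourselves.
                    message = await asyncio.wait_for(ws.recv(), timeout=30)
                except asyncio.TimeoutError:
                    await ws.close()  # now the close handshake/event fires
                    break
                handle(message)       # hypothetical application callback

    asyncio.run(run())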

Delphi Indy TCP Client/Server communication best approach

I have a client and a server application that is communicating just fine, there is a TIdCmdTCPServer in the server and a TIdTCPClient in the client.
The client has to authenticate with the server, the client asks the server for the newest version information and downloads any updates, and there are other communications. All of this communication is done with TIdTCPClient.SendCmd() and TIdTCPClient.LastCmdResult.Text.Text.
The way it is now, the server receives commands and sends replies, and the clients only receive replies, never commands. I would like to implement a way for the client to receive commands. But from what I have heard, if the client uses SendCmd() it should never be listening for data with ReadLn(), as that would interfere with the reply expected by SendCmd().
I thought of making a command to check for commands. For example, the client would send a command like "IsThereCommandForMe", and the server would keep a pool of commands for each client and send them in the reply when the client asks. But I don't think that would be a good approach, as there would be a big delay between a command becoming available and the client asking for it. I also thought of making a new connection with new components, for example a TIdCmdTCPClient, but then there would be two connections for each client, and I don't like that idea, as I think it could easily cause problems in the communication.
The reason I want this is that I want to implement chat functionality in the client, and it should receive messages from the server without asking for them all the time; imagine all clients continually asking the server if there is a message for them. I would also like to be able to inform the client when an update is available, instead of the client asking whether there is one. And with this I could send more commands to the client too.
What are your thoughts on this? How can I make the server not only receive commands from the clients but also send them?
TCP sockets are bidirectional by design. Once the connection between 'client' and 'server' has been established, they are symmetric and data can be sent at any time from any side over the same socket.
It only depends on the protocol (which is just a written 'contract' for the communication) which communication model is used. HTTP, for example, uses a request/reply model. With Telnet, for example, both sides can initiate data transmissions. (If you take a look at the Indy implementation of Telnet, you will see that it uses a background thread to listen for server data, but it uses the same socket connection in the main thread to send data from client to server.)
A "full duplex" protocol which supports both request/response and server push, and is also firewall-friendly, is WebSockets. With WebSockets (an HTTP upgrade), the server can send data to the connected client(s) at any time. This would meet your 'chat' requirement.
If you use TIdTCPClient / TIdCmdTCPServer, corporate firewalls might block the communication.
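To make the Telnet-style model concrete, here is a sketch in Python: a background thread receives whatever the server pushes, while the main thread sends over the same socket (the address is a placeholder):

    import socket
    import threading

    def reader(s):
        # Background thread: receive server-initiated data at any time.
        while True:
            data = s.recv(4096)
            if not data:  # server closed the connection
                break
            print("server says:", data.decode(errors="replace"))

    s = socket.create_connection(("example.com", 9000))  # placeholder address
    threading.Thread(target=reader, args=(s,), daemon=True).start()

    # Main thread: send client-initiated data over the same socket.
    s.sendall(b"hello\n")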

is it possible for an HTTP server to respond to an HTTP client on the same connection fd each time?

I want to write iterative HTTP server code that accepts one HTTP client on the same conn_fd (file descriptor) every time, but creates a new_fd for different clients, based on checking the client address. Is that possible?
I'm not sure I understand your question, but this is basically how sockets work: you create a master socket and set it to a listening state. Then, every time you accept a new client, a new socket is created for that client, while the master socket remains the same.
For a nice intro about Unix sockets, see http://beej.us/guide/bgnet/
Each new connection will result in a new socket. So if the same client connects multiple times, each connection will get a new socket (and file descriptor), but if it connects once and sends multiple requests over the same connection (HTTP keep-alive), it will be the same fd.
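A minimal sketch of that master-socket pattern in Python (the port is arbitrary):

    import socket

    # Master socket: set up once, stays the same for the whole run.
    master = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    master.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    master.bind(("", 8080))  # arbitrary port
    master.listen()

    while True:
        # Each accept() returns a brand-new socket (and fd) for one client;
        # further requests on a kept-alive connection reuse that same fd.
        conn, addr = master.accept()
        print("client", addr, "-> fd", conn.fileno())
        conn.close()  # a real server would serve the request here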

How to detect when socket connection is lost?

I have a script (I don't have the code example here at the moment, but I used IO::Async) which connects to a socket on a remote server and listens. The client usually just listens for new data.
The problem is that the client is not able to detect if network problems occur and the socket connection is gone.
I used IO::Async, and I also tried it with IO::Socket. The handle always appears "connected" after the initial connection is established.
Even if the network connection is established again, the socket connection is naturally still lost, because the script has no idea that it should reconnect.
I was thinking of creating some kind of "keepalive" that "pings" (syswrite) the socket every X seconds (if nothing new has come through the socket) to check whether the connection is still there.
Is this the correct way to do it, or is there maybe a more creative or cleaner solution?
You can set the SO_KEEPALIVE socket option which, for TCP, sends periodic keepalive messages, and may help detect this condition. If this is detected, you will be delivered an EOF condition (most likely causing the containing IO::Async::Stream to fire on_read_eof).
For a better solution you might consider some sort of application-level keepalive message, such as IRC's PING command.
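Your script is Perl, but the shape of such an application-level keepalive is the same in any language. Here is a sketch in Python (the PING line format, the address, and the timeout numbers are all assumptions):

    import select
    import socket
    import time

    PING_INTERVAL = 30  # assumption: ping after 30 s of silence
    PONG_TIMEOUT = 60   # assumption: dead if nothing heard for 60 s

    s = socket.create_connection(("example.com", 6667))  # placeholder address
    last_heard = last_ping = time.monotonic()

    while True:
        ready, _, _ = select.select([s], [], [], 1.0)
        now = time.monotonic()
        if ready:
            data = s.recv(4096)
            if not data:
                break                   # orderly close by the peer
            last_heard = now
            # ... answer "PING" lines with "PONG", handle real data ...
        elif now - last_heard > PONG_TIMEOUT:
            break                       # silent too long: treat as dropped
        elif now - last_ping > PING_INTERVAL:
            s.sendall(b"PING\r\n")      # a write to a dead peer also errors out
            last_ping = now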
The short answer is that there is no default way to automatically detect a dropped socket in Perl.
Your approach of pinging would probably work pretty well; you could run a continuous thread in the background that sends ping requests, and if it doesn't receive a response, the main thread can be notified and a reconnect can be issued.
If you want to get messy, you can work with select() to detect keepalive messages; however, this may require some OS configuration depending upon your platform.
See this thread for more details: http://www.perlmonks.org/?node_id=566568