Setting up the source Port/IP on a TCP/IP connection - sockets

I have set up a TCP/IP client/server connection that opens and closes the connection every time a request is exchanged. It works perfectly; the client app opens the connection, sends the request and waits. The server application receives the request, produces a response, sends it back and closes the connection. Client and server apps do that hundreds of times.
Now I was trying to go to the next step: setting up the source IP address and port.
The code is supposed to work on both Linux and Windows, so SO_BINDTODEVICE is out of the question, since it is only supported on Linux/Unix.
I tried to bind the source port with INADDR_ANY on the client socket. And it works... for a while. Eventually it throws error 10038. I've read several articles on the internet, but none gave a clear answer... The selection of the source IP remains unclear.
Please note that I also have UNICAST and MULTICAST modes in the same library (connectionless UDP communication modes), a sender and a receiver, and I was able to set up the source port/IP in MULTICAST mode; I haven't tried UNICAST yet.
Anyway, does anyone know anything that could help? I'm using WinSock 2.2 and trying to be as platform independent as possible.

Winsock error 10038 is WSAENOTSOCK, which means you have a bug in your code somewhere. You are trying to do something with a SOCKET handle that is not pointing at a valid socket object. That has nothing to do with the bind() function itself. Either you are calling socket() and not checking its result for an error, or you are trying to use a SOCKET handle that has already been closed by your app, or you have a memory overflow somewhere that is corrupting your SOCKET handle.
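To illustrate the checks being described, here is a minimal, hedged sketch of a client that validates the socket() result, binds a chosen source IP/port before connect(), and never touches the handle after closesocket(). The IPs and ports are placeholders; on Linux you would drop WSAStartup/WSACleanup, use close() instead of closesocket(), and read errno instead of WSAGetLastError().
    // Sketch only: Winsock flavour, with placeholder addresses and ports.
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET) {                 // always check socket() before using the handle
            printf("socket() failed: %d\n", WSAGetLastError());
            WSACleanup();
            return 1;
        }

        sockaddr_in local{};                       // source IP/port, bound before connecting
        local.sin_family = AF_INET;
        local.sin_port = htons(50000);             // fixed source port (placeholder)
        inet_pton(AF_INET, "192.168.1.10", &local.sin_addr);   // specific source IP instead of INADDR_ANY

        if (bind(s, reinterpret_cast<sockaddr*>(&local), sizeof(local)) == SOCKET_ERROR) {
            printf("bind() failed: %d\n", WSAGetLastError());
            closesocket(s);
            WSACleanup();
            return 1;
        }

        sockaddr_in remote{};
        remote.sin_family = AF_INET;
        remote.sin_port = htons(5000);             // server port (placeholder)
        inet_pton(AF_INET, "192.168.1.20", &remote.sin_addr);

        if (connect(s, reinterpret_cast<sockaddr*>(&remote), sizeof(remote)) == SOCKET_ERROR) {
            printf("connect() failed: %d\n", WSAGetLastError());
        }

        closesocket(s);                            // never reuse the handle after this point
        WSACleanup();
        return 0;
    }
Note also that if you reconnect hundreds of times from the same fixed source port, a later bind() can fail with WSAEADDRINUSE (10048) while the previous connection sits in TIME_WAIT; that is a different error from 10038 and usually means the source port is better left to the stack unless you really need it fixed.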

Related

Network packet loss causes client code to act strange

I am facing some issues and need some help coming up with the best way to resolve them.
Here is the problem:
I have server code running which has a socket that is listening to accept new incoming connections.
I then attempt to start a client, which also has a socket that is listening to accept new incoming connections.
The client code begins with accepting a new connection on the listening socket file descriptor and gets a new socket file descriptor for I/O.
The server does the same thing and gets a new socket file descriptor for I/O.
Note: The client is not completely up, yet. It needs to receive some bytes from the server and send some before it can start.
I then introduce some packet loss over the TCP/IP network connection. This causes certain errors (for example, the recv() system call in the client process sees no received bytes, so the client closes the socket connection on its side and the associated new socket file descriptor is closed). However, this leaves the client process hanging, since there are other descriptors in the FD_SET but none of them are ever I/O ready, so pselect() keeps returning 0 file descriptors ready for I/O. The client needs to send and receive certain bytes over the connection before it can start up.
My question is more about what I should do here.
I did some research on the SO_KEEPALIVE option, set on the new socket returned by the accept() system call, but I do not think that would resolve my problem here, especially if the network packet loss is ongoing.
Should I kill the client process here if I realize there are no file descriptors ready for I/O and never will be? Is there a better way to approach this?
If I'm reading the question correctly, the core of the question is: "what should your client program do when a TCP connection that is central to its functionality has been broken?"
The answer to that question is really a matter of preference -- what would you like your client program to do in that case? Or to put it another way, what behavior would your users find most useful?
In many of my own client programs, I have logic included such that if the TCP connection to the server is ever broken, the client will automatically try to create a new TCP connection to the server and thereby recover its connectivity and useful functionality as soon as possible.
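As a rough illustration of that reconnect-on-failure approach (not the poster's actual code), here is a minimal POSIX-sockets sketch; the server address and the 5-second back-off are placeholder choices:
    // Sketch of an auto-reconnect loop (POSIX sockets); the server address is a placeholder.
    #include <arpa/inet.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    // Try to open a TCP connection; returns a connected fd, or -1 on failure.
    static int connect_to_server(const char* ip, unsigned short port) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int main() {
        for (;;) {
            int fd = connect_to_server("192.168.1.20", 5000);    // placeholder address
            if (fd < 0) {
                fprintf(stderr, "connect failed, retrying in 5s\n");
                sleep(5);                                        // back off before retrying
                continue;
            }
            // Read until the peer closes or an error occurs; a real client would do
            // its protocol work here instead of just discarding the bytes.
            char buf[4096];
            ssize_t n;
            while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) { /* process buf[0..n) */ }
            fprintf(stderr, "connection lost, reconnecting\n");
            close(fd);                                           // drop the dead socket first
        }
    }
The shape is the same whatever the protocol: detect the broken connection, close the dead descriptor, pause briefly, and connect again.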
The other obvious option would be to just have the client quit when the connection is broken; perhaps with some sort of error indication so that the user will know why the client went away. (perhaps an error dialog that asks if the user would like to try to reconnect?)
SO_KEEPALIVE is probably not going to help you much in this scenario, by the way -- despite its name, its purpose is to help a program discover in a more timely manner that TCP connectivity has been lost, not to try harder to keep a TCP connection from being lost. (And it doesn't even serve that purpose particularly well, since in many TCP stacks only one keepalive packet is sent per hour or so, which means that even with SO_KEEPALIVE enabled it can be a very long time before your program starts receiving error messages reflecting the loss of network connectivity.)
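If you do decide to use it anyway, enabling the option is a single setsockopt() call, and on Linux the per-connection probe timers can be shortened well below the hour-scale default mentioned above. The TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs in this sketch are Linux-specific, and the timer values are arbitrary examples:
    // Sketch: enable SO_KEEPALIVE and (Linux-specific) tighten the probe timers.
    // Assumes 'sock' is an already-connected TCP socket descriptor.
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enable_keepalive(int sock) {
        int on = 1;
        setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));    // turn probing on

        // Linux-only knobs: start probing after 60s of idle, probe every 10s,
        // declare the connection dead after 5 unanswered probes.
        int idle = 60, interval = 10, count = 5;
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
    }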

Difference between closing a socket and closing a network stream (System.Net.Sockets)

I have a proxy server implemented. After sending the final response to the client, if I directly close the socket (System.Net.Sockets TCPClient.Client.Close()) then the client end receives a connection aborted error, but if I instead use System.Net.Sockets TCPClient.getStream().Close(), it works successfully. I want to understand what the difference is and why the client side receives an error in the first scenario.
I would say that closing sockets is not as trivial an operation as most people think :)
First of all, you should understand how the close should be done correctly. Basically, you have to consider that a close is a kind of message like any other message sent out over your socket. In other words, close() is a piece of information to the other side of the communication that the peer has finished some kind of work.
Now, the important thing to understand is that with a TCP socket you can inform the peer that you have finished sending or finished listening.
On this page, you can check out how it works in the background (note that ACK and FIN are TCP-level messages, so even using a plain sockets implementation you will never see them): http://www.tcpipguide.com/free/t_TCPConnectionTermination-2.htm
Now for the more practical part. Consider that you have a client and a server. The server needs to receive a message and close the connection, while the client is just going to send a message and then close the connection. If you also consider that networks need some time to process your communication, you will realize that if you do this too quickly, the client may close the connection before the server has received your message. If you call TCPClient.Client.Close(), the client will stop listening for anything (that includes the information that the server has closed the connection). So here the TCP stack comes into play (Windows does it for you): if you close the socket this way, the TCP stack needs to inform the server side that whatever the server has sent goes to the dump. That's why you get an exception.
So the correct way (sketched in code after this list) is to:
inform the server that the client has finished sending data (FIN)
wait until the server confirms that it knows the client will not send any more data (ACK)
now the server should inform the client that it will stop sending data (FIN)
now the client can say "ok, I got it, I will not listen anymore" (ACK)
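In plain BSD/Winsock terms, that handshake is what you get when the client shuts down its sending side, keeps reading until the peer's FIN shows up as recv() returning 0, and only then closes the socket. A minimal sketch (POSIX names; on Windows the equivalents are shutdown(sock, SD_SEND) and closesocket()):
    // Graceful close from the client side. 'sock' is a connected TCP socket.
    #include <sys/socket.h>
    #include <unistd.h>

    void graceful_close(int sock) {
        shutdown(sock, SHUT_WR);            // step 1: send FIN - "I'm done sending"
        char buf[1024];
        // steps 2-4: keep reading until the peer sends its own FIN,
        // which shows up here as recv() returning 0.
        while (recv(sock, buf, sizeof(buf), 0) > 0) {
            // discard (or process) any final data the server still sends
        }
        close(sock);                        // both directions finished; release the socket
    }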
Anyway, the C# TCPClient seems to hide the logic of the background socket-closing routine, but if you do not perform the close sequence the correct way, you'll end up with errors.
I hope this somewhat long explanation helps you understand how it works in the background and, finally, why you see the error.
It's also a good idea to read more about the TCP protocol details if you wish to learn more: http://www.tcpipguide.com/free/t_TCPIPTransmissionControlProtocolTCP.htm
I suppose that in order to close the connection, you need to send some special byte sequence. It looks like that is implemented only by the TCPClient library and not by the Socket library. Probably something like an EOF should be sent.
You may check it with a network traffic utility like tcpdump.
Good luck!

Client port changes with each request

I am trying to establish a TCP/IP connection between a controller (client) and a program on my PC (server) using C++. I used a sniffer to see how the client's requests are being sent, and I found out that each connect request from the controller is sent from a different port and a known IP: it starts with a random port number and increments by 1 with each request until I restart the controller or the server receives the request. I have some questions.
1- Is that standard behaviour, and what is the idea behind it, given that the controller is a Mitsubishi controller?
2- Is there any way I can get the new port of the controller without using accept?
This is not so much the behaviour of the controller as it is the behaviour of the network stack running on the controller, which may be integrated into the controller hardware (search keyword: TCP offload).
This is expected behaviour. To prevent all sorts of nasty side effects (a simple example is late packets from a previous connection trying to sneak in as legitimate packets for a later connection), a port is not recycled for reuse for a lengthy period after the socket using the port is closed. Your port may not be available for use. A simple solution is to do exactly what the OP's network stack did: sequentially assign the next port number.
Not with BSD-style sockets. accept accepts a connection from the client. If you do not accept, you don't get a socket to handle the connection, and once you have the socket, you should not care what the port is. It's all abstracted away and hidden out of sight.
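That said, if what you actually want is to know which port the controller connected from, accept() already hands it to you in the peer address it fills in (getpeername() gives the same information later). A small sketch, assuming listener is a bound, listening IPv4 socket:
    // Sketch: reading the client's ephemeral port from accept().
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <cstdio>

    int accept_and_report(int listener) {
        sockaddr_in peer{};
        socklen_t len = sizeof(peer);
        int conn = accept(listener, reinterpret_cast<sockaddr*>(&peer), &len);
        if (conn < 0) return -1;

        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
        printf("client connected from %s:%u\n", ip, (unsigned)ntohs(peer.sin_port));  // the controller's current port
        return conn;   // use this descriptor for I/O with that client
    }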
If this is a problem, consider using a connectionless protocol like UDP. You don't get automatic re-transmission when packet loss is detected and all of the other nice things TCP does for you, but there is no connection overhead.

client/server socket reconnection

I developed a client/server application based on sockets.
The client side is in Delphi. The server side is on an IBM i (AS/400).
Sometimes, the client and the server get disconnected. I'm not really sure why, but I think it's because of a machine between them (a proxy, a router, a firewall) sending a RST packet.
Anyway, I'm trying to reconnect the client with the same process on the server. (not another one, the same, that's important).
To do that, I create a new connection from the client. So, I have two processes on the server. I'll call them the "LostProcess" and the "HelperProcess".
The LostProcess is waiting for data in a data queue.
The client tells the HelperProcess that it was connected to the LostProcess.
The HelperProcess sends data to the LostProcess (via the data queue).
The HelperProcess makes a giveDescriptor, and the LostProcess makes a takeDescriptor.
Then the HelperProcess stops and the LostProcess sends data to the client (to say “I'm back”).
So far, it works, but when the client sends data, the LostProcess (we can call it the RebornProcess now) never receives it (I tried not stopping the HelperProcess, and it is the one that receives the data).
With Wireshark, I could see that the client sends data from a different local port, so I guess that's why the RebornProcess does not receive it.
I tried to force the local port of the new client socket to be the same as the first one, but then the new client socket cannot connect for a while, and if I wait long enough, I have the same problem as before.
Does somebody have an idea how to make the reconnection work?
What you are doing is generally not possible. Once a TCP connection has been lost, it is gone forever. Both apps must close their respective sockets for the lost connection, and the client app must create a new socket connection to continue exchanging data with the server.
If the client app wants to reuse the same local port via bind() (which is generally not advisable in most cases), but does not want to wait for the OS to release the port first, then the client can enable the SO_REUSEADDR option via setsockopt() on the new socket before calling bind() and connect().
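For completeness, here is a hedged sketch of that SO_REUSEADDR-before-bind()-before-connect() sequence; the server IP and ports are placeholders, and, as noted above, forcing the local port is generally not advisable (and it still produces a brand-new TCP connection as far as the server is concerned):
    // Sketch: reuse a specific local port for a new outgoing connection.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int reconnect_from_same_port(unsigned short local_port) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) return -1;

        int on = 1;
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));   // must be set before bind()

        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(local_port);                         // the old local port
        if (bind(s, reinterpret_cast<sockaddr*>(&local), sizeof(local)) < 0) { close(s); return -1; }

        sockaddr_in server{};
        server.sin_family = AF_INET;
        server.sin_port = htons(8475);                              // placeholder server port
        inet_pton(AF_INET, "10.0.0.5", &server.sin_addr);           // placeholder server IP
        if (connect(s, reinterpret_cast<sockaddr*>(&server), sizeof(server)) < 0) { close(s); return -1; }

        return s;   // the server still sees this as a completely new TCP connection
    }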
Pretty sure the answer is you can't.
There'd be all kinds of security issues if TCP/IP allowed a new connection to reconnect to an existing process's connection.
You should have the lost process terminate and just use the new process instead.

How to detect when socket connection is lost?

I have a script (I don't have the code example here at the moment, but I used IO::Async) which connects to a socket on a remote server and listens. The client usually just listens for new data.
The problem is that the client is not able to detect when network problems occur and the socket connection is gone.
I used IO::Async and I also tried it with IO::Socket. The handle is always "connected" after the initial connection is established.
If the network connection is established again, the socket connection is naturally still lost because the script has no idea that it should reconnect.
I was thinking of creating some kind of "keepAlive" which "pings" (syswrite) the socket every X seconds (if nothing new has come through the socket) to check whether the connection is still there.
Is this the correct way to do it, or is there maybe another, more creative or cleaner solution?
You can set the SO_KEEPALIVE socket option which, for TCP, sends periodic keepalive messages, and may help detect this condition. If this is detected, you will be delivered an EOF condition (most likely causing the containing IO::Async::Stream to fire on_read_eof).
For a better solution you might consider some sort of application-level keepalive message, such as IRC's PING command.
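The application-level keepalive idea is language-independent; here it is sketched in C++ rather than Perl just to show the shape of it. The 30-second timeout and the "PING" string are arbitrary placeholders, and in IO::Async the same effect comes roughly from a periodic timer plus the stream's read handler:
    // Sketch: if nothing arrives within a timeout, send an application-level PING;
    // treat a failed send or an EOF as a lost connection.
    #include <poll.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    // Returns false once the connection is considered dead.
    bool watch_connection(int sock) {
        pollfd pfd;
        pfd.fd = sock;
        pfd.events = POLLIN;
        pfd.revents = 0;
        for (;;) {
            int ready = poll(&pfd, 1, 30000);               // wait up to 30s for data
            if (ready < 0) return false;                    // poll error
            if (ready == 0) {                               // idle: probe the peer
                const char ping[] = "PING\n";
                if (send(sock, ping, sizeof(ping) - 1, 0) <= 0) return false;
                continue;                                   // a real client would also wait for a PONG
            }
            char buf[4096];
            ssize_t n = recv(sock, buf, sizeof(buf), 0);
            if (n <= 0) return false;                       // EOF or error: connection gone
            // ... process the n bytes received (including any PONG replies) ...
        }
    }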
The short answer is that there is no default way to automatically detect a dropped socket in Perl.
Your approach of pinging would probably work pretty well; you could run a continuous thread in the background that sends ping requests, and if it doesn't receive a response, the main thread can be notified and a reconnect can be issued.
If you want to get messy, you can work with select() to detect keepalive messages; however, this may require some OS configuration depending on your platform.
See this thread for more details: http://www.perlmonks.org/?node_id=566568