Closing and reopening port immediately - sockets

While programming with sockets I came across a question about the usage of setsockopt(). Suppose we call
setsockopt( socket_no, SOL_SOCKET, SO_REUSEADDR, (char *) &optval, (socklen_t) sizeof( optval ) ); // to reuse the address
followed by another socket option on the same socket:
setsockopt( socket_no, IPPROTO_IPV6, IPV6_V6ONLY, (char *) &optval, (socklen_t) sizeof( optval ) ); // to use only IPv6
(1) Does setting a socket option again remove the reuse-address option that is already set?
In some situations there is a need to close and reopen a static port immediately, without any delay.
(2) Does closing and reopening a port immediately cause problems?
(3) If closing and reopening a port immediately causes problems, can that be avoided by using SO_REUSEPORT/SO_REUSEADDR as a socket option, since it overcomes the TIME_WAIT state imposed by the TCP protocol? Or is there some alternative to overcome this problem?

(1) Does setting a socket option again remove the reuse-address option that is already set?
No. Each socket option is independent of the others.
(2) Does closing and reopening a port immediately cause problems?
No, not unless there was at least one TCP connection to or from that port recently.
(3) If closing and reopening a port immediately causes problems, can that be avoided by using SO_REUSEPORT/SO_REUSEADDR as a socket option, since it overcomes the TIME_WAIT state imposed by the TCP protocol? Or is there some alternative to overcome this problem?
SO_REUSEADDR has no effect on TIME_WAIT itself. It just lets you re-open the port immediately. The existing connection is unaffected and continues to time out normally; it simply no longer prevents you from re-opening the port.
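To illustrate point (1), here is a minimal sketch (not from the original question; it assumes a POSIX/Linux environment and an AF_INET6 socket) that sets both options on the same socket and then reads them back with getsockopt() to show that neither call disturbs the other:

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int optval = 1;
    socklen_t len = sizeof(optval);

    // Set both options, one after the other
    if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &optval, len) < 0)
        perror("setsockopt(SO_REUSEADDR)");
    if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &optval, len) < 0)
        perror("setsockopt(IPV6_V6ONLY)");

    // Read both back: each one is still set, independently of the other
    int reuse = 0, v6only = 0;
    len = sizeof(reuse);
    getsockopt(s, SOL_SOCKET, SO_REUSEADDR, &reuse, &len);
    len = sizeof(v6only);
    getsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, &len);
    printf("SO_REUSEADDR=%d IPV6_V6ONLY=%d\n", reuse, v6only);

    close(s);
    return 0;
}

If both setsockopt() calls succeed, both values print as nonzero: setting IPV6_V6ONLY leaves SO_REUSEADDR untouched.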

Related

Can another client app close TCP connection which a crashed client app opened with a server?

Consider the following sequence:
Client app (web browser) opens several TCP connections to different web servers;
Ethernet cable then becomes disconnected;
Client app is then closed;
Ethernet cable remains disconnected for a few hours;
Ethernet cable is reconnected;
I see "TCP keep-alive" packets (every 60 seconds, for hours) from a few of the servers to which the long-closed client app had connected!
Normally, when an app is closing, it initiates the closure of each open socket, and the TCP layer then attempts to send a FIN packet to each remote endpoint. If it is physically possible to send the FIN packet, and such sending actually happens, then the local endpoint goes from the ESTABLISHED state to the FIN_WAIT_1 state (and awaits an ACK from the remote endpoint, etc.). But if the physical link is broken, the local TCP endpoint can't send that FIN, and the server still assumes the TCP connection exists (and the client-side call to the "close" function would block indefinitely until the physical link was reestablished, assuming the socket were set to blocking mode, right?).
In any case, upon reconnecting the Ethernet cable after some time with all conventional networked apps (e.g., web browsers) long closed, I am receiving "TCP Keep-Alive" packets from three separate web servers at precisely 60-second intervals for HOURS!
Wireshark shows the local port numbers to which these TCP Keep-Alive packets are being sent, but neither TCPView nor netstat -abno show those local port numbers being used by any application. Looking at the "TCP/IP" property of every single running process using Process Explorer also does not show any matching port numbers. I don't think the ports are being held because of a zombie "process record" (of, say, the web browser process) due to any ongoing child process (e.g., plugin app), but I'm not sure if my observations with TCPView/netstat/Process Explorer were sufficient to rule out this possibility.
Given the identities of the remote web servers (e.g., Akamai servers), I believe the connections were established by "recent" use of a web browser. But, these keep-alives keep coming from those three web servers, even though the browser had been closed, and the physical link had been broken for hours.
If the connections appeared in TCPView, I could simply select them and manually close them. However, the client-side TCP endpoints seem long gone.
Meanwhile, I am baffled why the servers are retrying so many times to get a reply to their keep-alive packets.
TCP keep-alive behavior is typically controlled by three parameters:
(1) Time to wait until the next "burst" or "probe" attempts;
(2) Time interval between sending each keep-alive packet during a single "probe" attempt;
(3) The maximum number of "probe" attempts before the "burst" is considered a failure (and the TCP connection is consequently considered permanently broken).
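As a point of reference (not part of the original question): on Linux these three parameters correspond to the TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT socket options, while Windows exposes the same idea through the SIO_KEEPALIVE_VALS ioctl. A hypothetical helper that tunes them on Linux might look like this (the values are illustrative only):

#include <netinet/in.h>
#include <netinet/tcp.h>   // TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT
#include <sys/socket.h>

// Hypothetical helper: tune the three keep-alive knobs on an already
// created TCP socket 'fd'.
static int enable_keepalive(int fd)
{
    int on = 1, idle = 60, interval = 60, count = 5;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    // (1) seconds of idleness before the first probe is sent
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    // (2) seconds between successive probes
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
        return -1;
    // (3) number of unanswered probes before the connection is declared dead
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) < 0)
        return -1;
    return 0;
}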
For the TCP keep-alive packets I am seeing from the three different servers, the time interval between "probe" retries is exactly 60 seconds. But, it seems like the maximum number of "probe" retries is infinite, which seems like a really bad choice for any server!
Although I am curious about how this relentless stream of keep-alives was created and sustained, I am more immediately interested in how I might use a client-side application to force the server-side endpoints to close, given that there aren't existing local TCP endpoints receiving those keep-alive packets.
My rough idea is to create an app that creates a TCP-mode socket, binds (with port-number reuse allowed) to the port number to which the incoming keep-alives are directed, and then calls "connect" followed by "close", hoping that the server endpoint will make the TCP state transitions to reach the closed state one way or another! Another way might be to create a raw-mode socket, receive the TCP keep-alive packet (which is just an ACK), and then form and send an appropriate FIN packet (with proper sequence number, etc., to pick up where the long-terminated client app evidently left off), and then receive an ACK and FIN before sending the final ACK.
One final note -- and I know there will be eye-rolling and ridicule: the working environment here is Windows XP SP3 running in VirtualBox on Windows 7! So, I'd prefer code or an open-source app which could achieve the goal (closing half-open TCP connection) within Windows XP SP3. Sure, I could restart the snapshot, which might close the connections -- but I am more interested in learning how to get more information about the state of network connections, and what I can do to handle this kind of TCP state problem.
I succeeded in provoking the closing of each apparent half-open TCP connection by writing a simple program (full code appears below) which binds a local socket to the port to which the server believes it is already connected, attempts to establish a new connection, and then closes the connection.
(Note: If the connection succeeds, I make an HTTP GET request, just because the phantom TCP keep-alives in my case are apparently originating from plain HTTP servers, and I was wondering what response I might get back. I think the "send" and "recv" calls could be removed without affecting the ability of the code to achieve the desired result.)
In the following code, the src_port_num variable represents the client-side port number (currently unused) to which the server is sending "TCP keep-alive" packets, and dst_ip_cstr is the IP address of the server (e.g., an Akamai web server), and dst_port_num is the port number (which, in my situation, happens to be a plain HTTP server at port 80).
CAUTION! By sharing this code I do not mean to imply that its theory of operation can be rigorously explained by an understanding of the TCP protocol specification. I just guessed that claiming an abandoned local port to which a remote endpoint is sending TCP keep-alive packets, and attempting to establish a new connection to that very same remote endpoint, would, one way or another, prod the remote endpoint to close the stale half-open connection -- and it happened to work for me.
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <string.h>
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    // Local IP and port number (the abandoned client-side endpoint the
    // server is still sending keep-alives to)
    const char * src_ip_cstr = "10.0.2.15";
    int src_port_num = 4805;

    // Remote IP and port number (the server holding the half-open connection)
    const char * dst_ip_cstr = "23.215.100.98";
    int dst_port_num = 80;

    int res = 0;
    WSADATA wsadata;
    res = WSAStartup( MAKEWORD(2,2), &wsadata );
    if (0 != res) { printf("WSAStartup() FAIL\n"); return 1; }

    printf( "\nSRC IP:%-16s Port:%d\nDST IP:%-16s Port:%d\n\n",
            src_ip_cstr, src_port_num, dst_ip_cstr, dst_port_num );

    struct sockaddr_in src;
    memset( &src, 0, sizeof(src) );
    src.sin_family = AF_INET;
    src.sin_addr.S_un.S_addr = inet_addr( src_ip_cstr );
    src.sin_port = htons( (u_short)src_port_num );

    struct sockaddr_in dst;
    memset( &dst, 0, sizeof(dst) );
    dst.sin_family = AF_INET;
    dst.sin_addr.S_un.S_addr = inet_addr( dst_ip_cstr );
    dst.sin_port = htons( (u_short)dst_port_num );

    SOCKET s = socket( PF_INET, SOCK_STREAM, IPPROTO_TCP );
    if (INVALID_SOCKET == s) { printf("socket() FAIL\n"); return 1; }

    // Allow binding to the local port the stale half-open connection occupies
    int val = 1;
    res = setsockopt( s, SOL_SOCKET, SO_REUSEADDR,
                      (const char*)&val, sizeof(val) );
    if (0 != res) { printf("setsockopt() FAIL\n"); return 1; }

    res = bind( s, (struct sockaddr*)&src, sizeof(src) );
    if (SOCKET_ERROR == res) { printf("bind() FAIL\n"); return 1; }

    res = connect( s, (struct sockaddr*)&dst, sizeof(dst) );
    if (SOCKET_ERROR == res) { printf("connect() FAIL\n"); return 1; }

    // Optional: issue a plain HTTP GET just to see what the server says
    char req[1024];
    sprintf( req, "GET / HTTP/1.1\r\nHost: %s\r\nAccept: text/html\r\n"
                  "Accept-Language: en-us,en\r\nAccept-Charset: US-ASCII\r\n\r\n",
             dst_ip_cstr );
    printf("REQUEST:\n================\n%s\n================\n\n", req );

    res = send( s, req, (int)strlen(req), 0 );
    if (SOCKET_ERROR == res) { printf("send() FAIL\n"); return 1; }

    char reply[4096];
    memset( reply, 0, sizeof(reply) );
    res = recv( s, reply, (int)sizeof(reply) - 1, 0 );
    if (SOCKET_ERROR == res) { printf("recv() FAIL\n"); return 1; }
    printf("REPLY:\n================\n%s\n================\n\n", reply );

    // Orderly shutdown and cleanup
    shutdown( s, SD_BOTH );
    closesocket( s );
    WSACleanup();
    return 0;
}
HILARIOUS / SHAMEFUL / FASCINATING DISCLOSURES
As I mentioned in my original question, I observed these "TCP keep-alive" packets with Wireshark within VirtualBox running Windows XP SP3, where the host OS was Windows 7.
When I woke up this morning and looked at the phenomenon again with a cup of coffee and fresh eyes, with the "TCP keep-alive" packets still appearing every 60 seconds even after 24 hours, I made a hilarious discovery: These packets continued to arrive from the three different IP addresses, precisely at 60-second intervals (but staggered for the three IPs), even when I disconnected the Ethernet cable from the Internet! My mind was blown!
So, although the three IP addresses did correspond to real-world web servers to which my web browser connected long ago, the TCP keep-alive packets were clearly originating from some local software component.
This revelation, as shocking as it was, did not change my thinking about the situation: from my client-side software perspective, there were "server-side" half-open TCP connections that I wanted to provoke to closing.
Within VirtualBox, choosing "Devices" -> "Network" -> "Connect Network Adapter" toggles the virtual network adapter on or off, as if a virtual Ethernet cable were connected or disconnected. Toggling to a disconnected state caused the phantom TCP keep-alive packets to stop arriving to Wireshark. Subsequently toggling to a connected state caused the TCP keep-alive packets to resume arriving in Wireshark.
Anyway, I sometimes needed to run the code above TWICE to succeed in closing the half-open connection. When running the code a first time, Wireshark would show a packet with an annotation "[TCP ACKed unseen segment]", which is just the kind of TCP gas-lighting confusion I hoped to create, haha! Because the new client endpoint is unexpected by the remote endpoint, the call to "connect" hangs for maybe 30 seconds before failing. For a couple of the zombie/phantom half-open connections, running the program just once was enough to also cause an RST packet.
I needed to modify the program repeatedly to change the combination of local port number, remote IP, and remote port number, to match each phantom TCP keep-alive packet I observed in Wireshark. (I leave implementing user-friendly command-line parameters to the dear reader (that's you!).) After a few rounds of modifying and running the program, all zombie keep-alive packets were stopped. "Silence of the Packets", one might say.
EPILOGUE
[In tuxedo, martini glass in hand, gazing wistfully at the ocean from the deck of a yacht, in the company of fellow hackers] "I never did figure out where those zombie packets came from... Was it the 'VirtualBox Host-Only Network' virtual Ethernet adapter? Only the Oracle knows!"
There is nothing you need to do to close the remote socket; it is already built into the TCP protocol. If the system receives TCP packets which don't create a new connection (i.e. don't have SYN set) and don't belong to any established connection, it will reply with an RST packet. This way the peer will know that the endpoint is no longer there and will abandon the connection.

SO_EXCLUSIVEADDRUSE and SO_REUSEADDR confusion

(Running on VS2017, Win7 x64)
I am confused about the point of SO_REUSEADDR and SO_EXCLUSIVEADDRUSE. And yes, I've read the MSDN documentation, but I'm obviously not getting it.
I have the following simple code in two separate processes. As expected, because I enable SO_REUSEADDR on both sockets, the second process's bind succeeds. If I don't enable this option on either of these sockets, the second bind does not succeed.
#define PORT 5150

SOCKET sockListen;
if ((sockListen = WSASocket(AF_INET, SOCK_STREAM, 0, NULL, 0, WSA_FLAG_OVERLAPPED)) == INVALID_SOCKET)
{
    printf("WSASocket() failed with error %d\n", WSAGetLastError());
    return 1;
}

int optval = 1;
if (setsockopt(sockListen, SOL_SOCKET, SO_REUSEADDR, (char*)&optval, sizeof(optval)) == -1)
    return -1;

SOCKADDR_IN InternetAddr;
InternetAddr.sin_family = AF_INET;
InternetAddr.sin_addr.s_addr = inet_addr("10.15.20.97");
InternetAddr.sin_port = htons(PORT);

if (::bind(sockListen, (PSOCKADDR)&InternetAddr, sizeof(InternetAddr)) == SOCKET_ERROR)
{
    printf("bind() failed with error %d\n", WSAGetLastError());
    return 1;
}
So doesn't having to enable SO_REUSEADDR on both sockets make SO_EXCLUSIVEADDRUSE unnecessary? If I don't want anyone to forcibly bind to my port, I just don't enable SO_REUSEADDR in that process.
The only difference I can see is that if I enable SO_EXCLUSIVEADDRUSE in the first process, then attempt a bind in the second process, that second bind will fail with
a) WSAEADDRINUSE if I don't enable SO_REUSEADDR in that second process
b) WSAEACCES if I do enable SO_REUSEADDR in that second process
So I tried enabling both SO_EXCLUSIVEADDRUSE and SO_REUSEADDR in the first process but found that whichever one I attempted second failed with WSAEINVAL.
Note also that I have read this past question but what that says isn't what I'm seeing: it states
A socket with SO_REUSEADDR can always bind to exactly the same source
address and port as an already bound socket, even if the other socket
did not have this option set when it was bound
Now if that were the case then I can definitely see the need for SO_EXCLUSIVEADDRUSE.
I'm pretty sure I'm doing something wrong but I cannot see it; can someone clarify please?
As stated in the docs, SO_EXCLUSIVEADDRUSE became available on Windows NT4 SP4; before that there was only SO_REUSEADDR. So the presence of both also has historical reasons.
I think of SO_REUSEADDR as expressing the intention to share an address (which is only really useful for UDP multicast; for unicast or TCP it really doesn't do much, since the behaviour is non-deterministic for both sockets).
SO_EXCLUSIVEADDRUSE is a security measure to avoid my (server) application's traffic being hijacked / rendered useless by a later binding to the same IP/port.
As I see it, you need SO_REUSEADDR for UDP multicast, and you need SO_EXCLUSIVEADDRUSE as a security measure for server applications.
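As an illustration of that last point (a minimal Winsock sketch, not code from the original answer; port 5150 is just the example port used above), a server that wants to protect its endpoint would set SO_EXCLUSIVEADDRUSE before bind():

#include <stdio.h>
#include <string.h>
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsadata;
    if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET)
        return 1;

    // Claim the address exclusively: later binds by other processes fail,
    // whether or not they ask for SO_REUSEADDR.
    int optval = 1;
    if (setsockopt(s, SOL_SOCKET, SO_EXCLUSIVEADDRUSE,
                   (const char *)&optval, sizeof(optval)) == SOCKET_ERROR)
    {
        printf("setsockopt(SO_EXCLUSIVEADDRUSE) failed: %d\n", WSAGetLastError());
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5150);   // same example port as above

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR)
    {
        printf("bind() failed: %d\n", WSAGetLastError());
        return 1;
    }

    listen(s, SOMAXCONN);
    // ... accept loop would go here ...
    closesocket(s);
    WSACleanup();
    return 0;
}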

Time Gap Between Socket Calls, i.e. accept() and recv/send calls

I am implementing a server in which I listen for the client to connect using the accept socket call.
After the accept happens and I receive the socket, I wait for around 10-15 seconds before making the first recv/send call.
The send calls to the client fail with errno = 32, i.e. broken pipe.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
const int keepAlive = 1;
acceptsock = accept(sock, (struct sockaddr*)&client_addr, &client_addr_length);
if (setsockopt(acceptsock, SOL_SOCKET, SO_KEEPALIVE, &keepAlive, sizeof(keepAlive)) < 0)
{
    printf(" SO_KEEPALIVE fails\n");
}
Could anyone please tell me what may be going wrong here and how we can prevent the client socket from closing?
NOTE
One thing I want to add here is that if there is no time gap, or a gap of less than 5 seconds, between the accept and the send/recv calls, the client-server communication occurs as expected.
connect(2) and send(2) are two separate system calls the client makes. The first initiates the TCP three-way handshake; the second actually queues application data for transmission.
On the server side, though, you can start send(2)-ing data to the connected socket immediately after a successful accept(2) (i.e. don't forget to check acceptsock against -1).
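A minimal sketch of that pattern (illustrative only; it assumes a POSIX environment and an already bound, listening socket sock, reusing the variable names from the question):

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical handler: 'sock' is assumed to be a bound, listening TCP socket.
static void serve_one(int sock)
{
    struct sockaddr_in client_addr;
    socklen_t client_addr_length = sizeof(client_addr);

    int acceptsock = accept(sock, (struct sockaddr *)&client_addr, &client_addr_length);
    if (acceptsock == -1) {           // always check accept() itself
        perror("accept");
        return;
    }

    // No need to wait: data can be queued for transmission right away.
    const char greeting[] = "hello\n";
    if (send(acceptsock, greeting, sizeof(greeting) - 1, 0) == -1)
        perror("send");

    close(acceptsock);
}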
After the accept happens and I receive the socket, I wait for around 10-15 seconds before making the first recv/send call.
Why? Do you mean that the client takes that long to send the data? Or do you just futz around in the server for 10-15 s between accept() and recv(), and if so, why?
The send calls to the client fail with errno = 32, i.e. broken pipe.
So the client has closed the connection.
Since I don't control the client, I have set the socket option SO_KEEPALIVE on the accepted socket.
That won't stop the client closing the connection.
Could anyone please tell me what may be going wrong here
The client is closing the connection.
and how we can prevent the client socket from closing?
You can't.

socket program setup

I am writing my first socket program, connecting from my host to a server running on another PC.
I am referring to the following link, but I did not get the meaning of this line:
http://www.thegeekstuff.com/2011/12/c-socket-programming/
The call to the function 'listen()' with second argument as '10'
specifies the maximum number of client connections that the server will queue
for this listening socket.
Does that mean it will listen for 10 new connection requests? What actually happens at listen()?
We will enter the while loop once some client connects to the socket, right? And inside the while loop, does accept() block if no client is requesting to connect to the socket on the second iteration of the while loop?
While we are inside the while loop, is the listen() system call still working, or has it terminated?
Also, when will we get out of the while loop?
Can someone on the forum please help me to understand this?
What the listen call does is tell the system the size of the queue it should use for new connections. This queue is only used for connections you have not accepted yet, so it is not the total number of connections you will have.
Besides setting the size of the incoming-connections queue, it also sets a flag on the socket that says it is a passive listening socket.
What listen() sets is a property of the socket, so as long as the socket is open, both the queue and the flag remain in effect.
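To make the flow concrete, here is a minimal sketch of the usual pattern (an illustration under POSIX assumptions, not code from the linked tutorial). listen() is called exactly once; accept() is the call that blocks, once per loop iteration, and hands you a new connected socket each time a queued connection is taken off the backlog:

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    if (listenfd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);          // example port

    if (bind(listenfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    // Called once: marks the socket passive and allows up to 10
    // not-yet-accepted connections to wait in the queue.
    if (listen(listenfd, 10) < 0) { perror("listen"); return 1; }

    for (;;) {                            // the server normally loops forever
        // accept() blocks here until a queued connection is available.
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0) { perror("accept"); continue; }

        const char msg[] = "hello\n";
        send(connfd, msg, sizeof(msg) - 1, 0);
        close(connfd);                    // close the per-client socket;
                                          // listenfd keeps listening
    }
}

The loop exits only when your own code decides to break out (for example on a shutdown signal or an unrecoverable error); listen() itself does not terminate while the loop runs.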

How can I test TCP socket status in Perl?

I've got a TCP socket which reads data. When an error occurs when reading the data, I return an undef (NULL) value. Errors can be caused by badly formatted messages or broken sockets. Can someone tell me if there is a specific function which returns the status of a socket?
There are several ways to detect whether the socket is open or closed, but none of them is 100% foolproof.
The first is to attempt a read on the socket as follows:
use Socket qw(MSG_PEEK MSG_DONTWAIT);
my $ret = recv($sockfd, $buff, 1, MSG_PEEK | MSG_DONTWAIT);
If the socket has gone through an orderly shutdown, i.e. the peer called shutdown for writing or called close AND the FIN packet has arrived, then this call will result in a 0-length read indicating a closed socket. This also helps if your peer application crashed, since the OS will close the connection and send a FIN. However, if your peer machine has crashed or your peer application has locked up, this won't help you, since each end of the connection maintains independent state.
A second way to detect a broken connection is to probe your peer. If you send a 0-length packet to your peer (which it should be able to handle) and the peer application has crashed, then when you send a second 0-length packet your application will get the SIGPIPE signal, indicating a broken pipe.
Another way to deal with this issue is to use an application level heartbeat in which the peers periodically send a heartbeat packet to each other indicating that they are alive and functioning properly.
One last option is to use the SO_KEEPALIVE socket option, although this is of limited use since it will only detect a broken socket after approximately 2 hours of inactivity.
If you really must know fairly quickly when a connection is broken, then the most reliable option is probably going to be the application level heartbeat.
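For illustration only (a sketch in C rather than Perl, and not part of the original answer): an application-level heartbeat can be as simple as one side writing a known marker byte on a timer, with the receiver reading and discarding it; a failed send(), or a prolonged silence on the receiving side, is treated as a dead peer.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical sketch: send one heartbeat byte every 'interval' seconds.
// A failed send() (e.g. EPIPE/ECONNRESET) means the connection is broken.
// The peer is assumed to read and discard the marker byte.
static int heartbeat_loop(int fd, unsigned interval)
{
    const char beat = 0x00;              // marker byte the peer ignores
    for (;;) {
        if (send(fd, &beat, 1, MSG_NOSIGNAL) == -1) {
            perror("heartbeat send");
            return -1;                   // peer is gone (or the path is broken)
        }
        sleep(interval);
    }
}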
Doh! The answer was obvious in retrospect: use the connected() method.
use IO::Socket::INET;

my $socket = IO::Socket::INET->new(PeerAddr => 'localhost', PeerPort => 1000);
die "no connection" unless $socket && $socket->connected();
$socket->send("your face here for \$20");
die "socket is dead" unless $socket->connected();
my $data;
$socket->recv($data, 1024);