What happens to a connection that is not accepted? - sockets

Assume a listening socket on localhost:80 and a client connecting using: telnet localhost 80
The problem is that I want to accept only a limited number of concurrent clients, say only one.
After that I simply don't accept any more.
The problem I saw using netstat -a is that the next client's connection was still established. Yes, I don't process it, but at the system level it shows up as ESTABLISHED; the client can send data and probably cause extra overhead on the system.
The only way I see is to keep accepting clients and then immediately disconnect them.
Am I right?

The listen() function has a backlog parameter that specifies how many outstanding sockets are allowed to hang around in the operating system kernel waiting for the server to accept() them.
On my Linux system the man page for listen() says that most of the time the client will get a connection refused error - just the same as if the socket wasn't listening at all.
If you only ever want to handle one connection, that's fine, you can just do:
listen(s, 0);  /* ask for the smallest pending-connection queue the OS allows */
while ((new_fd = accept(s, NULL, NULL)) >= 0) {  /* NULL, NULL: the peer's address isn't needed */
    process(new_fd);  /* serve this client to completion before accepting another */
}
It would be somewhat harder if you want to handle more than one. You can't just set the backlog parameter to the number of concurrent connections, since the parameter doesn't take into account how many connections are already active.

If you stop listening on that port, it should not allow any more incoming connections. Make sure the listener closes after accepting the first connection.
Two other options:
Use raw sockets (if your OS supports them) and handle the TCP connections manually. This will involve a lot of extra code and processing, though.
Use UDP. It is stateless and connectionless, but then you will have to accept or reject packets based on something else. It doesn't have the overhead of a TCP connection, though. Also, you will not be able to use tools like telnet for testing.

You should simply close the listening socket when you no longer wish to accept more connections, and open it again when you do. The listen backlog won't help you at all, as it only holds connections that the TCP/IP stack has already completed but that the application program hasn't yet accept()ed.
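A minimal sketch of that close-and-reopen approach, assuming a POSIX sockets API; make_listener() is a hypothetical helper wrapping socket(), bind(), and listen() (with SO_REUSEADDR set so the re-bind succeeds), and process() is the same hypothetical handler as above:

#include <unistd.h>
#include <sys/socket.h>

static void serve_one_at_a_time(void)
{
    int listen_fd = make_listener(80);  /* hypothetical: socket() + bind() + listen() */
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        close(listen_fd);               /* at capacity: further connect attempts are refused outright */
        process(client);                /* serve the single allowed client to completion */
        close(client);
        listen_fd = make_listener(80);  /* free again: resume listening */
    }
}

While the listener is closed, a connecting client sees a connection-refused error rather than a silently established-but-ignored connection.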

Related

Why do many libraries not detect dead TCP connections?

TCP has a keep-alive mechanism to detect dead connections, but it surprised me that this option is turned off by default and that many libraries/tools do not use it.
If I understand correctly, a process blocked in a recv call won't be able to detect that the connection has actually been aborted by the peer if all the FIN/RST packets from the peer were lost.
A timeout parameter on the client side may alleviate the issue, but many libraries do not have an option to set a timeout either. One example is that the mysql-python connector has no recv timeout option. Another is an Nginx server talking to a gunicorn backend via proxy_pass: gunicorn workers may stop responding because of dead connections, but there is no way for the workers to detect that.
Could anyone explain the reason, or correct me if I am wrong?
The term "dead connection" is a bit ambiguous -- it could mean any of the following:
The peer program closed its socket (or the peer program exited or crashed, and the peer computer's OS closed the socket as part of its standard process-cleanup)
Connectivity to the peer computer has suddenly been lost (this could happen because the peer computer lost power, somebody pulled out the Ethernet cord connecting the peer computer to the router, the peer's ISP had a router failure, your ISP had a router failure, and so on)
The peer program is still running but simply decided (for some reason, probably a bug) to stop calling recv() on its TCP socket.
The packet-path between your program and the remote peer still exists, sort of, but something along that path is dropping so many packets that the effective transmission rate of the TCP connection has dropped to approximately zero.
So the first question to answer is, which of the above conditions will the TCP layer detect on its own?
Condition (1) is the easy case -- the peer's TCP stack will send you the FIN packets, and when your program's network stack receives them, it will know for sure that the TCP connection is closed and act accordingly, and therefore your recv() call will return 0 very quickly.
In condition (2), the answer is "sometimes" -- in particular, if your program has any TCP data in the socket's output buffer that it is trying to send to the peer, and it never gets any ACK packets back regarding that data, then after a certain number of timeouts (and subsequent packet-resend attempts), your computer's TCP stack will give up, declare the connection dead, and unilaterally close the TCP connection; at that point recv() will return 0.
If there are no outgoing TCP data packets trying to be sent, on the other hand, then the local TCP stack won't be waiting for any ACKs to come back, and therefore it won't time out when it doesn't get them, and therefore it won't ever give up and close the TCP connection. In this scenario, your recv() call could well block indefinitely, because the TCP connection is idle and the TCP stack has no way of knowing that the peer is gone (as opposed to simply not sending any data right now).
It is this scenario that the SO_KEEPALIVE option was meant to handle, but since the designers of the SO_KEEPALIVE option wanted to conserve bandwidth by default, and sending automatic keepalive packets uses up additional bandwidth, they decided to make the keepalive option disabled by default. Also, the default send-a-keepalive interval is often quite long by modern standards (e.g. hours), and on some OS's it is difficult to change except on a system-wide basis, which makes SO_KEEPALIVE of limited usefulness for many applications.
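Where keepalive would help, it can be enabled (and, on Linux, tuned) per socket. A minimal sketch, assuming Linux -- TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific, and the helper name is made up:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int fd)
{
    int on = 1;
    int idle = 60;      /* seconds of idleness before the first probe */
    int interval = 10;  /* seconds between unanswered probes */
    int count = 5;      /* unanswered probes before the connection is declared dead */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
        return -1;
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}

With these settings, a recv() blocked on a silently-dead connection fails (typically with ETIMEDOUT) after roughly idle + interval * count seconds, instead of never.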
For conditions (3) and (4), the TCP connection isn't really "dead", it's just that some device (either the peer program, or a piece of networking gear somewhere between your program and the peer) is being uncooperative. Since the TCP layer can't know what the applications that are using it are trying to achieve, it wisely doesn't try to second-guess them in this regard, and it leaves the TCP connection open unless you explicitly tell it to close() the connection.
So now that we've described the TCP layer's behavior, what about the applications and API's that use it? i.e. why don't they try to improve on the basic TCP-stack behavior by offering better detection? The answer is that some of them do; e.g. by periodically sending dummy "ping" messages across any socket that would otherwise be idle, simply to "stimulate" the TCP stack into detecting when no ACKs are coming back as described in the paragraph about condition (2), above. Some go even further and expect the remote peer to send a corresponding "pong" message to come back on the same socket within (so many) seconds, and if it doesn't, the program will unilaterally close the socket. This sort-of works, but it also makes assumptions about the performance of your network, and that can lead to false positives and therefore unwanted disconnections when the peer is connecting via a slow or unreliable network, which is why many applications/libraries don't implement this (or at least don't enable it by default).
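A simpler variant that many applications use instead of a full ping/pong protocol is to bound how long any single recv() may block. A sketch assuming POSIX sockets (the helper name is illustrative):

#include <sys/socket.h>
#include <sys/time.h>

int set_recv_timeout(int fd, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}

After this, a blocked recv() returns -1 with errno set to EAGAIN or EWOULDBLOCK once the timeout elapses with no data, and the application can decide whether to send a ping, wait longer, or close the socket. The same false-positive caveat applies: a merely slow peer is indistinguishable from a dead one.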
It's not surprising to me that keep-alive is turned off by default.
Because it's always possible that the peer program freezes due to a bug or error, etc. In that case recv also blocks forever even though the TCP connection is alive, so keep-alive may not be so useful after all (except to prevent a router from dropping the connection). Various things can cause your recv to block forever anyway.
Besides, a low-level, general-purpose underlying protocol should probably be kept as simple as possible.
In addition, I'm not surprised by your examples of not being able to set a timeout either. Look at the most popular software tools in the world: they are polished, evolved, optimized, and have been in use for a long time, yet many of them still freeze, crash, or misbehave rather frequently. Writing correct code is meticulous work, not to mention further requirements like security, cross-platform support, and backward compatibility. A programmer's life is not easy.

How to close a TCP socket abruptly and reconnect

We use Linux 2.6.33 on a device. A PC application connects to the device over a TCP socket, and a maximum of 1 MB of data is transferred upon request from the application.
Consider the case where the application is connected and working fine, and in between, the IP address of the device gets changed (which is possible with a command). Now the connection is broken abruptly at the application, but not on the device; the user, however, expects to reconnect to the device at the new IP address. In some cases a send() was halfway through and may not have sent all of its data, so the close does not happen immediately, and the user will not be able to reconnect until the socket is closed on the device.
I use the shutdown(sock, SHUT_RDWR); close(sock); sequence.
The netstat output shows:
$ netstat -nt
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 29200 xx.xx.xx.xx:3000 xx.xx.xx.xx:50639 ESTABLISHED
As this shows, the Send-Q is still not empty, so the connection is still in the ESTABLISHED state. The socket is closed after some time, but I am not sure how long that takes; it may be defined by the socket implementation.
How can I close this socket immediately so that a new connection over the same TCP port is possible from the PC application?
It always takes time to close a socket, but you can use setsockopt() to reuse the same port. Find more detail at this link.
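Presumably the option meant here is SO_REUSEADDR, which lets a new socket bind to the port while the old connection is still lingering (e.g. in TIME_WAIT). A minimal sketch, assuming POSIX sockets:

#include <sys/socket.h>

/* ... during listener setup, before bind(): */
int on = 1;
setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
/* Without this, bind() fails with EADDRINUSE while an old connection
   on the same local port is still being torn down. */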
How can I close this socket immediately so that a new connection over the same TCP port is possible from the PC application?
You can close it by calling close() of course, but what you really want to know is how to detect this condition so that you know when to call close(). The answer to that is: not easily -- as far as your Linux device is concerned, the client at (the_old_ip_address) has simply stopped responding. The Linux device has no way of knowing that this condition is due to the client's IP address having changed, rather than (say) a temporary network outage, so the Linux device will keep trying for a few minutes before it times out.
As far as resolving the problem you are having, probably the best way is to design your server so that it can accept and service multiple TCP connections at once. If you do that, then it won't really matter if the old TCP connection sticks around for a few minutes, because the user will still be able to reconnect immediately from his new IP address and resume his work without having to wait.
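A sketch of that multi-connection design, assuming a POSIX system; it forks one child per connection (a poll()/select() loop or threads would do equally well), and serve() is a hypothetical per-client handler:

#include <signal.h>
#include <unistd.h>
#include <sys/socket.h>

static void serve_forever(int listen_fd)
{
    signal(SIGCHLD, SIG_IGN);           /* let the kernel reap finished children */
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0)
            continue;
        if (fork() == 0) {              /* child: serve this one client */
            close(listen_fd);
            serve(client);              /* hypothetical per-client handler */
            close(client);
            _exit(0);
        }
        close(client);                  /* parent: go straight back to accept() */
    }
}

This way a lingering connection from the old IP address only ties up one child until it times out, while the reconnect from the new address is served immediately.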
That said, the other approach would be to add timeouts to your server such that if no data is sent or received on the TCP socket for N seconds (for whatever value of N you think is appropriate), the server explicitly calls close() on the socket and listens for a new incoming connection. This isn't a very good solution, though, since it makes an assumption about the behavior of the TCP connection (i.e. that in its "working" state it will always be sending or receiving data at least once every N seconds), and that assumption might not be true -- e.g. if your client and server don't want to send anything for a while, or if the network connectivity is bad enough that no data gets through for more than N seconds. The risk is that this will lead to a false positive, causing your server to close a TCP connection that was still valid. Therefore I recommend the first approach rather than this one, if possible.

Is it possible to check if a TCP connection disconnected without writing to it?

I am wondering if it is possible to determine whether an accepted socket connection has been disconnected without trying to write to it.
IO::Select still indicates that the socket can be written to with can_write, even after the socket connection has been lost.
Is it possible to check whether a TCP connection has been disconnected without writing to it (in a situation where there is an unplanned internet outage)?
This is more a TCP than a Perl issue.
Events like a disconnected cable or an internet outage do not generate a TCP event, so you must write to a TCP connection to be sure that it is still connected. You might add a ping/echo message for the sole purpose of knowing that the connection is still available.
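A sketch of that ping/echo idea in C, assuming a connected socket fd and a peer that echoes one byte back; the function name and the 5-second budget are illustrative:

#include <poll.h>
#include <sys/socket.h>

/* Returns 1 if the peer echoed within 5 seconds, 0 otherwise. Note that
   send() itself usually succeeds even on a dead link (the byte is just
   buffered locally); it is the missing reply that reveals the outage. */
int connection_alive(int fd)
{
    char ping = 'p', echo;
    if (send(fd, &ping, 1, MSG_NOSIGNAL) != 1)  /* MSG_NOSIGNAL is Linux; elsewhere ignore SIGPIPE */
        return 0;
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    if (poll(&pfd, 1, 5000) <= 0)               /* no reply within 5 seconds */
        return 0;
    return recv(fd, &echo, 1, 0) == 1;          /* recv() returning 0 means the peer closed */
}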
Generally, no. You'll usually only get a failure when you write: if you never write, it will just sit there. If you entirely lose network connectivity, I've seen errors pop up (on Windows; I haven't tried it on Linux), but you're typically required to write to the socket to verify that it's alive.

How do I design a peer-to-peer app that avoids using listening sockets?

I've noticed that if you want to write an application that uses listening sockets, you need to create port-forwarding rules on your router. If I want to connect two computers without either of the computers messing about with router settings, is there a way that I can get the two clients to connect to each other without either of them using listening sockets? There would need to be another server somewhere else telling them to connect, but is it possible?
Some clarifications, and an answer:
Routers don't care about or handle ports; that is the role of a firewall, which does the port forwarding. The router/firewall combination device most of us have at home adds to the common misunderstanding.
Can you connect two computers without a ServerSocket? No. You can use UDP (a stateless, connectionless communication protocol), but the role of a ServerSocket is to "listen" for incoming connection requests and generate a Socket from those requests, which creates a communications channel between two endpoints. A Socket has both an InputStream and an OutputStream, so it can both read and write at either end. At that point (once the connection is made), the distinction between client and server is arbitrary, since a Socket is a two-way connection object, which allows both sides to send and receive.
What about proxying? Doesn't that allow connections between two computers without a ServerSocket? Well, no, because the server that's doing the proxying still has to be using a ServerSocket. Depending on what application you're trying to implement, this might be the way to go, or it might just add overhead. Even if there were "another server somewhere else telling them to connect", somebody has to listen for a connection request, which is the job of the ServerSocket.
If connections are happening over already open ports (most publicly accessible servers have ports <1024 not blocked by firewalls, but exceptions exist), then you shouldn't need to change firewall settings to get the connection to work.
So, to reiterate, the ONLY role of a ServerSocket (as far as your question is concerned) is to listen for incoming connection requests, and from those requests, create a Socket, which is a two-way communications channel between the two end points.
To answer the question, "How do I design a peer-to-peer app that avoids using listening sockets?", you don't. In the case of something like Vuze, the software acts as both client and server simultaneously, hence the term "peer", vs. "client" or "server" alone. In Vuze every client is a server, and every server (except for the tracker) is a client.
If you need a TCP connection between the two computers and both of them are behind routers (and you don't want to set up port forwarding), I think the only other possibility is to have a third server somewhere that isn't behind a firewall, running a ServerSocket, accepting connections from your two other computers, and proxying communications between the two. You can't establish a TCP connection between the two without one listening on a socket and the other connecting to it.
Q: If I want to connect two computers without either of the computers messing about with router settings, is there a way that I can get the two clients to connect to each other?
Yes: have the server listen on an open port :)

C++ Winsock API: how to get the connecting client's IP before accepting the connection?

I am using the Winsock API (not CAsyncSocket) to make a socket that listens for incoming connections.
When somebody tries to connect, how can I get their IP address BEFORE accepting the connection? I am trying to make it only accept connections from certain IP addresses.
Thanks
SO_CONDITIONAL_ACCEPT socket option. Here
Also, I'm pretty sure it's available on XP and Server 2003, not just Vista.
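A sketch of the conditional-accept approach, assuming Winsock 2: enable SO_CONDITIONAL_ACCEPT on the listener, then call WSAAccept() with a condition callback that inspects the caller's address before the handshake is completed; is_allowed() is a hypothetical whitelist check you supply:

#include <winsock2.h>

static int CALLBACK accept_condition(LPWSABUF caller_id, LPWSABUF caller_data,
    LPQOS sqos, LPQOS gqos, LPWSABUF callee_id, LPWSABUF callee_data,
    GROUP *g, DWORD_PTR context)
{
    struct sockaddr_in *peer = (struct sockaddr_in *)caller_id->buf;
    return is_allowed(&peer->sin_addr) ? CF_ACCEPT : CF_REJECT;  /* is_allowed() is hypothetical */
}

/* ... after bind(): */
BOOL on = TRUE;
setsockopt(listener, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (char *)&on, sizeof(on));
listen(listener, SOMAXCONN);
SOCKET client = WSAAccept(listener, NULL, NULL, accept_condition, 0);

With CF_REJECT, the stack refuses the handshake, so the rejected client never sees a fully established connection.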
Two reasons why I do not want to accept the connection in order to check the remote IP address:
1) The client would see that there is a listening socket on this port. If I decide to reject the client connection, I would not want them to know that there is a socket listening on this port.
2) This technique is not as efficient and requires more CPU, RAM, and network usage, so it is not good in the case of a denial-of-service attack.
When using ATM, the CONNECT ACK packet will come from the most recent switch, not the end client. So, you would have to call accept() on the socket, then look at the address (based on the passed addr_family), and at that point just close the socket. By the time it reaches the requester, it will probably just get a failure.
And I'm not sure how many resources you think this will take up, but accepting a connection happens at a very low level and will not really be an issue. It's pretty easy to drop them.
If you come under a DoS attack, your code CAN quit listening for a preset amount of time, so the attacker just gets failures, if you are so worried about it.
Does it really matter if the client knows there is a socket listening? Try using telnet to connect to your localhost on port 137 and see how fast the file sharing in windows drops the connection... (If you even have it enabled, and if I remembered the correct port number.. heh..)
But at the SOCKET level, you are not going to be able to do what you want. You are talking about getting down to the TCP level, looking at the incoming connection requests, and dealing with them there.
This can be done, but you are talking about a kernel driver to do it. I'm not sure you can do this in user mode at all.
If you want Kernel help with this, let me know. I may be able to give you some examples, or guidance.
Just my own two cents, and IMVHO...
Accept the connection, look at the IP, and if it is not allowed, close the connection.
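A sketch of that approach with the Winsock API; ip_allowed() is a hypothetical whitelist check:

#include <winsock2.h>

struct sockaddr_in peer;
int len = sizeof(peer);
SOCKET client = accept(listener, (struct sockaddr *)&peer, &len);
if (client != INVALID_SOCKET && !ip_allowed(&peer.sin_addr)) {
    closesocket(client);  /* the client completes the handshake, then immediately sees a close */
    client = INVALID_SOCKET;
}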
Edit:
I'm assuming you're talking about a TCP connection. When you listen on the port and a connection comes in from a client, the API performs the TCP 3-way handshake, and the client will know that the port is being listened on.
I am not sure whether there is a way to avoid sending any packets (i.e. accepting the connection) so that you can look at the IP address first and then decide.
The only way I can think of is to do packet filtering based on the source IP at the network layer (using firewall, for example).