How do web servers avoid TIME_WAIT?

I'm writing a simple HTTP server and learning about TIME_WAIT. How do real web servers in heavy environments handle requests from thousands of users without all the sockets getting stuck in TIME_WAIT after a request is handled? (Not asking about keep-alive -- that would help for a single client, but not for thousands of different clients coming through).
I've read that you try to get the client to close first, so that all the TIME_WAITs get spread out among all the clients, instead of concentrated on the server.
How is this done? At some point the server has to call close/closesocket.

The peer that initiates the active close is the one that goes into TIME_WAIT. So as long as the client closes the connection, the client gets the TIME_WAIT and not the server. I go into this in a little more detail in this blog posting. If you are unable to reach that link, the Wayback Machine has it.
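In code, "getting the client to close first" usually means the server sends its final response (often with Connection: close) and then waits for the client's FIN before closing its own end. A minimal sketch in C, with client_fd and the response buffer assumed; a real server would put a timeout on the drain loop so a stalled client can't pin the socket open:

    /* Send the final response, then wait for the client's FIN so the
       client performs the active close and inherits the TIME_WAIT
       (assumes <unistd.h> and a connected client_fd). */
    write(client_fd, response, response_len);
    char buf[1024];
    while (read(client_fd, buf, sizeof buf) > 0)
        ;                       /* discard any trailing data */
    /* read() returned 0: the client closed first, so our close() is the
       passive side and no TIME_WAIT accumulates on the server. */
    close(client_fd);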

Related

Website loads inconsistently on mobile only

I have a website being served from a custom webserver, and it loads and works fine when loaded from a laptop/desktop browser, but loads inconsistently on mobile browsers. (In my case I tested specifically Samsung Internet and Chrome on Android)
(The exact behaviour is: load the web page, refresh, and then after a couple of refreshes it will sometimes not be able to load a background image, or any resource on the page at all - but only on mobile browsers)
In case this was just some cached data issue, I've cleared all browser data, restarted my phone, asked friends to try on their devices etc, but I've only been able to reproduce this on mobile devices.
My web server is written using liburing, with nginx as a reverse proxy in front, though I doubt that is the issue
I read Can Anyone Explain These Long Network Stalled Times? and it occurred to me that the issue could be that I use a separate HTTP request for each resource (I've not implemented Connection: Keep-Alive), but I also get this issue on WiFi, and even when loading a single asset (such as a background image)
Additional possibly relevant info:
I was initially having a similar issue on desktop as well, and I fixed it by calling shutdown() before close() on the sockets serving the HTTP requests
I'm using the following response headers:
Keep-Alive: timeout=0, max=0
Connection: close
Cache-Control: no-cache
I'm using the following socket options:
SO_REUSEADDR (mainly for debug convenience)
SO_REUSEPORT (sockets in multiple threads bind to and listen on the same port)
SO_KEEPALIVE, TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT (to kill off inactive clients)
Oddly enough, I think the issue disappears for a while after restarting my phone
I have tried dropping nginx and handling TLS directly with WolfSSL instead, and I get the same issue
I am inclined to think that this could be an issue with the headers I'm setting in responses (or possibly some HTTPS-specific detail I'm missing?), but I'm not sure
And here's the actual site if anyone wants to verify the issue https://servertest.erewhon.xyz/
It looks to me like your server does not do a proper TLS shutdown, but is simply shutting down the underlying TCP connection. This causes your server to send an RST (packet 28) when the client is doing the proper TLS shutdown by sending the appropriate close_notify TLS alert (packet 27).
This RST will result in a connection close on the client side. Depending on how fast the client has processed the incoming data, this can result in abandoning still-unread data in the TCP socket buffer, thus causing the problems you see.
The difference in behavior between mobile and desktop might just be caused by the performance of the systems and maybe by the underlying TCP stack. But no matter whether the desktop works fine: your web server is behaving incorrectly.
For details on how the connection close should happen at the HTTP level see RFC 7230 section 6.6. Note especially the following parts of this section:
If a server performs an immediate close of a TCP connection, there is
a significant risk that the client will not be able to read the last
HTTP response. If the server receives additional data from the
client on a fully closed connection, such as another request that was
sent by the client before receiving the server's response, the
server's TCP stack will send a reset packet to the client;
unfortunately, the reset packet might erase the client's
unacknowledged input buffers before they can be read and interpreted
by the client's HTTP parser.
To avoid the TCP reset problem, servers typically close a connection
in stages. First, the server performs a half-close by closing only
the write side of the read/write connection. The server then
continues to read from the connection until it receives a
corresponding close by the client, or until the server is reasonably
certain that its own TCP stack has received the client's
acknowledgement of the packet(s) containing the server's last
response. Finally, the server fully closes the connection.
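In socket terms, that staged close looks roughly like this (a sketch; fd is the connected socket, and the drain loop should be bounded by a timeout in practice). With TLS there is one more step before any of it: send and await the close_notify alert, e.g. via SSL_shutdown() or wolfSSL_shutdown(), which is precisely the step your server appears to be skipping:

    /* Staged close per RFC 7230 section 6.6 (assumes <sys/socket.h>
       and <unistd.h>; for TLS, exchange close_notify first). */
    shutdown(fd, SHUT_WR);              /* half-close: send our FIN only */
    char buf[1024];
    while (read(fd, buf, sizeof buf) > 0)
        ;                               /* drain until the client's FIN */
    close(fd);                          /* now fully close the socket */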

TCP Server is overwhelmed by clients that only "connect" without sending any data

I have created a TCP server using .NET TcpListener.
I have some concerns about how it could be abused by spamming a lot of bogus connections, similar to a DoS attack.
I created a small console app that repeatedly initiates a connection to the server (it only "connects", without transmitting any other data). The "max allowable concurrent connections" limit, a server setting meant to prevent it from being overwhelmed, was reached in an instant. This rendered my server pretty much useless, since it could not accept new connections until the fake connections disconnected. This proves that my concern is not unfounded.
Is there anything we can do at the application level to prevent this?
I was thinking of requiring clients to send a kind of token when connecting, and having the server refuse connections that don't, but I don't think TCP works that way.
Is relying on external solutions the only way? E.g. a VPN, firewall, NAT, etc.?
Set a read timeout on every accepted socket, and close it if it triggers.
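In C terms that is SO_RCVTIMEO on the accepted socket (the question uses .NET, where Socket.ReceiveTimeout plays the same role). A sketch, with client_fd assumed to come from accept():

    /* Drop clients that connect but never send anything (assumes
       <sys/socket.h>, <sys/time.h>, <unistd.h>). */
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };  /* 5 s read timeout */
    setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    char buf[1024];
    ssize_t n = read(client_fd, buf, sizeof buf);
    if (n <= 0) {
        /* n < 0 with errno EAGAIN/EWOULDBLOCK means the timeout fired:
           the client held the slot without sending anything, so free it. */
        close(client_fd);
    }

The 5-second value is an assumption; pick something long enough for slow legitimate clients but short enough that idle bogus connections can't exhaust your connection limit.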

Why do cookies continue working?

I have a question about why web applications continue setting cookies, given that persistent HTTP connections use sockets, e.g. websockets.
HTTP/1.1 and HTTP/2 use persistent HTTP connections, with sockets on the client and the server. These sockets stay active for as long as it takes to load a complete web page (HTML, CSS, images, etc.), and then the sockets are closed by the server. That makes sense, because the server does not know what the client is doing. So, in this scenario, the use of cookies is justified.
But with websockets I think the scenario is different, because only one socket is used: after the connection is established, the server and the client use that socket to exchange data.
So, the question is: why are cookies necessary if the server knows who the client is?
This question is impossibly broad, since many different web applications work in many different ways.
In general, cookies are used to store data that needs to persist beyond the momentary connection between the client and the server.
More specifically, the connection between the client and the server can be very transient. The server receives a request, sends a page, and moves on to the next request. It doesn't maintain a constant connection to every browser that contacts it.
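A hypothetical exchange (header names are real HTTP; the values are made up) shows how the cookie is what ties two otherwise unrelated connections together:

    First connection:
        GET /login HTTP/1.1
        Host: example.com

        HTTP/1.1 200 OK
        Set-Cookie: session=abc123

    Later, on a brand-new connection:
        GET /profile HTTP/1.1
        Host: example.com
        Cookie: session=abc123

A websocket can indeed carry identity for its own lifetime, but the cookie is what survives the socket.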

Multiple service connections vs internal routing in MMO

The server consists of several services with which a user interacts: profiles, game logic, physics.
I heard that it's a bad practice to have multiple client connections to the same server.
I'm not sure whether I will use UDP or TCP.
The services are realtime; they should reply as fast as possible, so I don't want to add any extra rerouting unless there are really important reasons. So are there any reasons to reroute traffic through one external endpoint service to specific internal services in my case?
This seems to be multiple questions in one package. I will try to answer the ones I can identify as separate...
UDP vs TCP: You're saying "real-time", which usually means UDP is the right choice. However, that means having to deal with lost packets and possible re-ordering of packets. On the other hand, using UDP leaves a couple of possible latency-reducing tricks open.
Multiple connections from a single client to a single server: This consumes resources (end-points, as it were) on both the client (probably ignorable) and on the server (possibly a problem, possibly ignorable). The advantage of using separate connections for separate concerns (profiles, physics, ...) is that when you need to separate these onto separate servers (or server farms), you don't need to update the clients, they just need to connect to other end-points, using code that's already tested.
"Re-router" (or "load balancer") needed: Probably not going to be an issue initially. However, it will probably become an issue later. Depending on your overall design and server OS, using UDP may actually become an asset here. UDP packet arrives at the load balancer, dispatched to the right backend and that could then in theory send back a reply with the source IP of the load balancer.
An alternative would be to have a "session broker". The client makes an initial connection to a well-known endpoint and says "I am a client, tell me where my profile, physics, what-have-you servers are"; the broker considers the current load, possibly the location of the client, and other things that may make sense; and the client then connects to the relevant backends on its own. The downside is that it's harder (not impossible, but harder) to silently migrate an ongoing session to a new backend; when there's a load balancer in the way, that can be done essentially transparently.
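A sketch of that broker handshake in C over UDP; everything here (the broker address and port, and the little text protocol) is made up for illustration:

    /* Hypothetical session-broker lookup over UDP. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in broker = {0};
        broker.sin_family = AF_INET;
        broker.sin_port = htons(7000);                /* assumed broker port */
        inet_pton(AF_INET, "203.0.113.10", &broker.sin_addr);

        /* Ask where our per-service backends live. */
        const char *hello = "HELLO profile physics";
        sendto(fd, hello, strlen(hello), 0,
               (struct sockaddr *)&broker, sizeof broker);

        /* Expect something like "profile=203.0.113.20:7001 physics=..." */
        char reply[512];
        ssize_t n = recvfrom(fd, reply, sizeof reply - 1, 0, NULL, NULL);
        if (n > 0) {
            reply[n] = '\0';
            printf("broker said: %s\n", reply);
        }
        close(fd);
        return 0;
    }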

How many sockets do Google open for every request it receives?

The following is my recent interview experience with a reputed network software company. I was asked questions about how the TCP level and web requests interconnect, and that confused me a lot. I really would like to know expert opinions on the answers. It is not just about the interview but also about a fundamental understanding of how networking works (or how the application layer and the transport layer cross-talk, if at all they do).
Interviewer: Tell me the process that happens behind the scenes when
I open a browser and type google.com in it.
Me: The first thing that happens is a socket is created which is
identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}. The
SRC-PORT number is a random number given by the browser. Usually the TCP/IP
connection protocol (three-way handshake) is established. Now
both the client (my browser) and the server (Google) are ready to handle
requests. (TCP connection is established).
Interviewer: Wait, when does the name resolution happen?
Me: Yep, I am sorry. It should have happened before even the socket is created.
DNS name resolution happens first to get the IP address of Google to connect to.
Interviewer: Is a socket created for DNS name resolution?
Me: Hmm, I actually do not know. But all I know is that DNS name resolution is
connectionless. That is, it's not TCP but UDP. Only a single
request-response cycle happens. (So there is a new socket created for DNS
name resolution).
Interviewer: google.com is open for other requests from other
clients. So is establishing your connection with Google blocking
other users?
Me: I am not sure how Google handles this. But in a typical socket
communication, it is blocking to a minimal extent.
Interviewer: How do you think this can be handled?
Me: I guess the process forks a new thread and creates a socket to handle my
request. From now on, my socket endpoint of communication with
Google is this child socket.
Interviewer: If that is the case, is this child socket’s port number
different than the parent one?
Me: The parent socket is listening at 80 for new requests from
clients. The child must be listening at a different port number.
Interviewer: How is your TCP connection maintained, since your destination port number has changed? (That is, the src-port number sent on Google's packet)?
Me: The dest-port that I see as a client is always 80. When
a response is sent back, it also comes from port 80. I guess the OS/the
parent process sets the source port back to 80 before it sends back the
response.
Interviewer: How long is your socket connection established with
Google?
Me: If I don’t make any requests for a period of time, the
main thread closes its child socket and any subsequent requests from
me will be like I am a new client.
Interviewer: No, Google will not keep a dedicated child socket for
you. It handles your request and discards/recycles the sockets right
away.
Interviewer: Although Google may have many servers to serve
requests, each server can have only one parent socket opened at port 80. The number of clients to access Google's webpage must be larger than the number of servers they have. How is this usually handled?
Me: I am not sure how this is handled. I see the only way it could
work is spawn a thread for each request it receives.
Interviewer: Do you think the way Google handles this is different from
any bank website?
Me: At the TCP-IP socket level, it should be
similar. At the request level, slightly different because a session
is maintained to keep state between requests for banks' websites.
If someone can give an explanation of each of the points, it will be very helpful for many beginners in networking.
How many sockets do Google open for every request it receives?
This question doesn't actually appear in the interview, but it's in your title so I'll answer it. Google doesn't open any sockets at all. Your browser does that. Google accepts connections, in the form of new sockets, but I wouldn't describe that as 'opening' them.
Interviewer: Tell me the process that happens behind the scenes when I open a browser and type google.com in it.
Me: The first thing that happens is a socket is created, which is identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}.
No. The connection is identified by the tuple. The socket is an endpoint to the connection.
The SRC-PORT number is a random number given by the browser.
No. By the operating system.
Usually the TCP/IP connection protocol (three-way handshake) is established. Now both the client (my browser) and the server (Google) are ready to handle requests. (TCP connection is established.)
Interviewer: Wait, when does the name resolution happen?
Me: Yep, I am sorry. It should have happened before even the socket is created. DNS name resolution happens first to get the IP address of Google to connect to.
Interviewer: Is a socket created for DNS name resolution?
Me: Hmm, I actually do not know. But all I know is that DNS name resolution is connectionless. That is, it's not TCP but UDP. Only a single request-response cycle happens. (So a new socket is created for DNS name resolution.)
Any rationally implemented browser would delegate the entire thing to the operating system's Sockets library, whose internal functioning depends on the OS. It might look at an in-memory cache, a file, a database, an LDAP server, several things, before going out to a DNS server, which it can do via either TCP or UDP. It's not a great question.
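For what it's worth, the usual application-level view of name resolution is just getaddrinfo(); everything underneath that call (caches, the hosts file, UDP or TCP to a DNS server) is the resolver's business, not the browser's. A sketch:

    /* Name resolution delegated to the OS resolver. */
    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;  /* we intend a TCP connect() */

        if (getaddrinfo("google.com", "80", &hints, &res) == 0) {
            /* res is a linked list of candidate addresses to try. */
            freeaddrinfo(res);
        }
        return 0;
    }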
Interviewer: google.com is open for other requests from other clients. So is establishing your connection with Google blocking other users?
Me: I am not sure how Google handles this. But in a typical socket communication, it is blocking to a minimal extent.
Wrong again. It has very little to do with Google specifically. A TCP server has a separate socket per accepted connection, and any rationally constructed TCP server handles them completely independently, either via multithreading, multiplexed/non-blocking I/O, or asynchronous I/O. They don't block each other.
Interviewer: How do you think this can be handled?
Me: I guess the process forks a new thread and creates a socket to handle my request. From now on, my socket endpoint of communication with Google is this child socket.
Threads are created, not 'forked'. Forking a process creates another process, not another thread. The socket isn't 'created' so much as accepted, and this would normally precede thread creation. It isn't a 'child' socket.
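A sketch of that order of operations, thread-per-connection style (listen_fd is assumed to be bound and listening; error handling elided):

    /* accept() first, then hand the new socket to a thread (assumes
       <pthread.h>, <stdint.h>, <sys/socket.h>, <unistd.h>). */
    void *serve(void *arg) {
        int fd = (int)(intptr_t)arg;
        /* ... read the request, write the response ... */
        close(fd);
        return NULL;
    }

    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;
        pthread_t t;
        pthread_create(&t, NULL, serve, (void *)(intptr_t)fd);
        pthread_detach(t);    /* connections proceed independently */
    }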
Interviewer: If that is the case, is this child socket’s port number different than the parent one?
Me: The parent socket is listening at 80 for new requests from clients. The child must be listening at a different port number.
Wrong again. The accepted socket uses the same port number as the listening socket, and the accepted socket isn't 'listening' at all, it is receiving and sending.
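You can check this directly: getsockname() on an accepted socket reports the same local port as the listener. A fragment, assuming listen_fd is bound to port 80:

    /* The accepted socket shares the listener's local port (assumes
       <netinet/in.h> and <sys/socket.h>). */
    struct sockaddr_in local;
    socklen_t len = sizeof local;

    int conn_fd = accept(listen_fd, NULL, NULL);
    getsockname(conn_fd, (struct sockaddr *)&local, &len);
    /* ntohs(local.sin_port) == 80, same as listen_fd; the connection is
       distinguished by the remote {IP, port} half of the tuple instead. */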
Interviewer: How is your TCP connection maintained, since your dest-port number has changed? (That is, the src-port number sent on Google's packet)?
Me: The dest-port that I see as a client is always 80. When the response is sent back, it also comes from port 80. I guess the OS/the parent process sets the src port back to 80 before it sends back the response.
This question was designed to explore your previous wrong answer. Your continuation of your wrong answer is still wrong.
Interviewer: How long is your socket connection established with Google?
Me: If I don’t make any requests for a period of time, the main thread closes its child socket and any subsequent requests from me will be like I am a new client.
Wrong again. You don't know anything about threads at Google, let alone which thread closes the socket. Either end can close the connection at any time. Most probably the server end will beat you to it, but it isn't set in stone, and neither is which if any thread will do it.
Interviewer: No, Google will not keep a dedicated child socket for you. It handles your request and discards/recycles the sockets right away.
Here the interviewer is wrong. He doesn't seem to have heard of HTTP keep-alive, or the fact that it is the default in HTTP 1.1.
Interviewer: Although google may have many servers to serve requests, each server can have only one parent socket opened at port 80. The number of clients to access google webpage must be exceeding larger than the number of servers they have. How is this usually handled?
Me: I am not sure how this is handled. I see the only way it could work is to spawn a thread for each request it receives.
Here you haven't answered the question at all. He is fishing for an answer about load-balancers or round-robin DNS or something in front of all those servers. However his sentence "the number of clients to access google webpage must be exceeding larger than the number of servers they have" has already been answered by the existence of what you are both incorrectly calling 'child sockets'. Again, not a great question, unless you've reported it inaccurately.
Interviewer: Do you think the way Google handles this is different from any bank website?
Me: At the TCP-IP socket level, it should be similar. At the request level, slightly different, because a session is maintained to keep state between requests for banks' websites.
You almost got this one right. HTTP sessions to Google exist, as well as to bank websites. It isn't much of a question. He should be asking for facts, not your opinion.
Overall, (a) you failed the interview completely, and (b) you indulged in far too much guesswork. You should have simply stated 'I don't know' and let the interview proceed to things that you do know about.
For point #6, here is how I understand it: if both ends of one connection were the same as those of another socket's connection, there would indeed be no way to distinguish the two sockets; but if even a single end differs, it's easy to tell them apart. So there is no need to switch the destination port 80 (the default) back and forth, since the source ports differ.