Limit connections to server - sockets

I'd like to limit the number of connections to the websocket server, namely to 1: a new client kicks the old client out.
The code below roughly represents what I want to do: take whatever is in messages and send it through the websocket. If another client connects, or the browser refreshes (which should close the old connection, but for some reason doesn't), there are suddenly 2 connections and only every second message arrives at the new client.
I use the Snap framework for this.
{-# LANGUAGE OverloadedStrings #-}
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, takeMVar, putMVar)
import System.IO.Unsafe (unsafePerformIO)
import Snap.Core (route)
import Snap.Http.Server (httpServe, defaultConfig)
import Network.WebSockets (acceptRequest, sendTextData)
import Network.WebSockets.Snap (runWebSocketsSnap)

createServer = forkIO $ httpServe defaultConfig app
app = route [("/", runWebSocketsSnap handler)]
handler pending = do
    connection <- acceptRequest pending
    loop connection
loop connection = do
    msg <- takeMVar messages      -- block until sendMessage supplies a message
    sendTextData connection msg
    loop connection               -- keep forwarding messages to this client
{-# NOINLINE messages #-}
messages = unsafePerformIO newEmptyMVar
sendMessage = putMVar messages

I see two different questions here:
how to limit the number of connections, so that there are at most N clients at the same time;
how to make sure an old connection does not live on forever after a browser refresh.
I think you mean #2. In that case you should check that the connection is alive. The best way to do that is to ping the client periodically, e.g. using forkPingThread.
If you really need #1, then you should keep a shared MVar holding the ThreadId of the current client. When a new client connects, just kill the old one.
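The same kick-the-old-client idea, sketched below in Python over plain TCP sockets purely for illustration (the question itself is Haskell/Snap): a lock-protected slot holds the current client's socket, and accepting a new client closes the old one, which makes the old handler's send fail and exit (playing the role of killThread). All names and the port are made up for this sketch.
import socket
import threading
import time

current = {"conn": None}      # shared slot for the single allowed client
lock = threading.Lock()

def handle(conn):
    try:
        while True:
            conn.sendall(b"tick\n")   # stand-in for "send the next message"
            time.sleep(1)
    except OSError:
        pass                          # we were kicked out (socket closed under us)
    finally:
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9160))
server.listen(5)

while True:
    conn, _ = server.accept()
    with lock:
        old = current["conn"]
        if old is not None:
            old.close()               # kick the previous client out
        current["conn"] = conn
    threading.Thread(target=handle, args=(conn,), daemon=True).start()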

Related

Half-Established TCP Connections

Half-Established Connections
By a half-established connection I mean a connection for which the client's call to connect() returned successfully, but the server's call to accept() didn't. This can happen in the following way: The client calls connect(), resulting in a SYN packet to the server. The server goes into state SYN-RECEIVED and sends a SYN-ACK packet to the client. This causes the client to reply with ACK, go into state ESTABLISHED and return from the connect() call. If the final ACK is lost (or ignored, due to a full accept queue at the server, which is probably the more likely scenario), the server is still in state SYN-RECEIVED and the accept() does not return. Due to the timeouts associated with the SYN-RECEIVED state, the SYN-ACK will be resent, allowing the client to resend the ACK. If the server is able to process the ACK eventually, it will go into state ESTABLISHED as well. Otherwise it will eventually reset the connection (i.e. send an RST to the client).
You can create this scenario by starting lots of connections on a single listen socket (if you do not adjust the backlog and tcp_max_syn_backlog). See this question and this article for more details.
Experiments
I performed several experiments (with variations of this code) and observed some behaviour I cannot explain. All experiments were performed using Erlang's gen_tcp and a current Linux, but I strongly suspect that the answers are not specific to this setup, so I tried to keep it more general here.
connect() -> wait -> send() -> receive()
My starting point was to establish a connection from the client, wait between 1 to 5 seconds, send a "Ping" message to the server and wait for the reply. With this setup I observed that the receive() failed with the error closed when I had a half-established connection. There was never an error during the send() on a half-established connection. You can find a more detailed description of this setup here.
connect() -> long wait -> send()
To see if I could get errors while sending data on a half-established connection, I waited 4 minutes before sending data. The 4 minutes should cover all timeouts and retries associated with the half-established connection. Sending data was still possible, i.e. send() returned without error.
connect() -> receive()
Next I tested what happens if I only call receive() with a very long timeout (5 minutes). My expectation was to get a closed error for the half-established connections, as in the original experiment. Alas, nothing happened: no error was thrown and the receive eventually timed out.
My questions
Is there a common name for what I call a half-established connection?
Why is the send() on a half-established connection successful?
Why does a receive() only fail if I send data first?
Any help, especially links to detailed explanations, is welcome.
From the client's point of view, the session is fully established, it sent SYN, got back SYN/ACK and sent ACK. It is only on the server side that you have a half-established state. (Even if it gets a repeated SYN/ACK from the server, it will just re-ACK because it's in the established state.)
The send on this session works fine because as far as the client is concerned, the session is established. The sent data does not have to be acknowledged by the far side in order to succeed (the send system call is finished when the data is copied into kernel buffers) but see below.
I believe here that the send actually is generating an error on the connection (probably an RST) because the receiving system cannot ACK data on a session it has not finished establishing. My guess is that any system call referencing the socket on the client side that happens after the send plus a short delay (i.e. when the RST has had a chance to come back) will result in an error.
The receive by itself never causes an error because the client side doesn't need to do anything (I mean TCP protocol-wise) for a receive; it's just idly waiting. But once you send some data, you've forced the server side's hand: it either has completed the session establishment (in which case it can accept the data) or it must send a reset (my guess here that it can't "hold" undelivered data on a session that isn't fully established).
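A rough sketch of the setup in Python instead of Erlang (behaviour depends on kernel settings such as tcp_max_syn_backlog, tcp_abort_on_overflow and SYN cookies, so treat it as an illustration rather than a guaranteed reproduction): a listener with a tiny backlog that never calls accept(), plus more clients than the accept queue can hold. send() normally succeeds because it only has to copy data into the local kernel buffers; the error, if any, shows up on a later call such as recv().
import socket
import time

HOST, PORT = "127.0.0.1", 5555

# Server side: tiny backlog, and we deliberately never call accept().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(0)

# Client side: open more connections than the accept queue can hold.
clients = []
for _ in range(10):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(5)
    try:
        c.connect((HOST, PORT))       # usually still succeeds from the client's view
        clients.append(c)
    except OSError:
        c.close()                     # some connects may hang or time out instead

time.sleep(2)

for c in clients:
    try:
        c.send(b"Ping")               # succeeds: data only has to reach kernel buffers
        print("reply:", c.recv(100))  # the reset (or a timeout) shows up here
    except OSError as e:
        print("socket error:", e)     # e.g. ECONNRESET on half-established sockets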

In which case should we use the hybrid socket approach in Erlang?

Programming Erlang says in chapter 17.2:
Erlang sockets can be opened in one of three modes: active, active once, or passive
...
You might think that using passive mode for all servers is the correct approach. Unfortunately, when we’re in passive mode, we can wait for the data from only one socket. This is useless for writing servers that must wait for data from multiple sockets.
I just could not understand the sentence "This is useless for writing servers that must wait for data from multiple sockets".
In my opinion, if I cannot trust the clients to behave, I should not use active mode.
But I can build a parallel server in passive mode with one Erlang process per client.
Maybe it means one Erlang process handling multiple sockets, but I cannot imagine an example of that case.
Could you give me more information about it?
Thank you!
Unfortunately, when we’re in passive mode, we can wait for the data from only one socket. This is useless for writing servers that must wait for data from multiple sockets.
I'd say that's not a very compelling argument against passive sockets. In almost all cases, you'll have one Erlang process per socket, and this problem doesn't arise.
A better argument against passive sockets is that while waiting for data (using gen_tcp:recv), the process cannot receive messages from other Erlang processes. Those messages could be the result of a computation, a request to shut down, etc.
That is, when using active or active-once mode, your receive would look something like this:
receive
    {tcp, Socket, Data} ->
        %% do something with Data
        %% then reactivate the socket
        ok = inet:setopts(Socket, [{active, once}]),
        loop(Socket);
    {result, Result} ->
        %% send Result back to socket
        ok = gen_tcp:send(Socket, Result),
        loop(Socket);
    stop ->
        %% stop this process
        exit(normal)
end
Using this code, whichever event arrives first will be handled first, regardless of whether it's incoming data on the socket or a message from another Erlang process.
If on the other hand you were using gen_tcp:recv to receive the data, you would block on that call, unable to react to {result, Result} and stop in a timely manner.

SSE Server Sent Events - Client keep sending requests (like polling)

How come every site explains that in SSE a single connection stays open between client and server ("With SSE, a client sends a standard HTTP request asking for an event stream, and the server responds initially with a standard HTTP response and holds the connection open"),
and the server then sends data whenever it decides to, while in my attempt at implementing SSE I see in Fiddler requests being sent every couple of seconds?
To me it feels like long polling, not a single connection kept open.
Moreover, it is not that the server decides to send data and sends it; it sends data only when the client sends the next request.
If I respond with "retry: 10000", then even if something happens that the server wants to push right now, it will reach the client only on the next request (10 seconds from now), which does not really look like a connection that is kept open with the server sending data as soon as it wants to.
Your server is closing the connection immediately. SSE has a built-in retry function for when the connection is lost, so what you are seeing is:
Client connects to server
Server mysteriously dies
Client waits two seconds then auto-reconnects
Server mysteriously dies
Client waits two seconds then auto-reconnects
...
To fix the server-side script, you want to go against everything your parents taught you about right and wrong, and deliberately create an infinite loop. So, it will end up looking something like this:
validate user, set up database connection, etc.
while(true){
    get next bit of data
    send it to client
    flush
    sleep 2 seconds
}
Where "get next bit of data" might be polling a DB table for new records since the last poll, or scanning a file system directory for new files, etc.
Alternatively, if the server-side process is a long-running data analysis, your script might instead look like this:
validate user, set-up, etc.
while(true){
    calculate next 1000 digits of pi
    send them to client
    flush
}
This assumes that the calculate line takes at least half a second to run; any more frequently and you will start to clog up the socket with lots of small packets of data for no benefit (the user won't notice that they are getting 10 updates/second instead of 2 updates/second).
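As a concrete illustration of the first pattern, here is a minimal self-contained sketch using only Python's standard library (the handler class, port and 2-second interval are placeholders, and the "data" is just the current time):
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import time

class SSEHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One long-lived response: send the headers once, then keep writing
        # events on the same connection instead of closing it.
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        try:
            while True:
                # "get next bit of data" stand-in: just the current time
                self.wfile.write(f"data: {time.time()}\n\n".encode())
                self.wfile.flush()
                time.sleep(2)
        except (BrokenPipeError, ConnectionResetError):
            pass  # client went away; stop streaming to it

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), SSEHandler).serve_forever()
You can watch the stream with curl -N http://localhost:8000/ and confirm in Fiddler that it is a single request that never completes, rather than a new request every couple of seconds.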

Writing a server that queues tasks

I'm writing a server in python that needs to take requests from clients, queue the requests, execute them one at a time, then tell the clients that their particular request has been processed.
Currently the way I've approached it is using a TCP socket server -- however, I'm not sure how to make it so that only one request from the queue is executed at a time.
The way I would like for it to look:
Client1 -> (a) -> Server
Client2 -> (b) -> Server
Client3 -> (c) -> Server
Server makes queue |a, b, c|
Execute a first. Done? Tell Client 1
Execute b second. Done? Tell Client 2
Execute c third. Done? Tell Client 3
From what I understand, if I have the server recv the client's request, execute it, and respond, that may happen at the same time in different threads. I only want one thread executing all the tasks (because I anticipate many tasks coming in and it'd be slow if everyone was running one at the same time). How do I accomplish that?
There are tons of ways to skin it, but a solution is going to look something like the below:
Client -> Client-Mediator (TCP Port) <--> Server Mediator -> (ServerQ) <- Task Process
The flow would be like this:
Client Process:
Client creates a client mediator on a tcp socket.
Sends whatever info it needs over the port.
Server Mediator receives the request
Creates a response Q for the Task Process
Places the request on the Server Q (command + responseQ)
Wait for response on responseQ
No response after X time? Time out.
Once response comes, read and send response over tcp port.
Server Process:
Reads from Server Q.
Processes command
Write the response to the response Q
Components involved
Client - Simple process that sends requests for tasks to be completed.
Client-Mediator - Creates a connection to the server process.
Server-Mediator - Accepts a client request for task processing, enqueues tasks and waits for response.
Task Process - Reads from ServerQ and waits for a task to come in.
Okay, so what Nix said was right, but I wasn't sure how to actually make that happen (my question was how to go about actually building this).
As it turns out I had to start 2 threads: one that executes tasks from the queue, and one that is the main server handler. The server handler spawns a thread for each new connection, and the client blocks after sending a request, if the request was successfully queued. This means the queue needs to be thread-safe / protected with a semaphore or mutex; in the case of Python, there is a multiprocessing.Queue class that handles that for you. Whenever a task is executed, the execution thread does a notifyAll(), which causes all sleeping threads to wake up and check whether their requested task is done. I use a condition variable for that.
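Here is a minimal sketch of that shape using only the standard library (socketserver, queue, threading). For simplicity it swaps the shared condition variable / notifyAll() for one threading.Event per request, so only the handler that queued a task wakes up; all names and the port are illustrative:
import queue
import socketserver
import threading

tasks = queue.Queue()   # thread-safe FIFO shared by all connection handlers

def worker():
    # Single consumer thread: requests are executed strictly one at a time,
    # in the order they were queued.
    while True:
        payload, done = tasks.get()
        done.result = payload.upper()   # stand-in for the real task
        done.set()                      # wake the handler waiting on this task
        tasks.task_done()

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        payload = self.rfile.readline().strip()
        done = threading.Event()
        tasks.put((payload, done))      # enqueue, then block until executed
        done.wait()
        self.wfile.write(done.result + b"\n")

if __name__ == "__main__":
    threading.Thread(target=worker, daemon=True).start()
    socketserver.ThreadingTCPServer.allow_reuse_address = True
    with socketserver.ThreadingTCPServer(("", 9000), Handler) as srv:
        srv.serve_forever()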

When is TCP option SO_LINGER (0) required?

I think I understand the formal meaning of the option. In some legacy code I'm maintaining now, the option is used. The customer complains about receiving an RST in response to the FIN they send when they close the connection from their side.
I am not sure I can remove it safely, since I don't understand when it should be used.
Can you please give an example of when the option would be required?
For my suggestion, please read the last section: “When to use SO_LINGER with timeout 0”.
Before we come to that, a little lecture about:
Normal TCP termination
TIME_WAIT
FIN, ACK and RST
Normal TCP termination
The normal TCP termination sequence looks like this (simplified):
We have two peers: A and B
A calls close()
A sends FIN to B
A goes into FIN_WAIT_1 state
B receives FIN
B sends ACK to A
B goes into CLOSE_WAIT state
A receives ACK
A goes into FIN_WAIT_2 state
B calls close()
B sends FIN to A
B goes into LAST_ACK state
A receives FIN
A sends ACK to B
A goes into TIME_WAIT state
B receives ACK
B goes to CLOSED state – i.e. is removed from the socket tables
TIME_WAIT
So the peer that initiates the termination – i.e. calls close() first – will end up in the TIME_WAIT state.
To understand why the TIME_WAIT state is our friend, please read section 2.7 in "UNIX Network Programming" third edition by Stevens et al (page 43).
However, it can be a problem with lots of sockets in TIME_WAIT state on a server as it could eventually prevent new connections from being accepted.
To work around this problem, I have seen many suggesting to set the SO_LINGER socket option with timeout 0 before calling close(). However, this is a bad solution as it causes the TCP connection to be terminated with an error.
Instead, design your application protocol so the connection termination is always initiated from the client side. If the client always knows when it has read all remaining data it can initiate the termination sequence. As an example, a browser knows from the Content-Length HTTP header when it has read all data and can initiate the close. (I know that in HTTP 1.1 it will keep it open for a while for a possible reuse, and then close it.)
If the server needs to close the connection, design the application protocol so the server asks the client to call close().
When to use SO_LINGER with timeout 0
Again, according to "UNIX Network Programming" third edition page 202-203, setting SO_LINGER with timeout 0 prior to calling close() will cause the normal termination sequence not to be initiated.
Instead, the peer setting this option and calling close() will send a RST (connection reset) which indicates an error condition and this is how it will be perceived at the other end. You will typically see errors like "Connection reset by peer".
Therefore, in the normal situation it is a really bad idea to set SO_LINGER with timeout 0 prior to calling close() – from now on called abortive close – in a server application.
However, certain situations warrant doing so anyway:
If a client of your server application misbehaves (times out, returns invalid data, etc.) an abortive close makes sense to avoid being stuck in CLOSE_WAIT or ending up in the TIME_WAIT state.
If you must restart your server application which currently has thousands of client connections you might consider setting this socket option to avoid thousands of server sockets in TIME_WAIT (when calling close() from the server end) as this might prevent the server from getting available ports for new client connections after being restarted.
On page 202 in the aforementioned book it specifically says: "There are certain circumstances which warrant using this feature to send an abortive close. One example is an RS-232 terminal server, which might hang forever in CLOSE_WAIT trying to deliver data to a stuck terminal port, but would properly reset the stuck port if it got an RST to discard the pending data."
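For reference, here is what the abortive close looks like at the socket-API level, sketched in Python (the host is just a placeholder; in C the equivalent is setsockopt() with a struct linger of {1, 0} before close()):
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("example.com", 80))

# struct linger { l_onoff; l_linger }: enable linger with a 0-second timeout,
# so close() aborts the connection with an RST instead of the normal FIN
# handshake, and the socket does not go through TIME_WAIT.
# (Two ints on Linux/BSD; on Windows the struct uses two unsigned shorts.)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

sock.close()  # the peer will typically see "Connection reset by peer"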
I would recommend this long article which I believe gives a very good answer to your question.
The typical reason to set a SO_LINGER timeout of zero is to avoid large numbers of connections sitting in the TIME_WAIT state, tying up all the available resources on a server.
When a TCP connection is closed cleanly, the end that initiated the close ("active close") ends up with the connection sitting in TIME_WAIT for several minutes. So if your protocol is one where the server initiates the connection close, and involves very large numbers of short-lived connections, then it might be susceptible to this problem.
This isn't a good idea, though - TIME_WAIT exists for a reason (to ensure that stray packets from old connections don't interfere with new connections). It's a better idea to redesign your protocol to one where the client initiates the connection close, if possible.
When linger is on but the timeout is zero the TCP stack doesn't wait for pending data to be sent before closing the connection. Data could be lost due to this but by setting linger this way you're accepting this and asking that the connection be reset straight away rather than closed gracefully. This causes an RST to be sent rather than the usual FIN.
Thanks to EJP for his comment, see here for details.
Whether you can remove the linger in your code safely or not depends on the type of your application: is it a "client" (opening TCP connections and actively closing them first), or is it a "server" (listening for a TCP open and closing after the other side initiated the close)?
If your application has the flavor of a "client" (closing first) AND you initiate and close a huge number of connections to different servers (e.g. when your app is a monitoring app supervising the reachability of a huge number of different servers), your app has the problem that all your client connections are stuck in TIME_WAIT state. Then I would recommend shortening the timeout to a smaller value than the default, so you still shut down gracefully but free up the client connection resources earlier. I would not set the timeout to 0, as 0 does not shut down gracefully with FIN but abortively with RST.
If your application has the flavor of a "client" and has to fetch a huge number of small files from the same server, you should not initiate a new TCP connection per file and end up with a huge number of client connections in TIME_WAIT; instead, keep the connection open and fetch all data over the same connection. The linger option can and should be removed.
If your application is a "server" (closing second, as a reaction to the peer's close), then on close() your connection is shut down gracefully and resources are freed up, as you don't enter the TIME_WAIT state. Linger should not be used. But if your server app has a supervisory process that detects inactive open connections idling for a long time ("long" is to be defined), you can shut down such an inactive connection from your side - see it as a kind of error handling - with an abortive shutdown. This is done by setting the linger timeout to 0. close() will then send an RST to the client, telling him that you are angry :-)
I just saw that in the websockets RFC (RFC 6455), it explicitly states that the server should call close() on the TCP socket first(!)
I was in awe, as I hold the answer/posts by #mgd in this thread as de facto correct, and the RFC clearly goes against that. But perhaps this would be a case where setting a linger time of 0 would be acceptable.
The underlying TCP connection, in most normal cases, SHOULD be closed
first by the server, so that it holds the TIME_WAIT state and not the
client
I'm very interested to hear any thoughts/insight on this.
In servers, you may like to send RST instead of FIN when disconnecting misbehaving clients. That skips FIN-WAIT followed by TIME-WAIT socket states in the server, which prevents from depleting server resources, and, hence, protects from this kind of denial-of-service attack.
I like Maxim's observation that DOS attacks can exhaust server resources. It also happens without an actually malicious adversary.
Some servers have to deal with an 'unintentional DOS attack', which occurs when the client app has a connection-leak bug and keeps creating a new connection for every new command it sends to your server, perhaps eventually closing its connections when it hits GC pressure, or letting the connections time out.
Another scenario is when all clients have the same TCP address. Then client connections are distinguishable only by port numbers (if they connect to a single server), and if clients start rapidly cycling through opening and closing connections for any reason, they can exhaust the (client addr+port, server IP+port) tuple space.
So I think servers may be best advised to switch to the Linger-Zero strategy when they see a high number of sockets in the TIME_WAIT state - although it doesn't fix the client behavior, it might reduce the impact.
The listen socket on a server can use linger with time 0 so that it can bind back to the address immediately and reset any clients whose connections have not yet finished connecting. TIME_WAIT is something that is only interesting when you have a multi-path network and can end up with mis-ordered packets, or are otherwise dealing with odd network packet ordering/arrival timing.