How to deploy a WebSocket server? - deployment

When deploying a web application running on a traditional web server, you usually restart the web server after updating the code. Due to the nature of HTTP, this is not a problem for users: on their next request they simply get the latest version.
But what about a WebSocket server? If I restart or kill the old process, all connected users will be disconnected. So my question is: what kind of strategy have you used to deploy a WebSocket server smoothly?

You're right: every connected user will be disconnected if the server restarts.
I think the least bad solution is to have the client reconnect from its onClose handler.
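In a browser that usually means starting a retry loop from the socket's onclose handler. As a rough, language-neutral illustration, here is a minimal reconnect-with-backoff sketch using Python's third-party websockets package (the URL is a placeholder):

    import asyncio
    import websockets  # third-party: pip install websockets

    async def run(url="ws://example.com/ws"):    # placeholder URL
        delay = 1
        while True:
            try:
                async with websockets.connect(url) as ws:
                    delay = 1                    # reset the backoff once connected
                    async for message in ws:
                        print("received:", message)
            except (OSError, websockets.ConnectionClosed):
                pass                             # server went away; fall through and retry
            await asyncio.sleep(delay)
            delay = min(delay * 2, 30)           # exponential backoff, capped at 30 s

    asyncio.run(run())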

WebSocket is just a transport mechanism. Libraries like socket.io exist to build on that transport, providing heartbeats, browser fallbacks, graceful reconnects, and handling for other edge cases found in real-time applications.
In our WebSocket-enabled application, socket.io is central to ensuring our continuous deployment setup doesn't break users' active socket connections.

If clients connect directly to the server that does all the socket networking and application logic, then yes, they will be disconnected, because that process holds the TCP connection.
If instead clients connect to a gateway application running on another server, which forwards their messages to the logical server and relays the responses back, then you can have the gateway queue packets while the logical server is down and flush them once the connection is re-established. The logical server can notify the gateway before it restarts. That way the client keeps its connection; it just won't receive any responses for a while. A rough sketch of that buffering appears after this answer.
Or you can implement reconnection on the client side.
With HTTP, every time you navigate to a page the browser opens a socket connection to the server, transfers the data, and closes it (in most cases); all the page data is then local until you navigate away. A WebSocket is a continuous connection, with no per-request reconnection. That's why you have to implement a simple mechanism on the client: when you get the close event, periodically try to reconnect.
In the end it depends on your specific needs.
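A rough sketch of the gateway-side buffering described above, assuming a plain TCP link to the logical server (the Gateway class, host, and port are illustrative, not part of any particular framework):

    import asyncio
    import collections

    LOGICAL_HOST, LOGICAL_PORT = "127.0.0.1", 9000   # illustrative address of the logical server

    class Gateway:
        def __init__(self):
            self.pending = collections.deque()   # packets queued while the logical server is down
            self.writer = None                   # current backend connection, None while it is down

        async def connect_backend(self):
            # Retry until the logical server is back, then flush everything that queued up.
            while self.writer is None:
                try:
                    _, self.writer = await asyncio.open_connection(LOGICAL_HOST, LOGICAL_PORT)
                except OSError:
                    await asyncio.sleep(1)       # back off and retry
            while self.pending:
                self.writer.write(self.pending.popleft())
            await self.writer.drain()

        async def forward(self, packet: bytes):
            # Called for every packet received from a client connection.
            if self.writer is None:
                self.pending.append(packet)      # stack packets until the backend reconnects
            else:
                self.writer.write(packet)
                await self.writer.drain()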

Related

ZeroMQ broadcast to specific PULL client across firewall

I'm building a message broker which communicates with clients over ZeroMQ PUSH/PULL sockets and has the ability to exclude clients from messages they're not subscribed to from the server side (unlike ZeroMQ pub/sub which excludes messages on the client side).
Currently, I implement it in the following way:
Server: Binds ZeroMQ PULL socket on a fixed port
Client: Binds a ZeroMQ PULL socket on a random or fixed port
Client: Connects to the server's PULL socket and sends a handshake message containing the new client's address and port.
Server: Receives the handshake from the client and connects a PUSH socket to the client's PULL socket. Sends a handshake response to the client's socket.
Client: Receives the handshake. Connected!
Now the client and server can communicate bidirectionally and the server can send messages to only a certain subset of clients. It works great!
However, this model doesn't work if the clients binding PULL sockets are unable to open a port in their firewall so that the server can connect to them. How can I resolve this with minimal re-architecting? (The current model works very well when the firewall can be configured correctly.)
I've considered the following:
A ROUTER/DEALER pattern? I'm fairly ignorant about this, and the documentation I found was sparse.
Some sort of transport bridging? The linked example provides an example for PUB/SUB.
I was hoping to get some advice from someone who knows more about ZeroMQ than me.
tl;dr: I implemented a message broker that communicates with clients via bidirectional push/pull sockets. Each client binds a PULL socket and the server keeps a map of PUSH sockets so that it can address specific subscribers. How do I deal with a firewall blocking the client ports?
You can use ROUTER/DEALER to do this, as you say. By default the ROUTER socket tracks every connection it has: it prepends the connection's identity information to each message it receives, and the caller prepends that same identity when sending a message back. This makes things like pub/sub fairly trivial, as all you need to do is handle a few messages server-side that the DEALER sockets send. In the past I have done something like:
1.) The server side is a ROUTER socket. The ROUTER handles two messages from DEALER sockets, SUB and UNSUB. These, alongside the identity info sent as the first frame, let the router know which messages a client is interested in.
2.) The server checks that mapping to see which clients should be sent a particular type of data, then forwards the message to the correct client by prepending the identity to the start of the outgoing message again.
This is nice in that it allows a single port to be exposed on the server. On the client side we do not need to expose any ports; we simply connect to the server's ROUTER socket.
See https://zguide.zeromq.org/docs/chapter3/ for more info.
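A minimal pyzmq sketch of that bookkeeping, assuming a simple SUB/UNSUB/publish frame layout (the port and message format are illustrative):

    import zmq

    ctx = zmq.Context()
    router = ctx.socket(zmq.ROUTER)
    router.bind("tcp://*:5555")             # the single exposed port; clients connect DEALERs to it

    subscribers = {}                        # topic -> set of client identities

    while True:
        identity, *frames = router.recv_multipart()   # ROUTER prepends the sender's identity
        if frames[0] == b"SUB":
            subscribers.setdefault(frames[1], set()).add(identity)
        elif frames[0] == b"UNSUB":
            subscribers.get(frames[1], set()).discard(identity)
        else:
            # Anything else is data to fan out; assume frames == [topic, payload]
            topic, payload = frames[0], frames[1]
            for client in subscribers.get(topic, ()):
                router.send_multipart([client, topic, payload])   # identity first, then the message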

socket connections closing when manually deploying

We made a chat module in our project using socket.io. With load balancing and manual deployment, if socket connections are switched to a different server they are disconnected, and some of the messaging events are not processed. I solved the load-balancing problem with the socket.io-redis library: it acts as a gateway and, thanks to Redis, solves that problem.
Another problem is that when I deploy manually, the PIDs of the servers change and the socket.io connections are instantly disconnected on the client; afterwards the client says it is connected even though it is not.
Do you think that using tools such as Travis CI would solve the problems of the manual deploy process?
Another question: if a system that scales out to 3 servers under load balancing then goes back to 2 servers, the socket connections will be closed again; what approach might solve this? I thought of separating the socket.io service from the monolithic structure, keeping it on a single server, and scaling that server vertically when the load increases.
We are using AWS Elastic Beanstalk, which performs load balancing automatically.
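For reference, the socket.io-redis approach mentioned above has a direct equivalent in python-socketio's Redis-backed message queue; a minimal sketch, assuming a local Redis instance (the URL and event name are illustrative):

    import socketio  # third-party: python-socketio

    # Redis-backed manager so that emits reach clients connected to other instances
    mgr = socketio.RedisManager("redis://localhost:6379/0")   # illustrative Redis URL
    sio = socketio.Server(client_manager=mgr)

    @sio.event
    def chat_message(sid, data):
        # Broadcast via Redis, so clients on every load-balanced instance receive it
        sio.emit("chat_message", data)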

What causes "Transport endpoint is not connected" in ZeroMQ?

I am working on a product which uses ZeroMQ (version 4.0.1).
The server and client communicate based on ZeroMQ ROUTER-socket.
To read socket events, the server and client also create socket-monitor sockets (PAIR). There are three ports on which the server binds and listens. Of these three ports, one is in non-secured mode; the other two use MD5 authentication.
The issue I am facing is that both the server and the client spontaneously receive a socket disconnect for one of the secure-port sockets (please see the log below). I have checked multiple times that the server and client have L3 reachability to each other.
What else should I check for?
What really triggers this error scenario?
zmq_print_callback:ZmQ: int zmq::stream_engine_t::read(void*, size_t):923
Stream engine recv():
TCP socket (187) to unknown:0 was disconnected
with error 107 [Transport endpoint is not connected]
The below sequence of events can trigger this error on the server:
Server receives ACCEPTED event for clientY and gets FD1.
Link-flap/network issue happens and clientY disconnects but server does not receive this disconnect.
Network recovers and clientY connects back to server.
Server receives an ACCEPTED event for clientY and gets FD2. However, packets sent to this socket do not go out of the server.
After a minute or so, clientY receives the "Transport endpoint is not connected" error for FD1.
The application can use this to treat it as a client disconnect.
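For reference, the PAIR monitor sockets mentioned above can be read with pyzmq like this (a minimal sketch; the port is illustrative):

    import zmq
    from zmq.utils.monitor import recv_monitor_message

    ctx = zmq.Context()
    server = ctx.socket(zmq.ROUTER)
    server.bind("tcp://*:6000")              # illustrative port

    monitor = server.get_monitor_socket()    # PAIR socket delivering lifecycle events

    while True:
        evt = recv_monitor_message(monitor)  # dict with 'event', 'value' (fd or errno), 'endpoint'
        if evt["event"] == zmq.EVENT_ACCEPTED:
            print("client accepted, fd:", evt["value"])
        elif evt["event"] == zmq.EVENT_DISCONNECTED:
            print("client disconnected, fd:", evt["value"])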

In Azure SignalR Service, what is a Concurrent Connection?

The pricing for Azure SignalR Service is based on Concurrent Connections.
However, I can't find the definition of a Concurrent connection.
I have an ASP.Net Core MVC Web Application. I understand that the server application connection to the Azure SignalR Service is one connection. Each client (browser) that connects to my web app is another connection. But are these considered concurrent connections? Or just open connections sitting there waiting for a message to be sent?
I'm hoping that the count of concurrent connections is a count of connections that are actively sending a message. Is that the case?
Ok so I figured out that a Concurrent connection is any connection to the SignalR Service!
I ran through the quickstart tutorial here
And then used the Azure Metric as I connected various clients to the chat room, and this is what I found:
After starting the app in debug mode, it seems the server immediately uses 5 connections. Then, as I open the URL in various browser tabs, a new client connection is established for each tab. As expected, the 16th browser tab does not establish a SignalR connection, because I am on the Free SignalR Service tier, which has a limit of 20 connections (the 5 server connections plus 15 client tabs already reach that limit).

Is there a way to wait for a listening socket on win32?

I have a server and client program on the same machine. The server is part of an application- it can start and stop arbitrarily. When the server is up, I want the client to connect to the server's listening socket. There are win32 functions to wait on file system changes (ReadDirectoryChangesW) and registry changes (RegNotifyChangeKeyValue)- is there anything similar for network changes? I'd rather not have the client constantly polling.
There is no such Win32 API, however this can be easily accomplished by using an event. The client would wait on that event to be signaled. The server would signal the event when it starts up.
The related APIs that you will need are CreateEvent, OpenEvent, SetEvent, ResetEvent and WaitForSingleObject.
If your server will run as a service, then for Vista and up it will run in session 0 isolation. That means you will need to use an event with a name prefixed with "Global\".
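A rough sketch of that named-event handshake using the pywin32 bindings (the event name is an arbitrary example; the two halves would live in the server and client processes respectively):

    import win32con
    import win32event   # third-party: pywin32

    EVENT_NAME = r"Global\MyServerReady"    # "Global\" prefix needed if the server is a session-0 service

    # Server process: create the event, then signal it once the listening socket is up.
    server_event = win32event.CreateEvent(None, True, False, EVENT_NAME)  # manual-reset, initially unsignaled
    # ... bind()/listen() here ...
    win32event.SetEvent(server_event)

    # Client process: block until the server signals the event, then connect.
    # (If the client can start before the server, call CreateEvent with the same name instead.)
    client_event = win32event.OpenEvent(win32con.SYNCHRONIZE, False, EVENT_NAME)
    win32event.WaitForSingleObject(client_event, win32event.INFINITE)
    # ... now connect() to the server ...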
You probably do have a good reason for needing this, but before you implement it please consider:
Is there some reason you need to connect right away? I see this as a non-issue because if you perform an action in the client, you can make a new server connection at that point.
Is the server starting and stopping more frequently than the client? You could switch the roles of who listens and who connects.
Consider using some form of Windows synchronization, such as a semaphore. The client can wait on the synchronization primitive and the server can signal it when it starts up.
Personally, I'd use a UDP broadcast from the server and have the "client" listen for it. The server could broadcast a UDP packet every X seconds while running, and when the client gets one, if it's not already connected, it connects.
This has the advantage that you can move the client onto a different machine without any issues (and since the main connection from client to server already uses sockets, it would be a pity to tie the client and server to the same machine simply because you selected a local IPC method for the initial bootstrap).
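A minimal Python sketch of that UDP beacon idea (the port number and payload format are arbitrary choices for illustration):

    import socket
    import time

    BEACON_PORT = 50000                      # arbitrary beacon port

    def server_beacon():
        # Server side: announce ourselves every few seconds while running.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        while True:
            s.sendto(b"server-here:5555", ("255.255.255.255", BEACON_PORT))
            time.sleep(5)

    def client_wait_for_beacon():
        # Client side: block until a beacon arrives, then return the server's address.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", BEACON_PORT))
        data, (host, _) = s.recvfrom(1024)
        return host, int(data.split(b":")[1])   # advertised TCP port from the payload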