Detect load balancer/server changes in WebSocket connection from client

Current configuration
I have a client application connected to a Cloud server through a WebSocket (node.js) connection. I need the WebSocket to get real-time notifications of incoming messages.
Let's use abc.example.com as the domain name of the Cloud server for this example.
Cloud server configuration
The Cloud is powered by Amazon Elastic Load Balancer.
This Cloud server has the following underlying architecture:
[Cloud architecture diagram]
On a Cloud update, traffic switches to another load balancer, so all new data posted to the Cloud is handled by a new load balancer and server.
So abc.example.com is always accessible even if the load balancer/server changes (e.g. for an HTTP call).
WebSocket configuration
The WebSocket client connects to abc.example.com, which routes it to a particular server, and it stays connected to that one server until something closes the connection.
Problems
When connected, the WebSocket connection stays open to one server on the Cloud and doesn't detect when the load balancer switches to another one (e.g. on Cloud updates).
So if I send new data to the server for my client (e.g. a new message), the connected client doesn't receive it through the WebSocket connection.
An HTTP GET request, however, does work, because it resolves to the right server.
From my understanding, this is normal behavior, since the server the client is connected to over WebSocket is still up and never closed the connection; nothing actually went wrong.
In fact, I tried switching the load balancer, and the initial server the client is connected to still sends a pong response when pinged periodically (see the keep-alive sketch below).
So is there any way to detect, from the client side, when the load balancer has switched? I'm not allowed to modify the Cloud configuration, but I can suggest changes if there is a fairly easy solution.
Bottom line: I don't want to miss any notifications when the Cloud updates.
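For reference, here is roughly the keep-alive I am running on the client (a minimal sketch using the node.js ws package; the URL and 30-second interval are placeholders). The old server keeps answering these pings even after the switch, which is exactly why it detects nothing:
// Minimal keep-alive sketch (node.js "ws" package); values are placeholders.
import WebSocket from "ws";

const ws = new WebSocket("wss://abc.example.com");
let alive = true;

ws.on("pong", () => { alive = true; });
ws.on("open", () => {
  const timer = setInterval(() => {
    if (!alive) {
      // No pong since the last ping: assume the connection is dead.
      clearInterval(timer);
      ws.terminate(); // this is where a reconnect would be triggered
      return;
    }
    alive = false;
    ws.ping(); // the old server still answers this after the LB switch
  }, 30000);
  ws.on("close", () => clearInterval(timer));
});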
Other observations:
At t0:
Client app is connected to server1 through WebSocket thanks to ELB1
Reception succeeds through WebSocket when sending new messages to the Cloud
At t1:
Cloud update: Switch from ELB1 to ELB2
Fails to receive new messages through WebSocket
At t2:
Cloud update: Switch from ELB2 to ELB1
Reception succeeds through WebSocket when sending new messages to the Cloud
Any suggestions/help are appreciated.
*This answer helped me understand the network structure, but I'm still running out of ideas.
*Apologies if the terminology is not entirely appropriate.

Did you consider using a Pub/Sub server/database such as Redis?
This will augment the architecture in a way that allows Websocket connections to be totally independent from HTTP connections, so events on one server can be pushed to a websocket connection on a different server.
This is a very common network design for horizontal scaling and should be easy enough to implement using Redis or MongoDB.
Another approach (which I find less effective, but which could offer scaling advantages for specific databases and designs) would be for each server to "poll" the database (or "subscribe" to database changes), allowing each server to emulate a pub/sub subscription and push data to its connected clients; a rough sketch follows.
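For illustration, here is a rough sketch of that polling approach in node.js/TypeScript. The ws and pg packages, the events table, and the 1-second interval are assumptions for the example, not part of the original answer:
// Each server polls the database for new events and pushes them to its own
// connected WebSocket clients, emulating a pub/sub subscription.
import { Pool } from "pg";
import WebSocket, { WebSocketServer } from "ws";

const pool = new Pool();                         // settings from PG* env vars
const wss = new WebSocketServer({ port: 3000 });
let lastSeenId = 0;                              // highest event id pushed so far

setInterval(async () => {
  const { rows } = await pool.query(
    "SELECT id, payload FROM events WHERE id > $1 ORDER BY id",
    [lastSeenId]
  );
  for (const row of rows) {
    lastSeenId = row.id;
    // Push the event to every client connected to *this* server.
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(row.payload);
    }
  }
}, 1000);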
A third approach, which is by far the most complicated, is to implement a "gossip" protocol and an internal pub/sub service.
As you can see, all three approaches have one thing in common - they never assume that HTTP connections and Websocket connections are routed to the same server.
EDIT (a quick example using Redis):
Using Ruby and the iodine HTTP/Websocket server, here's a quick example for an application that uses the first approach to push events to clients (a common Redis Database for Pub/Sub).
Notice that it doesn't matter which server originates an event; the event is pushed to the waiting client.
The application is quite simple and uses a single event "family" (a pub/sub channel called "chat"), but it's easy to filter events using multiple channels, such as a channel per user or a channel per resource (e.g. a blog post).
It's also possible for clients to listen for multiple event types (subscribe to multiple channels) or use glob matching to subscribe to all (existing and future) matching channels.
Save the following to config.ru:
require 'uri'
require 'iodine'
# Initialize the Redis engine for each Iodine process.
if ENV["REDIS_URL"]
  uri = URI(ENV["REDIS_URL"])
  Iodine.default_pubsub = Iodine::PubSub::RedisEngine.new(uri.host, uri.port, 0, uri.password)
else
  puts "* No Redis, it's okay, pub/sub will support the process cluster."
end

# A simple router - checks for a Websocket upgrade and answers HTTP.
module MyHTTPRouter
  # This is the HTTP response object according to the Rack specification.
  HTTP_RESPONSE = [200, { 'Content-Type' => 'text/html',
                          'Content-Length' => '32' },
                   ['Please connect using websockets.']]
  WS_RESPONSE = [0, {}, []]
  # This method will be called by the Rack server (iodine) for every request.
  def self.call env
    # Check if this is an upgrade request.
    if env['upgrade.websocket?'.freeze]
      env['upgrade.websocket'.freeze] = WS_RedisPubSub.new(env['PATH_INFO'] && env['PATH_INFO'].length > 1 ? env['PATH_INFO'][1..-1] : "guest")
      return WS_RESPONSE
    end
    # Simply return the HTTP response object, no matter what request was received.
    HTTP_RESPONSE
  end
end

# A simple Websocket callback object.
class WS_RedisPubSub
  def initialize name
    @name = name
  end
  # Subscribe and greet new clients.
  def on_open
    subscribe channel: "chat"
    # Let everyone know we arrived.
    publish channel: "chat", message: "#{@name} entered the chat."
  end
  # Send a message letting the client know the server is shutting down.
  def on_shutdown
    write "Server shutting down. Goodbye."
  end
  # Echo incoming messages through the pub/sub channel.
  def on_message data
    publish channel: "chat", message: "#{@name}: #{data}"
  end
  def on_close
    # Let everyone know we left.
    publish channel: "chat", message: "#{@name} left the chat."
    # No need to unsubscribe - subscriptions are cleared automatically once the connection closes.
  end
end

# This call links our application with Rack.
run MyHTTPRouter
Make sure you have the iodine gem installed (gem install iodine).
Make sure you have a Redis database server running (mine is running on localhost in this example).
From the terminal, run two instances of the iodine server on two different ports (use two terminal windows, or append & to daemonize each process):
$ REDIS_URL=redis://localhost:6379/ iodine -t 1 -p 3000 config.ru
$ REDIS_URL=redis://localhost:6379/ iodine -t 1 -p 3030 config.ru
In this example, I'm running two separate server processes, using ports 3000 and 3030.
Connect to the two ports from two browser windows. For example (a quick javascript client):
// run 1st client app on port 3000.
ws = new WebSocket("ws://localhost:3000/Mitchel");
ws.onmessage = function(e) { console.log(e.data); };
ws.onclose = function(e) { console.log("closed"); };
ws.onopen = function(e) { e.target.send("Yo!"); };
// run 2nd client app on port 3030 and a different browser tab.
ws = new WebSocket("ws://localhost:3000/Jane");
ws.onmessage = function(e) { console.log(e.data); };
ws.onclose = function(e) { console.log("closed"); };
ws.onopen = function(e) { e.target.send("Yo!"); };
Notice that events are pushed to both websockets, without any concern as to the event's origin.
If we don't define the REDIS_URL environment variable, the application won't use the Redis database (it will use iodine's internal engine instead) and the scope for any events will be limited to a single server (a single port).
You can also shut down the Redis database and notice how events are suspended / delayed until the Redis server restarts (some events might be lost in these instances while the different servers reconnect, but I guess network failure handling is something we have to decide on one way or another)...
Please note, I'm iodine's author, but this architectural approach isn't Ruby or iodine specific - it's quite a common approach to solve the issue of horizontal scaling.

Related

ZeroMQ broadcast to specific PULL client across firewall

I'm building a message broker which communicates with clients over ZeroMQ PUSH/PULL sockets and has the ability to exclude clients from messages they're not subscribed to from the server side (unlike ZeroMQ pub/sub which excludes messages on the client side).
Currently, I implement it in the following way:
Server: Binds ZeroMQ PULL socket on a fixed port
Client: Binds a ZeroMQ PULL socket on a random or fixed port
Client: Connects to the server's PULL socket and sends a handshake message containing the new client's address and port.
Server: Receives handshake from client and connects a PUSH socket to the client's PULL server. Sends a handshake response to the client's socket.
Client: Receives handshake. Connected!
Now the client and server can communicate bidirectionally and the server can send messages to only a certain subset of clients. It works great!
However, this model doesn't work if the clients binding PULL sockets are unable to open a port in their firewall so that the server can connect to them. How can I resolve this with minimal re-architecting (the current model works very well when the firewall can be configured correctly)?
I've considered the following:
Router/dealer pattern? I'm fairly ignorant on this and documentation I found was sparse.
Some sort of transport bridging? The linked example provides an example for PUB/SUB.
I was hoping to get some advice from someone who knows more about ZeroMQ than me.
tl;dr: I implemented a message broker that communicates with clients via bidirectional push/pull sockets. Each client binds a PULL socket and the server keeps a map of PUSH sockets so that it can address specific subscribers. How do I deal with a firewall blocking the client ports?
You can use router/dealer to do this, as you say. By default, the ROUTER socket tracks every connection it has. It does this by prepending the connection's identity information to each message it receives. This makes things like pub/sub fairly trivial, as all you need to do is handle a few messages server-side that the DEALER sockets send. In the past I have done something like this:
1.) The server side is a ROUTER socket. The ROUTER handles two messages from DEALER sockets: SUB and UNSUB. This, alongside the identity info sent as the first frame, lets the router know which messages a client is interested in.
2.) The server checks the mapping to see which clients should be sent a particular type of data, then forwards the message to the correct client by prepending the identity to the start of the message again.
This is nice in that it allows a single port to be exposed on the server. Client-side, we do not need to expose ports; we simply connect to the server's ROUTER socket.
See https://zguide.zeromq.org/docs/chapter3/ for more info.
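For illustration, here is a minimal sketch of that ROUTER-side bookkeeping in TypeScript, using the zeromq npm package (v6 API). The SUB/UNSUB/PUB frame layout is an assumption made up for this example, not a fixed protocol:
// ROUTER prepends the sender's identity frame to every incoming message;
// we keep a topic -> identities map and re-prepend the identity to route
// messages back to specific subscribers.
import { Router } from "zeromq";

async function main() {
  const router = new Router();
  await router.bind("tcp://*:5555");
  const subs = new Map<string, Set<string>>(); // topic -> client identities

  for await (const [identity, command, topic, payload] of router) {
    const id = identity.toString("hex");
    const top = topic ? topic.toString() : "";
    switch (command.toString()) {
      case "SUB":
        if (!subs.has(top)) subs.set(top, new Set());
        subs.get(top)!.add(id);
        break;
      case "UNSUB":
        subs.get(top)?.delete(id);
        break;
      case "PUB":
        // Forward only to subscribed clients, re-prepending each identity.
        for (const sub of subs.get(top) ?? []) {
          await router.send([Buffer.from(sub, "hex"), top, payload]);
        }
        break;
    }
  }
}

main();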

How to load balance a connected socket?

I need some help with my problem: I am working in a Windows Server 2019 environment.
I have a solution in place that requires a connected protocol between client and server: a socket is opened over TCP/IP, the client starts a session, and then there is a dialog between the client and the server; the dialog is always initiated by the client. The client also acts as a server, because it exposes a service over the internet that requires opening and closing the socket during a PIN-verification session. Here is a current logical view of my system in place:
As there is a need for a connected socket between my client and my BE service, I am forced to have affinity in place. I am searching for a way to make ANY of my two clients able to issue commands to my BE service and receive responses, as if they were connected via a persistent socket. I also need a solution that does not introduce a SPOF: for example, I was thinking of using HAProxy, but if there is a problem with it I could lose all my BE services. The question is: is there any way to put a mediator in place between my clients and my BE services, so that any client can emit commands and receive responses on an existing open session, while also preserving high availability of the system?
The final solution would be something like this:

Why is redis or other required for socket.io

I'm currently using Heroku autoscaling for my servers, and I need to set up a scalable app using socket.io to allow instant updates of data (bear in mind that it's only for updating the frontend displays; no data is processed).
The way I was going to set it up was as follows:
In the image, all the servers have a socket connection to the "main" socket.io server and to the user.
A user would do an action through an API; the server would do its thing (save to MongoDB, compute, etc.) and pass it to a "main" socket.io server through its socket.io connection, but would not send anything back to the user. The "main" server would receive the request through the socket.io connection and emit it back to the servers, which would then emit it to their users.
So the flow would be: User > Server > Main socket.io server > Server > User
My questions are:
Would this work?
Why do all the docs refer to a database like Redis?

How to implement bidirectional channel using camel netty4

Here is my use case:
I have two endpoints: one with MQ and the second with TCP/IP
I have to replace a legacy server which accepts queries from remote TCP/IP clients. Once the socket is open with a client, data is exchanged in both directions: the server asynchronously sends MQ data through TCP/IP and also receives data from clients asynchronously. Each data message sent has to be acknowledged. The constraint here is that I have to use the same socket.
I created two routes
from("netty4:tcp://ipAddress:port?sync=true").to("wmq:queue:toQueue")
from("wmq:queue:fromQueue").to("netty4:tcp://ipAddress:port?sync=true")
I start the first route to receive session-open requests from clients, and then I start the second route to begin sending data, but I cannot use the same channel.
I tried to get the remote port from the first route and use it in the second route, but I get a ConnectException, because netty4 tries to open a new socket where one is already open.
I found that netty4 can be used asynchronously with the AsyncProcessor, but I didn't find any example dealing with my use case.
The only idea I have found is to create a standalone server which opens the sockets with the clients and have it communicate with the two endpoints.
Is there any way to implement this situation using Camel only?
Any help on this subject is really appreciated.
Your code won't be able to run as-is for your use case. I also suspect you are trying to use Camel as an IP server framework rather than an integration framework in this case.
Let's review Apache Camel's concept of producers and consumers. In the integration world, we talk about clients and servers as consumers and producers. This might seem like a mere difference in terminology until you realise that a consumer (typically a client) can also be a producer (a server).
Some helpful definitions:
1. Producer: A producer is an entity capable of creating and sending a message to an endpoint. A typical example would be code like .to("file:data/outbox") as this produces a file.
2. Consumer: A consumer is an entity that receives messages produced by a producer; it wraps these messages in an exchange and sends them on to be processed. A typical example would be code like from("jms:topic:xmlOrders").
A rule of thumb is that typically consumers are the source of the messages being routed.
BIG NOTE:
These two definitions are not set in stone: a producer can also be an endpoint used in a from clause, and a consumer can be an endpoint used in a to clause.
So in your case let's break up the route:
from("netty4:tcp://ipAddress:port?sync=true").to("wmq:queue:toQueue")
In this route you are creating a Netty server that sends a message to a queue. Here your netty endpoint acts as a consumer (yes, it is in the from clause): this creates a Netty4 server at the IP address and port you specified. It then sends a message to another consumer, the MQ client, which acts as a consumer again. So, two consumers? Where is the producer? The client connecting to the netty server acts as the producer.
Let's look at the second piece of the route:
from("wmq:queue:fromQueue").to("netty4:tcp://ipAddress:port?sync=true")
Here you are creating a client/consumer for the MQ service and then creating a client/producer to the netty server. Essentially you are creating a NEW client here that connects to the SERVER you created in the first route.
So, in short, your route creates a Netty server that sends a message to MQ, then creates an MQ client that sends a message to a Netty client, which connects to the server you created. It won't work like this.
Go read about message exchange patterns for further background, but I would suggest that if you are just using Netty and MQ, then Camel may be overkill here, as it is an integration platform and not an IP server platform.

How to deploy a WebSocket server?

When deploying a web application running on a traditional web server, you usually restart the web server after the code updates. Due to the nature of HTTP, this is not a problem for the users: on the next request they will get the latest updates.
But what about a WebSocket server? If I restart or kill the old process all connected users will get disconnected. So my question is, what kind of strategy have you used to deploy a WebSocket server smoothly?
You're right, every connected user will be disconnected if the server restarts.
I think the least bad solution is to tell the client to reconnect in its onclose handler.
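For example, a minimal reconnect-on-close wrapper (a sketch; the URL and backoff values are placeholders):
// Reopen the socket in onclose with a capped exponential backoff.
function connect(url: string, attempt = 0): void {
  const ws = new WebSocket(url);
  ws.onopen = () => { attempt = 0; };             // reset the backoff once connected
  ws.onmessage = (e) => { console.log(e.data); };
  ws.onclose = () => {
    const delay = Math.min(30000, 1000 * 2 ** attempt);
    setTimeout(() => connect(url, attempt + 1), delay);
  };
}
connect("wss://example.com/socket");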
WebSockets is just a transport mechanism. Libraries like socket.io exist to build on that transport and provide heartbeats, browser fallbacks, graceful reconnects, and handling of other edge cases found in real-time applications.
In our WebSocket-enabled application, socket.io is central to ensuring our continuous deployment setup doesn't break users' active socket connections.
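For example, a minimal socket.io client (a sketch; the URL is a placeholder) where the library handles reconnection and heartbeats for you:
import { io } from "socket.io-client";

// Reconnection is on by default; these options just tune the backoff.
const socket = io("https://example.com", {
  reconnection: true,
  reconnectionDelay: 1000,     // first retry after ~1s
  reconnectionDelayMax: 10000, // cap the backoff at 10s
});
socket.on("connect", () => console.log("connected"));
socket.io.on("reconnect", (attempt: number) => console.log("reconnected after", attempt, "attempts"));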
If clients are connected directly to the server that does all the socket networking and application logic, then yes, they will be disconnected, because of the TCP layer that holds the connection.
If you have a gateway that clients connect to, and that gateway application runs on another server but communicates with and forwards messages to the logical server, then the logical server sends responses back and the gateway relays them to the clients. With such an infrastructure, you have to implement queueing of packets on the gateway until it re-establishes the connection with the logical server (sketched below). The logical server might notify the gateway server before a restart. That way the client keeps its connection; it just won't receive any responses for a while.
Or you can implement reconnection on the client side.
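A rough sketch of that gateway-side buffering with the node.js ws package (the backend URL, ports, and retry delay are illustrative assumptions):
// Clients stay connected to the gateway; while the logical server is down,
// their messages are queued and flushed once the backend reconnects.
import WebSocket, { WebSocketServer } from "ws";

const pending: string[] = [];     // messages waiting for the backend
let backend: WebSocket | null = null;

function connectBackend() {
  backend = new WebSocket("ws://logical-server:4000");
  backend.on("open", () => {
    while (pending.length) backend!.send(pending.shift()!); // flush the queue
  });
  backend.on("close", () => setTimeout(connectBackend, 1000)); // retry later
}
connectBackend();

const wss = new WebSocketServer({ port: 3000 });
wss.on("connection", (client) => {
  client.on("message", (data) => {
    if (backend && backend.readyState === WebSocket.OPEN) {
      backend.send(data.toString());
    } else {
      pending.push(data.toString()); // stack packets until the backend returns
    }
  });
});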
With HTTP, every time you navigate away, the browser actually creates a socket connection to the server, transmits all the data, and closes it (in most cases). All website data is then local until you navigate away.
With WebSockets, it is one continuous connection, and there is no per-request reconnection. That's why you have to implement simple mechanics on the client side: when the WebSocket gets a close event, try to reconnect periodically.
It really comes down to your specific needs.