WebSocket communication between clients in a distributed system

I'm trying to build an instant messaging app. Clients will not only send text messages but will also often send audio clips. I've decided to use a WebSocket connection to communicate with clients: it is fast and allows sending binary data.
The main idea is to receive a message from client1 and notify client2 about it. But here's the thing: my app will be running on GAE. What if client1's socket is open on server1 and client2's is open on server2? These servers don't know about each other's clients.
I have one idea how to solve it, but I am sure it is a shitty way: use some sort of communication between the servers (for example JMS, or opening another WebSocket connection between servers; it doesn't matter right now).
But it will surely lead to a disaster. I can't even imagine how often those servers would talk to each other: for each message server1 should notify server2, and server2 should notify client2. Things become even worse when serverN comes into play.
Another way I see this working is Firebase. But it restricts message size to 4KB, so I can't send audio through it. As a workaround I can notify the client about a new audio clip and have it fetch the file from my server.
Hope I explained the problem clearly. Does anyone know how to solve it? Or maybe there are other ways to build such apps?

If you are building a messaging cluster and expect communicating clients to connect to different instances of the server, then server-to-server communication is inevitable. Usually it's not a problem, though.
First, if you don't use any load balancing, your clients will connect to the same server 50% of the time on average (in the case of 2 servers).
Second, intra-datacenter links are fast and effectively free in all the major public clouds.
Third, you can often do something smart on the front end to make sure two clients that are likely to communicate connect to the same server. For instance, direct all clients from the same country to the same server using DNS load balancing.
The second part of the question is about passing large media files. It's a common best practice to send them out of band: store the file on the server and only pass a reference to it. As someone suggested in the comments, save the audio on the server and just send a message like "audio is available, fetch it from here ...". You don't need to poll the server for that; just fetch it once, when the receiving client requests it.
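For illustration, a minimal sketch of that notification flow in Python. The endpoint name, JSON fields and the in-memory store are assumptions, just to show the shape of "store the blob, push only a reference":

    import json
    import uuid

    # Hypothetical in-memory store standing in for real blob storage (e.g. GCS on GAE).
    AUDIO_STORE = {}

    def handle_incoming_audio(sender_id, recipient_id, audio_bytes, notify):
        """Store the audio out of band and push only a small reference message."""
        audio_id = str(uuid.uuid4())
        AUDIO_STORE[audio_id] = audio_bytes          # persist the blob once
        notification = json.dumps({
            "type": "audio_available",
            "from": sender_id,
            "url": f"/media/{audio_id}",             # assumed download endpoint
            "size": len(audio_bytes),
        })
        notify(recipient_id, notification)           # tiny text frame over the WebSocket

    def handle_media_download(audio_id):
        """Called when the receiving client follows the URL from the notification."""
        return AUDIO_STORE.get(audio_id)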
In general, it seems like you are trying to reinvent the wheel. Just use something off the shelf.
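If you do end up needing the server-to-server hop, an off-the-shelf broker keeps it simple. A rough Python sketch with Redis pub/sub; the channel naming and the per-client send callback are assumptions, not part of any particular product:

    import redis

    r = redis.Redis()  # assumes a Redis instance reachable by every app server

    def publish_to_client(recipient_id, payload):
        """Any server can call this; whichever server holds the socket delivers it."""
        r.publish(f"client:{recipient_id}", payload)

    def run_delivery_loop(local_clients):
        """Each server subscribes only to channels of the clients connected to it.
        local_clients maps client_id -> a send() callable for that client's WebSocket."""
        pubsub = r.pubsub()
        pubsub.subscribe(*[f"client:{cid}" for cid in local_clients])
        for event in pubsub.listen():
            if event["type"] != "message":
                continue
            client_id = event["channel"].decode().split(":", 1)[1]
            send = local_clients.get(client_id)
            if send is not None:
                send(event["data"])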

Let clients connect to any of multiple servers, and have each server keep track of which clients are connected to it.
A centralized system like ZooKeeper stores the details of the active servers.
When a client c1 sends a message to client c2:
the message is received by a server (say s1; we can add a load balancer to distribute incoming requests)
s1 broadcasts a query to all other servers to find out which server client c2 is connected to. A better approach is to use consistent hashing, which determines up front which server a client connects to, so no broadcast is required (see the sketch after this answer)
the corresponding server (say s2) responds to s1
now s1 sends the message m to s2, and s2 delivers it to client c2
Cons of the above approach:
Each server holds a connection to the other n-1 servers, creating a mesh topology
The centralized system (ZooKeeper) becomes a single point of failure (which is solvable)
Apps like WhatsApp and G-Talk use XMPP over TCP/IP.
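A minimal Python sketch of the consistent-hashing idea from the steps above: every server can compute which server owns a given client ID, so no broadcast or lookup round-trip is needed. The hash ring with virtual nodes is one common implementation choice, not something mandated by ZooKeeper or any particular product:

    import bisect
    import hashlib

    class ConsistentHashRing:
        """Maps client ids to servers; only ~1/n of clients move when a server changes."""

        def __init__(self, servers, vnodes=100):
            self._ring = []                      # sorted list of (hash, server)
            for server in servers:
                for i in range(vnodes):          # virtual nodes smooth the distribution
                    self._ring.append((self._hash(f"{server}#{i}"), server))
            self._ring.sort()
            self._keys = [h for h, _ in self._ring]

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def server_for(self, client_id):
            h = self._hash(client_id)
            idx = bisect.bisect(self._keys, h) % len(self._ring)
            return self._ring[idx][1]

    # s1 can compute where c2 lives without asking anyone;
    # the server list itself can be kept in ZooKeeper.
    ring = ConsistentHashRing(["s1", "s2", "s3"])
    print(ring.server_for("c2"))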

Related

Is there an out-of-the-box solution for client-to-client communication using the QuickFIX library?

I'm trying to build an entirely self-contained trading simulator using QuickFIX/J. The system ought to consist of 2 client applications (a market/exchange and a broker) as well as a router (server/acceptor). In particular I'd like to know:
Client-to-client communication
How the two clients can communicate with each other while the server handles all the messaging logic, i.e. messages should go through the server and it should decide where and how to forward them. I ought to be able to pass a target ID in the FIX message, and the server app should handle routing to the desired client.
Multiple clients on the same port
Have multiple clients connected on the same port, but messages should only go to a particular SenderCompID, i.e. clients should not be privy to communication from other clients.
I've already set up the acceptor and 2 clients. I know I could do this programmatically using plain old Java, but I'd like to leverage the QuickFIX library and would like a relatively out-of-the-box solution.
MVP: a client (broker) sends a FIX message through the acceptor (router), the message is processed and forwarded to a particular market, the market receives the message through the server and does some business logic, then the market sends a FIX message back to the client through the acceptor.
PS: I like the QuickFIX library, but I'm very flexible if there are any other libraries/languages you'd recommend.
Short answer: QuickFIX/J (and, as far as I can tell, QuickFIX and QuickFIX/n as well) will not route messages based on tags. This has to be implemented in your application code.
Edit: with regard to your second point: there is no problem having your FIX server listen for multiple FIX connections on the same port (this applies to QuickFIX/J and I guess also to the other language variants). Sessions are addressed via the SessionID, so it is ensured that only the correct FIX session gets its messages.
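As a rough illustration of implementing that routing yourself, here is a sketch using the QuickFIX Python bindings (the same idea applies to QuickFIX/J). It assumes the sending client names the destination in DeliverToCompID (tag 128); which tag you use is a convention you pick, not something the library dictates:

    import quickfix as fix

    class RoutingAcceptor(fix.Application):
        """Acceptor-side application that forwards messages between client sessions."""

        def __init__(self):
            super().__init__()
            self.sessions = {}          # counterparty CompID -> SessionID

        def onCreate(self, sessionID): pass
        def onLogon(self, sessionID):
            # remember which session belongs to which counterparty CompID
            self.sessions[sessionID.getTargetCompID().getValue()] = sessionID
        def onLogout(self, sessionID):
            self.sessions.pop(sessionID.getTargetCompID().getValue(), None)
        def toAdmin(self, message, sessionID): pass
        def fromAdmin(self, message, sessionID): pass
        def toApp(self, message, sessionID): pass

        def fromApp(self, message, sessionID):
            # routing convention assumed here: DeliverToCompID (128) names the target client
            target = fix.DeliverToCompID()
            message.getHeader().getField(target)
            dest = self.sessions.get(target.getValue())
            if dest is not None:
                fix.Session.sendToTarget(message, dest)   # forward to the other client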

ZeroMQ mixed PUB/SUB DEALER/ROUTER pattern

I need to do the following:
multiple clients connecting to the SAME remote port
each of the clients opens 2 different sockets: one is PUB/SUB, the other is ROUTER/DEALER (the server can occasionally send heartbeats and other server-related information back to the client).
I am completely lost as to whether this can be done in ZeroMQ or not.
Obviously if I can use 2 remote ports that is not an issue, but I fail to understand whether my setup can be achieved with some kind of envelope usage in ZeroMQ.
Can it be done?
Thanks,
Update:
To clarify what I wish to achieve.
Multiple clients can communicate with the server
Clients operate mostly on a request-response basis (on one socket)
Clients create a session socket, which means that whenever this type of socket is created, a separate worker thread needs to be created, and from that point on the client communicates with this worker thread for its request processing; i.e. the server thread must not block the connections of other clients while dealing with one client's request
However, clients can receive occasional messages from the worker thread regarding the worker's heartbeats.
Update2:
Actually I managed to sort it out. What I did:
identify clients, obviously, so ROUTER/DEALER is used; i.e. the clients are indeed dealers, hence async processing is provided
clients send messages to the one and only local port, where the router sits
the router peeks into messages (kind of like the Lazy Pirate example) and checks whether a new client has come in; if so, it offloads the client to a separate thread and connects that thread via an internal "inproc://" socket
the router polls the frontend and all connected clients' backends and shuttles messages back and forth.
What bugs me is that this is overkill compared with a "regular" socket solution, where I could have connected the client to the worker thread DIRECTLY (i.e. the worker thread could recv from the socket opened by the client directly), and so I could spare the routing completely.
What am I missing?
There was a discussion on the ZeroMQ mailing list recently about multiplexing multiple services on one TCP socket. The proposed solution is essentially what you implemented.
The discussion also mentions Malamute, with its brokers, which essentially provides a framework on top of ZeroMQ that offers the functionality you need. I haven't had the time to look into it myself, but it looks promising.
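For reference, a condensed pyzmq sketch of that general shape: a single ROUTER socket on the one public port, with the actual work done by threads behind an inproc DEALER. It uses a fixed worker pool rather than one thread per client, which is a simplification of what the question describes; the port and endpoint names are made up:

    import threading
    import zmq

    NUM_WORKERS = 4
    ctx = zmq.Context.instance()

    def worker():
        """Workers handle requests one at a time; heavy work never blocks the router."""
        sock = ctx.socket(zmq.REP)
        sock.connect("inproc://backend")
        while True:
            request = sock.recv()
            # ... per-client work happens here ...
            sock.send(b"reply to " + request)

    def main():
        frontend = ctx.socket(zmq.ROUTER)      # all clients connect to this one port
        frontend.bind("tcp://*:5555")
        backend = ctx.socket(zmq.DEALER)
        backend.bind("inproc://backend")
        for _ in range(NUM_WORKERS):
            threading.Thread(target=worker, daemon=True).start()
        zmq.proxy(frontend, backend)           # shuttles envelopes both ways

    if __name__ == "__main__":
        main()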

Multiple service connections vs internal routing in MMO

The server consists of several services with which a user interacts: profiles, game logic, physics.
I heard that it's a bad practice to have multiple client connections to the same server.
I'm not sure whether I will use UDP or TCP.
The services are realtime; they should reply as fast as possible, so I don't want to introduce any additional rerouting unless there are really important reasons for it. So are there any reasons to route traffic through one external endpoint service to specific internal services in my case?
This seems to be multiple questions in one package. I will try to answer the ones I can identify as separate...
UDP vs TCP: You're saying "real-time", which usually means UDP is the right choice. However, that means having to deal with lost packets and possible re-ordering of packets. On the other hand, using UDP leaves a couple of possible delay-decreasing tricks open.
Multiple connections from a single client to a single server: This consumes resources (end-points, as it were) on both the client (probably ignorable) and on the server (possibly a problem, possibly ignorable). The advantage of using separate connections for separate concerns (profiles, physics, ...) is that when you need to separate these onto separate servers (or server farms), you don't need to update the clients, they just need to connect to other end-points, using code that's already tested.
"Re-router" (or "load balancer") needed: Probably not going to be an issue initially. However, it will probably become an issue later. Depending on your overall design and server OS, using UDP may actually become an asset here. UDP packet arrives at the load balancer, dispatched to the right backend and that could then in theory send back a reply with the source IP of the load balancer.
An alternative would be to have a "session broker". The client makes an initial connection to a well-known endpoint, says "I am a client, tell me where my profile, physics, what-have0-you servers are", the broker considers the current load, possibly the location of the client and other things that may make sense and the client then connects to the relevant backends on its own. The downside of this is that it's harder (not impossible, but harder) to silently migrate an ongoing session to a new backend, when there's a load-balancer in the way, this can be done essentially-transparently.
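To make the "session broker" idea concrete, a small Python sketch of the handshake. The service names, endpoint table and least-loaded selection policy are all assumptions:

    import json
    import socket

    # Broker's view of the world: which backends exist and how loaded they are.
    BACKENDS = {
        "profiles": [("10.0.0.11", 7001), ("10.0.0.12", 7001)],
        "physics":  [("10.0.0.21", 7101), ("10.0.0.22", 7101)],
    }
    LOAD = {ep: 0 for endpoints in BACKENDS.values() for ep in endpoints}

    def assign_backends(client_addr):
        """Pick the least-loaded backend per service; a real broker might also
        consider client location, session stickiness, etc."""
        assignment = {}
        for service, endpoints in BACKENDS.items():
            host, port = min(endpoints, key=lambda ep: LOAD[ep])
            LOAD[(host, port)] += 1
            assignment[service] = {"host": host, "port": port}
        return json.dumps(assignment).encode()

    def serve(well_known_port=7000):
        """Well-known endpoint: client connects, gets its backend list, disconnects."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", well_known_port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            conn.sendall(assign_backends(addr))
            conn.close()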

Topics in ZeroMQ REP sockets

When using a ØMQ socket of type SUB, one may use
sub_socket.setsockopt_string(zmq.SUBSCRIBE, 'topic')
Is the same possible also with REP sockets, allowing a worker to only handle specific topics, leaving other topics to different workers?
I'm afraid that it is impossible; quoting http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/patterns/pubsub.html:
In the current versions of ØMQ, filtering happens at the subscriber side, not the publisher side.
But still, I'm asking if there is some trick to achieve that, because such a functionality would have a huge impact on my infrastructure.
Nope. Can I assume that you've got a REQ or DEALER server socket that sends work to REP workers, which then respond with the completed work back to the server? And that you're looking for a way to make your server communicate with specific clients rather than just pass out tasks in round-robin fashion?
Can't do it. See here: those sockets are only, always, round-robin. If you want to communicate with a specific client, you must either have a socket that talks only to that client, or you must start the communication from the client (switch your socket pairing so the worker requests whatever work it's ready for, the server responds with it, and the worker then sends a new request with the completed work). Doing anything else gets much more complicated.
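A small pyzmq sketch of that client-initiated pairing: each worker tells the server which topic it is ready for, and the server replies with a matching task, so "filtering" happens by request rather than by subscription. The queueing policy, port and message layout are assumptions:

    import collections
    import zmq

    ctx = zmq.Context.instance()

    def worker(topic):
        """Worker asks only for work on its topic."""
        sock = ctx.socket(zmq.REQ)
        sock.connect("tcp://localhost:5560")
        while True:
            sock.send_string(topic)                # "I'm ready for work on <topic>"
            task = sock.recv()
            # ... process task, then loop and ask again ...

    def server():
        sock = ctx.socket(zmq.ROUTER)
        sock.bind("tcp://*:5560")
        queues = collections.defaultdict(collections.deque)    # topic -> pending tasks
        queues["topic"].append(b"do something")                 # seed one example task
        while True:
            ident, empty, topic = sock.recv_multipart()         # REQ envelope: [id, '', body]
            pending = queues[topic.decode()]
            task = pending.popleft() if pending else b"NOP"     # a reply is mandatory for REQ
            sock.send_multipart([ident, b"", task])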

Is it worth customizing an XMPP server? (vs. having client workers)

I've been asked about the possibilities for writing an ejabberd module for an internal application. I am opposed to the idea, but I'm not sufficiently familiar with XMPP to support my response, and perhaps I'm wrong.
When Google did Wave they chose XMPP, and I understand that choice: real-time communication between multiple people. Same goal here.
...but it feels to me like a customized server plugin isn't the right answer.
The issues I see are:
1) You lose sync with the server's development and have to go through merge hell to make sure security updates, patches, etc. on the server get applied.
2) Any heavy customization of the server means you're probably expecting to pass special markup messages to interact with the server plugin; that means you'll have to do heavy client customization as well.
There is an alternative route:
A standard XMPP server and two customized XMPP clients: one for the client side and one for the server side.
The server-side client opens a connection to the XMPP server and sits and waits.
Multiple front-end clients open connections to the XMPP server and then use XMPP to open connections, optionally: 1) to each other and 2) to the server-side client user.
The front end can then perform real-time updates by talking to the server-side client. It can even subscribe to multiple server-side client users and have incoming 'activity streams' for multiple different concurrent tasks.
This has the advantages of:
1) You only need to solve the XMPP problem once (client library)
2) Your application server is never externally visible; only the XMPP server is externally visible, which is a massive security win.
3) You can use whatever XMPP server infrastructure you want without any issue.
4) You will never have a server update that causes your application server to become 'legacy' and unable to use those apis any more (short of a complete XMPP protocol update).
Downside:
Your application server client needs to be complicated enough to handle multiple requests, or have multiple workers or something (but this scales by using resource fields and having multiple application servers on different machines connect to the XMPP network).
...but, I'm not that familiar with the technology.
Is there any reason why the alternative I've suggested would be worse than a customized xmpp server?
XMPP is used in Google Wave/Wave in a Box only for federation, i.e. only for server-to-server communication. This is in order to take advantage of existing XMPP capabilities like the discovery protocol. The messages are transported in binary form between servers inside XMPP packets. The web clients use WebSockets/Socket.IO to communicate with the server. Actually, that was the reason for the argument about developing an alternative, pure HTTP-based federation protocol.
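For what it's worth, the "server-side XMPP client" from the question is a small amount of code with an off-the-shelf library. A sketch with slixmpp; the library choice, JIDs and the echo behaviour are assumptions, just to show the shape:

    import slixmpp

    class AppServerClient(slixmpp.ClientXMPP):
        """Logs in to the standard XMPP server and serves requests sent to its JID."""

        def __init__(self, jid, password):
            super().__init__(jid, password)
            self.add_event_handler("session_start", self.on_start)
            self.add_event_handler("message", self.on_message)

        async def on_start(self, event):
            self.send_presence()          # announce availability to front-end clients
            await self.get_roster()

        def on_message(self, msg):
            if msg["type"] in ("chat", "normal"):
                # application logic goes here; echoing back is a stand-in
                msg.reply("processed: %s" % msg["body"]).send()

    if __name__ == "__main__":
        xmpp = AppServerClient("appserver@example.com", "secret")
        xmpp.connect()
        xmpp.process(forever=True)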