ZeroMQ: which socket should bind in the Pub/Sub pattern?

I have been reading about ZeroMQ, more specifically about NetMQ, and almost every Pub/Sub example I have seen binds the Publisher socket and then connects the Subscriber socket to it.
So I'm wondering whether it is possible to do the reverse, i.e. bind the Subscriber socket and have the publishers connect to it.
Is this possible? (I didn't find anything clear in the documentation.)
What are the disadvantages of using this connection strategy?
Any help will be useful.

Yes, you can reverse it and there are no disadvantages to using that connection strategy... provided it suits your purpose.
In ZMQ, the driving concept behind "binding" and "connecting" is that one side is considered to be more reliable (and there will usually be fewer nodes on it), while the other side is considered to be more transient (and its nodes can be more numerous). The reliable side is your "server", and you should bind() on that side; the transient side is your "client" (or clients), and you should connect() on that side.
Typically, we think of a stable "server" publishing information constantly, to many "client" subscribers which may come and go. This is represented in the examples that you see: bind on pub, connect on sub.
But, you could just as easily have a stable "server" subscribing to any output from many "client" publishers that connect to it, accepting any information that they're sending while they are available. Bind on sub, connect on pub.
You're not limited to one server, either; that's just the simplest example. However, you're more limited if you're running all of your sockets on the same computer: binding more than one socket to the same address will produce a conflict, but you can connect as many sockets to the same address as you like.
In many cases, both sides of the communication are really intended to be reliable and long running, in which case it's useful to think of the node which sends the information as the server, and the one which receives it as the client. In which case, we're back to bind on pub, connect on sub.
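To make the reversed arrangement concrete, here is a minimal pyzmq sketch (the endpoints and the message are placeholders, not from the question): the stable subscriber binds, and each transient publisher connects to it.

    # subscriber.py -- the stable "server" side binds
    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.bind("tcp://*:5556")                   # bind on SUB
    sub.setsockopt_string(zmq.SUBSCRIBE, "")   # accept every topic

    while True:
        print(sub.recv_string())

    # publisher.py -- each transient "client" connects
    import time
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.connect("tcp://localhost:5556")        # connect on PUB
    time.sleep(0.5)                            # give the connection time to settle
    pub.send_string("status worker-42 online")

The only real caveat is the usual Pub/Sub one: messages sent before a connection is fully established are dropped, regardless of which side binds.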

Related

How to implement multicast sockets in Swift?

I'm writing a server that, among other things, needs to be constantly sending data to different multicast addresses. The packets being sent might be received by a client side (an app) which will be switching between the mentioned addresses.
I'm using Perfect (https://github.com/PerfectlySoft/Perfect) for writing the server side, however I had no luck using the Perfect-Net module nor CocoaAsyncSocket. How could I implement both the sender and the receiver in Swift? Any code snippet would be really useful.
I've been reading about multicasting and, when it comes to the receiver, I've noticed that in most languages (e.g. Java or C#) the receiver often indicates a port number and a multicast IP address, but when is the connection with the server made? When does the socket bind to the real server IP address?
Thanks in advance
If we talk about the TCP/IP stack, only IP and UDP support broadcasts and multicasts. They're both connectionless, which is why you only see sending and receiving to special multicast addresses, but no binds and connects. You see the same pattern in different languages because (a) the protocols are language-agnostic and (b) most implementations put reasonable effort into staying compatible with the BSD sockets interface.
If you want true multicast, you'll need to find a Swift implementation of sockets that allows setting socket options; the usual name for this operation is setsockopt. The sender side doesn't need anything beyond a basic UDP socket (I suggest using UDP, not raw IP), while the receiver needs to join a multicast group. This Python example pretty much describes it.
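For reference, a rough sketch of both sides using Python's standard socket module (the group address and port below are placeholders; a Swift implementation would make the equivalent setsockopt calls):

    import socket
    import struct

    GROUP, PORT = "224.1.1.1", 5007   # placeholder multicast group and port

    # Sender: just a plain UDP socket sending to the group address.
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    send_sock.sendto(b"hello group", (GROUP, PORT))

    # Receiver: bind to the port, then join the multicast group.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    recv_sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    data, addr = recv_sock.recvfrom(1024)

Note that there is no connect and no "real server IP address" anywhere: the receiver only binds the port and joins the group.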
However, it's worth noting that routers don't route broadcasts and multicasts, so you cannot use them over the internet. If you need to use the internet in your project, I'd advise you to use TCP, or WebSockets if your clients are browsers, and send messages to "groups" of them manually.
I guess you actually want Perfect-Kafka or Perfect-Mosquitto, i.e. a message queue that allows a server to publish live streams to client-side subscribers. Low-level sockets will not easily fulfill your requirement.

Connect sockets directly after introduction through server

I'm looking for the name of a protocol and example code that permits handing off IP/port connections to establish unmediated P2P after introduction through a server.
Simple example:
You and I both start chat programs that connect to chatintroduce.com (fictional server). I send you a "Hi! Wanna chat?" message. It doesn't get sent. Instead my chat program tells chatintroduce to send your chat program a request for connection. You respond to a prompt and your chat program tells chatintroduce to broker the connection. Chatintroduce establishes an initial two-way connection between us. Now, this final step is important, chatintroduce releases control and our two chat programs now talk directly to each other without any traffic through chatintroduce.
In other words, I construct packets which carry your IP address and you receive them without interference from firewalls, NATs or any other technologies: a true peer-to-peer connection, independent of any intermediate server.
I need to know what search terms to use to find appropriate technology. An RFC name would suffice. I've been searching for days without success.
I think what you are looking for is TCP/UDP hole punching, which typically coordinates the P2P connection using a STUN server to determine the "capabilities" of the firewalls (e.g. is it a full cone NAT? symmetric?).
https://en.wikipedia.org/wiki/Hole_punching_(networking)
We employed this at a company I worked for to create a kind of BitTorrent that could circumvent firewalls for streaming video between two peers.
Note that sometimes it is NOT possible to establish a connection without the intermediary.
What you are looking for is the ICE protocol (RFC 5245). This protocol is used for connecting two peers through NAT traversal. There are some open source libraries and also some proprietary libraries for this. You can search Google for an ICE implementation.
You will also need to read about some additional protocols that are used together with ICE: STUN and TURN.
In some cases you can't make the P2P call work 100% of the time and you will have to use a relay server, e.g. when the NAT combination of the two peers is symmetric vs. symmetric or port-restricted cone (PRC). That relay server is called a TURN server.
Techniques like port forwarding and TCP/UDP hole punching will help you increase the share of calls that stay P2P.
See this answer for more information about which combination of NAT will require a relay server and which don't.
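To make the hole-punching part concrete, here is a rough Python sketch. It assumes each peer has already learned the other's public UDP endpoint through the introduction server / STUN step (the address below is a placeholder), and it will not work when both peers are behind symmetric NATs:

    import socket
    import threading
    import time

    # Assumed to be known already, e.g. exchanged via the introduction server.
    PEER_PUBLIC = ("203.0.113.7", 40000)   # placeholder public endpoint of the other peer
    LOCAL_PORT = 40000

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LOCAL_PORT))

    def listen():
        while True:
            data, addr = sock.recvfrom(1024)
            print("received", data, "from", addr)

    threading.Thread(target=listen, daemon=True).start()

    # Both peers keep sending to each other's public endpoint. Once each NAT has
    # seen outbound traffic towards the other side, inbound packets start to pass
    # and the two programs are talking directly, with no server in the path.
    for _ in range(10):
        sock.sendto(b"punch", PEER_PUBLIC)
        time.sleep(1)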
Thank you. I will be looking further into ICE, STUN, TURN, and hole-punching.
I also found n2n which looks like almost exactly what I wanted.
https://github.com/meyerd/n2n
http://xmodulo.com/configure-peer-to-peer-vpn-linux.html
With n2n, one makes a VPN with a super node that all other edge nodes know.
But once the introductions are made, the super node can be absent.
This was exactly what I wanted. I hope it works across platforms (Linux, macOS, Windows).
Again, I am still researching before implementation, so your advice was very important to me.
Thank you.
Use PJNATH. It's open source.
http://www.pjsip.org/pjnath/docs/html/
There is not much open source software for NAT traversal; as far as I know, PJNATH is good.
For the server side you can use Google's open source STUN and TURN server.

Multiple service connections vs internal routing in MMO

The server consists of several services with which a user interacts: profiles, game logic, physics.
I heard that it's bad practice to have multiple client connections to the same server.
I'm not sure whether I will use UDP or TCP.
The services are realtime and should reply as fast as possible, so I don't want to add any extra hops unless there are really important reasons to. So are there any reasons to reroute traffic through one external endpoint service to specific internal services in my case?
This seems to be multiple questions in one package. I will try to answer the ones I can identify as separate...
UDP vs TCP: You're saying "real-time", this usually means UDP is the right choice. However, that means having to deal with lost packets and possible re-ordering of packets. But, using UDP leaves a couple of possible delay-decreasing tricks open.
Multiple connections from a single client to a single server: This consumes resources (end-points, as it were) on both the client (probably ignorable) and on the server (possibly a problem, possibly ignorable). The advantage of using separate connections for separate concerns (profiles, physics, ...) is that when you need to separate these onto separate servers (or server farms), you don't need to update the clients, they just need to connect to other end-points, using code that's already tested.
"Re-router" (or "load balancer") needed: Probably not going to be an issue initially. However, it will probably become an issue later. Depending on your overall design and server OS, using UDP may actually become an asset here. UDP packet arrives at the load balancer, dispatched to the right backend and that could then in theory send back a reply with the source IP of the load balancer.
An alternative would be to have a "session broker". The client makes an initial connection to a well-known endpoint and says "I am a client, tell me where my profile, physics and what-have-you servers are"; the broker considers the current load, possibly the location of the client and other things that may make sense, and the client then connects to the relevant backends on its own. The downside of this is that it's harder (not impossible, but harder) to silently migrate an ongoing session to a new backend, whereas with a load balancer in the way that migration can be done essentially transparently.
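A minimal sketch of the session-broker idea in Python (the service names, addresses and ports are made up for illustration): the client makes one request to a well-known endpoint, receives the backend addresses, and then connects to those directly.

    import json
    import socket

    # Broker: the well-known endpoint that hands out backend addresses.
    BACKENDS = {
        "profiles": "10.0.0.11:7001",   # placeholder topology
        "game":     "10.0.0.12:7002",
        "physics":  "10.0.0.13:7003",
    }

    def serve_broker(host="0.0.0.0", port=7000):
        srv = socket.create_server((host, port))
        while True:
            conn, _ = srv.accept()
            # A real broker would weigh current load and client location here.
            conn.sendall(json.dumps(BACKENDS).encode())
            conn.close()

    # Client: one round trip to the broker, then direct connections to backends.
    def discover(broker=("broker.example", 7000)):
        with socket.create_connection(broker) as s:
            return json.loads(s.recv(4096).decode())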

Topics in ZeroMQ REP sockets

When using a ØMQ socket of type SUB, one may use
sub_socket.setsockopt_string(zmq.SUBSCRIBE, 'topic')
Is the same possible also with REP sockets, allowing a worker to only handle specific topics, leaving other topics to different workers?
I'm very afraid that it is impossible, quoting http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/patterns/pubsub.html:
In the current versions of ØMQ, filtering happens at the subscriber side, not the publisher side.
But still, I'm asking if there is some trick to achieve that, because such a functionality would have a huge impact on my infrastructure.
Nope. Can I assume that you've got a REQ or DEALER server socket that sends work to REP workers, which then respond with the completed work back to the server? And that you're looking for a way to make your server communicate with specific workers rather than just passing out tasks in round-robin fashion?
Can't do it. See here; those sockets are only, always, round-robin. If you want to communicate with a specific worker, you must either have a socket that talks only to that worker, or you must start the communication from the worker's side (switch your socket pairing so that the worker requests whatever work it's ready for, the server responds with it, and the worker then makes a new request with the completed work). Doing anything else gets much more complicated.
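A rough pyzmq sketch of that inversion (the endpoint and topic names are illustrative, not part of the question): each worker asks only for the kind of work it handles, and the ROUTER server replies to that specific worker.

    # server.py -- ROUTER can address a specific worker by its identity frame
    import zmq

    ctx = zmq.Context()
    server = ctx.socket(zmq.ROUTER)
    server.bind("tcp://*:5560")

    work_queues = {"images": [b"job-1"], "audio": [b"job-2"]}   # made-up topics

    while True:
        identity, empty, topic = server.recv_multipart()   # REQ adds identity + empty frame
        jobs = work_queues.get(topic.decode(), [])
        payload = jobs.pop(0) if jobs else b"NO-WORK"
        server.send_multipart([identity, empty, payload])

    # worker.py -- REQ asks only for the topic it is able to process
    import zmq

    ctx = zmq.Context()
    worker = ctx.socket(zmq.REQ)
    worker.connect("tcp://localhost:5560")
    worker.send(b"images")        # "give me 'images' work"
    job = worker.recv()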

Can socket connections be multiplexed?

Is it possible to multiplex a socket connection?
I need to establish multiple connections to Yahoo Messenger and I am looking for a way to do this efficiently without having to hold a socket open for each client connection.
So far I have had to use one socket for each client, and this does not scale well above 50,000 connections.
My solution is for a telco, so I need to hit at least 250,000 to 500,000 connections.
I'm planning to bind multiple IP addresses to a single NIC to beat the 65k port restriction per IP address.
I would appreciate any help or insight.
(Most of my other questions on this site have gone unanswered :) )
Thanks
This is an interesting question about scaling in a serious situation.
You are essentially asking, "How do I establish N connections to an internet service, where N is >= 250,000".
The only way to do this effectively and efficiently is to cluster. You cannot do this on a single host, so you will need to be able to fragment and partition your client base into a number of different servers, so that each is only handling a subset.
The idea would be for a single server to hold open as few connections as possible (spreading out the connectivity evenly) while holding enough connections to make whatever service you're hosting viable by keeping inter-server communication to a minimum level. This will mean that any two connections that are related (such as two accounts that talk to each other a lot) will have to be on the same host.
You will need servers and network infrastructure that can handle this. You will need a subnet of IP addresses, and each server will have to have stateless communication with the internet (i.e. your router will not be doing any NAT, in order not to have to track 250,000+ connections).
You will have to talk to Yahoo. There is no way that Yahoo will be able to handle this level of connectivity without considering cutting your connection off. Any service of this scale would have to be negotiated with Yahoo so that both you and they can handle the connectivity.
There are I/O multiplexing technologies that you should investigate; kqueue and epoll come to mind.
In order to write this massively concurrent, telco-grade solution, I would recommend investigating Erlang. Erlang is designed for situations such as these (multi-server, massively multi-client, massively multithreaded, telecommunications-grade software). It is currently used for running Ericsson telephone exchanges.
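To make the epoll/kqueue point concrete, Python's selectors module picks the best of those mechanisms available on the platform, so a single thread can watch a very large number of sockets. Note this does not reduce the number of sockets you hold open; it only makes servicing them cheap (the addresses below are placeholders):

    import selectors
    import socket

    sel = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD/macOS

    # Open many client connections but service them all from one thread.
    for port in range(5000, 5100):                       # placeholder endpoints
        s = socket.create_connection(("203.0.113.10", port))
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)

    while True:
        for key, _ in sel.select(timeout=1.0):
            data = key.fileobj.recv(4096)
            if not data:                                 # peer closed the connection
                sel.unregister(key.fileobj)
                key.fileobj.close()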
While you can listen on a socket for multiple incoming connection requests, each established connection is identified by the unique combination of server address/port and client address/port (the server keeps reusing its one listening port). In order to multiplex a connection, you need to control both ends of the pipe and have a protocol that allows you to switch contexts from one virtual connection to another, or use a stateless protocol that doesn't care about the client's identity. In the former case you'd need to implement it in the application layer so that you could reuse existing connections. In the latter case you could get by with a proxy that keeps track of which server response goes to which client. Since you're connecting to Yahoo Messenger, I don't think you'll be able to do this, because it requires an authenticated connection and it assumes that each connection corresponds to a single user.
You can only multiplex multiple connections over a single socket if the other end supports such an operation.
In other words, it's a function of the protocol; sockets don't have any native support for it.
I doubt the Yahoo Messenger protocol has any support for it.
An alternative (to multiple IPs on a single NIC) is to design your own multiplexing protocol and have satellite servers that convert from your multiplexing protocol to the Yahoo protocol.
I'll throw in another approach you could consider (depending on how desperate you are).
Note that operating system TCP/IP implementations need to be general purpose, but you are only interested in a very specific use-case. So it might make sense to implement a cut-down version of TCP/IP (which only handles your use-case, but does that very well) in your application code.
For example, if you are using Linux, you could route a couple of IP addresses to a tun interface and have your application handle the IP packets for that tun interface. That way you can implement TCP/IP (optimised for your use-case) entirely in your application and avoid any operating system restriction on the number of open connections.
Of course, it's quite a bit of work doing the TCP/IP yourself, but it really depends on how desperate you are - i.e. how much hardware can you afford to throw at the problem.
500,000 arbitrary Yahoo Messenger connections - is your telco doing this on behalf of Yahoo? It seems like whatever solution has been in place for many years now should be scalable with the help of Moore's Law - and as far as I know all the IM clients have been pretty effective for a long time, and there's no pressing increase in demand that I can think of.
Why isn't this a reasonable problem to address with hardware plus traditional solutions?