I am using a PUB/SUB socket pair, and currently the server subscribes with byte[0] (all topics)
while each client subscribes with byte[16] - a specific header as the topic.
However, I cannot stop a client from subscribing with byte[0], which lets it receive all other messages.
My application is like an app game: there is one single server using ZMQ for connections,
and many clients each have a ZMQ socket to talk to the server.
What pattern or socket should I use in this case?
Thanks
" ... cannot stop client to subscribe byte[0] which can receive all other messages."
Stopping a "subscribe to all" mode of the SUB client
For the ZMQ PUB/SUB Formal Communication Pattern archetype, the SUB client has to submit it's subscription request ( via zmq_setsockopt() ).
PUB-side ( a Game Server ) has got no option to do that from it's side.
There is no-subscription state right on a creation of a new SUB socket, thus an absolutely restrictive filter, thas no message pass through. ( For furhter details on methods for SUBSCRIBE / UNSUBSCRIBE ref. below )
ZeroMQ specification details setting for this:
int zmq_setsockopt ( void *socket,
int option_name,
const void *option_value,
size_t option_len
);
Caution: only ZMQ_SUBSCRIBE, ZMQ_UNSUBSCRIBE and ZMQ_LINGER take effect immediately;
other options are applied only to subsequent socket bind()-s / connect()-s.
ZMQ_SUBSCRIBE: Establish message filter
The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket. Newly created ZMQ_SUB sockets shall filter out all incoming messages, therefore you should call this option to establish an initial message filter.
An empty option_value of length zero shall subscribe to all incoming messages.
A non-empty option_value shall subscribe to all messages beginning with the specified prefix.
Multiple filters may be attached to a single ZMQ_SUB socket, in which case a message shall be accepted if it matches at least one filter.
ZMQ_UNSUBSCRIBE: Remove message filter
The ZMQ_UNSUBSCRIBE option shall remove an existing message filter on a ZMQ_SUB socket. The filter specified must match an existing filter previously established with the ZMQ_SUBSCRIBE option. If the socket has several instances of the same filter attached the ZMQ_UNSUBSCRIBE option shall remove only one instance, leaving the rest in place and functional.
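The prefix-matching semantics described above can be sketched in plain Python; no ZeroMQ required, `accepts` is just an illustrative model of the SUB-side filter, not the library's API:

```python
def accepts(filters, message: bytes) -> bool:
    """Return True if `message` passes at least one prefix filter.

    Mirrors ZMQ_SUBSCRIBE semantics: an empty filter (b"") matches
    everything; a non-empty filter matches messages starting with it.
    """
    return any(message.startswith(f) for f in filters)

# A freshly created SUB socket has no filters: everything is dropped.
filters = set()
assert not accepts(filters, b"game.score 42")

# ZMQ_SUBSCRIBE b"game." lets matching prefixes through.
filters.add(b"game.")
assert accepts(filters, b"game.score 42")
assert not accepts(filters, b"chat.lobby hi")

# ZMQ_SUBSCRIBE b"" (zero length) matches every message.
filters.add(b"")
assert accepts(filters, b"chat.lobby hi")
```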
How to enforce ad-hoc, server-dictated ZMQ_SUBSCRIBE restrictions?
This is possible by extending the messaging layer with a control socket, which carries server-initiated settings for the client-side ZMQ_SUB message filtering.
Upon receiving a new, server-dictated ZMQ_SUBSCRIBE / ZMQ_UNSUBSCRIBE setting, the SUB-side client code simply handles that request and calls zmq_setsockopt() accordingly.
FSA-driven grammars for this approach are rich in further possibilities, and will allow any Game Server / Game Community to go this way smoothly.
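A minimal sketch of that client-side handler, assuming pyzmq-style subscribe()/unsubscribe() helper methods on the SUB socket; the SUB/UNSUB control grammar here is an assumption for illustration, not part of ZeroMQ itself:

```python
def apply_control(sub_socket, command: bytes) -> None:
    """Apply a server-dictated filter change to a SUB socket.

    Command grammar (an assumption for this sketch):
      b"SUB <prefix>"   -> subscribe to <prefix>
      b"UNSUB <prefix>" -> unsubscribe from <prefix>
    `sub_socket` needs subscribe()/unsubscribe() methods, as pyzmq
    SUB sockets provide.
    """
    verb, _, prefix = command.partition(b" ")
    if verb == b"SUB":
        sub_socket.subscribe(prefix)
    elif verb == b"UNSUB":
        sub_socket.unsubscribe(prefix)
    else:
        raise ValueError("unknown control verb: %r" % verb)

# Stand-in for a real pyzmq SUB socket, just to show the call sequence.
class RecordingSocket:
    def __init__(self):
        self.calls = []
    def subscribe(self, prefix):
        self.calls.append(("subscribe", prefix))
    def unsubscribe(self, prefix):
        self.calls.append(("unsubscribe", prefix))

sock = RecordingSocket()
apply_control(sock, b"SUB game.player42")
apply_control(sock, b"UNSUB game.player42")
assert sock.calls == [("subscribe", b"game.player42"),
                      ("unsubscribe", b"game.player42")]
```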
What pattern or socket should I use?
ZeroMQ is rather a library of LEGO-style elements to be assembled into a bigger picture.
Expecting such a smart library to have a one-size-fits-all ninja-element is, on a closer look, an oxymoron.
So, to avoid a "Never-ending-story" of adding "although this ... and also that ...":
[1] Review all requirements and list the features of the end-to-end scalable solution.
[2] Design a messaging concept and validate that it meets all the requirements and covers all the features listed in [1].
[3] Implement [2].
[4] Test [3] and correct it until it meets the end-to-end specification [1] 1:1.
[5] Enjoy it. You have done it end-to-end right.
Related
One connection sends many requests to the server.
How can the requests be processed concurrently?
Please use a simple example, like the timeserver or echoserver in netty.io,
to illustrate the operation.
One way I could find is to create a separate threaded handler that will be called in a producer/consumer way.
The producer will be your "network" handler, handing messages to the consumers, therefore not waiting for any answer and being able to proceed with the next request.
The consumer will be your "business" handler, one per connection but possibly multi-threaded, consuming the messages with multiple instances and able to answer using the Netty context of the connection to which it is attached.
Another option for the consumer would be to have only one handler, still multi-threaded, but then each message will come in with the original Netty context, such that it can answer the client whatever the attached connection.
But the difficulties will come soon:
How to deal with answers among several requests on the client side: say the client sends three requests A, B and C, and the answers come back, due to the speed of the business handler, as C, A, B... You have to deal with it, knowing which request each answer belongs to.
You have to ensure in all cases that the context given as parameter is still valid (channel active), if you don't want too many errors.
Perhaps the best way would be to handle your requests in order after all (as Netty does), and keep the answer's action as quick as possible.
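The producer/consumer split with out-of-order answers can be sketched language-agnostically; here is a Python stand-in (not Netty), where a correlation id travels with each request so replies can be matched back, as the A/B/C point above requires:

```python
import queue
import threading

def business_worker(inbox, outbox):
    """Consume requests and emit (correlation_id, reply) pairs.
    Replies may complete out of order, so the id travels with them."""
    while True:
        item = inbox.get()
        if item is None:          # shutdown sentinel
            break
        corr_id, payload = item
        outbox.put((corr_id, payload.upper()))  # stand-in "business" step

inbox, outbox = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=business_worker, args=(inbox, outbox))
           for _ in range(3)]
for w in workers:
    w.start()

# "Network" handler: tag each request with an id, don't wait for replies.
for corr_id, payload in enumerate([b"a", b"b", b"c"]):
    inbox.put((corr_id, payload))
for _ in workers:
    inbox.put(None)

# Collect replies in whatever order they arrive; the id disambiguates.
replies = dict(outbox.get() for _ in range(3))
for w in workers:
    w.join()
assert replies == {0: b"A", 1: b"B", 2: b"C"}
```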
There are various options for IPC.
Over a network:
for client-server, you can use TCP
for pub/sub, you can use UDP multicast
Locally:
for client-server, you can use unix domain sockets
for pub/sub, you can use ???
I suppose what I'd be interested in is some kind of file descriptor that supports many readers (subscribers) and many writers (publishers) simultaneously. Is this usage pattern feasible/efficient on unix?
After much googling I haven't found a whole lot in the way of IPC multicast, so I have decided to write a program, pubsub, that takes as arguments a publisher address and a subscriber address, listens for and accepts connections on these two addresses, and then, for each payload received on a publisher connection, writes it to each of the subscriber connections. It wouldn't surprise me if this is inefficient or reinvents the wheel, but I have not come across a better solution.
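The relay loop of such a pubsub program boils down to a single fan-out step; this is a hypothetical simplification using in-process socket pairs instead of real listening sockets:

```python
import socket

def fan_out(payload: bytes, subscribers: list) -> list:
    """Write one publisher payload to every subscriber connection.
    Returns the subscribers still alive; broken connections are dropped."""
    alive = []
    for sub in subscribers:
        try:
            sub.sendall(payload)
            alive.append(sub)
        except OSError:
            sub.close()
    return alive

# Demo with two in-process socket pairs standing in for connections.
a_w, a_r = socket.socketpair()
b_w, b_r = socket.socketpair()
subs = fan_out(b"tick", [a_w, b_w])
assert a_r.recv(4) == b"tick" and b_r.recv(4) == b"tick"
assert len(subs) == 2
for s in (a_w, a_r, b_w, b_r):
    s.close()
```

A real implementation would wrap this in a select/poll loop over the two listening addresses, but the per-payload behaviour is the same.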
I was looking for solutions to a similar problem and found /dev/fanout. Fanout is a kernel module that replicates its input out to all processes reading from it. You can think of it as an IPC broadcast mechanism. It works well for small data payloads, according to the author. Multiple processes can write to the device and multiple processes can read from it. I am not sure about the atomicity of writes, though. Small writes from multiple processes should occur atomically, as with FIFOs, etc.
More about Fanout:
http://compgroups.net/comp.linux.development.system/-dev-fanout-a-one-to-many-multi/2869739
http://www.linuxtoys.org/fanout/fanout.html
There are POSIX message queues too. As man mq_overview puts it:
POSIX message queues allow processes to exchange data in the form of messages. This API is distinct from that provided by System V message queues (msgget(2), msgsnd(2), msgrcv(2), etc.), but provides similar functionality.
Message queues are created and opened using mq_open(3); this function returns a message queue descriptor (mqd_t), which is used to refer to the open message queue in later calls. Each message queue is identified by a name of the form /somename; that is, a null-terminated string of up to NAME_MAX (i.e., 255) characters consisting of an initial slash, followed by one or more characters, none of which are slashes. Two processes can operate on the same queue by passing the same name to mq_open(3).
Messages are transferred to and from a queue using mq_send(3) and mq_receive(3). When a process has finished using the queue, it closes it using mq_close(3), and when the queue is no longer required, it can be deleted using mq_unlink(3).
Queue attributes can be retrieved and (in some cases) modified using mq_getattr(3) and mq_setattr(3). A process can request asynchronous notification of the arrival of a message on a previously empty queue using mq_notify(3).
A message queue descriptor is a reference to an open message queue description (cf. open(2)). After a fork(2), a child inherits copies of its parent's message queue descriptors, and these descriptors refer to the same open message queue descriptions as the corresponding descriptors in the parent. Corresponding descriptors in the two processes share the flags (mq_flags) that are associated with the open message queue description.
Each message has an associated priority, and messages are always delivered to the receiving process highest priority first.
Message priorities range from 0 (low) to sysconf(_SC_MQ_PRIO_MAX) - 1 (high). On Linux, sysconf(_SC_MQ_PRIO_MAX) returns 32768, but POSIX.1 requires only that an implementation support at least priorities in the range 0 to 31; some implementations provide only this range.
A more friendly introduction by Michael Kerrisk is available here: http://man7.org/conf/lca2013/IPC_Overview-LCA-2013-printable.pdf
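The highest-priority-first delivery rule quoted above can be modelled with a heap; this is only an illustration of the ordering, not a use of the real mq_* API:

```python
import heapq

# Mirror mq_send/mq_receive ordering: messages come out highest
# priority first (heapq is a min-heap, so the priority is negated).
q = []
seq = 0  # tie-breaker keeps FIFO order within one priority level
for prio, msg in [(0, b"low"), (31, b"urgent"), (5, b"normal")]:
    heapq.heappush(q, (-prio, seq, msg))
    seq += 1

received = [heapq.heappop(q)[2] for _ in range(len(q))]
assert received == [b"urgent", b"normal", b"low"]
```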
I have this Play app that connects to a distant server in order to consume a given API. In order to load balance my requests to the distant server, I connect multiple accounts to that same server. Each account can query the API a given number of times. Each account is handled by an Akka actor and these actors are behind an Akka Round Robin router. Thus when wanting to consume the distant API, I "ask" the RR router for the wanted info.
This implementation runs fine until one account gets disconnected. Basically, when an account is disconnected, its actor returns a given object saying "something was wrong with the connection", and then I send the request to the RR router again, to be handled by another account.
My question is: instead of having the "retry" logic outside the router-routee group, is there a way to do it inside? I am thinking, for example, of defining at the router level a logic that handles these "something was wrong with the connection" messages by automatically forwarding the request to the next routee, and only returning a final response once all routees have been tried and none worked.
Does Akka provide a simple way of achieving this, or should I just carry on with my implementation?
I'm not sure I fully understand your design, but I think you should try the first-complete model supported by the ScatterGatherFirstCompleted routing logic.
router.type-mapping {
...
scatter-gather-pool = "akka.routing.ScatterGatherFirstCompletedPool"
scatter-gather-group = "akka.routing.ScatterGatherFirstCompletedGroup"
...
}
in the simple form
---> Route
--> SGFC-1
RR ->
or possibly combined with round robin router.
---> Route
--> SGFC-1
RR ->
--> SGFC-2
---> Route
The same as in your proposal, connections are represented by the routees. SGFC-1 and SGFC-2 should have access to the same pool of routees (connections), or to a part of the pool.
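The first-complete semantics can be sketched outside Akka; here is a rough Python analogue using concurrent.futures (the function name is made up, and real ScatterGatherFirstCompleted also takes a within-timeout):

```python
import concurrent.futures
import time

def scatter_gather_first_completed(tasks):
    """Send the same request to all routees (callables) and return
    the first reply; the slower routees' work is simply discarded."""
    with concurrent.futures.ThreadPoolExecutor(len(tasks)) as pool:
        futures = [pool.submit(t) for t in tasks]
        done, _ = concurrent.futures.wait(
            futures,
            return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

def slow_routee():
    time.sleep(0.2)       # stand-in for a broken/slow connection
    return "slow"

def fast_routee():
    return "fast"

assert scatter_gather_first_completed([slow_routee, fast_routee]) == "fast"
```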
The scenario is publisher/subscriber, and I am looking for a solution that can deliver a message generated by ONE producer to MULTIPLE consumers in real time. The more lightweight the solution that handles this scenario, the better!
In the case of AMQP servers I've only checked out RabbitMQ, and using a RabbitMQ server for the pub/sub pattern, each consumer must declare an anonymous, private queue and bind it to a fanout exchange; so with a thousand users consuming one message in real time, there will be a thousand or so anonymous queues handled by RabbitMQ.
But I really do not like that approach. It would be ideal if RabbitMQ could handle this pub/sub scenario with one queue, one message, and many consumers listening on that one queue!
What I want to ask is: which AMQP server or other type of solution (anything similar, including XMPP servers, Apache Kafka, ...) handles the pub/sub pattern/scenario better and much more efficiently than RabbitMQ, while consuming (of course) fewer server resources?
Preferences, in order of interest:
in the case of an AMQP-enabled server, handling the pub/sub scenario with only ONE queue, or as few queues as possible (as explained)
handling thousands of consumers in a lightweight manner, consuming fewer server resources than other pub/sub solutions
clustering, tolerating node failures
many language bindings (Python and Java at least)
easy to use and administer
I know my question may be VERY general, but I would like to hear ideas and suggestions for the pub/sub case.
Thanks.
In general, for RabbitMQ, if you put the user in the routing key, you should be able to use a single exchange and then a small number of queues (even a single one if you wanted, but you could divide them up by server or similar if that makes sense given your setup).
If you don't need guaranteed order (as one would for, say, guaranteeing that FK constraints wouldn't get hit for a sequence of changes to various SQL database tables), then there's no reason you can't have a bunch of consumers drawing from a single queue.
If you want a broadcast-message type of scenario, then that could perhaps be handled a bit differently. Instead of the single user in the routing key, which you could use for non-broadcast-type messages, have a special user type, say, __broadcast__, that no user could actually have, and have the users to broadcast to stored in the payload of the message along with the message itself.
Your message processing code could then take care of depositing that message in the database (or whatever the end destination is) across all of those users.
Edit in response to comment from OP:
So the routing key might look something like this message.[user] where [user] could be the actual user if it were a point-to-point message, and a special __broadcast__ user (or similar user name that an actual user would not be allowed to register) which would indicate a broadcast style message.
You could then place the users to which the message should be delivered in the payload of the message, and then that message content (which would also be in the payload) could be delivered to each user. The mechanism for doing that would depend on what your end destination is. i.e. do the messages end up getting stored in Postgres, or Mongo DB or similar?
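The routing-key scheme sketched above could look like this in plain Python; the message.[user] key format and the __broadcast__ sentinel are the assumptions from this answer, not RabbitMQ built-ins:

```python
BROADCAST = "__broadcast__"  # a user name real users may not register

def routing_key(user: str) -> str:
    """Build the 'message.[user]' routing key described above."""
    return "message.%s" % user

def recipients(key: str, payload: dict) -> list:
    """Resolve the final recipients: a point-to-point key names one
    user; a broadcast key defers to the user list in the payload."""
    _, _, user = key.partition("message.")
    if user == BROADCAST:
        return payload["users"]
    return [user]

assert recipients(routing_key("alice"), {"body": "hi"}) == ["alice"]
assert recipients(routing_key(BROADCAST),
                  {"body": "hi", "users": ["alice", "bob"]}) == ["alice", "bob"]
```

The message-processing code would then deposit the payload's body once per resolved recipient in whatever the end destination is.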
I have been working with Qpid, and now I am trying to move to a brokerless messaging system, but I am really confused about network traffic in a pub/sub pattern. I read the following document:
http://www.250bpm.com/pubsub#toc4
and am really confused about how subscription forwarding is actually done.
I thought ZeroMQ was supposed to be agnostic to the underlying network topology, but it seems it is not. How does every node know what to forward and what not to? (E.g., in an Ethernet network where there can be millions of subscribers and publishers, a message tree does not sound feasible to me.) What about hops that do not even know about the existence of ZeroMQ - how would they forward packets to the subscribers connected to them? For them it would be just a normal packet, so they would just forward multiple copies of the data packets, even if it is the same packet.
I am not a networking expert, so maybe I am missing something obvious about the message tree and how it is even created.
Could you please give a concrete example of how this distribution tree is created, and on exactly which nodes the XPUB and XSUB sockets are created?
Is a device (term used in the link) something like a broker? Throughout the article it seemed like a device is just any general intermediary hop that does not know anything about ZeroMQ sockets (just a random network hop). If it is indeed a broker-like thing, does that mean that for pub/sub all nodes in the message tree have to satisfy the definition of a device, and hence it is not a brokerless design?
Also, in the tree diagram (from the link, consisting of P, D, C), I initially assumed the two C's are subscribers and P the only publisher (D just a random hop), but now it seems that D is ZeroMQ as well. Does C subscribe to D and D subscribe to P? Or do both C's subscribe directly to P? (To be more generic, does each node subscribe to its parent only in the tree?) Sorry for the novice question, but it seems I am missing something obvious here; it would be nice if someone could give more insight.
ZeroMQ uses the network to establish connections between nodes directly (e.g. via TCP), but only ever between 1 sender and 1-n receivers. These are connected "directly" and can exchange messages using the underlying protocol.
Now, when you subscribe to only certain events in a pub-sub scenario, ZeroMQ used to filter out messages on the subscriber side, causing unnecessary network traffic from the publisher to at least some of the subscribers.
In newer versions of ZeroMQ (3.0 and 3.1) the subscriber process sends its subscription list to the publisher, which manages a list of subscribers and the topics they are interested in. Thus the publisher can discard messages that are not subscribed to by any subscriber, and potentially send targeted messages at only the interested subscribers.
When the publisher is itself a subscriber of events (e.g. a forwarding or routing device service), it might forward those subscriptions again by similarly subscribing to its own connected publishers.
I am not sure whether ZeroMQ still does client-side filtering in newer versions, even when it "forwards" its subscriptions, though.
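That subscription-forwarding idea can be modelled without ZeroMQ at all; this toy Device class only forwards a topic upstream while at least one downstream peer wants it, a deliberately simplified sketch of what an XPUB/XSUB intermediary does:

```python
class Device:
    """Toy XPUB/XSUB intermediary: forwards a subscription upstream
    only the first time any downstream peer asks for that topic."""
    def __init__(self, upstream):
        self.upstream = upstream          # set of topics sent upstream
        self.counts = {}                  # topic -> downstream refcount
    def subscribe(self, topic):
        self.counts[topic] = self.counts.get(topic, 0) + 1
        if self.counts[topic] == 1:
            self.upstream.add(topic)      # first subscriber: forward it
    def unsubscribe(self, topic):
        self.counts[topic] -= 1
        if self.counts[topic] == 0:
            self.upstream.discard(topic)  # last subscriber left

publisher_view = set()
d = Device(publisher_view)
d.subscribe(b"scores")       # C1 subscribes via D
d.subscribe(b"scores")       # C2 subscribes to the same topic
assert publisher_view == {b"scores"}   # P sees only one subscription
d.unsubscribe(b"scores")
assert publisher_view == {b"scores"}   # C2 is still interested
d.unsubscribe(b"scores")
assert publisher_view == set()         # nobody left: P stops sending
```

So in the P-D-C diagram, C subscribes to D, and D in turn subscribes to P on the Cs' behalf; each node only ever talks to its direct neighbours.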
A more efficient mechanism for pub/sub to multiple subscribers is to use multicast whereby a single message traverses the network and is received by all subscribers (who can then filter what they wish).
ZeroMQ supports a standardised reliable multicast called Pragmatic General Multicast.
These references should give you an idea of how it all works. Note that multicast generally only works within a single subnet and may need router configuration or TCP bridges to span multiple subnets.