What's the best way to subsample a ZMQ PUB SUB connection?

I have a ZMQ_PUB socket sending messages out at ~50Hz. One destination needs to react to each message, so it has a standard ZMQ_SUB socket with a while(true) loop checking for new messages. A second destination should only react once a second to the "most recent" message. That is, my second destination needs to subsample.
For the second destination, I believe I'd want to have a time-based loop that is called at my desired rate (1Hz) and recv() the latest message, dropping the rest. I believe this is done via a ZMQ_HWM on the subscriber. Is there another option that needs to be set somewhere?
Do I need to worry about the different subscribers having different HWMs? Will the publisher become angry? It's a shame ZMQ_RATE only applies to multicast sockets.
Is there a best way to accomplish what I'm attempting?
zmq v3.2.4

The high-water mark will not be a fantastic solution for your problem. Setting it on the subscriber to, let's say, 10 and reading one message per second will just give you the old messages first, slowly, and throw away all the new ones once the limit is reached.
You could use a topic on your publisher that lets you filter out every 50th message, e.g. set the topic to messageCount % 50 and subscribe to "0".
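A rough sketch of that idea with the libzmq C API (the endpoint, payload, and pacing below are illustrative, not from the question):

#include <zmq.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:5556");                        /* illustrative endpoint */

    const char *payload = "tick";                         /* stand-in for the real data */
    for (int count = 0; ; ++count) {
        char topic[8];
        snprintf(topic, sizeof topic, "%d", count % 50);  /* topics "0".."49" */
        zmq_send(pub, topic, strlen(topic), ZMQ_SNDMORE); /* topic frame */
        zmq_send(pub, payload, strlen(payload), 0);       /* data frame */
        usleep(20000);                                    /* ~50 Hz */
    }
    /* The 50 Hz subscriber subscribes to "" (everything); the 1 Hz subscriber
       subscribes to "0", which matches only one message in fifty:
           zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "0", 1);  */
}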
Alternatively, maybe you shouldn't use zmq's pub/sub, but instead build your own look-alike with router/dealer that allows subscribers to ask for sampled messages.
Lastly, you could also just send them all. 50 msg/s is hardly anything for zmq (as long as they aren't heavy on data, like megabytes each), and the slow consumer then only uses every 50th message.
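If you take that route, the 1 Hz destination can drain its SUB socket with non-blocking reads each time its timer fires and keep only the newest message. A minimal sketch, assuming single-frame messages and an already-connected ZMQ_SUB socket named sub (socket setup and the <errno.h>/<string.h> includes are omitted):

/* Called once per second: empty the subscriber's queue, keep the newest. */
char latest[1024];
int  latest_len = -1;
for (;;) {
    char buf[1024];
    int n = zmq_recv(sub, buf, sizeof buf, ZMQ_DONTWAIT);
    if (n == -1) {
        if (errno != EAGAIN) { /* handle real errors */ }
        break;                                   /* EAGAIN: queue is drained */
    }
    latest_len = n < (int)sizeof latest ? n : (int)sizeof latest;
    memcpy(latest, buf, latest_len);             /* remember the newest message */
}
if (latest_len >= 0) {
    /* process the most recent message in latest[0..latest_len) */
}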

Related

Kafka to Kafka mirroring with sampling

Any idea how to do Kafka-to-Kafka mirroring, but with sampling (for example, only 10% of the messages)?
You could use MirrorMakerMessageHandler (which is configured by message.handler parameter):
https://github.com/apache/kafka/blob/1.0/core/src/main/scala/kafka/tools/MirrorMaker.scala#L430
The handler itself would need to make a decision whether to forward a message. A simple implementation would be just a counter of messages received, and forwarding if 0 == counter % 10.
However, this handler is invoked for every message received, so it means that you'd be receiving all of the messages and throwing away 90% of them.
The alternative is to modify the main loop, where the MirrorMaker consumer receives a message and forwards it to the producers (which send the message to the mirror cluster); that loop is here:
https://github.com/apache/kafka/blob/1.0/core/src/main/scala/kafka/tools/MirrorMaker.scala#L428
You would need to modify the consumer part to do one of the following:
forward only every N-th (10th) message/offset
seek to only every N-th message in the log
I prefer the former idea, as in the case of multiple MM instances in the same consumer group you would still get reasonable behaviour. The second choice would demand more work from you to handle reassignments.
Also, deciding which messages make up the 10% is non-trivial; I just assumed that it's every 10th message received.

Streaming Of ZeroMQ Events Back To Client

I have a use case where I wish to have a ZeroMQ Request/Reply socket 'stream' back results. Is this possible with multipart messages (i.e. the Reply socket streams the frames back before HasMore = false), or am I approaching this incorrectly?
The situation:
1) Client makes a query (Request) for some records
2) Server looks up the database for results and responds with the current large set of records (Reply), split into frames
3) Server must wait until a Server Side event is generated before the final Frame is sent (HasMore = false)
4) Client won't get the previous frames until the final event has been generated and HasMore = false
Thanks for your help.
As far as I understand what you're aiming for, it sounds like what you have will work the way you expect. See here for more discussion on message frames. The salient points:
As you say, all of the frames will be sent to the client at one time; they will be stored on the server until HasMore is set to false.
One important thing to remember here: if it's a truly large amount of data, the entire data set must fit into memory, because it will be held in server memory until the message with all its frames is complete, and then it will be received into memory on the client side before it's processed.
I assume primarily what you're looking for is a way to iteratively build up a message before you send it, and perhaps to be able to deal with the data on the client iteratively as well? You also get a guarantee that you won't lose part of the data in the middle: you either get the whole message or lose the whole message (as opposed to sending each frame as a separate message). This is one of the primary use cases for frames, so you've done well.
The only thing I object to is using the word "stream", as that implies that the data is being sent to the client continuously as it's being processed on the server, and that's explicitly not what you're trying to do (nor is it possible with ZMQ message frames).
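For reference, the frame mechanics look roughly like this with the libzmq C API (socket and buffer names here are placeholders, not from the question); nothing goes out on the wire until the final frame is sent without ZMQ_SNDMORE, and the client walks the frames using ZMQ_RCVMORE:

/* Server (REP): queue frames; the whole message is transmitted only when
   the last frame is sent without ZMQ_SNDMORE. */
zmq_send(rep, record1, record1_len, ZMQ_SNDMORE);
zmq_send(rep, record2, record2_len, ZMQ_SNDMORE);
/* ... wait for the server-side event ... */
zmq_send(rep, final_record, final_len, 0);        /* no SNDMORE: message goes out */

/* Client (REQ): read frames until RCVMORE says there are no more. */
int more = 1;
size_t more_size = sizeof more;
do {
    char frame[4096];
    int n = zmq_recv(req, frame, sizeof frame, 0);
    /* ... process n bytes of this frame ... */
    zmq_getsockopt(req, ZMQ_RCVMORE, &more, &more_size);
} while (more);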

TCP Socket Read Variable Length Data w/o Framing or Size Indicators

I am currently writing code to transfer data to a remote vendor. The transfer will take place over a TCP socket. The problem I have is the data is variable length and there are no framing or size markers. Sending the data is no problem, but I am unsure of the best way to handle the returned data.
The data is comprised of distinct "messages" but they do not have a fixed size. Each message has an 8 or 16 byte bitmap that indicates what components are included in this message. Some components are fixed length and some are variable. Each variable length component has a size prefix for that portion of the overall message.
When I first open the socket I will send over messages and each one should receive a response. When I begin reading data I should be at the start of a message. I will need to interpret the bitmap to know what message fields are included. As the data arrives I will have to validate that each field indicated by the bitmap is present and of the correct size.
Once I have read all of the first message, the next one starts. My concern is if the transmission gets cut partway through a message, how can I recover and correctly find the next message start?
I will have to simulate a connection failure and my code needs to automatically retry a set number of times before canceling that message.
I have no control over the code on the remote end and cannot get framing bytes or size prefixes added to the messages.
Best practices, design patterns, or ideas on the best way to handle this are all welcomed.
From a user's point of view, TCP is a stream of data, just like you might receive over a serial port. There are no packets and no markers.
A non-blocking read/recv call will return you what has currently arrived at which point you can parse that. If, while parsing, you run out of data before reaching the end of the message, read/recv more data and continue parsing. Rinse. Repeat. Note that you could get more bytes than needed for a specific message if another has followed on its heels.
A TCP stream will not lose or re-order bytes. A message will not get truncated unless the connection gets broken or the sender has a bug (e.g. was only able to write/send part and then never tried to write/send the rest). You cannot continue a TCP stream that is broken. You can only open a new one and start fresh.
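A sketch of that accumulate-and-parse loop in C++ (parse_one_message is a hypothetical helper that would read your bitmap and field sizes, returning how many bytes one complete message occupies, or 0 if the buffer doesn't yet hold a full message):

#include <sys/types.h>
#include <sys/socket.h>
#include <vector>
#include <cstddef>

// Hypothetical: returns bytes consumed by one complete message, or 0 if the
// buffer does not yet contain a full message.
std::size_t parse_one_message(const char *data, std::size_t len);

void read_loop(int fd)
{
    std::vector<char> buf;
    char chunk[4096];
    for (;;) {
        ssize_t n = recv(fd, chunk, sizeof chunk, 0);
        if (n <= 0)
            break;                                   // connection closed or error
        buf.insert(buf.end(), chunk, chunk + n);     // append whatever arrived

        // Consume as many complete messages as the buffer currently holds.
        std::size_t used;
        while ((used = parse_one_message(buf.data(), buf.size())) > 0)
            buf.erase(buf.begin(), buf.begin() + used);
        // Any leftover bytes are the start of the next, still-incomplete message.
    }
}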
A TCP stream cannot be "cut" mid-message and then resumed.
If there is a short enough break in transmission then the O/S at each end will cope, retransmitting packets as necessary, but that is invisible to the end-user application - as far as it's concerned the stream is contiguous.
If the TCP connection does drop completely, both ends will have to re-open the connection. At that point, the transmitting system ought to start over at a new message boundary.
For something like this you would probably have a much easier time using a networking framework (like Netty), or a different IO mechanism entirely, like Iteratee IO with Play 2.0.

Boost Asio UDP retrieve last packet in socket buffer

I have been messing around with Boost Asio for some days now but I got stuck on this weird behavior. Please let me explain.
Computer A is sending continuous UDP packets every 500 ms to computer B; computer B wants to read A's packets at its own pace, but only wants A's last packet, obviously the most up-to-date one.
It has come to my attention that when I do a:
mSocket.receive_from(boost::asio::buffer(mBuffer), mEndPoint);
I can get OLD packets that were not processed (almost every time).
Does this make any sense? A friend of mine told me that sockets maintain a buffer of packets, and therefore if I read at a lower frequency than the sender this could happen.
So, the first question is: how is it possible to receive the last packet and discard the ones I missed?
Later I tried using the async example of the Boost documentation but found it did not do what I wanted.
http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/tutorial/tutdaytime6.html
From what I could tell, async_receive_from should call the method "handle_receive" when a packet arrives, and that works for the first packet after the service is run.
If I want to keep listening on the port, I should call async_receive_from again in the handler code, right?
BUT what I found is that I start an infinite loop: it doesn't wait for the next packet, it just enters "handle_receive" again and again.
I'm not writing a server application, and a lot of other things are going on (it's a game), so my second question is: do I have to use threads to use the async receive method properly? Is there some example with threads and async receive?
One option is to take advantage of the fact that when the local receive buffer for your UDP socket fills up, subsequently received packets will push older ones out of the buffer. You can set the local receive buffer size to be large enough for one packet, but not two. This will make the newest packet to arrive always cause the previous one to be discarded. When you then ask for the packet using receive_from, you'll get the latest (and only) one.
Here are the API docs for changing the receive buffer size with Boost:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/reference/basic_datagram_socket/receive_buffer_size.html
The example appears to be wrong in that it shows a TCP socket rather than a UDP socket, but changing that back to UDP should be easy (the trivially obvious change should be the right one).
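Setting that option with Asio looks roughly like this (using the mSocket from your snippet; the 1500-byte value is only a guess at "big enough for one datagram, not two" and would need tuning to your packet size, and the OS may round the value up):

// Shrink the OS receive buffer so roughly one datagram can be queued.
boost::asio::socket_base::receive_buffer_size option(1500);   // tune to your datagram size
mSocket.set_option(option);

// Read it back to see what the OS actually granted.
boost::asio::socket_base::receive_buffer_size actual;
mSocket.get_option(actual);
int granted = actual.value();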
With Windows (certainly XP, Vista, and 7), if you set your recv buffer size to zero you'll only receive datagrams if you have a recv pending when the datagram arrives. This MAY do what you want, but you'll have to sit and wait for the next one if you post your recv just after the last datagram arrives...
Since you're doing a game, it would be far better, IMHO, to use something built on UDP rather than UDP itself. Take a look at ENet, which supports reliable data over UDP and also unreliable 'sequenced' data over UDP. With unreliable sequenced data you only ever get the 'latest' data. Or something like RakNet might be useful to you, as it does a lot of games stuff and also includes something similar to ENet's sequenced data.
Something else you should bear in mind is that with raw UDP you may get those datagrams out of order and you may get them more than once. So you're likely gonna need your own sequence number in there anyway if you don't use something which sequences the data for you.
P2engine is a flexible and efficient platform for making p2p system development easier. Reliable UDP, Message Transport, Message Dispatcher, Fast and Safe Signal/Slot...
You're going about it the wrong way. The receiving end has a FIFO queue. Once the queue is full, newly arriving packets are discarded.
So what you need to do on the receiver is just to keep reading the packets as fast as possible and process them as they arrive.
Your receiving end should easily be able to handle receiving a packet every 500 ms. From what you describe, I'd say you've got a bug in your code.
It could be this: make sure in handle_receive that you only call async_receive_from if there is no error.
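In other words, re-arm the receive only on success. A minimal sketch, assuming these are members of a hypothetical Receiver class holding the mSocket, mBuffer and mEndPoint from the question:

void Receiver::start_receive()
{
    mSocket.async_receive_from(
        boost::asio::buffer(mBuffer), mEndPoint,
        boost::bind(&Receiver::handle_receive, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void Receiver::handle_receive(const boost::system::error_code &error, std::size_t bytes)
{
    if (!error) {
        // ... use mBuffer[0..bytes) ...
        start_receive();   // wait for the next datagram
    }
    // Re-arming unconditionally (e.g. when the receive fails immediately)
    // is what produces the tight handle_receive loop described above.
}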
I think I had the same problem. To solve it, I read bytes_readable and compare it with the packet size until I have received the last packet:
boost::asio::socket_base::bytes_readable command(true);  // query how many bytes are waiting
socket_server.io_control(command);
std::size_t bytes_readable = command.get();              // compare this with the packet size
Here is the documentation.

Why don't I get all the data with my non-blocking Perl socket?

I'm using Perl sockets in AIX 5.3, Perl version 5.8.2
I have a server written using Perl sockets. There is an option called "Blocking", which can be set to 0 or 1. When I use Blocking => 0, run the server, and the client sends data (5000 bytes), I receive only 2902 bytes in one call. When I use Blocking => 1, I receive all the bytes in one call.
Is this how sockets work or is it a bug?
This is a fundamental part of sockets - or rather, TCP, which is stream-oriented. (UDP is packet-oriented.)
You should never assume that you'll get back as much data as you ask for, nor that there isn't more data available. Basically, more data can come at any time while the connection is open. (The read/recv/whatever call will probably return a specific value to mean "the other end closed the connection".)
This means you have to design your protocol to handle this - if you're effectively trying to pass discrete messages from A to B, two common ways of doing this are:
Prefix each message with a length. The reader first reads the length, then keeps reading the data until it's read as much as it needs (see the sketch below).
Have some sort of message terminator/delimiter. This is trickier, as depending on what you're doing you may need to be aware of the possibility of reading the start of the next message while you're reading the first one. It also means "understanding" the data itself in the "reading" code, rather than just reading bytes arbitrarily. However, it does mean that the sender doesn't need to know how long the message is before starting to send.
(The other alternative is to have just one message for the whole connection - i.e. you read until the connection is closed.)
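The question is about Perl, but the length-prefix approach is language-independent. A sketch in C++ for concreteness, assuming an arbitrary 4-byte big-endian length header (not something your protocol necessarily uses):

#include <arpa/inet.h>    // ntohl
#include <sys/types.h>
#include <sys/socket.h>
#include <cstdint>
#include <cstring>
#include <string>

// Keep calling recv until exactly n bytes have arrived (or the peer closes).
static bool read_exact(int fd, char *out, std::size_t n)
{
    std::size_t got = 0;
    while (got < n) {
        ssize_t r = recv(fd, out + got, n - got, 0);
        if (r <= 0)
            return false;                          // closed or error
        got += static_cast<std::size_t>(r);
    }
    return true;
}

bool read_message(int fd, std::string &msg)
{
    char header[4];
    if (!read_exact(fd, header, sizeof header))    // first read the length prefix
        return false;
    uint32_t len_be;
    std::memcpy(&len_be, header, sizeof len_be);
    msg.resize(ntohl(len_be));                     // the prefix says how much follows
    return msg.empty() || read_exact(fd, &msg[0], msg.size());
}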
Blocking means that the socket waits until there is data before returning from a receive function. It's entirely possible there's a tiny wait at the end as well to try to fill the buffer before returning, or it could just be a timing issue. It's also entirely possible that the non-blocking implementation returns one packet at a time, no matter whether there's more than one or not. In short, no, it's not a bug, but the specific 'why' of it is the old cop-out: "it's implementation specific".