How to spread sending messages with defined periodicity over time (General Programming Question) - scheduled-tasks

I need to send ~50 messages in an endless loop. Each message has a defined send periodicity, e.g.:
[350, 800, 1000, 10000] milliseconds.
I need to calculate a "start sending offset" for each message which will ensure that messages are sent simultaneously as rarely as possible.
With no offsets, at the very beginning all messages will be sent together in their first iteration.
What would be a solution for this task? The programming language is not important.
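To make the task concrete, here is a minimal sketch (in Java; the names and the executor-based approach are illustrative, not from the question) of the naive scheduling where every offset is zero. The open question is how to choose offsetsMs so the ticks coincide as rarely as possible:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicSenders {
    public static void main(String[] args) {
        long[] periodsMs = {350, 800, 1000, 10000}; // per-message periodicity
        long[] offsetsMs = {0, 0, 0, 0};            // the offsets to be calculated

        ScheduledExecutorService scheduler =
                Executors.newScheduledThreadPool(periodsMs.length);
        for (int i = 0; i < periodsMs.length; i++) {
            final int id = i;
            // With all offsets at 0, every message fires together at t = 0.
            scheduler.scheduleAtFixedRate(
                    () -> System.out.println("send message " + id),
                    offsetsMs[id], periodsMs[id], TimeUnit.MILLISECONDS);
        }
    }
}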

Related

Jmeter JSR223 Pre processor and sampler

I have a JSR223 PreProcessor in a Concurrency Thread Group which creates data to send to a Kafka producer, and I have a JSR223 Sampler that uses Kafka client 2.7.0 to send the messages to Kafka.
The message sent to Kafka should be different each time: for example, it has device information which should be different, and events with a time (the current time). These are being generated without any issues, as I tested with a few (50) threads. The problem I am having is when I want to send more messages, like 6000 messages per second. How do I resolve this issue?
Below is my setup.
You're showing us a screenshot of the Concurrency Thread Group configured to start 6000 threads (virtual users) and hold them for 20 seconds.
It will result in 6000 messages per second only if your JSR223 PreProcessor and Sampler cumulative response time is exactly 1 second. If it is less, you will generate more messages per second, and vice versa.
For example:
if PreProcessor and Sampler execution time is 500ms - you will end up with 12000 messages per second
if PreProcessor and Sampler execution time is 2000ms - you will send 3000 messages per second
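The arithmetic behind those examples (throughput = threads / cumulative response time), as a runnable sanity check using the numbers from this question:

public class ThroughputEstimate {
    public static void main(String[] args) {
        int threads = 6000;
        // Cumulative PreProcessor + Sampler execution time, in seconds.
        double[] responseTimesSec = {0.5, 1.0, 2.0};
        for (double rt : responseTimesSec) {
            // throughput = threads / response time
            System.out.printf("%.0f ms -> %.0f messages per second%n",
                    rt * 1000, threads / rt);
        }
    }
}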
If you're sending fewer messages than you need, consider following the JMeter Best Practices; at the very least, disable all the Listeners and run your test in non-GUI mode. Still not enough? Increase the concurrency. Concurrency increased, resources lacking and still not enough? Go for Distributed Testing.
If you're sending more than 6000 messages per second, you can limit the JMeter sampler execution rate to the desired throughput using the Throughput Shaping Timer.
You can see your current throughput using e.g. the Transactions per Second listener.

Kafka producer future metadata in callback

In my application, when I send messages I use the metadata in the callback to save the offset of the record for future usage. However, sometimes metadata.offset() returns -1, which makes things hard later.
Why does this happen, and is there a way to get the offset without consuming the topic to find it?
Edit: I am on acks=0 currently; when I switch to acks=1 I don't have these errors anymore, but my performance drops drastically: from 100k messages in 10 seconds to 1 minute.
acks=0: If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
This is not exactly true, as out of 100k messages I got 95k with offsets, but I guess it's normal.
I will still need to find another solution to get the offset with acks=0.
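For reference, a minimal sketch of the callback pattern being discussed, using the plain Java producer API (the broker address, topic, and values are placeholders): with acks=1 or acks=all the broker assigns and returns a real offset, while with acks=0 metadata.offset() is always -1.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OffsetCallbackExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "1"); // with "0" the offset below is always -1

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            // Save the offset for future usage.
                            System.out.println("offset = " + metadata.offset());
                        }
                    });
        }
    }
}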

Kafka to Kafka mirroring with sampling

Any idea how to do Kafka-to-Kafka mirroring, but with sampling (for example, only 10% of the messages)?
You could use MirrorMakerMessageHandler (which is configured by message.handler parameter):
https://github.com/apache/kafka/blob/1.0/core/src/main/scala/kafka/tools/MirrorMaker.scala#L430
The handler itself would need to make a decision whether to forward a message. A simple implementation would be just a counter of messages received, and forwarding if 0 == counter % 10.
However, this handler is invoked for every message received, so it means that you'd be receiving all of the messages and throwing away 90% of them. A sketch of such a handler follows.
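A minimal sketch of the counter-based handler against the Kafka 1.0 MirrorMaker API (the class name and sampling rate are illustrative; check the trait's exact signature for your Kafka version), wired in via the message.handler parameter:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import kafka.consumer.BaseConsumerRecord;
import kafka.tools.MirrorMaker;
import org.apache.kafka.clients.producer.ProducerRecord;

// Forwards every 10th record to the mirror cluster and drops the rest.
public class SamplingMessageHandler implements MirrorMaker.MirrorMakerMessageHandler {
    private final AtomicLong counter = new AtomicLong();

    @Override
    public List<ProducerRecord<byte[], byte[]>> handle(BaseConsumerRecord record) {
        if (counter.getAndIncrement() % 10 == 0) {
            return Collections.singletonList(
                    new ProducerRecord<>(record.topic(), record.key(), record.value()));
        }
        return Collections.emptyList(); // still consumed, just not mirrored
    }
}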
The alternative is to modify the main loop, where the MirrorMaker consumer receives a message and forwards it to the producers (which send the message to the mirror cluster):
https://github.com/apache/kafka/blob/1.0/core/src/main/scala/kafka/tools/MirrorMaker.scala#L428
You would need to modify the consumer part to do one of the following:
forward only N-th (10th) message/offset
seek to only N-th message in log
I prefer the former idea, as in the case of multiple MirrorMaker instances in the same consumer group you would still get reasonable behaviour. The second choice would demand more work from you to handle partition reassignments.
Also, deciding which messages make up the 10% is non-trivial; I just assumed it's every 10th message received.

Using many consumers in SQS Queue

I know that it is possible to consume an SQS queue using multiple threads. I would like to guarantee that each message will be consumed once. I know that it is possible to change the visibility timeout of a message, e.g., to equal my processing time. If my process spends more time than the visibility timeout (e.g. on a slow connection), another thread can consume the same message.
What is the best approach to guarantee that a message will be processed once?
What is the best approach to guarantee that a message will be processed once?
You're asking for a guarantee - you won't get one. You can reduce probability of a message being processed more than once to a very small amount, but you won't get a guarantee.
I'll explain why, along with strategies for reducing duplication.
Where does duplication come from
When you put a message into SQS, SQS might actually receive that message more than once.
For example: a minor network hiccup while sending the message caused a transient error that was automatically retried; from the message sender's perspective it failed once and was successfully sent once, but SQS received both messages.
SQS can internally generate duplicates
Similar to the first example: there are a lot of computers handling messages under the covers, and SQS needs to make sure nothing gets lost. Messages are stored on multiple servers, and this can result in duplication.
For the most part, by taking advantage of the SQS message visibility timeout, the chances of duplication from these sources are already pretty small - like fraction-of-a-percent small.
If processing duplicates really isn't that bad (strive to make your message consumption idempotent!), I'd consider this good enough; reducing the chances of duplication further is complicated and potentially expensive...
What can your application do to reduce duplication further?
Ok, here we go down the rabbit hole... At a high level, you will want to assign unique IDs to your messages and check against an atomic cache of IDs that are in progress or completed before starting processing (a sketch of such a cache follows the steps below):
Make sure your messages have unique identifiers provided at insertion time
Without this, you'll have no way of telling duplicates apart.
Handle duplication at the 'end of the line' for messages.
If your message receiver needs to send messages off-box for further processing, then it can be another source of duplication (for reasons similar to the above)
You'll need somewhere to atomically store and check these unique ids (and flush them after some timeout). There are two important states: "InProgress" and "Completed"
InProgress entries should have a timeout based on how fast you need to recover in case of processing failure.
Completed entries should have a timeout based on how long you want your deduplication window
The simplest is probably a Guava cache, but it would only be good for a single processing app. If you have a lot of messages or distributed consumption, consider a database for this job (with a background process to sweep for expired entries)
Before processing the message, attempt to store the messageId in "InProgress". If it's already there, stop - you just handled a duplicate.
Check if the message is "Completed" (and stop if it's there)
Your thread now has an exclusive lock on that messageId - Process your message
Mark the messageId as "Completed" - As long as this messageId stays here, you won't process any duplicates for that messageId.
You likely can't afford infinite storage though.
Remove the messageId from "InProgress" (or just let it expire from here)
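A minimal single-process sketch of the "InProgress"/"Completed" bookkeeping above, using the Guava cache the answer mentions (the timeout values are placeholders to be tuned against your visibility timeout):

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class Deduplicator {
    private final Cache<String, Boolean> inProgress = CacheBuilder.newBuilder()
            .expireAfterWrite(5, TimeUnit.MINUTES)  // processing-failure recovery window
            .build();
    private final Cache<String, Boolean> completed = CacheBuilder.newBuilder()
            .expireAfterWrite(24, TimeUnit.HOURS)   // deduplication window
            .build();

    /** Returns true if this thread won the exclusive right to process the message. */
    public boolean tryBegin(String messageId) {
        if (completed.getIfPresent(messageId) != null) {
            return false; // already processed - duplicate
        }
        // asMap() exposes an atomic ConcurrentMap view, so putIfAbsent is the lock.
        return inProgress.asMap().putIfAbsent(messageId, Boolean.TRUE) == null;
    }

    public void markCompleted(String messageId) {
        completed.put(messageId, Boolean.TRUE);
        inProgress.invalidate(messageId);
    }
}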
Some notes
Keep in mind that the chances of duplicates without all of that are already pretty low. Depending on how much time and money deduplication of messages is worth to you, feel free to skip or modify any of the steps
For example, you could leave out "InProgress", but that opens up the small chance of two threads working on a duplicated message at the same time (the second one starting before the first has "Completed" it)
Your deduplication window is as long as you can keep messageIds in "Completed". Since you likely can't afford infinite storage, make this last at least 2x your SQS message visibility timeout; the chances of duplication after that are reduced (on top of the already very low chances, but still not guaranteed).
Even with all this, there is still a chance of duplication - all the precautions and SQS message visibility timeouts help reduce this chance to very small, but the chance is still there:
Your app can crash/hang/do a very long GC right after processing the message, but before the messageId is "Completed" (maybe you're using a database for this storage and the connection to it is down)
In this case, "InProgress" will eventually expire, and another thread could process this message (either after the SQS visibility timeout also expires, or because SQS had a duplicate in it).
Store the message, or a reference to the message, in a database with a unique constraint on the Message ID, when you receive it. If the ID exists in the table, you've already received it, and the database will not allow you to insert it again -- because of the unique constraint.
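A sketch of that idea with plain JDBC (the table name and schema are assumptions); the unique constraint turns the insert into an atomic "first receive wins" check:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MessageRegistry {
    // Assumes: CREATE TABLE processed_messages (message_id VARCHAR(128) PRIMARY KEY)
    public static boolean firstTimeSeen(Connection conn, String messageId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO processed_messages (message_id) VALUES (?)")) {
            ps.setString(1, messageId);
            ps.executeUpdate();
            return true;  // insert succeeded: first time this ID is seen
        } catch (SQLException e) {
            // A real implementation should inspect the SQLState (23xxx) to
            // distinguish a unique-constraint violation from other failures.
            return false; // duplicate: the ID already exists in the table
        }
    }
}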
The AWS SQS API doesn't automatically "consume" the message when you read it via the API; the developer needs to make the call to delete the message themselves.
SQS does have a feature called "redrive policy" as part of the "Dead Letter Queue" settings. You just set the maximum receive count to 1. If the consuming process crashes, a subsequent read of the same message will put the message into the dead letter queue.
The SQS queue visibility timeout can be set up to 12 hours; that should be enough unless you have a special need, in which case you'd have to implement a process that stores the message receipt handle in a database to allow for later inspection.
You can use setVisibilityTimeout() for both messages and batches, in order to extend the visibility time until the thread has completed processing the message.
This could be done by using a ScheduledExecutorService, scheduling a runnable event after half the initial visibility time and repeating it with a period of half the visibility time. The code snippet below creates and executes the VisibilityTimeExtender every visibilityTime/2, which guarantees that while the message is being processed, its visibility is extended by visibilityTime/2 before it can expire.
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
ScheduledFuture<?> futureEvent = scheduler.scheduleAtFixedRate(new VisibilityTimeExtender(..), visibilityTime/2, visibilityTime/2, TimeUnit.SECONDS);
VisibilityTimeExtender must implement Runnable, and is where you update the new visibility time.
When the thread is done processing the message, you can delete it from the queue, and call futureEvent.cancel(true) to stop the scheduled event.
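The answer doesn't show VisibilityTimeExtender itself; here is a sketch of what it might look like with the AWS SDK for Java v1, using the ChangeMessageVisibility API call. The constructor arguments are hypothetical, standing in for the ".." elided above:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.Message;

// Pushes the message's visibility timeout out by another
// visibilityTime seconds each time the scheduler fires it.
public class VisibilityTimeExtender implements Runnable {
    private final AmazonSQS sqs;
    private final String queueUrl;
    private final Message message;
    private final int visibilityTime; // seconds

    // Hypothetical constructor arguments; the original answer elides them.
    public VisibilityTimeExtender(AmazonSQS sqs, String queueUrl,
                                  Message message, int visibilityTime) {
        this.sqs = sqs;
        this.queueUrl = queueUrl;
        this.message = message;
        this.visibilityTime = visibilityTime;
    }

    @Override
    public void run() {
        // Resets the clock: the message stays invisible for another visibilityTime seconds.
        sqs.changeMessageVisibility(queueUrl, message.getReceiptHandle(), visibilityTime);
    }
}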

What's the best way to subsample a ZMQ PUB SUB connection?

I have a ZMQ_PUB socket sending messages out at ~50Hz. One destination needs to react to each message, so it has a standard ZMQ_SUB socket with a while(true) loop checking for new messages. A second destination should only react once a second to the "most recent" message. That is, my second destination needs to subsample.
For the second destination, I believe I'd want to have a time-based loop that is called at my desired rate (1Hz) and recv() the latest message, dropping the rest. I believe this is done via a ZMQ_HWM on the subscriber. Is there another option that needs to be set somewhere?
Do I need to worry about the different subscribers having different HWMs? Will the publisher become angry? It's a shame ZMQ_RATE only applies to multicast sockets.
Is there a best way to accomplish what I'm attempting?
zmq v3.2.4
The HighWaterMark will not be a fantastic solution for your problem. Setting it on the subscriber to, let's say, 10 and reading 1 message per second will just give you the old messages first, slowly, and throw away all the new ones once its limit is reached.
You could use a topic on your publisher that lets you select only every 50th message, for example making the topic messageCount % 50 and subscribing to 0.
Alternatively, maybe you shouldn't use zmq's pub/sub, but instead build your own lookalike with router/dealer that allows subscribing to sampled messages.
Lastly, you could also just send them all. 50 messages per second is hardly anything in zmq (as long as they aren't heavy on data, like megabytes), and the subscriber can then use only every 50th message.
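As a sketch of the topic-based idea above, using JeroMQ (a Java ZeroMQ implementation; the question doesn't name a language, so the library and endpoint here are assumptions): the publisher prefixes each message with a counter % 50 topic frame, and the subsampling subscriber subscribes only to "0", so roughly one message in 50 gets through:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class SampledPubSub {
    public static void main(String[] args) throws InterruptedException {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
            pub.bind("inproc://feed");

            // Subsampling subscriber: the "0" prefix only matches topic "0",
            // i.e. messages where counter % 50 == 0.
            ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
            sub.connect("inproc://feed");
            sub.subscribe("0".getBytes(ZMQ.CHARSET));
            Thread.sleep(100); // let the subscription propagate (slow-joiner race)

            for (int count = 0; count < 200; count++) {
                pub.sendMore(String.valueOf(count % 50)); // topic frame
                pub.send("payload " + count);             // data frame
            }

            // Only messages 0, 50, 100 and 150 arrive; the rest are filtered.
            for (int i = 0; i < 4; i++) {
                String topic = sub.recvStr();
                String payload = sub.recvStr();
                System.out.println(topic + " -> " + payload);
            }
        }
    }
}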