Can we add a message for future processing in MSMQ?

I am trying to create an MSMQ solution, and I want certain messages to be processed only after 6 PM. Is there a way in MSMQ to have a message processed in the future?

A queue is a first-in, first-out data structure. If your application needs to process some messages after 6 PM, move the messages to a different queue that is only processed after 6 PM.

You could have the application Peek at the message first to read the property that marks it as a post-6 PM message and act accordingly: if it's after 6 PM, Receive the message; if it isn't, Peek the next one.
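As an illustration of that peek-then-receive loop: MSMQ's native API is .NET, so the MsmqQueue wrapper and every name below are hypothetical, sketched only to show the control flow.

import java.time.LocalTime;

public class DeferredReceiver {
    // Hypothetical wrapper mirroring System.Messaging's cursor-based Peek/Receive.
    interface MsmqQueue {
        QueueMessage peekCurrent();    // inspect the message under the cursor without removing it
        QueueMessage receiveCurrent(); // remove and return the message under the cursor;
                                       // the cursor then points at the next message
        boolean moveNext();            // advance the cursor; false when the queue is exhausted
    }

    interface QueueMessage { boolean isPostSixPm(); }

    static void drain(MsmqQueue queue) {
        boolean more = true;
        while (more) {
            QueueMessage m = queue.peekCurrent();
            if (m == null) return;
            if (!m.isPostSixPm() || LocalTime.now().isAfter(LocalTime.of(18, 0))) {
                process(queue.receiveCurrent()); // safe to handle now
            } else {
                more = queue.moveNext(); // leave the post-6 PM message in place, inspect the next one
            }
        }
    }

    static void process(QueueMessage m) { /* application-specific handling */ }
}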

Related

Reconsume Kafka Message that failed during processing due to DB error

I am new to Kafka and would like to seek advice on the best practice for handling such a scenario.
Scenario:
I have a Spring Boot application with a consumer method that listens for messages via the @KafkaListener annotation. When a message arrives, the consumer method processes it, which simply means performing database updates to different tables via JdbcTemplate.
If the updates to the tables are successful, I manually commit the message by calling the acknowledge() method. If the database update fails, instead of calling acknowledge(), I call nack() with a given duration (e.g. 10 seconds) so that the message reappears to be consumed again.
Things to note
I am not concerned with the ordering of the messages. Whatever event comes, I just have to consume and process it.
I am only given a topic (no retryable topic and no dead-letter topic).
Here is the problem
If I do the above, my consumer becomes inconsistent. Say I call the nack() method with a duration of 1 minute, meaning the same message will reappear after 1 minute.
Within this 1 minute there could be "x" number of incoming messages to be consumed and processed. The observation was that none of these messages get consumed and processed.
What I want to know
Hence, I hope someone can advise me on what I am doing wrong and on the best practice/way to handle such scenarios.
Thanks!
Records are always received in order; when consuming from a single topic there is no way to defer the current record until later while continuing to process other records after it.
Kafka topics are a linear log and not a queue.
You would need to send it to another topic; the @RetryableTopic (non-blocking retries) feature is specifically designed for this use case.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retry-topic
You could also increase the container concurrency so at least you could continue to process records from other partitions.
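A minimal sketch of the @RetryableTopic approach, assuming spring-kafka 2.7+ and manual ack mode; the "orders" topic name and the update logic are illustrative assumptions:

import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // Failed records are forwarded to auto-created retry topics and redelivered
    // after the backoff, so the main topic keeps flowing in the meantime.
    @RetryableTopic(attempts = "4", backoff = @Backoff(delay = 10_000))
    @KafkaListener(topics = "orders")
    public void listen(String message, Acknowledgment ack) {
        updateTables(message);   // throwing here routes the record to the next retry topic
        ack.acknowledge();       // manual commit, as in the question
    }

    @DltHandler
    public void handleDlt(String message) {
        // All retries exhausted; log or park the record for inspection.
    }

    private void updateTables(String message) { /* JdbcTemplate updates */ }
}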

Message order issue in single consumer connected to ActiveMQ Artemis queue

Is there any possibility of a message ordering issue with a single consumer receiving from one queue with multiple producers?
producer1 publishes message m1 at 2021-06-27 02:57:44.513 and producer2 publishes message m2 at 2021-06-27 02:57:44.514 on the same queue, worker_consumer_queue. Client code connected to the queue, configured with a single consumer, should receive the messages in order, m1 first and then m2, correct? Sometimes the messages are received in the wrong order. The version is ActiveMQ Artemis 2.17.0.
Even though I mentioned multiple producers, the messages are published one after another from the same thread, using the property blockOnDurableSend=false.
I create and close a producer on each message publish, all on the same JVM. My assumption is that the order of published messages in the queue holds, whether from the same thread or from different threads, even with async sends. The timestamp is from getJMSTimestamp(). Does an async publish also maintain order in whatever internal queue it uses?
If you use blockOnDurableSend=false you're basically saying you don't strictly care about the order, or even whether the message makes it to the broker at all. Using blockOnDurableSend=false basically means "fire and forget."
Furthermore, the JMSTimestamp is not when the message is actually sent, as noted in the javax.jms.Message JavaDoc:
The JMSTimestamp header field contains the time a message was handed off to a provider to be sent. It is not the time the message was actually transmitted, because the actual send may occur later due to transactions or other client-side queueing of messages.
With more than one producer there is no guarantee that the messages will be processed in order.
Multiple producers, ActiveMQ Artemis, and one consumer form a distributed system, and the lack of a global clock is a significant characteristic of distributed systems.
Even if the producers and ActiveMQ Artemis were on the same machine and used the same clock, ActiveMQ Artemis could not receive the messages in the same order the producers created and sent them, because the time to create a message and the time to send a message include variable latencies.
The easiest solution is to trust the order of the messages as received by ActiveMQ Artemis, adding a timestamp with an interceptor or enabling the ingress timestamp; see ARTEMIS-2919 for further details.
If the easiest solution doesn't work, the distributed solution is to implement a total-ordering algorithm for distributed systems, such as Lamport timestamps.
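To make the blockOnDurableSend point concrete, here is a minimal JMS producer sketch; the broker URL and queue name are assumptions, and it requires the Artemis JMS client on the classpath:

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Queue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class OrderedProducer {
    public static void main(String[] args) {
        // blockOnDurableSend=true makes each send() wait for the broker's ack,
        // so messages sent from this thread reach the broker in send order.
        ConnectionFactory cf = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?blockOnDurableSend=true");
        try (JMSContext ctx = cf.createContext()) {
            Queue queue = ctx.createQueue("worker_consumer_queue");
            JMSProducer producer = ctx.createProducer();
            producer.send(queue, "m1");
            producer.send(queue, "m2"); // arrives at the broker after m1
        }
    }
}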
Well, it seems this is not a bug within Artemis; when it comes to a millisecond difference, it is more likely network lag or something similar.
So as a workaround, the idea is that you could create an algorithm in which a received message waits ~100 ms before it is actually worked through (whatever you want to do with the message), and check whether another message was received afterwards but sent before it. Basically, keep your own receive queue with a delay.
If there is a message that was sent before it, you can simply move that one up in your own ordering. You could also think about rejecting the first message back to your bus; depending on your settings on queues and topics, it would be possible to receive it again afterwards.
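A rough sketch of that delay-and-reorder idea; every name here is hypothetical, 100 ms is the arbitrary hold time from above, and it assumes Java 16+ for records:

import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class ReorderBuffer {
    record Msg(long timestamp, String body) {}

    private final PriorityQueue<Msg> buffer =
            new PriorityQueue<>(Comparator.comparingLong(Msg::timestamp));
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Hold each message ~100 ms so a slightly older message arriving late
    // can still be processed first.
    synchronized void onReceive(Msg m) {
        buffer.add(m);
        scheduler.schedule(this::releaseOldest, 100, TimeUnit.MILLISECONDS);
    }

    private synchronized void releaseOldest() {
        process(buffer.poll()); // smallest timestamp first; one release per add
    }

    private void process(Msg m) { /* application-specific handling */ }
}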

Is batch message sending possible with Sink.actorRefWithAck?

I'm using Akka Streams and I came across Sink.actorRefWithAck. I understand that it sends a message and only tries pulling in another element from the stream when an acknowledgement for the previous message has been received. Is there a way to batch-process messages with this sink? Example: pull five messages and only pull the next five once the first five have been acknowledged. I've thought about something like
source.grouped(5).to(Sink.actorRefWithAck(...))
But that would require the receiver to change to work with sequences, which let's assume is out of the question.
No, that is not possible with Sink.actorRefWithAck() while still having individual messages, rather than the entire batch, queued in the actor mailbox.
One idea to queue up messages in the actor's mailbox more eagerly would be to use source.mapAsync(n)(ask-actor).to(Sink.ignore). This would send n messages to the actor, and then, as soon as the first one gets a response from the actor, it would pull and enqueue a new element.
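A sketch of that alternative using Akka's Java DSL; the Worker actor and the integer elements are illustrative assumptions:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.pattern.Patterns;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import java.time.Duration;

public class MapAsyncExample {
    // Minimal actor that acknowledges every message it receives.
    static class Worker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchAny(msg -> getSender().tell("ack", getSelf()))
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef worker = system.actorOf(Props.create(Worker.class));
        Source.range(1, 100)
              // up to 5 asks in flight; a new element is pulled as soon as any one completes
              .mapAsync(5, msg -> Patterns.ask(worker, msg, Duration.ofSeconds(5)))
              .runWith(Sink.ignore(), system);
    }
}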

Kafka message loss because of later message

So I have an annoying offset-committing case with my Kafka consumers.
I use 'kafka-node' for my project.
I created a topic.
Created 2 consumers within a consumer group, across 2 servers.
Auto-commit is set to false.
For every message my consumers get, they start an async process which can take between 1 and 20 seconds; when the process is done, the consumer commits the offset.
My problem is:
There is a scenario in which:
Consumer 1 gets a message and takes 20 seconds to process it.
In the middle of that processing, it gets another message which takes 1 second to process.
It finishes processing the second message, commits the offset, then crashes right away,
causing the first message's processing to fail.
If I rerun the consumer, it does not read the first message again, because the second message already committed an offset greater than the first's.
How can I avoid this?
kafkaConsumer.on('message', async (message) => {
    await somethingAsync(message); // the 1-20 second job (placeholder name)
    kafkaConsumer.commit(() => {});
});
You essentially want to throttle messages and handle concurrency by utilizing async.queue.
Create an async.queue with a message processor and a concurrency of one (the message processor itself is wrapped with setImmediate so it will not freeze up the event loop).
Set the queue.drain to resume the consumer.
The handler for the consumer's message event pauses the consumer and pushes the message to the queue.
The kafka-node README details this here.
An example implementation, similar to your problem, can be found here.
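kafka-node is a Node.js client, but the same process-one-record-then-commit-its-offset pattern is easy to see with the Java Kafka client. This sketch is purely illustrative, not the kafka-node API; the topic, group, and process() are assumptions:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class SerialConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "serial-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    process(record); // run the 1-20 s job to completion before committing
                    // Commit only this record's offset, so a crash can never skip
                    // a record whose processing hasn't finished.
                    consumer.commitSync(Map.of(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }
    }

    static void process(ConsumerRecord<String, String> r) { /* database work, etc. */ }
}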

How to drop a queue message when there is no consumer? (ActiveMQ)

Using ActiveMQ.
Scenario:
The server will send many messages to the client through a queue.
However, I need to drop the messages in the queue if there is no consumer (client).
Thanks in advance!
You can use non-persistent messaging, and the message is dropped if there are no active consumers.
Another alternative could be to use message expiry, so the message expires after X period if it is not consumed from the queue.
Set a JMSExpiration on each message for some duration (30 seconds? 5 minutes?), and then any message that's not consumed within that time (whether because there's no consumer or because the consumer is running behind) will be sent to the DLQ. Or if you don't want it in the DLQ, configure the dead letter strategy to set processExpired=false or use the Discarding DLQ Plugin, both documented at http://activemq.apache.org/message-redelivery-and-dlq-handling.html.
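Both options boil down to a producer-side setting. A minimal sketch against ActiveMQ 5.x; the broker URL and queue name are assumptions:

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExpiringProducer {
    public static void main(String[] args) throws Exception {
        Connection conn = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("example.queue");
        MessageProducer producer = session.createProducer(queue);

        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // option 1: non-persistent messaging
        producer.setTimeToLive(30_000); // option 2: JMSExpiration = send time + 30 s
        producer.send(session.createTextMessage("hello"));
        conn.close();
    }
}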