Use ActiveMQ:
Scenario:
The server will send many messages to the client through a queue.
However, I need to drop the messages in the queue if there is no consumer (client).
Thanks in advance!
You can use non-persistent messaging, and the message is dropped if there are no active consumers.
Another alternative could be to use message expiry, so the messages expire after some period X if they are not consumed from the queue.
Set a JMSExpiration on each message for some duration (30 seconds? 5 minutes?), and then any message that's not consumed after that amount of time (whether because there's no consumer or because the consumer's running behind) will be sent to the DLQ. Or if you don't want it in the DLQ, then configure the dead letter strategy to set processExpired=false or use a Discarding DLQ Plugin, both documented at http://activemq.apache.org/message-redelivery-and-dlq-handling.html.
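For reference, a minimal JMS sketch combining both suggestions: non-persistent delivery plus a 30-second time-to-live, which the provider uses to stamp JMSExpiration. The broker URL and queue name are placeholders.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExpiringProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("example.queue"));

            // Non-persistent: the message is not stored durably by the broker.
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            // Time-to-live: JMSExpiration = send time + 30s; unconsumed messages are
            // discarded (or dead-lettered, depending on the DLQ strategy) after that.
            producer.setTimeToLive(30_000);

            producer.send(session.createTextMessage("hello"));
        } finally {
            connection.close();
        }
    }
}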
Related
Any possibility of message ordering issues with a single queue consumer and multiple producers?
producer1 publishes message m1 at 2021-06-27 02:57:44.513 and producer2 publishes message m2 at 2021-06-27 02:57:44.514 on the same queue, worker_consumer_queue. Client code connected to the queue, configured as a single consumer, should receive the messages in order, m1 first and then m2, correct? Sometimes the messages are received in the wrong order. The version is ActiveMQ Artemis 2.17.0.
Even though I mentioned multiple producers, the messages are published one after another from the same thread, using the property blockOnDurableSend=false.
I create and close a producer on each message publish, on the same JVM. My assumption is that the order of published messages in the queue is preserved, whether from the same thread or from different threads, even with async sends. The timestamp is getJMSTimestamp(). Does async publishing also maintain order in any internal queue?
If you use blockOnDurableSend=false you're saying you don't strictly care about the order, or even whether the message makes it to the broker at all; it basically means "fire and forget."
Furthermore, the JMSTimestamp is not when the message is actually sent, as noted in the javax.jms.Message JavaDoc:
The JMSTimestamp header field contains the time a message was handed off to a provider to be sent. It is not the time the message was actually transmitted, because the actual send may occur later due to transactions or other client-side queueing of messages.
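If ordering and delivery guarantees matter, a blocking durable send is the safer choice. A small sketch, assuming the ActiveMQ Artemis JMS client (the URL is a placeholder); blockOnDurableSend can be set on the connection URL or programmatically:

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// blockOnDurableSend=true (the default) makes each durable send wait for the
// broker's acknowledgement, so send failures are reported back to the producer.
ActiveMQConnectionFactory cf =
        new ActiveMQConnectionFactory("tcp://localhost:61616?blockOnDurableSend=true");

// The same setting can also be toggled programmatically:
cf.setBlockOnDurableSend(true);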
With more than one producer there is no guarantee that the messages will be processed in order.
Multiple producers, ActiveMQ Artemis, and one consumer form a distributed system, and the lack of a global clock is a significant characteristic of distributed systems.
Even if the producers and ActiveMQ Artemis were on the same machine and used the same clock, ActiveMQ Artemis could not receive the messages in the same order the producers create and send them, because the time to create a message and the time to send it include variable latencies.
The easiest solution is to trust the order of the messages as received by ActiveMQ Artemis, adding a timestamp with an interceptor or enabling the ingress timestamp; see ARTEMIS-2919 for further details.
If the easiest solution doesn't work, the distributed solution is to implement a total-ordering algorithm for distributed systems, such as Lamport timestamps (a minimal sketch follows).
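For completeness, a minimal sketch of a Lamport clock (not Artemis-specific; how the counter is attached to a message, e.g. as a message property, is up to you): each producer keeps a logical counter, sends it with every message, and the consumer orders messages by that counter instead of by wall-clock time.

import java.util.concurrent.atomic.AtomicLong;

// Minimal Lamport clock: a per-process logical counter, incremented on every
// send and fast-forwarded whenever a larger remote value is observed.
public final class LamportClock {
    private final AtomicLong time = new AtomicLong();

    // Call before sending; attach the returned value to the outgoing message.
    public long tick() {
        return time.incrementAndGet();
    }

    // Call when receiving a message that carries the sender's timestamp.
    public long onReceive(long remoteTimestamp) {
        return time.updateAndGet(local -> Math.max(local, remoteTimestamp) + 1);
    }
}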
Well, it seems it is not a bug within Artemis; when it comes to a millisecond difference it is more likely network lag or something like that.
So as a workaround I came up with this idea: you could create an algorithm in which a received message waits for ~100 ms before it is actually processed (whatever you want to do with the message), and in the meantime check whether another message was received afterwards but sent before it. So basically keep your own receiver queue with a delay.
If there is a message that was sent earlier, you can simply move it up in your own algorithm. You could also consider rejecting the first message back to your bus; depending on your queue and topic settings you would be able to receive it again afterwards.
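A rough sketch of that idea (all names and the 100 ms hold time are illustrative): buffer received messages briefly and release them in timestamp order, so a slightly older message that arrives late is still processed first.

import java.util.Comparator;
import java.util.PriorityQueue;

// Hold-and-reorder buffer: offer() each received message with its send
// timestamp (e.g. JMSTimestamp); poll() releases the oldest message only
// after it has been buffered for at least holdMillis.
final class ReorderBuffer<T> {
    private static final class Entry<T> {
        final long timestamp;
        final long receivedAt;
        final T payload;

        Entry(long timestamp, long receivedAt, T payload) {
            this.timestamp = timestamp;
            this.receivedAt = receivedAt;
            this.payload = payload;
        }
    }

    private final long holdMillis;
    private final PriorityQueue<Entry<T>> buffer =
            new PriorityQueue<>(Comparator.comparingLong((Entry<T> e) -> e.timestamp));

    ReorderBuffer(long holdMillis) {
        this.holdMillis = holdMillis;
    }

    synchronized void offer(long timestamp, T payload) {
        buffer.add(new Entry<>(timestamp, System.currentTimeMillis(), payload));
    }

    // Returns the next payload in timestamp order once its hold time has
    // elapsed, or null if nothing is ready yet.
    synchronized T poll() {
        Entry<T> head = buffer.peek();
        if (head != null && System.currentTimeMillis() - head.receivedAt >= holdMillis) {
            return buffer.poll().payload;
        }
        return null;
    }
}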
I have a scenario where I want to send a message to an alert service that would process the message and send it to HipChat.
But I want the message to be active for only a minute. If HipChat is down (hypothetically), then the message should not be sent to HipChat.
I am using Kafka, so one of the services sends the message to Kafka, and the message is then consumed by the alert service (a Kafka consumer that polls the topic). While processing, it checks that the difference between the current time and the message's timestamp is not greater than one minute; if so, it sends the message to HipChat asynchronously.
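A rough sketch of that consumer-side check (bootstrap server, group id, topic name, and the HipChat call are all placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AlertConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");      // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "alert-service");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("alerts"));                                 // placeholder topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    long ageMs = System.currentTimeMillis() - record.timestamp();
                    if (ageMs <= 60_000) {
                        // forward to HipChat asynchronously (application-specific, omitted)
                        System.out.println("forwarding alert: " + record.value());
                    }
                    // else: older than one minute, drop it
                }
            }
        }
    }
}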
Enhancement:
I want a way to construct a self-destructing message so that it automatically disappears after one minute. Is there a way to do this with Kafka? Or is there a better alternative to Kafka (Flink/SQS)? If yes, how?
You can make use of the Kafka topic configurations retention.ms and delete.retention.ms as described in the Topic Level Configs.
The retention.ms should be set to 1 minute (60000 ms) and the delete.retention.ms should be set to 0 in your case. That way, the messages will stay in the Kafka topic for one minute before they get deleted. However, that also means that you might lose messages if your consumer takes more than one minute to consume all messages (especially when reading a topic from the beginning).
Details on those configurations are:
delete.retention.ms: The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan).
retention.ms: This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied.
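To make that concrete, here is a sketch that creates such a topic with the Kafka AdminClient (bootstrap server, topic name, partition and replication counts are assumptions):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateShortLivedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("alerts", 1, (short) 1)                // placeholder sizing
                    .configs(Map.of(
                            TopicConfig.RETENTION_MS_CONFIG, "60000",            // keep messages ~1 minute
                            TopicConfig.DELETE_RETENTION_MS_CONFIG, "0"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}

Keep in mind that retention is enforced per log segment and on a periodic check, so messages are not guaranteed to vanish after exactly one minute; the consumer-side timestamp check remains the reliable guard.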
If ActiveMQ Artemis is configured with a redelivery-delay > 0 and a JMS listener uses ctx.rollback() or ctx.recover() then the broker will redeliver the message as expected. But if a producer pushes a message to the queue during a redelivery then the receiver gets unordered messages.
For example:
Queue: 1 -> message 1 is redelivered as expected
Push during the redelivery phase
Queue: 2,3 -> the receiver gets 2,3,1
With a redelivery-delay of 0 everything is OK, but the frequency of redeliveries on the consumer side is too high. My expectation is that every delivery to the consumer should be stopped until the unacknowledged message is purged from the queue or acknowledged. We are using a queue for connections to single devices. Every device has its own I/O queue with a single consumer. The word "queue" suggests strict ordering to me. It would be nice to make this behavior configurable, like "strict_redelivery_order".
What you're seeing is the expected behavior. If you use a redelivery-delay > 0 then delivery order will be broken. If you use a redelivery-delay of 0 then delivery order will not be broken. Therefore, if you want to maintain strict order then use a redelivery-delay of 0.
If the broker blocked delivery of all other messages on the queue during a redelivery delay that would completely destroy message throughput performance. What if the redelivery delay were 60 seconds or 10 minutes? The queue would be blocked that entire time. This would not be tenable for an enterprise message broker serving hundreds or perhaps thousands of clients each of whom may regularly be triggering redeliveries on shared queues. This behavior is not configurable.
If you absolutely must maintain message order even for messages that cannot be immediately consumed and a redelivery-delay of 0 causes redeliveries that are too fast then I see a few potential options (in no particular order):
Configure a dead-letter address and set a max-delivery-attempts to a suitable value so after a few redeliveries the problematic message can be cleared from the queue.
Implement a delay of your own in your client. This could be as simple as catching any exception and using a Thread.sleep() before calling ctx.rollback(), as sketched below.
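A minimal sketch of the second option, assuming a JMS 2.0 JMSContext with a transacted session against Artemis (URL, queue name, delay, and the processing step are placeholders):

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class OrderedConsumer {
    public static void main(String[] args) {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder
        try (JMSContext ctx = cf.createContext(JMSContext.SESSION_TRANSACTED)) {
            JMSConsumer consumer = ctx.createConsumer(ctx.createQueue("device.queue")); // placeholder
            while (true) {
                Message message = consumer.receive(1000);
                if (message == null) {
                    continue;
                }
                try {
                    // process(message) would go here (application-specific)
                    ctx.commit();
                } catch (Exception e) {
                    sleepQuietly(5_000);  // client-side delay instead of a broker redelivery-delay
                    ctx.rollback();       // broker redelivers immediately, so queue order is preserved
                }
            }
        }
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}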
I am trying to deliver a JMS message after some time passes. My initial idea was to use an expiry queue and to put the messages in a queue that doesn't have any consumers. So I have 3 default queues:
WaitQueue - (the expiry queue for this one is set to SendQueue)
SendQueue - this one has consumers that process the messages (by default this one has ExpiryQueue as its expiry queue)
ExpiryQueue - the default JBoss queue for all messages that genuinely expired (not intentionally)
I insert a message into the WaitQueue with my intended delay as the TimeToLive. After the time expires I expect to see the messages in the SendQueue (and the consumers to process them); however, it stays empty and the messages go directly to the ExpiryQueue. Any ideas what is wrong?
The statistics for the SendQueue show that "Received messages" increases, but the current messages count stays at 0, so they arrive but are forwarded immediately to the final ExpiryQueue.
Instead of using the expiry-queue approach, which is more resource intensive, you could consider using a delivery delay at the message level.
In the case of HornetQ, you can set the property _HQ_SCHED_DELIVERY:
https://docs.jboss.org/hornetq/2.3.0.Final/docs/user-manual/html/scheduled-messages.html
TextMessage message = session.createTextMessage("This is a scheduled message which will be delivered in 5 sec.");
message.setLongProperty("_HQ_SCHED_DELIVERY", System.currentTimeMillis() + 5000);
producer.send(message);
Since JMS 2.0 (Java EE 7) a delivery delay can also be set directly on the MessageProducer. See https://github.com/jboss/jboss-jms-api_spec/blob/master/src/main/java/javax/jms/MessageProducer.java#L285
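For example, reusing the session and producer from the snippet above, the JMS 2.0 equivalent of the _HQ_SCHED_DELIVERY property is a delivery delay on the producer:

// JMS 2.0: every message sent by this producer is held by the broker for 5 seconds.
producer.setDeliveryDelay(5000);
producer.send(session.createTextMessage("This message will be delivered in 5 sec."));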
I have an activemq instance set up with tomcat for background message processing. It is set up to retry failed messages every 10 minutes for a retry period.
Now some dirty data has entered the system because of which the messages are failing. This is ok and can be fixed in the future. However, the problem is that none of the new correct incoming messages are getting processed and the error messages are constantly getting retried.
Any tips on what might be the issue, or how the priority is set? I haven't controlled the priority of the messages manually.
Thanks for your help.
-Pulkit
EDIT: I was able to solve the problem. The issue was that by the time all the dirty messages were handled, it was time for them to be retried, so none of the new messages were getting consumed from the queue.
A dirty message was basically a message that was throwing an exception due to some dirty data in the system. The redelivery settings were to redeliver every 10 minutes for 1 day:
maximumRedeliveries=144
redeliveryDelayInMillis=600000
acknowledge.mode=transacted
ActiveMQ determines redelivery for a consumer based on the configuration of the RedeliveryPolicy that's assigned to the ActiveMQConnectionFactory. Local redelivery halts new message dispatch until the rolled-back transaction's messages are successfully redelivered, so if you have a message that's causing you some sort of error, such that you are throwing an exception or rolling back the transaction, then it will get redelivered up to the max redeliveries setting in the policy. Since your question doesn't provide much information on your setup and what you consider an error message, I can't really direct you to a solution.
You should look at the settings available in the RedeliveryPolicy. Also, you can configure redelivery to not block new message dispatch using the setNonBlockingRedelivery method, as sketched below.
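A sketch of both suggestions for ActiveMQ 5.x (the URL is a placeholder and the values mirror the settings quoted above; tune them to your retry requirements):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");   // placeholder

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(144);          // matches maximumRedeliveries=144
        policy.setInitialRedeliveryDelay(600_000);   // 10 minutes
        policy.setRedeliveryDelay(600_000);          // 10 minutes between subsequent attempts

        // Redeliver in the background so new messages keep flowing to the consumer.
        factory.setNonBlockingRedelivery(true);
    }
}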