We are using the broker ActiveMQ Artemis 2.26.0, and I'm trying to set up a redelivery mechanism on a queue.
I would like for some messages to be retried for 72h maximum with progressive back-off. After 72h the message should be sent to a DLQ.
The doc states that both mechanisms of message redelivery and dead-letter queue can be combined, so I tried the following, using the examples provided with ActiveMQ Artemis:
broker.xml:
<address-settings>
   <!-- override the redelivery-delay for the example queue -->
   <address-setting match="exampleQueue">
      <redelivery-delay>30000</redelivery-delay>
      <redelivery-delay-multiplier>2.5</redelivery-delay-multiplier>
      <dead-letter-address>deadLetterQueue</dead-letter-address>
      <max-redelivery-delay>259200000</max-redelivery-delay>
   </address-setting>
</address-settings>
<addresses>
   <address name="deadLetterQueue">
      <anycast>
         <queue name="deadLetterQueue"/>
      </anycast>
   </address>
   <address name="exampleQueue">
      <anycast>
         <queue name="exampleQueue"/>
      </anycast>
   </address>
</addresses>
It seems that with this configuration the messages are sent to deadLetterQueue after 10 redeliveries (the default value of max-delivery-attempts).
How do I combine these values to fit my scenario?
TL;DR: you need to set a max-delivery-attempts value greater than 10.
The total delivery delay is the sum of a geometric series, i.e. <redelivery-delay> * (1 - <redelivery-delay-multiplier>^<max-delivery-attempts>) / (1 - <redelivery-delay-multiplier>).
In your case the total delivery delay is 30000 * (1 - 2.5^10) / (1 - 2.5) = 190714863 ms (≈53h), which is less than 72h (259200000 ms), so to fit your scenario you need to set a max-delivery-attempts value greater than 10. For example, with max-delivery-attempts = 11 the total delivery delay is 476817158 ms (≈132h), which is greater than 72h.
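For reference, here is a quick way to check these numbers yourself (a minimal sketch in plain Java; the constants are the values from your configuration, and note that this simple sum does not apply the <max-redelivery-delay> cap to the individual delays):

public class RedeliveryDelayCheck {
    public static void main(String[] args) {
        double redeliveryDelay = 30_000;          // <redelivery-delay> in ms
        double multiplier = 2.5;                  // <redelivery-delay-multiplier>
        double targetMs = 72 * 60 * 60 * 1000.0;  // 72h in ms

        for (int attempts = 9; attempts <= 12; attempts++) {
            // sum of the geometric series: delay * (1 - m^n) / (1 - m)
            double total = redeliveryDelay * (1 - Math.pow(multiplier, attempts)) / (1 - multiplier);
            System.out.printf("max-delivery-attempts=%d -> total delay %.0f ms (%.1f h)%n",
                    attempts, total, total / 3_600_000);
        }
        System.out.printf("72h target = %.0f ms%n", targetMs);
    }
}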
Let's say that I have 3 ActiveMQ Artemis brokers in one cluster:
Broker_01
Broker_02
Broker_03
At a given point in time I have a number of consumers for each broker:
Broker_01 has 50 consumers
Broker_02 has 10 consumers
Broker_03 has 10 consumers
Let's assume that at this point in time there are 70 messages to be sent to a queue in this cluster.
We expect load balancing to be done by the cluster so that Broker_01 would receive 50 messages, Broker_02 10 messages, and Broker_03 also 10 messages, but currently the 70 messages are distributed randomly across the 3 brokers.
Is there any configuration I can do to distribute the messages based on the number of consumers in each broker?
I just read the documentation. As far as I understand, ActiveMQ Artemis does round-robin load balancing if we configure a cluster connection. Our broker.xml looks like this:
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>amq-v01_connector</connector-ref>
      <min-large-message-size>524288</min-large-message-size>
      <call-timeout>120000</call-timeout>
      <retry-interval>500</retry-interval>
      <retry-interval-multiplier>1.5</retry-interval-multiplier>
      <max-retry-interval>2000</max-retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>ON_DEMAND</message-load-balancing>
      <max-hops>1</max-hops>
      <notification-interval>800</notification-interval>
      <notification-attempts>2</notification-attempts>
      <static-connectors>
         <connector-ref>amq-v02_connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
Further, the address-setting for the queue looks like this:
<address-setting match="MyQueue">
<address-full-policy>BLOCK</address-full-policy>
<max-size-bytes>50Mb</max-size-bytes>
</address-setting>
Am I missing something for load balancing to work?
The next point is that, as stated in the documentation, load balancing is always done round-robin; there is no configuration to load balance based on the number of consumers on each node.
I assume that I need client-side connection load-balancing, since we want to load-balance the messages arriving at the 3 brokers according to the number of consumers on each broker. As stated in the documentation, there are 5 out-of-the-box policies (Round-Robin, First Element, etc.) which we can use. Additionally, we could implement our own policy by implementing ConnectionLoadBalancingPolicy. Assuming that I would like to implement my own policy, how would I go about doing this based on the number of consumers?
There is no out-of-the-box way for the producer to know how many consumers are on each broker and then send messages to those brokers accordingly.
It is possible for you to implement your own ConnectionLoadBalancingPolicy. However, in order to determine how many consumers exist on the queue, the load-balancing policy implementation would need to know the URLs of all the brokers in the cluster as well as the name of the queue to which you're sending messages, and there's no way to supply that information. The ConnectionLoadBalancingPolicy interface is very simple.
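For illustration, here's roughly what a custom policy looks like (a minimal sketch; the class name is hypothetical, and it assumes the core client's org.apache.activemq.artemis.api.core.client.loadbalance package):

import org.apache.activemq.artemis.api.core.client.loadbalance.ConnectionLoadBalancingPolicy;

// Hypothetical policy: the client only tells the policy how many connectors
// there are to choose from, so there is nothing here to base a
// consumer-count decision on.
public class ConsumerAwarePolicy implements ConnectionLoadBalancingPolicy {

   private int pos = -1;

   @Override
   public int select(final int max) {
      // All we can do is return an index between 0 and max - 1 (e.g. plain
      // round-robin); queue names and consumer counts are not available here.
      pos = (pos + 1) % max;
      return pos;
   }
}

The policy class would then be plugged into the connection factory (via its load-balancing policy class name setting), but as noted above it still has no view of the queues or their consumers.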
I would encourage you to revisit your need for a 3-node cluster in the first place if each node is going to have so few messages on it. A single broker can handle a throughput of millions of messages in certain use-cases. If each node is dealing with less than 50 messages then you probably don't need a cluster at all.
I am experimenting with Kafka and comparing at-least-once performance with at-most-once performance. However, from my test runs, at-least-once seems to have higher throughput than at-most-once. This makes no sense to me, since at-least-once uses acknowledgements etc. Below are the settings I use.
For at-most-once I use the following settings:
Producer
properties.put("acks", "0");
properties.put("retries", 0);
Consumer
properties.put("enable.auto.commit", "true");
For at-least-once I use the following settings:
Producer
(Standard settings)
Consumer
properties.put("enable.auto.commit", "false");
And I call
kafkaConsumer.commitSync();
after each ConsumerRecords poll.
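For context, the consume loop looks roughly like this (a minimal sketch; the bootstrap servers, group id, topic name, and deserializers are placeholders, not my actual settings):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");   // placeholder
        properties.put("group.id", "test-group");                // placeholder
        properties.put("enable.auto.commit", "false");
        properties.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties)) {
            kafkaConsumer.subscribe(Collections.singletonList("test-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = kafkaConsumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // process the record; throughput/latency are measured here
                }
                // commit offsets synchronously after each poll (at-least-once)
                kafkaConsumer.commitSync();
            }
        }
    }
}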
How the measurements are performed
In my test-bench i am measuring two values:
(Throughput) The number of messages received by the consumer each second
(Latency) The average latency for the messages received by the consumer each second
When I am running Kafka in at-most-once mode, the latency is higher and the throughput is lower.
If ActiveMQ Artemis is configured with a redelivery-delay > 0 and a JMS listener uses ctx.rollback() or ctx.recover() then the broker will redeliver the message as expected. But if a producer pushes a message to the queue during a redelivery then the receiver gets unordered messages.
For example:
Queue: 1 -> message 1 is redelivered as expected
Push during the redelivery phase
Queue: 2,3 -> the receiver gets 2,3,1
With a redelivery-delay of 0 everything is OK, but the frequency of redeliveries on the consumer side is too high. My expectation is that every delivery to the consumer should be stopped until the unacknowledged message is either purged from the queue or acknowledged. We are using a queue for the connection with single devices. Every device has its own I/O queue with a single consumer. The word queue suggests strict ordering to me. It would be nice to make this behavior configurable, like "strict_redelivery_order".
What you're seeing is the expected behavior. If you use a redelivery-delay > 0 then delivery order will be broken. If you use a redelivery-delay of 0 then delivery order will not be broken. Therefore, if you want to maintain strict order then use a redelivery-delay of 0.
If the broker blocked delivery of all other messages on the queue during a redelivery delay that would completely destroy message throughput performance. What if the redelivery delay were 60 seconds or 10 minutes? The queue would be blocked that entire time. This would not be tenable for an enterprise message broker serving hundreds or perhaps thousands of clients each of whom may regularly be triggering redeliveries on shared queues. This behavior is not configurable.
If you absolutely must maintain message order even for messages that cannot be immediately consumed and a redelivery-delay of 0 causes redeliveries that are too fast then I see a few potential options (in no particular order):
Configure a dead-letter address and set max-delivery-attempts to a suitable value so that after a few redeliveries the problematic message can be cleared from the queue.
Implement a delay of your own in your client. This could be as simple as catching any exception and using a Thread.sleep() before calling ctx.rollback().
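For the second option, a minimal sketch of what I mean (assuming ctx is a transacted JMSContext; the class name, the delay value, and the processing logic are just placeholders):

import javax.jms.JMSContext;
import javax.jms.Message;

// Hypothetical listener fragment: on a processing failure, wait a bit on the
// client side before rolling the transacted context back, so redeliveries are
// not triggered too quickly while redelivery-delay stays 0 on the broker and
// strict queue order is preserved.
public class DeviceMessageHandler {

    private final JMSContext ctx;  // created with JMSContext.SESSION_TRANSACTED

    public DeviceMessageHandler(JMSContext ctx) {
        this.ctx = ctx;
    }

    public void onMessage(Message message) {
        try {
            process(message);    // your processing logic
            ctx.commit();
        } catch (Exception e) {
            try {
                Thread.sleep(5_000);  // client-side delay before the redelivery
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            ctx.rollback();           // message goes back onto the queue
        }
    }

    private void process(Message message) throws Exception {
        // placeholder
    }
}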
I am trying out ActiveMQ Artemis for a messaging design. I am expecting messages with embedded file content (bytes). I do not expect them to be any bigger than 10MB. However, I want to know if there is a configurable way to handle that in Artemis.
Also is there a default maximum message size it supports?
I tried and searched for an answer but could not find any.
Also, my producer and consumer are both .Net AMQP implementations.
ActiveMQ Artemis itself doesn't place a limit on the size of the message. It supports arbitrarily large messages. However, you will be constrained by a few things:
The broker's heap space: If the client sends the message all in one chunk and that causes the broker to exceed its available heap space then sending the message will fail. The broker has no control over how the AMQP client sends the message. I believe AMQP supports sending messages in chunks but I'm not 100% certain of that.
The broker's disk space: AMQP messages that are deemed "large" by the broker (i.e. those that cannot fit into a single journal file) will be stored directly on disk in the data/largemessages directory. The ActiveMQ Artemis journal file size is controlled by the journal-file-size configuration parameter in broker.xml. The default journal-file-size is 10MB. By default the broker will stop providing credits to the producer when disk space utilization hits 90%. This is controlled by the max-disk-usage configuration parameter in broker.xml.
Can we control the transaction retry interval in an MDB? If so, please provide an example or direct me to the documentation. We want to set up a time interval of 3 min for MDB transactions. The desire is that if the query fails the first time, then it retries after 3 min have elapsed.
Vairam;
Take a look at the HornetQ documentation for Message Redelivery. The issues you need to consider are:
The redelivery delay (you indicated 3 minutes).
The number of times the message should be redelivered.
If you elect not to redeliver indefinitely, the final action that occurs when the last redelivery attempt fails. This could be:
Drop the message.
Enqueue the message to the designated DLQ.
Enqueue the message to some other queue.
Setting the redelivery delay
Delayed redelivery is defined in the address-setting configuration.
Example:
<!-- delay redelivery of messages for 3m -->
<address-setting match="jms.queue.exampleQueue">
   <redelivery-delay>180000</redelivery-delay>
</address-setting>
Setting the maximum number of redeliveries and DLQ configuration
This can be defined declaratively by specifying the DLQ configuration in the address-setting configuration:
Example:
<!-- undelivered messages in exampleQueue will be sent to the dead letter address
     deadLetterQueue after 3 unsuccessful delivery attempts -->
<address-setting match="jms.queue.exampleQueue">
   <dead-letter-address>jms.queue.deadLetterQueue</dead-letter-address>
   <max-delivery-attempts>3</max-delivery-attempts>
</address-setting>
If you want to drop the message after the designated number of redelivery failures, check the message header value of "JMSXDeliveryCount" and, if that number is equal to the maximum redeliveries, simply suppress any exceptions and commit the transaction.
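For illustration, the check could look something like this in the MDB (a sketch only; the class name, maximum value, and processing logic are placeholders, and it assumes container-managed transactions where a runtime exception triggers the redelivery):

import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical MDB fragment: once the message has been delivered the maximum
// number of times, swallow the failure so the transaction commits and the
// message is dropped instead of being redelivered again.
@MessageDriven
public class ExampleMDB implements MessageListener {

    private static final int MAX_REDELIVERIES = 3;  // keep in sync with max-delivery-attempts

    @Override
    public void onMessage(Message message) {
        try {
            process(message);  // your processing logic
        } catch (Exception e) {
            if (deliveryCount(message) >= MAX_REDELIVERIES) {
                // suppress the exception: the container commits and the message is dropped
                return;
            }
            throw new RuntimeException(e);  // force rollback so the broker redelivers
        }
    }

    private int deliveryCount(Message message) {
        try {
            return message.getIntProperty("JMSXDeliveryCount");
        } catch (JMSException e) {
            return 0;
        }
    }

    private void process(Message message) throws Exception {
        // placeholder
    }
}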