We are facing an issue in ActiveMQ Artemis 2.17.0. One of the queues went into the paging state when it reached the configured memory limit (max-size-bytes). After some time the message count dropped significantly as consumers processed the backlog, but the address did not leave the paging state until the queue was empty. Is this expected behavior?
This is the expected behavior. All the messages associated with the address must be consumed before the broker will leave the configured paging mode.
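For reference, paging is driven by the address-settings in broker.xml. A minimal sketch — the address match and size values here are made-up examples, not recommendations:

```xml
<address-settings>
   <address-setting match="exampleQueue">
      <!-- start paging once the address holds ~100 MiB of messages in memory -->
      <max-size-bytes>104857600</max-size-bytes>
      <!-- size of each page file written to disk -->
      <page-size-bytes>10485760</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```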
Reading the Artemis docs, I understand that Artemis keeps all currently active messages in memory, can offload messages to the paging area for a given queue/topic according to the settings, and that Artemis journals are append-only.
With respect to this:
How and when does the broker sync messages to and from the journal (only during restart?)
How does it identify which messages to delete from the journal? (For example, if the journal is append-only and a consumer ACKs a persistent message, how does the broker remove that single message from the journal without keeping an index?)
Isn't it a performance hit to keep every active message in memory, and doesn't it risk the broker running out of memory? To avoid this, paging settings would have to be configured for every queue/topic, otherwise the broker may fill up with messages. Please correct me if I'm wrong.
Any reference link explaining message syncing and these details would be helpful. The Artemis docs do cover the append-only journal, but maybe there is a section/article explaining these storage concepts that I'm missing.
By default, a durable message is persisted to disk after the broker receives it and before the broker sends a response back to the client confirming receipt. This way, when the client receives the response from the broker, it knows for sure that the durable message it sent was received and persisted to disk.
When using the NIO journal-type in broker.xml (i.e. the default configuration), data is synced to disk using java.nio.channels.FileChannel.force(boolean).
Since the journal is append-only during normal operation, a message is not actually deleted from the journal when it is acknowledged. The broker simply appends a delete record to the journal for that particular message. The message will then be physically removed from the journal later during "compaction". This process is controlled by the journal-compact-min-files & journal-compact-percentage parameters in broker.xml. See the documentation for more details on that.
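For illustration, these parameters live at the top level of broker.xml; the values shown here are the documented defaults:

```xml
<!-- compact when at least 10 journal files exist and less than 30% of their data is live -->
<journal-compact-min-files>10</journal-compact-min-files>
<journal-compact-percentage>30</journal-compact-percentage>
```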
Keeping message data in memory actually improves performance dramatically vs. evicting it from memory and then having to read it back from disk later. As you note, this can lead to memory consumption problems, which is why the broker supports paging, blocking, etc.
The main thing to keep in mind is that a message broker is not a storage medium like a database. Paging is a palliative measure meant to be used as a last resort to keep the broker functioning. Ideally the broker should be configured to handle the expected load without paging (e.g. acquire more RAM, allocate more heap). In other words, message production and message consumption should be balanced. The broker is designed for messages to flow through it. It can certainly buffer messages (potentially millions depending on the configuration & hardware), but when it's forced to page, performance will drop substantially simply because disk is orders of magnitude slower than RAM.
I noticed strange behavior in Artemis. I'm not sure if this is a bug or if I don't understand something.
I use the Artemis Core API with autoCommitAcks set to false. I noticed that if a message is received in a MessageHandler but not acknowledged and the session is rolled back, then Artemis does not consider this message undelivered; it considers the message as never sent to the consumer at all. The max-delivery-attempts parameter does not work in this case, and the message is redelivered an infinite number of times. The method org.apache.activemq.artemis.api.core.client.ClientMessage#getDeliveryCount returns 1 each time, and the message shows false in the Redelivered column in the web console. If the message is acknowledged before the session rollback, then max-delivery-attempts works properly.
What exactly is the purpose of message acknowledgement? Does acknowledging mean only that the message was received, or that it was received and processed successfully? Or can I use acknowledgement either way, depending on my requirements?
By message acknowledgement I mean calling the org.apache.activemq.artemis.api.core.client.ClientMessage#acknowledge method.
The behavior you're seeing is expected.
Core clients actually consume messages from a local buffer which is filled with messages from the broker asynchronously. The amount of message data in this local buffer is controlled by the consumerWindowSize set on the client's URL. The broker may dispatch many thousands of messages to various clients that sit in these local buffers and are never actually seen in any capacity by the consumers. These messages are considered to be in delivery and are not available to other clients, but they are not considered to be delivered. Only when a message is acknowledged is it considered to be delivered to a client.
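As a sketch, consumerWindowSize is set on the client's connection URL; the host and port here are assumptions:

```
tcp://localhost:61616?consumerWindowSize=0
```

A consumerWindowSize of 0 disables the local buffer entirely, so the broker only dispatches a message when the consumer is ready for it — useful for slow consumers or strict flow control, at the cost of throughput.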
If the client is auto-committing acknowledgements then acknowledging a message will quickly remove it from its respective queue. Once the message is removed from the queue it can no longer be redelivered because it doesn't exist anymore on the broker. In short, you can't get configurable redelivery semantics if you auto-commit acknowledgements.
However, if the client is not auto-committing acknowledgements and the consumer closes (for any reason) without committing the acknowledgements or calls rollback() on its ClientSession then the acknowledged messages will be redelivered according to the configured redelivery semantics (including max-delivery-attempts).
If ActiveMQ Artemis is configured with a redelivery-delay > 0 and a JMS listener uses ctx.rollback() or ctx.recover() then the broker will redeliver the message as expected. But if a producer pushes a message to the queue during a redelivery then the receiver gets unordered messages.
For example:
Queue: 1 -> message 1 is redelivered as expected
Push during the redelivery phase
Queue: 2,3 -> the receiver gets 2,3,1
With a redelivery-delay of 0 everything is ok, but the frequency of redeliveries on the consumer side is too high. My expectation is that every delivery to the consumer should be stopped until the unacknowledged message is purged from the queue or acknowledged. We are using a queue for connection with single devices. Every device has its own I/O queue with a single consumer. The word queue suggests strict ordering to me. It would be nice to make this behavior configurable, like "strict_redelivery_order".
What you're seeing is the expected behavior. If you use a redelivery-delay > 0 then delivery order will be broken. If you use a redelivery-delay of 0 then delivery order will not be broken. Therefore, if you want to maintain strict order then use a redelivery-delay of 0.
If the broker blocked delivery of all other messages on the queue during a redelivery delay that would completely destroy message throughput performance. What if the redelivery delay were 60 seconds or 10 minutes? The queue would be blocked that entire time. This would not be tenable for an enterprise message broker serving hundreds or perhaps thousands of clients each of whom may regularly be triggering redeliveries on shared queues. This behavior is not configurable.
If you absolutely must maintain message order even for messages that cannot be immediately consumed and a redelivery-delay of 0 causes redeliveries that are too fast then I see a few potential options (in no particular order):
Configure a dead-letter address and set a max-delivery-attempts to a suitable value so after a few redeliveries the problematic message can be cleared from the queue.
Implement a delay of your own in your client. This could be as simple as catching any exception and using a Thread.sleep() before calling ctx.rollback().
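The first option might look like this in broker.xml — the queue match and DLA name are examples, and the dead-letter address itself must also be defined:

```xml
<address-settings>
   <address-setting match="exampleQueue">
      <redelivery-delay>0</redelivery-delay>
      <max-delivery-attempts>3</max-delivery-attempts>
      <dead-letter-address>DLA</dead-letter-address>
   </address-setting>
</address-settings>
```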
I am trying out ActiveMQ Artemis for a messaging design. I am expecting messages with embedded file content (bytes). I do not expect them to be any bigger than 10MB. However, I want to know if there is a configurable way to handle that in Artemis.
Also, is there a default maximum message size that it supports?
I tried and searched for an answer but could not find any.
Also, my producer and consumer are both .Net AMQP implementations.
ActiveMQ Artemis itself doesn't place a limit on the size of the message. It supports arbitrarily large messages. However, you will be constrained by a few things:
The broker's heap space: If the client sends the message all in one chunk and that causes the broker to exceed its available heap space then sending the message will fail. The broker has no control over how the AMQP client sends the message. I believe AMQP supports sending messages in chunks but I'm not 100% certain of that.
The broker's disk space: AMQP messages that are deemed "large" by the broker (i.e. those that cannot fit into a single journal file) will be stored directly on disk in the data/largemessages directory. The ActiveMQ Artemis journal file size is controlled by the journal-file-size configuration parameter in broker.xml. The default journal-file-size is 10MB. By default the broker will stop providing credits to the producer when disk space utilization hits 90%. This is controlled by the max-disk-usage configuration parameter in broker.xml.
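For reference, both limits are top-level broker.xml parameters; the values shown are the defaults:

```xml
<!-- 10 MiB; AMQP messages too large to fit in one file go to data/largemessages -->
<journal-file-size>10485760</journal-file-size>
<!-- stop granting producer credits once disk utilization hits 90% -->
<max-disk-usage>90</max-disk-usage>
```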
I have an activemq instance set up with tomcat for background message processing. It is set up to retry failed messages every 10 minutes for a retry period.
Now some dirty data has entered the system because of which the messages are failing. This is ok and can be fixed in the future. However, the problem is that none of the new correct incoming messages are getting processed and the error messages are constantly getting retried.
Any tips on what might be the issue, or how the priority is set? I haven't controlled the priority of the messages manually.
Thanks for your help.
-Pulkit
EDIT: I was able to solve the problem. The issue was that by the time all the dirty messages had been handled, it was time for them to be retried. Thus none of the new messages were being consumed from the queue.
A dirty message was basically a message that threw an exception due to some dirty data in the system. The redelivery settings were to redeliver every 10 minutes for 1 day:
maximumRedeliveries=144
redeliveryDelayInMillis=600000
acknowledge.mode=transacted
ActiveMQ determines redelivery for a consumer based on the configuration of the RedeliveryPolicy assigned to the ActiveMQConnectionFactory. Local redelivery halts new message dispatch until the rolled-back transaction's messages are successfully redelivered, so if you have a message that causes an error such that you throw an exception or roll back the transaction, it will be redelivered up to the maximum redeliveries setting in the policy. Since your question doesn't provide much information on your setup and what you consider an error message, I can't really direct you to a solution.
You should look at the settings available in the RedeliveryPolicy. Also, you can configure redelivery not to block new message dispatch using the setNonBlockingRedelivery method.
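For example, both the redelivery policy and non-blocking redelivery can be set via options on the connection URL — the broker address and values here are assumptions matching the settings quoted above, and the jms.redeliveryPolicy.* options map onto the RedeliveryPolicy bean's properties:

```
tcp://localhost:61616?jms.redeliveryPolicy.maximumRedeliveries=144&jms.redeliveryPolicy.redeliveryDelay=600000&jms.nonBlockingRedelivery=true
```

With nonBlockingRedelivery enabled, messages pending redelivery no longer hold up dispatch of newer messages, at the cost of strict delivery order.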