ActiveMQ Artemis redelivery delay - JPA

Setup:
We have a Spring Boot application that is reading messages from an ActiveMQ Artemis JMS queue.
The messages are processed in a JPA transaction.
When an exception triggers a rollback in JPA, it also triggers a JMS rollback in Artemis, which is set up with a redelivery delay.
Our app runs in multiple instances in parallel, and this causes optimistic locking issues when processing multiple messages that share common data.
Issue: When X messages are processed in parallel and an optimistic locking conflict occurs, only one message goes through and all the others are rescheduled with the redelivery delay. When redelivery happens, the remaining X-1 messages arrive at the same time again (since the delay is the same) and cause the same issue, with only one going through.
Question: Does anyone know a way to add variance to the redelivery delay time of ActiveMQ Artemis?
Note: I know there is an option for this in ActiveMQ 5.x called collisionAvoidanceFactor, but it is missing from ActiveMQ Artemis.

As you note, there is no equivalent to collisionAvoidanceFactor in ActiveMQ Artemis, and I know of no way to modify the redelivery delay in a similar manner. There is redelivery-delay-multiplier, but that is applied consistently across redeliveries and would not provide the variance you're looking for.
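For reference, the redelivery knobs that do exist are configured per address. Here is a minimal sketch using the embedded-broker Java API (the same settings map to the address-settings elements of the same names in broker.xml); the "#" match and the values are purely illustrative:

```java
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.settings.impl.AddressSettings;

public class RedeliverySettingsSketch {

    // Roughly equivalent to the redelivery-* elements in an <address-setting> block of broker.xml
    public static void applyRedeliverySettings(Configuration config) {
        AddressSettings settings = new AddressSettings()
                .setRedeliveryDelay(5_000)        // redelivery-delay: fixed base delay in ms
                .setRedeliveryMultiplier(1.5)     // redelivery-delay-multiplier: grows the delay per attempt
                .setMaxRedeliveryDelay(60_000)    // max-redelivery-delay: cap on the growing delay
                .setMaxDeliveryAttempts(5);       // max-delivery-attempts before the dead-letter address
        config.addAddressesSetting("#", settings); // "#" matches all addresses; narrow this in practice
    }
}
```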
You may consider using message grouping so that "messages that share common data" are consumed serially by the same consumer, avoiding the locking issues in the first place.
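With plain JMS (or Spring's JmsTemplate) that just means setting the JMSXGroupID property to a key derived from the shared data, so all messages with the same key are pinned to a single consumer. A rough sketch, where the queue name and the orderId-based key are illustrative:

```java
import org.springframework.jms.core.JmsTemplate;

public class GroupedSender {

    private final JmsTemplate jmsTemplate;

    public GroupedSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String orderId, String payload) {
        jmsTemplate.convertAndSend("my-queue", payload, message -> {
            // Messages with the same group id are delivered serially to a single consumer
            message.setStringProperty("JMSXGroupID", "order-" + orderId);
            return message;
        });
    }
}
```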
After looking at what it would take to implement this feature I opened ARTEMIS-2364. I'll be sending a pull request soon, so it will likely be in the next version of Artemis (i.e. 2.10).

Related

With ActiveMQ Artemis is it possible to find out if a listener has stopped listening to a topic?

I'm using ActiveMQ Artemis 2.18.0 and some Spring Boot clients that communicate with each other via topics. The Spring Boot clients use JMS for all MQTT operations.
I'd like to know if it is possible for a producer with one or more subscribers to find out whether a certain subscriber is actively listening or not. For example, there are 3 clients - SB1, SB2, and SB3. SB1 publishes to test/topic, and SB2 and SB3 are subscribed to test/topic. If SB2 shuts down for any reason, would it be possible for SB1 to become aware of this?
I understand that queues would be the way to go for this, but my project is much better suited to the use of topics, and it is set up this way already and works fine. There's just one operation where it must be determined whether the listener is active or not in order to update the listener's online status, a crucial parameter. Right now, clients and the server continually poll a database so that the online status is periodically updated; I want to avoid doing this and use something that Artemis may provide instead.
Apache ActiveMQ Artemis emits notifications to inform listeners of potentially interesting events, such as a consumer being created or closed; see Management Notifications at http://activemq.apache.org/components/artemis/documentation/latest/management.html.
A listener on the management notification address would receive a message for each consumer created or closed; see the Management Notification Example at http://activemq.apache.org/components/artemis/documentation/latest/examples.html#management-notification
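For example, a subscriber on the notification address (activemq.notifications by default) could track consumers coming and going. A rough sketch using a Spring JMS listener follows; the notification header names should be verified against your broker version, and the listener container must be configured for topic (multicast) consumption:

```java
import javax.jms.Message;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ConsumerPresenceListener {

    // Subscribes to the broker's management notification address
    @JmsListener(destination = "activemq.notifications")
    public void onNotification(Message notification) throws Exception {
        String type = notification.getStringProperty("_AMQ_NotifType");
        String address = notification.getStringProperty("_AMQ_Address");

        if ("CONSUMER_CLOSED".equals(type)) {
            // e.g. mark the subscriber of 'address' as offline
        } else if ("CONSUMER_CREATED".equals(type)) {
            // e.g. mark it as online
        }
    }
}
```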
Part of the point of pub/sub-based messaging is to decouple the information producer (publisher) from the consumer (subscriber). As a rule, a publisher really shouldn't care whether there even are any subscribers.
If you want to know the status of a subscriber then it's up to the subscriber to update it, not the publisher. Features like Last Will & Testament allow the subscriber's status to be updated in the event of a failure, covering the case where it cannot do so explicitly when going offline.
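If the subscribers connect over MQTT directly (Artemis supports MQTT natively), the will message is registered at connect time. A rough sketch with the Eclipse Paho client, where the status topic and payload are purely illustrative:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class LwtExample {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "SB2");

        MqttConnectOptions options = new MqttConnectOptions();
        // If SB2 disappears without a clean disconnect, the broker publishes this on its behalf
        options.setWill("status/SB2", "offline".getBytes(), 1, true);
        client.connect(options);

        // ... normal work ...

        // On a clean shutdown, publish the status explicitly before disconnecting
        client.publish("status/SB2", "offline".getBytes(), 1, true);
        client.disconnect();
    }
}
```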

Purge all messages in ActiveMQ Artemis

We have several ActiveMQ Artemis 2.17.0 clusters set up to replicate between data centres with mirroring.
Our previous failover had been an emergency, and it's likely the state had fallen out of sync. When we next performed our scheduled failover tests, weeks-old messages were sent to the consumers. I know that mirroring is asynchronous, so it is expected that synchronization may not be 100% all the time; however, these messages were not within the time frame of normal synchronization delays. It is worth noting that we've had several events which I expect might have thrown mirroring off: we had hit the NFS split-brain issue as well as the past emergency failover.
As such, we are looking for a way to purge (or sync) all messages on the standby server after we know there have been problems with mirroring, to prevent a similar scenario from happening again. There are over 5,000 queues, so preferably the operation wouldn't need to be run on a queue-by-queue basis.
Is there any way to accomplish this, either in ActiveMQ Artemis 2.17.0 or a later version?
There's no programmatic way to simply delete all the data from every queue on the broker. However, you can combine a few management operations (e.g. in a script) to get the same result: use the getQueueNames method to get the name of every queue, then pass each of those names to the destroyQueue(String) method.
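A rough sketch of such a script, driving the broker's JMX MBeans from Java; the JMX URL and broker name are assumptions, and the same operations could be invoked via Jolokia or the web console instead:

```java
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;

public class DestroyAllQueues {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Adjust the broker name to match the <name> element in broker.xml
            ObjectName on = new ObjectName("org.apache.activemq.artemis:broker=\"myBroker\"");
            ActiveMQServerControl control = MBeanServerInvocationHandler
                    .newProxyInstance(connection, on, ActiveMQServerControl.class, false);

            // Destroy every queue on the broker - this is irreversible
            for (String queue : control.getQueueNames()) {
                control.destroyQueue(queue);
            }
        }
    }
}
```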
However, the simplest way to clear all the data would probably be to stop the broker, clear the data directory, and then restart the broker.

Redis / Kafka - Why are stream consumers blocked?

Are Kafka Streams / Redis Streams a good fit for a reactive architecture?
I am asking this mainly because both Redis and Kafka seem to block threads when consuming messages.
Is there a reason behind this? I was hoping I could read messages with some callback - executed when the message is delivered, like pub/sub, in a reactive manner - rather than by blocking a thread.
The Kafka client is relatively low level, which is "good" in the sense that it gives you a lot of flexibility over when (and in which thread) you do the record processing. In the end, to receive a record, someone needs to block (since the actual reading means sending fetch requests over and over). Whether the blocked thread is a "main" business thread or a dedicated I/O thread is the developer's choice.
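To make that concrete, the plain consumer is literally a loop that blocks on poll(); a minimal sketch, where the broker address, group id, and topic are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BlockingPollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                // poll() blocks the calling thread until records arrive or the timeout expires
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // process here, or hand off to another thread/executor
                }
            }
        }
    }
}
```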
You might take a look at higher-level offerings such as Spring Kafka or the Kafka Streams API / Kafka Connect frameworks, which provide "fatter" inversion-of-control containers and effectively address the concern above.
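With Spring Kafka, for example, the blocking poll loop is owned by the listener container and your code only sees a callback per record; a minimal sketch, where the topic, group id, and types are illustrative:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // The container's own thread blocks on poll(); this method is simply invoked per record
    @KafkaListener(topics = "orders", groupId = "order-processor")
    public void onRecord(ConsumerRecord<String, String> record) {
        // business processing here, ideally kept short or handed off to another executor
        System.out.printf("key=%s value=%s%n", record.key(), record.value());
    }
}
```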

How long a rolled-back message is kept in a Kafka topic

I came across this scenario when implementing a chained transaction manager in our Spring Boot application, which consumes messages from JMS and then publishes them to a Kafka topic.
My testing strategy is explained here:
Unable to synchronise Kafka and MQ transactions using ChainedKafkaTransaction
In short, I threw a RuntimeException on purpose after consuming messages from MQ and writing them to Kafka, just to test the transaction behaviour.
However, while the rollback functionality worked fine, I could see the number of uncommitted messages in the Kafka topic growing even though a rollback happened on every processing attempt. Within a few seconds I ended up with hundreds of uncommitted messages in the topic.
Naturally I asked myself: if a message is rolled back, why would it still be there taking up storage? I understand that with transaction isolation set to read_committed they will never get consumed, but the idea of a poison message being rolled back again and again while eating up your storage does not sound right to me.
So my question is:
Am I missing something? Is there a configuration for a "time to live" or similar for a message that was rolled back? I tried to read the Kafka docs around this subject but could not find anything. If such a setting is not in place, what would be good practice to deal with situations like this and avoid wasting storage?
Thank you in advance for your inputs.
That's just the way Kafka works.
Publishing a record always takes a slot in the partition log. Whether a consumer can see that record depends on whether it is committed (assuming the isolation level is read_committed).
Kafka achieves its extraordinary throughput because of its simple log architecture.
Rollback is assumed to be somewhat rare.
If you are getting so many rollbacks then your application architecture is probably at fault.
You should probably shut things down for a while if you keep rolling back.
To specifically answer your question, see log.retention.hours.
The uncommitted records are kept for a week by default.
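For completeness, the consumer-side setting that hides aborted records looks roughly like this; the bootstrap server, group id, and serialization choices are placeholders, and the records themselves are only reclaimed by broker-side retention:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedConsumer {
    public static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Skip records from aborted (rolled-back) transactions
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        // Note: the aborted records still occupy the log until the broker's retention
        // settings (log.retention.hours, 168 by default, or size limits) remove the segments.
        return new KafkaConsumer<>(props);
    }
}
```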

Apache Camel XA rollback and store on failure queue

I am starting to think this is impossible now, so hopefully somebody can give me some guidance.
In short, I have a Spring Boot application running Apache Camel routes; XA is configured using Atomikos and, as far as I can tell, all the XA-specific configuration is as it should be. When a route executes I can see a message removed from the JMS queue, a database insert executed using a transacted JPA component, and the message routed to an output JMS queue. All fine, and I can see log output that shows the transaction manager committing both the JMS and JPA parts.
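For context, a bare sketch of the kind of route described above; the endpoint URIs and entity class are illustrative, not the actual configuration:

```java
import org.apache.camel.builder.RouteBuilder;

public class XaRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:inputQueue")
            .transacted()                       // XA transaction spanning JMS and JPA
            .to("jpa:com.example.MyEntity")     // database insert
            .to("jms:queue:outputQueue");       // forward to the output queue
    }
}
```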
The issue comes when I have an exception: I need to be able to attempt redelivery 3 times and, if that fails, route the message on to a failure queue, but not before the database insert has been rolled back.
I have configured a TransactionErrorHandlerBuilder which sets the redelivery count to 3 and also manages the RedeliveryDelay; I can see all of that working, but I never manage to divert the message, after the 3 delivery attempts, to the route I have set up to deliver to the failure queue. I have set the DeadLetterUri to point to that route, but it seems the transactionErrorHandler never makes use of it; Camel just tries to redeliver the message 3 times over and over again until I kill the route.
Is what I am asking not supported? I am really hoping I am missing something obvious.
(Camel 2.19)
(Spring Boot 1.5.3)
Thanks,
Paul