Prime new topic subscribers with old messages in Apache Artemis

I'm configuring an Apache Artemis message broker. The broker will accept big files, and downstream consumers access the topic to process the latest files. Now I'm wondering how to make the latest files available for dev runs: because messages only arrive a few times a day, a test run needs access to the last few messages sent and can't wait for the next one.
For production and staging systems, I found that durable subscriptions work fine. I've adapted an Apache Camel config to serve as an illustration: here are two consumers that receive messages, each using a durable subscription:
<route id="inbox">
<from uri="file:inbox"/>
<to uri="activemq:topic:testing"/>
</route>
<route id="outbox-staging">
<from uri="activemq:topic:testing?clientId=staging&durableSubscriptionName=staging"/>
<to uri="file:outbox-staging"/>
</route>
<route id="outbox-production">
<from uri="activemq:topic:testing?clientId=production&durableSubscriptionName=production"/>
<to uri="file:outbox-production"/>
</route>
This is fine. If a consumer is offline, it will pick up messages when it comes back online. Now, if another consumer joins for testing,
<route id="outbox-testing" streamCache="true">
<from uri="activemq:topic:testing?clientId=my-local-consumer&durableSubscriptionName=my-local-consumer"/>
<to uri="file:outbox-local"/>
</route>
it will have to wait for new messages, because the subscription didn't exist before. What I'm looking for is for new subscribers to be immediately primed with the messages that are already available. I found different names for the concept, such as prefetchPolicy, consumerWindowSize, or "retroactive consumer", but it's unclear to me which terms apply to Apache Artemis and how to set them up, because the examples mostly refer to Apache ActiveMQ.
How can I configure Artemis so that a consumer joining on a new subscription gets past messages?

The prefetchPolicy doesn't apply to ActiveMQ Artemis. It's for ActiveMQ 5.x.
The consumerWindowSize does apply to ActiveMQ Artemis.
However, neither prefetchPolicy nor consumerWindowSize apply to this situation as they're both related to "flow control" and have nothing to do with putting "missed" messages onto a JMS topic subscription.
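For completeness: consumerWindowSize is a client-side flow-control buffer, and with the Artemis core JMS client it can be set as a parameter on the connection URL. A minimal sketch, assuming a local broker on the default port (the value shown is the 1 MiB default):
tcp://localhost:61616?consumerWindowSize=1048576
Setting it to 0 disables client-side buffering, which helps with slow consumers, but it still won't replay past messages to a new subscription.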
The "retroactive consumer" feature is for ActiveMQ 5.x. A similar feature (called "retroactive address") will be available in ActiveMQ Artemis 2.11. It was implemented as part of ARTEMIS-2504.
Therefore you have a few options:
Wait for ActiveMQ Artemis 2.11 to be released (should be released in January).
Build your own version of ActiveMQ Artemis based on the master branch which includes the retroactive address feature.
Modify your test environment so that new subscribers don't have to wait so long for messages (e.g. send them more frequently).
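Once you're on a version with the feature, the retroactive address is enabled via an address setting in broker.xml. A minimal sketch, assuming the retroactive-message-count setting added by ARTEMIS-2504 and the topic name from the routes above (the count is illustrative, not authoritative):
<address-settings>
   <!-- keep the last 10 messages sent to "testing" and deliver them
        to subscription queues created later (count is illustrative) -->
   <address-setting match="testing">
      <retroactive-message-count>10</retroactive-message-count>
   </address-setting>
</address-settings>
With that in place, a new durable subscription like my-local-consumer should receive the retained messages immediately on creation.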

Related

ActiveMQ Artemis and multiple DLQs

I have many queues, e.g.:
my.queue.no.1
my.queue.no.2
my.queue.no.3
my.queue.no.4
And I want to redirect unsuccessful messages to a DLQ, but I don't want to mix messages from all queues into one DLQ.
Is it possible to have multiple DLQs?
E.g.:
my.queue.no.1
my.queue.no.dlq.1
my.queue.no.2
my.queue.no.dlq.2
my.queue.no.3
my.queue.no.dlq.3
my.queue.no.4
my.queue.no.dlq.4
P.S. I'm using Artemis 2.16.0.
ActiveMQ Artemis can automatically create the defined dead-letter-address and a corresponding dead-letter queue when a message is undeliverable, by enabling the auto-create-dead-letter-resources address setting. The dead-letter-queue-prefix address setting can be used to define a prefix for automatically created dead-letter queues, i.e. to create DLQ.my.queue.no.NNN queues under the DLA address:
<address-settings>
   <address-setting match="my.queue.no.#">
      <dead-letter-address>DLA</dead-letter-address>
      <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
      <dead-letter-queue-prefix>DLQ.</dead-letter-queue-prefix>
      ...
   </address-setting>
</address-settings>

Artemis - How to avoid TransactionRolledBackException for Non-Transactional session

I use live/backup with shared storage, and I use a non-transacted JMS session. I always send one message, and I always receive one message, then acknowledge it and receive the second message only after the first acknowledge succeeds.
I got this exception in my non-transacted session:
Execution of JMS message listener failed. Caused by: [javax.jms.TransactionRolledBackException - AMQ219030: The transaction was rolled back on failover to a backup server]
javax.jms.TransactionRolledBackException: AMQ219030: The transaction was rolled back on failover to a backup server
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.rollbackOnFailover(ClientSessionImpl.java:904)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.commit(ClientSessionImpl.java:927)
at org.apache.activemq.artemis.jms.client.ActiveMQMessage.acknowledge(ActiveMQMessage.java:719)
It happens because the session was marked as "rollbackOnly". I got into this state after the following steps:
I use Spring JMS. The consumer session works 24/7 (an infinite session.receive() loop).
The master node crashed, then the master node was restarted.
After recovery (a couple of hours later), I sent a message to the queue. The consumer read the message and threw an exception on acknowledge (because the session was marked rollback-only).
I read the message again (this is not a big problem for my task), but the redelivery count was not increased.
My consumer code:
onMessage(Message message) {
    if (redeliveryCount(message) > 0) {
        // Never invoked, because the redelivery count is not increased -
        // this breaks my business logic.
        processAsDuplicate(message);
    }
}
I migrated from another broker, and I was hoping not to change the client logic.
Question:
How can I avoid the TransactionRolledBackException for a non-transacted session? If this is not possible, should I change the consumer code?
Thank you in advance.
UPDATE AFTER ANSWER:
https://github.com/apache/activemq-artemis/tree/2.14.0/examples/features/ha/replicated-failback
This example is not suitable for my case - I don't have unacknowledged messages. I got into this state after the following steps: 1) restart the server, 2) consume a message, 3) acknowledge the message.
We use a broker for ~30 applications (24/7), ~200 consumers in total.
For example, on the weekend we restart the JMS broker.
Will all consumers start getting this exception once they consume new messages
(even though they have no unacknowledged messages)?
The TransactionRolledBackException is expected as you can see in the replicated-failback example.
To prevent a consumer from receiving the same message multiple times, an idempotent consumer must be implemented, e.g. Apache Camel provides an Idempotent Consumer component that works with any JMS provider, see: http://camel.apache.org/idempotent-consumer.html
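A minimal Camel XML sketch of that approach, assuming an in-memory repository and JMSMessageID as the duplicate key (the repository choice, header, and endpoint URIs are illustrative):
<bean id="messageIdRepo" class="org.apache.camel.processor.idempotent.MemoryIdempotentRepository"
      factory-method="memoryIdempotentRepository"/>
<route id="idempotent-consumer">
   <from uri="activemq:queue:my.queue"/>
   <!-- skip any message whose JMSMessageID has already been seen -->
   <idempotentConsumer messageIdRepositoryRef="messageIdRepo">
      <header>JMSMessageID</header>
      <to uri="bean:myProcessor"/>
   </idempotentConsumer>
</route>
For a 24/7 setup you would swap the in-memory repository for a persistent one (Camel also ships e.g. a JDBC-backed repository), so duplicates are still detected across restarts.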

Redelivering JMS messages from the DLQ

I have two components communicating over a JMS queue in a WildFly instance. As soon as the consumer of the queue disconnects or gets stopped, the messages are forwarded to the DLQ (at least when WildFly is restarted).
Is it possible to configure WildFly to automatically redeliver the messages from the DLQ as soon as a consumer reconnects to the queue?
Some details
WildFly version: 8.2.0
standalone.xml - as far as I can tell, nothing special:
<jms-destinations>
   <jms-queue name="ExpiryQueue">
      <entry name="java:/jms/queue/ExpiryQueue"/>
      <durable>false</durable>
   </jms-queue>
   <jms-queue name="DLQ">
      <entry name="java:/jms/queue/DLQ"/>
      <durable>false</durable>
   </jms-queue>
   ...
   <jms-queue name="Q1-Producer-to-Consumer">
      <entry name="java:/queue/Q1-Producer-to-Consumer"/>
      <entry name="java:jboss/exported/queue/Q1-Producer-to-Consumer"/>
      <durable>false</durable>
   </jms-queue>
</jms-destinations>
Thanks.
The DLQ only gets messages that have thrown an exception during message processing. If a consumer disconnects, the messages will simply still be sitting there awaiting delivery.
If you are seeing an issue whereby messages hit the DLQ during a server restart, this would suggest that your consumer is consuming messages before the resources it requires are available, and so is erroring when processing them. You would be better off fixing your consumer not to start consuming messages too early, rather than trying to fish the failed messages back from the DLQ.
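For background, what actually routes a message to the DLQ is exceeding the broker's delivery attempts, configured per address in the messaging subsystem of standalone.xml. A sketch, assuming HornetQ-style setting names and illustrative values (check your own standalone.xml):
<address-settings>
   <address-setting match="#">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <!-- a message moves to the DLQ only after this many failed deliveries -->
      <max-delivery-attempts>10</max-delivery-attempts>
      <redelivery-delay>0</redelivery-delay>
   </address-setting>
</address-settings>
Raising redelivery-delay can give a slow-starting consumer time to become healthy before messages exhaust their delivery attempts.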

How to dequeue messages from an ActiveMQ queue?

I'm using the Virtual Topics concept of ActiveMQ 5.14.5 (https://activemq.apache.org/virtual-destinations) with the MQTT protocol.
ActiveMQ will pick up the messages written to the topic and write them to a queue (or multiple queues or topics). See the ${ACTIVEMQ_HOME}/conf/activemq.xml configuration below:
<beans>
   <broker>
      ...
      <destinationInterceptors>
         <virtualDestinationInterceptor>
            <virtualDestinations>
               <compositeQueue name="MY.QUEUE">
                  <forwardTo>
                     <queue physicalName="FOO" />
                     <topic physicalName="BAR" />
                  </forwardTo>
               </compositeQueue>
            </virtualDestinations>
         </virtualDestinationInterceptor>
      </destinationInterceptors>
      ...
   </broker>
</beans>
Using the MQTT.fx software (https://mqttfx.jensd.de/) I'm only able to dequeue from a topic (BAR). How can I dequeue from a queue (FOO) to see the messages that arrived on it?
I am new to this and still learning about the MQTT protocol and the ActiveMQ broker.

How to recover lost messages in a Kafka consumer

I'm writing an application in Apache Camel. I consume messages from a Kafka topic via the Camel Kafka component and dump them into a database for recovery in case a crash/restart happens. Below is the Camel URI:
kafka:?autoCommitEnable=false&groupId=r&keySerializerClass=org.apache.kafka.common.serialization.StringSerializer&serializerClass=org.apache.kafka.common.serialization.StringSerializer&topic=
My use case is: I have consumed some message(s) from Kafka but could not dump them into the database before a crash happened. Now how do I get all the lost messages back with the same consumer group ID after restarting the application?
Thanks
Now how do I get all the lost messages back with the same consumer group ID after restarting the application?
Actually, Kafka stores the consumer offset for you if you commit offsets in your application. So when you restart the application, it will consume messages from the last offset stored in Kafka.
You could set autoCommitEnable=true, or rely on Camel committing the offset for you.
I also found this: https://github.com/apache/camel/blob/camel-2.18.2/components/camel-kafka/src/main/java/org/apache/camel/component/kafka/KafkaConsumer.java
There is this piece of code:
// When autoCommitEnable=false, Camel itself commits the offset
// synchronously once the polled records have been processed:
if (endpoint.getConfiguration().isAutoCommitEnable() != null
        && !endpoint.getConfiguration().isAutoCommitEnable()) {
    long partitionLastoffset = partitionRecords.get(partitionRecords.size() - 1).offset();
    consumer.commitSync(Collections.singletonMap(
            partition, new OffsetAndMetadata(partitionLastoffset + 1)));
}
So Camel will take care of committing the offset even if you do not set autoCommitEnable.
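Putting it together, a minimal Camel XML route sketch, assuming the Camel 2.17+ kafka URI style and placeholder broker, topic, group, and bean names. With autoCommitEnable=false, the offset is only committed after the exchange, including your database write, has completed:
<route id="kafka-to-db">
   <!-- manual offset handling: Camel commits after the route finishes -->
   <from uri="kafka:my-topic?brokers=localhost:9092&amp;groupId=r&amp;autoCommitEnable=false"/>
   <!-- hypothetical bean that dumps the message into the database -->
   <to uri="bean:databaseWriter"/>
</route>
If the application crashes before the commit, the records are redelivered from the last committed offset on restart, which is exactly the recovery behaviour asked about.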