JBoss AMQ / ActiveMQ Artemis: Pre-configure Durable Subscribers

I have a Red Hat AMQ broker (which is based on ActiveMQ Artemis) and I would like to use the durable subscription feature (or an equivalent) so that multiple OpenWire JMS subscribers can subscribe to our application's events and have those events delivered to them reliably.
I would like to pre-configure the subscribers to save myself trouble at initial application startup. I want to avoid the case where the main application starts running and publishes events before our durable subscribers have performed their initial subscription.
I also want to avoid explicitly ordering the startup sequence of my processes.
Is there any way I can pre-configure durable subscribers? In ordinary ActiveMQ (not Artemis), there is a feature called Virtual Topics which (kind of) solves this problem.
What is the preferred solution for ActiveMQ Artemis?

It is possible to pre-configure durable subscriptions since the OpenWire implementation creates the queue used for the durable subscription in a deterministic way (i.e. using the format client-id.subscription-name). For example, if you wanted to configure a durable subscription on the address myAddress with a client-id of myclientid and a subscription name of mysubscription, then you would configure the durable subscription like so:
<addresses>
   <address name="myAddress">
      <multicast>
         <queue name="myclientid.mysubscription"/>
      </multicast>
   </address>
</addresses>
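For illustration, a subscriber matching that configuration might look like the following sketch, using the ActiveMQ OpenWire JMS client. The broker URL is an assumption; what matters is that the client ID and subscription name match the two halves of the pre-configured queue name:
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PreconfiguredDurableSubscriber {
    public static void main(String[] args) throws JMSException {
        // Broker URL is an assumption; adjust for your installation.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        // The client ID must match the first half of the pre-configured queue name.
        connection.setClientID("myclientid");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("myAddress");
        // The subscription name must match the second half, so this subscriber
        // attaches to the pre-configured queue "myclientid.mysubscription".
        MessageConsumer subscriber = session.createDurableSubscriber(topic, "mysubscription");
        Message message = subscriber.receive();
        // ... process the message ...
        connection.close();
    }
}
Because the subscription queue already exists on the broker, messages sent to myAddress are retained for this subscription even before the subscriber connects for the first time.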

Related

ActiveMQ Artemis: Virtual Topic and Dead Letter Queue

We are migrating from ActiveMQ "Classic" to ActiveMQ Artemis.
On ActiveMQ "Classic" we were using Virtual Topics. They were created with virtualDestinationInterceptor:
<virtualTopic name="VirtualTopic.>" prefix="Consumer.*." selectorAware="false"/>
There is also a deadLetterStrategy that automatically creates the dead letter queue by appending .Dead to the end of the queue name.
With that setup, when a message is undelivered on a Virtual Topic consumer's queue, it is placed on a queue with the same name suffixed with .Dead.
On ActiveMQ Artemis we have reproduced that by setting the OpenWire parameter virtualTopicConsumerWildcards=Consumer.*.>;2.
The result is that when a consumer listens on the queue Consumer.CLIENT_ID.VirtualTopic.QUEUE_NAME it receives messages sent to the address VirtualTopic.QUEUE_NAME.
The corresponding FQQN is therefore VirtualTopic.QUEUE_NAME::Consumer.CLIENT_ID.VirtualTopic.QUEUE_NAME.
The dead letter mechanism is reproduced by configuring the broker to auto-create dead letter resources and to suffix the created queue with .Dead:
<address-setting match="#">
   <dead-letter-address>DLQ</dead-letter-address>
   <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
   <dead-letter-queue-prefix></dead-letter-queue-prefix>
   <dead-letter-queue-suffix>.Dead</dead-letter-queue-suffix>
   ...
</address-setting>
The result is that when a message is not delivered it ends up on the DLQ address, on a queue named after the original address suffixed with .Dead. FQQN: DLQ::VirtualTopic.QUEUE_NAME.Dead.
Question: How can we send the undelivered messages to a dead-letter queue with the same name as the consumed queue: FQQN Consumer.CLIENT_ID.VirtualTopic.QUEUE_NAME::Consumer.CLIENT_ID.VirtualTopic.QUEUE_NAME?
I was hoping that a divert could hook in, but it has no effect. The undelivered message still ends up on the DLQ address:
<diverts>
   <divert name="virtualTopicDeadDivert">
      <address>DLQ</address>
      <forwarding-address>Consumer.CLIENT_ID.VirtualTopic.FunctionalTest.Dead</forwarding-address>
      <!--filter string="_AMQ_ORIG_QUEUE='Consumer.CLIENT_ID.VirtualTopic.QUEUE_NAME'"/-->
      <exclusive>true</exclusive>
   </divert>
</diverts>
There is currently no way to send an undelivered message to a dead-letter address with a queue which is automatically created and named according to the queue where the message was originally sent.
This mismatch in behavior is a consequence of the naming convention and the semantics of virtual topics in ActiveMQ "Classic" combined with the semantics of the address model of ActiveMQ Artemis. To be clear, ActiveMQ Artemis is not meant to be a feature-for-feature reimplementation of ActiveMQ "Classic." Rather, it is a new implementation with features and abilities not possible on ActiveMQ "Classic" with equivalent (and sometimes identical) behavior where it makes sense.
Please note that virtual topics in ActiveMQ "Classic" were originally developed to deal with some of the shortcomings of JMS topic subscriptions in JMS 1.1. However, those limitations are addressed by the fundamental address model of ActiveMQ Artemis and, more importantly, by JMS 2.0. ActiveMQ "Classic" doesn't yet support JMS 2.0, but ActiveMQ Artemis always has. You might consider moving away from virtual topics and using JMS 2.0 instead.
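For example, a shared durable subscription in JMS 2.0 gives you competing consumers on a topic, which is the main use case virtual topics were invented for. A minimal sketch using the ActiveMQ Artemis JMS client follows; the broker URL, address, and subscription name are placeholders:
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SharedDurableConsumer {
    public static void main(String[] args) {
        // Broker URL, address, and subscription name are placeholders.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext context = factory.createContext()) {
            Topic topic = context.createTopic("VirtualTopic.QUEUE_NAME");
            // A shared durable subscription: several consumers can attach to the
            // same subscription and divide its messages between them, like
            // competing consumers on a virtual topic queue, with no client ID required.
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "mySubscription");
            Message message = consumer.receive();
            // ... process the message ...
        }
    }
}
Multiple instances of this consumer can run concurrently and will share the subscription's messages, which is exactly the competing-consumer behavior a virtual topic queue provides.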

Triggering a Kubernetes job for a Kafka message

I have a Kubernetes service that only does something when it consumes a message from a Kafka topic. The topic does not receive messages very often, and running the service as a job triggered whenever a message arrives would save resources.
I see that Kubernetes has this functionality for AMQP-type message services: https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/
Is there a way to adapt this for Kafka, given that Kafka does not support AMQP? I'd switch to a different messaging system, but I have other services that also read from this topic and require Kafka.
That Kafka consumer Service is all you really need. If you want to save resources, it can be paired with the KEDA autoscaler so that it scales up and down depending on load or consumer group lag.
Or you can use serverless platforms such as Knative to trigger based on events from Kafka (or other messaging systems).
Kafka does not support AMQP
Kafka Connect should be able to bridge AMQP to Kafka. E.g. Apache Camel has connectors for both.
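As an illustration of the Camel option, a route bridging the two could be as simple as the following sketch using Camel's Java DSL (rather than the Kafka Connect connector). The endpoint URIs are placeholders, and it assumes the camel-main, camel-amqp, and camel-kafka dependencies are on the classpath:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class AmqpToKafkaBridge {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // Move every message from an AMQP queue onto a Kafka topic;
                // both endpoint URIs are placeholders.
                from("amqp:queue:incoming-events")
                    .to("kafka:events?brokers=localhost:9092");
            }
        });
        main.run();
    }
}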

Filtering in ActiveMQ Artemis. Reload of config in a cluster

A question about filtering in ActiveMQ Artemis.
Say I have a queue named MyQueue.IN and a filter only accepting a certain JMS header value, let's say ORDER.
In broker.xml, under the <core> tag, I have:
<core>
   <configuration-file-refresh-period>5000</configuration-file-refresh-period>
   <queues>
      <queue name="MyQueue.IN">
         <address>MyQueue.IN</address>
         <filter string="TOSTATUS='ORDER'"/>
         <durable>true</durable>
      </queue>
   </queues>
</core>
As I read the manual, the broker should now reload the configuration in broker.xml every 5 seconds.
But when I change the filter to
<filter string="TOSTATUS='ORDERPICKUP'"/>
the config is not changed in ActiveMQ Artemis.
Not even if I restart the node.
It is in a cluster, but I have changed broker.xml on both sides.
Any ideas on how to change a filter on a queue? Preferably by changing broker.xml.
/Zeddy
You are seeing the expected behavior. Although this behavior may not be intuitive or particularly user-friendly, it is meant to protect data integrity. Queues are immutable, so once they are created they can't be changed. Therefore, to "change" a queue it has to be deleted and re-created. Of course, deleting a queue means losing all the messages in it, which is potentially catastrophic. In general, there are two ways to delete the queue and have it re-created:
Set <config-delete-queues>FORCE</config-delete-queues> in a matching <address-setting>. However, there is currently a problem with this approach which will be resolved via ARTEMIS-2076.
Delete the queue via management while the broker is running. This can be done via JMX (e.g. using JConsole), the web console, the Artemis CLI, etc. Once the broker is stopped, update the XML and then restart the broker.
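As a sketch of the second approach, the queue can also be deleted over JMX from Java. The JMX URL and broker name below are assumptions; adjust them to your installation:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DeleteQueue {
    public static void main(String[] args) throws Exception {
        // JMX URL and broker name are assumptions; adjust for your installation.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbean = connector.getMBeanServerConnection();
            ObjectName broker = new ObjectName(
                    "org.apache.activemq.artemis:broker=\"mybroker\"");
            // destroyQueue removes the queue (and any messages in it), so the
            // broker can re-create it from the updated broker.xml on restart.
            mbean.invoke(broker, "destroyQueue",
                    new Object[]{"MyQueue.IN"}, new String[]{"java.lang.String"});
        }
    }
}
Once the queue is gone, restart the broker with the updated broker.xml and the queue is re-created with the new filter.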

Apache Kafka - Advisory Message

Does Apache Kafka have an "Advisory Message" option, similar to ActiveMQ?
ActiveMQ Advisory Message -> http://activemq.apache.org/advisory-message.html
No, there is no such functionality. Instead of managing the cluster with advisory messages, Kafka relies on ZooKeeper, and whenever some action is required (e.g. delete a topic, perform a rebalance) it creates an appropriate "command" node in ZooKeeper.
Having said this, Kafka exposes a lot of its underpinnings as JMX-accessible statistics.
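As an illustration, reading one of those statistics from Java is plain JMX. This sketch assumes the broker was started with remote JMX enabled on port 9999 (e.g. via the JMX_PORT environment variable); the URL and port are assumptions:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class KafkaJmxStats {
    public static void main(String[] args) throws Exception {
        // Assumes remote JMX was enabled on the broker (e.g. JMX_PORT=9999).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbean = connector.getMBeanServerConnection();
            // Broker-wide incoming message rate, one of many exposed metrics.
            ObjectName metric = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
            System.out.println("MessagesInPerSec (1-min rate): "
                    + mbean.getAttribute(metric, "OneMinuteRate"));
        }
    }
}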

NServiceBus and remote input queues with an MSMQ cluster

We would like to use the Publish / Subscribe abilities of NServiceBus with an MSMQ cluster. Let me explain in detail:
We have an SQL Server cluster that also hosts the MSMQ cluster. Besides SQL Server and MSMQ we cannot host any other application on this cluster. This means our subscriber is not allowed to run on the cluster.
We have multiple application servers hosting different types of applications (ranging from ASP.NET MVC to SharePoint Server 2010). The goal is to do pub/sub between all these applications.
All messages going through the pub/sub are critical and have an important value to the business. That's why we don't want local queues on the application server, but we want to use MSMQ on the cluster (in case we lose one of the application servers, we don't risk losing the messages since they are safe on the cluster).
Now I was assuming I could simply do the following at the subscriber side:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
   <configSections>
      <section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
      ....
   </configSections>
   <MsmqTransportConfig InputQueue="myqueue#server" ErrorQueue="myerrorqueue"
                        NumberOfWorkerThreads="1" MaxRetries="5" />
   ....
</configuration>
I'm assuming this used to be supported, seeing the documentation: http://docs.particular.net/nservicebus/messaging/publish-subscribe/
But this actually throws an exception:
Exception when starting endpoint, error has been logged. Reason:
'InputQueue' entry in 'MsmqTransportConfig' section is obsolete. By
default the queue name is taken from the class namespace where the
configuration is declared. To override it, use .DefineEndpointName()
with either a string parameter as queue name or Func parameter
that returns queue name. In this instance, 'myqueue#server' is defined
as queue name.
Now, the exception clearly states I should use the DefineEndpointName method:
Configure.With()
   .DefaultBuilder()
   .DefineEndpointName("myqueue#server")
But this throws another exception, which is documented (input queues should be on the same machine):
Exception when starting endpoint, error has been logged. Reason: Input
queue must be on the same machine as this process.
How can I make sure that my messages are safe if I can't use MSMQ on my cluster?
Dispatcher!
Now, I've also been looking into the dispatcher for a bit, and it doesn't seem to solve my issue either. I'm assuming the dispatcher also wouldn't be able to get messages from a remote input queue? And besides that, if the dispatcher dispatches messages to the workers and the workers go down, are my messages lost (even though they were not processed)?
Questions?
To summarize, these are the things I'm wondering with my scenario in NServiceBus:
I want my messages to be safe on the MSMQ cluster and use a remote input queue. Is this something I should or shouldn't do? Is it possible with NServiceBus?
Should I use a dispatcher in this case? Can it read from a remote input queue? (I cannot run the dispatcher on the cluster)
What if the dispatcher dispatches messages to the workers and one of the workers goes down? Do I lose the message(s) that were being processed?
Phill's comment is correct.
The thing is that you would get the type of fault tolerance you require practically by default if you set up a virtualized environment. In that case, the C drive backing the local queue of your processes is actually sitting on the VM image on your SAN.
You will need a MessageEndpointMappings section that you will use to point to the Publisher's input queue. This queue is used by your Subscriber to drop off subscription messages. This will need to be QueueName#ClusterServerName. Be sure to use the cluster name and not a node name. The Subscriber's input queue will be used to receive messages from the Publisher and that will be local, so you don't need the #servername.
There are two levels of failure: one is that the transport is down (say MSMQ), and the other is that the endpoint is down (the Windows Service). In the event that the endpoint is down, the transport will handle persisting the messages to disk. A redundant network storage device may be in order.
In the event that the transport is down, assuming it is MSMQ, the messages will back up on the Publisher side of things. Therefore you have to account for the size and number of messages to calculate how long you want messages to back up for. Since the Publisher is clustered, you can be assured that the messages will arrive eventually, assuming you planned your disk capacity appropriately.