Migration from ActiveMQ to ActiveMQ Artemis and advisories - activemq-artemis

We have a JMS application that uses ActiveMQ. Some features rely on ActiveMQ advisory messages (e.g. we are interested in the number of consumers on a queue, etc.).
We are testing the application with ActiveMQ Artemis 2.24.0, and we found that we are not receiving any advisory messages.
We have read the documentation, and we use the OpenWire protocol with supportAdvisory=true and suppressInternalManagementObjects=false. We can see the advisory queues and addresses in the admin UI, and we can see that the consumers are there, but we receive no advisories at all.
The admin UI shows a message count of zero for the advisory queues, so we understand that Artemis is not sending them. When we send a message to an advisory queue ourselves, it is propagated to the clients, so the clients/consumers are bound correctly.
We use the Spring Boot framework. Here is the sample code we use:
@JmsListener(destination = "ActiveMQ.Advisory.Consumer.Topic.Application", containerFactory = "jmsFactory")
public void checkIfBecomeMaster(Message message) {
    try {
        consumerCount = Integer.valueOf(message.getStringProperty("consumerCount"));
        // More code here...
    } catch (JMSException e) {
        e.printStackTrace();
    }
}
What are we missing? Is any plugin or additional tool required?
We know there are many other ways to query Artemis for the same values, but we would like to migrate the application as it is.

Related

Exception handling using Kafka rider in MassTransit

In MassTransit, when using a transport like RabbitMQ and an exception is thrown, the message goes into the queue queue_name_error. With Kafka, however, there is no topic with an _error suffix, nor a similar queue on the supporting transport. How do you handle exceptions properly when using Kafka with MassTransit, and where can erroneous messages be found?
Since Kafka (and Azure Event Hub) are essentially log files with a fancy API, there is no need for an _error queue, as there are no queues anyway. There are no dead letters either. So the built-in error handling of MassTransit that moves faulted messages to the _error queue doesn't apply (nor does it make sense).
You can use the retry middleware (UseMessageRetry, etc.) with topic endpoints to handle transient exceptions. You can also log the offsets of poison messages to deal with them later. The offset doesn't change, and the messages remain in the topic until they expire.

ActiveMQ Artemis configure standalone brokers with failover and statically assigned queues

I am trying to figure out how to use ActiveMQ Artemis to achieve the following topology. I need several producers writing to queues hosted on two standalone Artemis brokers. At the moment every producer creates two connection factories, which handle the connections to the two brokers and create the corresponding queues.
@Bean
public ConnectionFactory jmsConnectionFactoryBroker1() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl_1, username, password);
    return connectionFactory;
}

@Bean
public ConnectionFactory jmsConnectionFactoryBroker2() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl_2, username, password);
    return connectionFactory;
}
My main issue is that I need to know which queue is assigned to which broker, and at the same time I need to know that if one broker is down for some reason I can re-create that queue on the other broker on the fly and avoid losing any further messages. So my approach was to set up the broker URLs as below:
artemis.brokerUrl_1=(tcp://myhost1:61616,tcp://myhost2:61616)?randomize=false
artemis.brokerUrl_2=(tcp://myhost2:61616,tcp://myhost1:61616)?randomize=false
So, using a different JmsTemplate for each broker URL, my intention was that the JmsTemplate referring to brokerUrl_1 would create its queues on myhost1, and likewise the corresponding JmsTemplate for brokerUrl_2 on myhost2.
I would have expected (due to the randomize parameter) that each queue would have some kind of static membership to a broker, and that in the case of a broker failure there would be some kind of migration by re-creating the queue from scratch on the other broker.
Instead, what I notice is that the distribution of queue creation almost never happens as expected but rather randomly, since the same queue can appear on either broker, which is not desirable for my use case.
How can I approach this case and solve my problem so that I can create my queues on a predefined broker, with the fail-safe that if one broker is down the producer will create the same queue on the other broker and continue?
Note that having shared state between the brokers is not an option.
The randomize=false parameter doesn't apply to the Artemis core JMS client. It only applies to the OpenWire JMS client distributed with ActiveMQ 5.x. Which connector is selected from the URL is determined by the connection load-balancing policy, as discussed in the documentation. The default connection load-balancing policy is org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy, which selects a random connector from the URL list and then round-robins connections after that. There are other policies available, and if none of them give you the behavior you want then you can potentially implement your own.
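As a rough sketch of what implementing your own policy could look like (the client also ships a FirstElementConnectionLoadBalancingPolicy that behaves this way, so check that first), a policy is just a class implementing ConnectionLoadBalancingPolicy; the class name below is hypothetical:

import org.apache.activemq.artemis.api.core.client.loadbalance.ConnectionLoadBalancingPolicy;

// Always picks the first connector in the URL list, so a factory built from
// (tcp://myhost1:61616,tcp://myhost2:61616) would initially connect to myhost1.
public class FirstConnectorLoadBalancingPolicy implements ConnectionLoadBalancingPolicy {
    @Override
    public int select(int max) {
        return 0; // index into the connector list; 0 = the first entry
    }
}

The policy should be selectable on the connection URL via the connectionLoadBalancingPolicyClassName parameter (it maps to the corresponding setter on the connection factory); consult the documentation referenced above for the exact details.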
That said, it sounds like what you really want/need is 2 pairs of brokers where each pair consists of a live and a backup. That way, if the live broker fails, all the clients can fail over to the backup and you won't have to deal with the complexity of the "fake" fail-over functionality you're trying to implement.
Also, since you're using Spring's JmsTemplate, you should be aware of some well-known anti-patterns it uses which may significantly hurt performance.
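The usual culprit is that JmsTemplate opens and closes a connection, session, and producer for every send unless the connection factory caches them. A minimal sketch of the commonly cited mitigation, wrapping the factory from the question in Spring's CachingConnectionFactory (the bean name and cache size are placeholders):

@Bean
public ConnectionFactory cachingConnectionFactoryBroker1() {
    ActiveMQConnectionFactory target = new ActiveMQConnectionFactory(brokerUrl_1, username, password);
    CachingConnectionFactory caching = new CachingConnectionFactory(target);
    caching.setSessionCacheSize(10); // reuse sessions/producers instead of re-creating them on every send
    return caching;
}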

Discard duplicate messages only if they are still queued with ActiveMQ Artemis and JBoss EAP 7.1

We're using ActiveMQ Artemis on JBoss EAP 7.1.
We noticed that once a message with a specific _AMQ_DUPL_ID value has passed through the queue, if the producer tries to send another message with the same _AMQ_DUPL_ID value to the same queue it is discarded by the broker. However, we need duplicate messages to be discarded only if they are still in the queue.
Is there a way to achieve this goal?
We use the database primary key as the _AMQ_DUPL_ID value. This is the code we use:
public void sendMessage(final T msg, final String id) {
    jmsTemplate.send(destination, new MessageCreator() {
        @Override
        public Message createMessage(Session session) throws JMSException {
            Message message = session.createObjectMessage(msg);
            message.setStringProperty("_AMQ_DUPL_ID", id);
            return message;
        }
    });
}
We're looking for a solution because we have a timer that, every 30 seconds, loads from the DB all records with a specific value in their status field and puts them onto the JMS queue.
Consumers consume the JMS messages, process them, update their status field, insert/update them in the DB, and open a websocket connection to another application that we don't control. Sometimes a consumer hangs on the websocket call and, consequently, remains busy while the timer continues to fill the queue.
To solve this problem we thought that something like Artemis' duplicate message detection would help. However, when the external app hangs our consumer, we need the timer to be able to put the message on the queue again.
The duplicate message detection in ActiveMQ Artemis is working as designed. Its goal is to avoid any chance that a consumer will receive a duplicate message, which means that even though a message may no longer be in the queue (e.g. because it was consumed), any duplicate of that message should still be rejected.
What you're asking for here is akin to asking how you can insert multiple records with the same primary key into a database table. It simply can't be done, because the entire point of having a primary key is to avoid duplicate records.
I recommend you implement some kind of timeout for the websocket call, otherwise your application will be negatively impacted by a resource you have no control over.
Aside from that, you may be able to use a last-value queue, using the primary key as the value for _AMQ_LVQ_NAME. This will guarantee that only one instance of the message is in the queue at any point. Read the documentation for more details.
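A rough sketch of what bounding that call could look like, using java.util.concurrent; openWebSocket is a placeholder for whatever client call the consumer actually makes:

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<?> call = executor.submit(() -> {
    openWebSocket(message); // placeholder for the external call that sometimes hangs
});
try {
    call.get(10, TimeUnit.SECONDS); // don't wait on the external app longer than this
} catch (TimeoutException e) {
    call.cancel(true); // interrupt the hung call so the consumer can move on
} catch (InterruptedException | ExecutionException e) {
    // handle/log the failure as appropriate
}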
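For illustration, here is the sender from the question adapted to the last-value approach; this assumes the target queue is configured as a last-value queue on the broker (see the documentation for that part):

public void sendMessage(final T msg, final String id) {
    jmsTemplate.send(destination, new MessageCreator() {
        @Override
        public Message createMessage(Session session) throws JMSException {
            Message message = session.createObjectMessage(msg);
            message.setStringProperty("_AMQ_LVQ_NAME", id); // last-value key: only the newest message per id stays queued
            return message;
        }
    });
}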

max.in.flight.requests.per.connection and Spring Kafka Producer Synchronous Event Publishing with KafkaTemplate

I'm a bit confused about the relationship between max.in.flight.requests.per.connection for Kafka Producers and synchronous publishing of events using Spring-Kafka and was hoping someone might be able to clear up the relationship between the two.
I'm looking to set up synchronous event publishing with Spring Kafka using Spring Kafka's KafkaTemplate. The Spring Kafka documentation provides an example using ListenableFuture's get(SOME_TIME, TimeUnit) to enable synchronous publishing of events (duplicated below for reference).
public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);
    try {
        template.send(record).get(10, TimeUnit.SECONDS);
        handleSuccess(data);
    }
    catch (ExecutionException e) {
        handleFailure(data, record, e.getCause());
    }
    catch (TimeoutException | InterruptedException e) {
        handleFailure(data, record, e);
    }
}
I was looking at Kafka's producer configuration documentation and saw that Kafka has a max.in.flight.requests.per.connection setting, described as follows:
The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).
What does setting max.in.flight.requests.per.connection to a value of 1 give when event publishing is handled asynchronously? Does setting max.in.flight.requests.per.connection to 1 force synchronous publishing of events for a Kafka producer? If I want to set up synchronous publishing of events for a Kafka producer and take the approach recommended by Spring Kafka, should I be concerned about max.in.flight.requests.per.connection, or is it safe to ignore?
I don't believe they are related at all. The send is still asynchronous; setting it to 1 just means the second send will block until the first completes:
future1 = template.send(...);
future2 = template.send(...); // this will block
future1.get(); // and this will return almost immediately
future2.get();
You still need to get the result of the future to test for success/failure.
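For completeness, here is a sketch of where that producer property would live if you did want to cap it at 1 (purely to protect ordering when retries happen, not to make sends synchronous); the bootstrap address and serializers are placeholders:

Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1); // ordering guard under retries; unrelated to get()
KafkaTemplate<String, String> template = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));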

How to have sync producer batch messages?

We were using the Kafka 0.8 async producer but it is dropping messages (and there is no async response from another thread, or else we could keep using async).
We have set batch.num.messages to 500 and our consumer is not changing. I read that batch.num.messages only applies to the async producer and not the sync one, so I need to batch myself. We are using compression.codec=snappy and our own serializer class.
My question is two-fold:
Can I assume that I can just use our own serializer class and then send the message on my own?
Do I need to worry about any special snappy options/parameters that Kafka might be using?
Yes, that's because batch.num.messages controls the behaviour of the async producer only. This is explicitly stated in the relevant guide on configuration parameters:
The number of messages to send in one batch when using async mode. The producer will wait until either this number of messages are ready to send or queue.buffer.max.ms is reached.
In order to have batching with the sync producer you have to send a list of messages:
public void trySend(List<M> messages) {
    List<KeyedMessage<String, M>> keyedMessages = Lists.newArrayListWithExpectedSize(messages.size());
    for (M m : messages) {
        keyedMessages.add(new KeyedMessage<String, M>(topic, m));
    }
    try {
        producer.send(keyedMessages);
    } catch (Exception ex) {
        log.error(ex);
    }
}
Note that I'm using kafka.javaapi.producer.Producer here.
Once send is executed, the batch is sent.
Can I assume that I can just use our own serializer class and then send the message on my own?
Do I need to worry about any special snappy options/parameters that Kafka might be using?
Both compression and the serializer are orthogonal features that don't affect batching; they are applied to individual messages.
Note that there will be API changes and the async/sync APIs will be unified.
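For reference, a rough sketch of the kind of 0.8 producer configuration this implies: producer.type=sync with compression and a custom serializer set alongside it (batch.num.messages would only matter in async mode); the broker address and encoder class name are hypothetical:

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092");       // placeholder
props.put("producer.type", "sync");                      // synchronous sends
props.put("compression.codec", "snappy");                // applied per message, independent of batching
props.put("serializer.class", "com.example.MyEncoder");  // hypothetical custom Encoder<M>
Producer<String, M> producer = new Producer<String, M>(new ProducerConfig(props));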