Message redistribution on ActiveMQ Artemis 2.x does not work

I would like to enable message redistribution on my 2-node cluster with static hosts, but it does not seem to work.
1) I have 10 producers that write to the queue "MyTestQueue" on node 1 (but no consumers).
2) I have 1 consumer on node 2 (but no producers) that consumes messages from node 2.
I expect node 1 to redistribute the messages to node 2, where the consumer exists, but it does not. The message count on node 1 still equals the number of messages that were sent to node 1.
I have the following configuration in my broker.xml, which sets forward-when-no-consumers to false.
I have also set redistribution-delay to a value of zero.
<jms xmlns="urn:activemq:jms">
<queue name="MyTestQueue"/>
</jms>
...
<cluster-connections>
<cluster-connection name="my-test-cluster">
<address>jms</address>
<connector-ref>server0-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<forward-when-no-consumers>false</forward-when-no-consumers>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<confirmation-window-size>1024</confirmation-window-size>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
...
<address-settings>
<address-setting match="#">
<redelivery-delay>5000</redelivery-delay>
<redelivery-delay-multiplier>3</redelivery-delay-multiplier>
<max-redelivery-delay>10000</max-redelivery-delay>
<max-delivery-attempts>10</max-delivery-attempts>
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
How can I get the message redistribution to work?

This might be related to a known issue: there is a situation in which the broker fails to load-balance messages if they don't contain any application properties.
Could you please try with that?
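For example, assuming an existing JMS session and producer, a quick way to test that theory is to set an arbitrary application property on each message before sending (the property name "probe" below is purely illustrative):
TextMessage message = session.createTextMessage("test payload");
// hypothetical workaround test: make sure the message carries at least one application property
message.setStringProperty("probe", "true");
producer.send(message);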

A couple of things...
What version of Artemis are you using? Have you tried reproducing this with version 2.2.0?
What kind of client are you using (e.g. JMS, AMQP, STOMP, etc.)?
Do you have a reproducible test-case (e.g. a modified version of one of the examples shipped with Artemis)?
The configuration element <forward-when-no-consumers> is not valid in Artemis (although it was in older versions of HornetQ).
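The legacy setting maps onto the current <message-load-balancing> element as follows:
<!-- legacy: <forward-when-no-consumers>true</forward-when-no-consumers> -->
<message-load-balancing>STRICT</message-load-balancing>
<!-- legacy: <forward-when-no-consumers>false</forward-when-no-consumers> -->
<message-load-balancing>ON_DEMAND</message-load-balancing>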

Remove the <address>jms</address> element from the cluster-connection configuration - each cluster connection only applies to addresses that match the specified address. Also make sure you're using a compatible client, because messages from 1.x clients to a 2.x cluster are lost when they are load-balanced to nodes with matching consumers.
Here's an official, working example of an ActiveMQ Artemis configuration with a symmetric cluster, on-demand load balancing, and message redistribution.
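Putting both fixes together, a minimal sketch of the corrected cluster connection (reusing the connector names from the question, with the <address> element and the invalid <forward-when-no-consumers> element removed) would be:
<cluster-connections>
<cluster-connection name="my-test-cluster">
<connector-ref>server0-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<confirmation-window-size>1024</confirmation-window-size>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>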

Related

ActiveMQ Artemis and multiple DLQs

I have many queues, i.e.
my.queue.no.1
my.queue.no.2
my.queue.no.3
my.queue.no.4
And I want to redirect unsuccessful messages to a DLQ, but I don't want to mix messages from all the queues into one DLQ.
Is it possible to have multiple DLQs?
i.e.
my.queue.no.1
my.queue.no.dlq.1
my.queue.no.2
my.queue.no.dlq.2
my.queue.no.3
my.queue.no.dlq.3
my.queue.no.4
my.queue.no.dlq.4
P.S. I'm using Artemis 2.16.0.
ActiveMQ Artemis can automatically create the configured dead-letter-address and a corresponding dead-letter queue when a message is undeliverable; this is enabled with the auto-create-dead-letter-resources address setting. The dead-letter-queue-prefix address setting can be used to define a prefix for the automatically created dead-letter queues, e.g. to create DLQ.my.queue.no.NNN queues under the DLA address:
<address-settings>
<address-setting match="my.queue.no.#">
<dead-letter-address>DLA</dead-letter-address>
<auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
<dead-letter-queue-prefix>DLQ.</dead-letter-queue-prefix>
...

Make Artemis Slave Replication Use SSL

In Artemis, when using replication to keep master/slave pairs synchronized, the data will be replicated to the slave using a 'connection'.
I want to ensure this replication connection is encrypted. I suspect this is done by using SSL on the connectors section of broker.xml, but digging through the guides/official docs does not turn up an explicit statement of how it's done. I could go wading through the source code, play with settings, and try to sniff the traffic, but asking here seemed a bit easier.
Let's assume I have just a master/slave pair for now (I know that's not good for split-brain, but let's keep it simple) and that I will be using static connector lists, since UDP is not allowed in my data center. I have the following setup:
<connectors xmlns="urn:activemq:core">
<connector name="master">
tcp://master:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;trustStorePassword=1q2w3e4r
</connector>
<connector name="slave">
tcp://slave:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;trustStorePassword=1q2w3e4r
</connector>
</connectors>
<cluster-connections>
<cluster-connection name="amq-cluster">
<connector-ref>master</connector-ref>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.1</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<forward-when-no-consumers>false</forward-when-no-consumers>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>master</connector-ref>
<connector-ref>slave</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<check-for-live-server>true</check-for-live-server>
<!-- what master/slave group is this broker part of, master and slave must match -->
<group-name>nft-group-1</group-name>
<!-- does the broker initiate a quorum vote if connection to slave fails -->
<vote-on-replication-failure>true</vote-on-replication-failure>
<!-- how many vote attempts should the backup initiate when requesting a quorum? -->
<vote-retries>5</vote-retries>
<!-- how long should the broker wait between vote retries -->
<vote-retry-wait>5000</vote-retry-wait>
<cluster-name>amq-cluster</cluster-name>
</master>
</replication>
</ha-policy>
From my understanding, the connectors will be used when forming the master/slave pairs, and then the replication will be done via SSL using the configuration from the connectors section. Is this the case?
Yes, that is the case.
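Note that the connector URL parameters alone only describe how outgoing connections are made; for the replication connection to be encrypted end to end, the acceptor that the peer listens on also needs matching SSL settings. A minimal sketch, reusing the keystore paths from the question:
<acceptors xmlns="urn:activemq:core">
<acceptor name="netty-ssl-acceptor">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001.jks;keyStorePassword=1q2w3e4r;needClientAuth=true;trustStorePath=/d1/usr/dltuser/keystore/qcsp6ab2001_trust.jks;trustStorePassword=1q2w3e4r</acceptor>
</acceptors>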

ActiveMQ Artemis producer performance issue with 30k addresses/queues

I have an application that needs to send messages to ~30k addresses. I used to have OpenMQ as my broker, via JMS. The JMS client code I have was capable of sending ~30k messages to the broker within a few seconds. Then I switched to Artemis, and the send performance degraded drastically: it now takes up to 10 minutes to send all the messages to the Artemis broker.
I played with different configuration settings, both on the server (broker.xml) and via the connection URL.
Address setting in broker.xml:
<address-setting match="#">
<!-- <dead-letter-address>DLQ</dead-letter-address> -->
<!-- <expiry-address>ExpiryQueue</expiry-address> -->
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>DROP</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
Sending
private static final String BROKER_URL = "tcp://localhost:61616?minLargeMessageSize=10485760;compressLargeMessages=true;producerWindowSize=-1";
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
factory.setDupsOKBatchSize(50 * 1024 * 1024);
factory.setProducerWindowSize(-1); // -1 disables producer flow control
factory.setBlockOnDurableSend(false); // fire-and-forget sends
factory.setBlockOnNonDurableSend(false);
factory.setCacheDestinations(true);
System.out.println("Connection factory " + factory.toString());
Connection connection = factory.createConnection();
Session producerSession = connection.createSession(false, ActiveMQJMSConstants.PRE_ACKNOWLEDGE);
// Anonymous producer: the destination is supplied on each send() call
MessageProducer producer = producerSession.createProducer(null);
producer.setDisableMessageID(true);
producer.setDisableMessageTimestamp(true);
// Create a JMS message of type BytesMessage and send it
// (destination, jmsMessage, JMS_MESSAGE_PRIORITY and DEFAULT_TTL are defined elsewhere)
producer.send(destination, jmsMessage, DeliveryMode.NON_PERSISTENT, JMS_MESSAGE_PRIORITY, DEFAULT_TTL);
With the above code (simplified for readability) I only get 20 to 40 messages per second.
Am I missing something? Or is there some performance penalty introduced by the JMS-to-Core conversion in the Artemis client library?
Is there anything I can do to improve the message rate?

ActiveMQ Artemis Spring Boot clustered topic load balancing (round-robin) issue

After spending a lot of time configuring and trying a lot of solutions, I still cannot make Artemis work in cluster mode the way it works in local mode for publish-subscribe (topics).
I've prepared 3 consumers on different nodes and a producer that publishes messages on only one node.
I expect the 3 consumers to each receive their own copy of the messages, as described here!
The problem is that the cluster (core bridge) still round-robins messages between the 3 nodes.
My project GitHub repo: spring-boot-artemis-clustered-topic
Broker Cluster Config
<!-- Using STRICT is like setting the legacy forward-when-no-consumers
parameter to true-->
<!-- Using ON_DEMAND is like setting the legacy forward-when-no-consumers
parameter to false.-->
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
Consumers' behavior: [screenshot: artemis-b1-b2-b3]
In your ConnectionFactoryClusteredConfig.pubSubFactory() method, try moving factory.setPubSubDomain(true) after configurer.configure(factory, connectionFactory), as explained here: https://stackoverflow.com/a/44416121/832268.
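A sketch of what that might look like, assuming a typical Spring Boot factory bean (the bean and parameter names here are illustrative, not your exact code):
@Bean
public DefaultJmsListenerContainerFactory pubSubFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // configure(..) applies Spring Boot's defaults, including the pub-sub-domain
    // setting from application properties, which can overwrite a value set earlier...
    configurer.configure(factory, connectionFactory);
    // ...so enable topic (pub/sub) semantics only after configure(..) has run:
    factory.setPubSubDomain(true);
    return factory;
}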

Facing cluster testing issue with ActiveMQ Artemis

I have 2 instances of ActiveMQ Artemis, simply created with the commands
./artemis create artemis/server1 and
./artemis create artemis/server2
I am using Linux (Ubuntu).
Here is the broker.xml for server1:
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
</acceptors>
<connectors>
<connector name="netty-connector">tcp://localhost:61616</connector>
<!-- connector to the server1 -->
<connector name="server1-connector">tcp://localhost:61617</connector>
</connectors>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
And here is the broker.xml for server2:
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61617?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
</acceptors>
<connectors>
<connector name="netty-connector">tcp://localhost:61617</connector>
<!-- connector to the server0 -->
<connector name="server0-connector">tcp://localhost:61616</connector>
</connectors>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server0-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
Also, in server2's bootstrap.xml I changed the web bind port:
<web bind="http://localhost:8163" path="web">
I tested it with the StaticClusteredQueueExample, and that example works fine.
Now I am running the ActiveMQ Artemis JMeter performance tests against my cluster, using the JMeter testing examples found here.
When I run the point-to-point test with JMeter, I get an error rate of nearly 50% (Aggregate Report in JMeter) on the consumer.
But when I run only one node (either server1 or server2) on the Ubuntu system, it works fine, with a 0% error rate (Aggregate Report in JMeter).
Can you please help me understand why I am getting a ~50% error rate (Aggregate Report in JMeter) when running multiple instances (nodes) with Docker?
The problem is that you're mixing one example (i.e. the JMeter example) with a cluster configuration (i.e. from the clustered-static-discovery example) that isn't really compatible with it.
The <message-load-balancing> of the cluster is STRICT, which means messages will be load-balanced across the cluster regardless of the presence of consumers. Furthermore, the default <redistribution-delay> is -1, meaning the messages sent to the other nodes in the cluster due to the STRICT message-load-balancing type will stay on those nodes and will not be redistributed based on consumer demand.
The JMeter example was written with a single node in mind, so it only sends messages to and consumes messages from one node, which means it will only receive back half of the messages it sends, as the other half will have been forwarded to the other node in the cluster due to this configuration.
If you change the <message-load-balancing> to ON_DEMAND you won't see any errors, as all the messages will stay on the node where they were sent, which is also where the consumer is connected.
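For reference, a minimal sketch of that change against server1's configuration from the question; the <redistribution-delay> address setting is only needed if you also want messages that have already been forwarded to hop back to a node with a consumer:
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
...
<address-settings>
<address-setting match="#">
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>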