Side effect of paging mode in ActiveMQ Artemis

We are currently experiencing that, after some addresses/queues have finished paging, our application does not receive any messages anymore. It is as if the messages are gone. We expect that paging mode should not break the functionality of the cluster. Our address-settings configuration looks as follows:
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq#">
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<auto-delete-queues>true</auto-delete-queues>
<auto-delete-addresses>true</auto-delete-addresses>
<config-delete-addresses>FORCE</config-delete-addresses>
</address-setting>
<address-setting match="MyQueueEins">
<address-full-policy>BLOCK</address-full-policy>
<max-size-bytes>100Mb</max-size-bytes>
</address-setting>
<address-setting match="MyQueueZwei">
<address-full-policy>BLOCK</address-full-policy>
<max-size-bytes>50Mb</max-size-bytes>
</address-setting>
<address-setting match="MyQueueDrei">
<address-full-policy>BLOCK</address-full-policy>
<max-size-bytes>1Mb</max-size-bytes>
</address-setting>
<address-setting match="Onlinequeue.#">
<address-full-policy>BLOCK</address-full-policy>
<max-size-bytes>50Mb</max-size-bytes>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<page-size-bytes>10000000</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>false</auto-create-queues>
<auto-create-addresses>false</auto-create-addresses>
<auto-create-jms-queues>false</auto-create-jms-queues>
<auto-create-jms-topics>false</auto-create-jms-topics>
<auto-delete-queues>false</auto-delete-queues>
<auto-delete-addresses>false</auto-delete-addresses>
<config-delete-addresses>FORCE</config-delete-addresses>
</address-setting>
</address-settings>
Are we missing something in the broker.xml configuration?
What could be the cause of the messages being gone / no longer being consumed after paging mode?
An address experiencing this issue is BoxDeliveryQueue. In broker.xml, this queue falls under the default address-setting with # as the match. We are seeing that, after this address has been in paging mode, its messages are no longer consumed. Today I found the file system location we use to store the paged messages:
<paging-directory>./data/paging</paging-directory>
I will check in the file system whether messages exist in the configured directory for the addresses. If that is the case, it would mean that the system is not consuming the messages from the file system as described in the documentation:
The system will navigate on the files as needed, and it will remove the page file as soon as all the messages are acknowledged up to that point.
Could it be that, because of our configuration in broker.xml, the messages get acknowledged directly after being paged, so that they are no longer consumed?
BoxDeliveryQueue is also created in broker.xml:
<address name="BoxDeliveryQueue">
<anycast>
<queue name="BoxDeliveryQueue">
<durable>true</durable>
</queue>
</anycast>
</address>
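For completeness, an explicit address-setting for this address (instead of letting it fall through to the # catch-all) could look like the sketch below; the limits are purely illustrative assumptions, not values we have tested:
<address-setting match="BoxDeliveryQueue">
<redelivery-delay>0</redelivery-delay>
<!-- illustrative limits: start paging at ~100 MB, 10 MB per page file -->
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
</address-setting>
Pinning the limits per address at least makes it predictable when this particular queue starts and stops paging.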
The consumer is a message-driven bean pointing to the BoxDeliveryQueue:
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
@MessageDriven(mappedName = "BoxDeliveryQueue", name = "BoxDeliveryMessageBean", activationConfig = {
    @ActivationConfigProperty(propertyName = "clientId", propertyValue = "BoxDeliveryQueue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "BoxDeliveryQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "rebalanceConnections", propertyValue = "true"),
    @ActivationConfigProperty(propertyName = "resourceAdapterJndiName", propertyValue = "delivery_ra") })
public class BoxDeliveryMessageBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // message handling logic omitted
    }
}
This is so weird. The MDB fails to consume the messages only after the address has finished paging; if the address is not being paged, everything is fine. Regarding protocols, it is configured like this:
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis"
>tcp://localhost:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;nioRemotingThreads=24</acceptor>
</acceptors>
Our Java EE server is WebLogic 12c. I haven't checked the thread dumps for stuck threads yet. I assumed the issue is in my Artemis configuration, since I assume the MDBs are unaware of whether a message comes directly from memory or from paged storage. I will check whether there are stuck threads in the application server.
Thank you,
Hadi

Related

Started paging before reaching max size in ActiveMQ Artemis

Paging started before reaching the configured max size, in ActiveMQ Artemis version 2.17.0.
2021-05-31 08:59:30,293 WARN [org.apache.activemq.artemis.core.server] AMQ222038: Starting paging on address 'consumer_queue'; size is currently: 208,250 bytes; max-size-bytes: 524,288,000; global-size-bytes: 1,073,744,320
Address settings configured for the queue consumer_queue:
<address-setting match="consumer_queue">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>500Mb</max-size-bytes>
<page-size-bytes>10Mb</page-size-bytes>
<max-delivery-attempts>5</max-delivery-attempts>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>false</auto-create-queues>
<auto-create-addresses>false</auto-create-addresses>
<auto-create-jms-queues>false</auto-create-jms-queues>
<auto-create-jms-topics>false</auto-create-jms-topics>
</address-setting>
global-max-size is set to -1. The JVM is using -Xms36G, and the broker automatically configures the global-max-size to 18G, confirmed via this log:
AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 19,327,352,832
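For reference, global-max-size can also be pinned explicitly in the <core> section of broker.xml instead of being derived from the JVM heap; the value below is only an illustration:
<!-- inside <core> in broker.xml; illustrative value of 2 GiB -->
<global-max-size>2147483648</global-max-size>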
This looks like it may be a bug. Please file an issue here, and include a minimal reproducible example.

ActiveMQ Artemis producer performance issue with 30k addresses/queues

I have an application that needs to send messages to ~30k addresses. I used to have OpenMQ as my broker and used JMS. The JMS client code I have was capable of sending ~30k messages to the broker within a few seconds. Then I switched to Artemis and the send performance degraded drastically. Now it takes up to 10 minutes to send all the messages to the Artemis broker.
I played with different configuration settings, both on the server (broker.xml) and via the connection URL.
Address setting in broker.xml:
<address-setting match="#">
<!-- <dead-letter-address>DLQ</dead-letter-address> -->
<!-- <expiry-address>ExpiryQueue</expiry-address> -->
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>DROP</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
Sending
private static final String BROKER_URL = "tcp://localhost:61616?minLargeMessageSize=10485760;compressLargeMessages=true;producerWindowSize=-1";
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
factory.setDupsOKBatchSize(50 * 1024 * 1024);
factory.setProducerWindowSize(-1);
factory.setBlockOnDurableSend(false);
factory.setBlockOnNonDurableSend(false);
factory.setCacheDestinations(true);
System.out.println("Connection factory " + factory.toString());
Connection connection = factory.createConnection();
Session producerSession = connection.createSession(false, ActiveMQJMSConstants.PRE_ACKNOWLEDGE);
MessageProducer producer = producerSession.createProducer(null);
producer.setDisableMessageID(true);
producer.setDisableMessageTimestamp(true);
// Create a JMS BytesMessage and send it; destination, jmsMessage, JMS_MESSAGE_PRIORITY and DEFAULT_TTL are prepared elsewhere
producer.send(destination, jmsMessage, DeliveryMode.NON_PERSISTENT, JMS_MESSAGE_PRIORITY, DEFAULT_TTL);
With the above code (simplified for readability) I only get 20 to 40 messages per second.
Am I missing something? Or is there some performance penalty introduced by the JMS-to-Core conversion in the Artemis client library?
Is there anything I can do to improve the message rate?
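One low-risk experiment (an assumption on my part, not a confirmed fix) is to pre-declare the destinations in broker.xml so the broker does not have to auto-create ~30k addresses and queues on first send; a sketch with hypothetical destination names:
<addresses>
<!-- hypothetical names; one entry per destination, typically generated -->
<address name="dest.00001">
<anycast>
<queue name="dest.00001"/>
</anycast>
</address>
<address name="dest.00002">
<anycast>
<queue name="dest.00002"/>
</anycast>
</address>
</addresses>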

Facing cluster testing issue with ActiveMQ Artemis

I have 2 instances of ActiveMQ Artemis, simply created with the commands
./artemis create artemis/server1 and
./artemis create artemis/server2
I am using Linux (Ubuntu).
Here is the broker.xml for server1:
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
</acceptors>
<connectors>
<connector name="netty-connector">tcp://localhost:61616</connector>
<!-- connector to the server1 -->
<connector name="server1-connector">tcp://localhost:61617</connector>
</connectors>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
and here is broker.xml for server2:
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61617?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
</acceptors>
<connectors>
<connector name="netty-connector">tcp://localhost:61617</connector>
<!-- connector to the server0 -->
<connector name="server0-connector">tcp://localhost:61616</connector>
</connectors>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server0-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
Also in server2, the web bind port is changed in bootstrap.xml:
<web bind="http://localhost:8163" path="web">
I am testing it with the StaticClusteredQueueExample, and this example works fine.
Now I am running the ActiveMQ Artemis JMeter performance test against my cluster, using the JMeter testing examples which are here.
When I run the point-to-point test with JMeter, I get close to a 50% error rate (Aggregate Report in JMeter) on the consumer.
But when I run only one node (either server1 or server2) on the Ubuntu system, it works fine with a 0% error rate (Aggregate Report in JMeter).
Can you please help me understand why I get a ~50% error rate (Aggregate Report in JMeter) when running multiple instances (nodes) with Docker?
The problem is that you're mixing one example (i.e. the JMeter example) with a cluster configuration (i.e. from the clustered-static-discovery example) that really isn't compatible.
The <message-load-balancing> of the cluster is STRICT which means messages will be load-balanced across the cluster regardless of the presence of consumers. Furthermore, the default <redistribution-delay> is -1 meaning the messages sent to the other nodes in the cluster due to the STRICT message-load-balancing type will stay on those nodes and will not be redistributed based on consumer demand.
The JMeter example was written with a single node in mind, so it only sends messages to and consumes messages from one node, which means it will only receive back half of the messages it sends; the other half will have been forwarded to the other node in the cluster due to the configuration.
If you change the <message-load-balancing> to ON_DEMAND you won't see any errors as all the messages will stay on the node where they were specifically sent which is also where the consumer will be connected.
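Concretely, the change would look roughly like this on server1 (server2 keeps pointing at server0-connector as before); only <message-load-balancing> differs from the configuration above:
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>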

Message redistribution on ActiveMQ Artemis 2.x does not work

I would like to enable message redistribution on my 2-node cluster with static hosts, but it does not seem to work.
1) I have 10 producers that write to the queue "MyTestQueue" on node 1 (but no consumers).
2) I have 1 consumer on node 2 (but no producers) that consumes messages from node 2.
I expect node 1 to redistribute the messages to node 2, where the consumer exists, but it does not. The message count on node 1 still equals the number of messages that were sent to node 1.
I have the following configuration in my broker.xml, which sets forward-when-no-consumers to false.
I have also set redistribution-delay to zero.
<jms xmlns="urn:activemq:jms">
<queue name="MyTestQueue"/>
</jms>
...
<cluster-connections>
<cluster-connection name="my-test-cluster">
<address>jms</address>
<connector-ref>server0-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<forward-when-no-consumers>false</forward-when-no-consumers>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<confirmation-window-size>1024</confirmation-window-size>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
...
<address-settings>
<address-setting match="#">
<redelivery-delay>5000</redelivery-delay>
<redelivery-delay-multiplier>3</redelivery-delay-multiplier>
<max-redelivery-delay>10000</max-redelivery-delay>
<max-delivery-attempts>10</max-delivery-attempts>
<max-size-bytes>104857600</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<address-full-policy>PAGE</address-full-policy>
<redistribution-delay>0</redistribution-delay>
</address-setting>
</address-settings>
How can I get the message redistribution to work?
This might be related to a known issue: there is a situation in which the broker fails to load-balance messages if they don't contain the application-properties field.
Could you please try with that in mind?
Couple of things...
What version of Artemis are you using? Have you tried reproducing this with version 2.2.0?
What kind of client are you using (e.g. JMS, AMQP, STOMP, etc.)?
Do you have a reproducible test-case (e.g. a modified version of one of the examples shipped with Artemis)?
The configuration element <forward-when-no-consumers> is not valid in Artemis (although it was in older versions of HornetQ).
Remove the <address>jms</address> from the cluster connection configuration - each cluster connection only applies to addresses that match the specified address. Also make sure you're using a compatible client, because messages from 1.x clients to a 2.x cluster are lost when they are load balanced to nodes with matching consumers.
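Put together, a corrected cluster-connection along those lines would look roughly like this (a sketch keeping the connector names from the question, with <address> and <forward-when-no-consumers> removed):
<cluster-connection name="my-test-cluster">
<connector-ref>server0-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<confirmation-window-size>1024</confirmation-window-size>
<static-connectors>
<connector-ref>server1-connector</connector-ref>
</static-connectors>
</cluster-connection>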
Here's an official, working example of an ActiveMQ Artemis configuration with a symmetric cluster, on-demand load balancing, and message redistribution.

How to configure standalone HornetQ along with EAP 6.3 or Jboss 7 for durable JMS subscription?

I would like to send JMS messages from one JBoss server to another, but through a standalone HornetQ server.
This way I can have messages delivered later in case the destination server crashes (given durable subscriptions).
However, I already have messages routed internally at each JBoss. I would like a configuration that will not conflict with those.
The topology of the desired solution is visualised on the diagram.
How to achieve this configuration?
Let's start with configuring the standalone HornetQ. Download the standalone server from the HornetQ download page.
Next you have to configure the topic. Add your topic to the %HORNETQ-HOME%\config\stand-alone\non-clustered\hornetq-jms.xml file:
<topic name="Topic1">
<entry name="java:/topic/Topic1"/>
</topic>
You probably want to test the configuration on one machine first, so I recommend changing the port on which HornetQ listens for messages from 5455 to 5456.
Edit the %HORNETQ-HOME%\config\stand-alone\non-clustered\hornetq-configuration.xml file to change these ports. You also want to be able to register durable subscribers, so add these two lines to the <security-setting match="#"> element in the same file:
<permission type="createDurableQueue" roles="guest"/>
<permission type="deleteDurableQueue" roles="guest"/>
Then start the standalone HornetQ by running %HORNETQ-HOME%\run.bat.
First we are going to see how to send a message to this newly created topic. For this we need a dedicated connection factory on JBoss Server 1. In the jboss:domain:messaging subsystem of %JBOSS-HOME1%\standalone\configuration\standalone-full.xml, add a new pooled connection factory:
<pooled-connection-factory name="StandaloneHornetQConnectionFactory">
<transaction mode="xa"/>
<connectors>
<connector-ref connector-name="standalone-hornetq-connector"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/StandaloneHornetQConnectionFactory"/>
</entries>
</pooled-connection-factory>
From now on you have to use this connection factory when you want to send a message to Topic 1. This is usually done by dependency injection:
@Resource(lookup = "java:jboss/exported/jms/StandaloneHornetQConnectionFactory")
private ConnectionFactory connectionFactory;
As you can see above, we referenced standalone-hornetq-connector but don't have it yet. Let's create it by adding another netty connector into <connectors>:
<connectors>
<netty-connector name="standalone-hornetq-connector" socket-binding="standalone-hornetq-socket"/>
<netty-connector name="netty" socket-binding="messaging"/>
<netty-connector name="netty-throughput" socket-binding="messaging-throughput">
<param key="batch-delay" value="50"/>
</netty-connector>
<in-vm-connector name="in-vm" server-id="0"/>
</connectors>
As you can see, we need the standalone-hornetq-socket socket binding. Let's create it in the <socket-binding-group> subelement:
<outbound-socket-binding name="standalone-hornetq-socket">
<remote-destination host="localhost" port="5446"/>
</outbound-socket-binding>
As you can see, this is an outbound socket binding that will be used to send messages to our standalone HornetQ server listening on port 5446. This configuration is enough on JBoss Server 1 to send messages to JBoss Server 2 via the standalone HornetQ server.
To be able to receive the messages on JBoss Server 2, we have to repeat the above configuration once again in %JBOSS-HOME2%\standalone\configuration\standalone-full.xml. However, this time we offset the ports of JBoss Server 2 by port-offset:3 so that both servers can run on the same machine:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:3}">
This step is not necessary if you put the servers on separate machines (if you do, change localhost accordingly).
Now we can create an MDB that will be a durable subscriber to Topic 1.
@MessageDriven(name = "MyDurableSubscriber", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/topic/Topic1"),
    @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
    @ActivationConfigProperty(propertyName = "subscriptionName", propertyValue = "Topic1Subscription"),
    @ActivationConfigProperty(propertyName = "clientId", propertyValue = "MySubscriber"),
})
@ResourceAdapter("StandaloneHornetQConnectionFactory")
public class MyDurableSubscriber implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // ...
    }
}
The @ResourceAdapter("StandaloneHornetQConnectionFactory") line is the most important one, because by default all MDBs use the hornetq-ra resource adapter to subscribe (a local subscription). The ResourceAdapter annotation is from the org.jboss.ejb3.annotation package, and you can make this class available via the following Maven dependency:
<dependency>
<groupId>org.jboss.ejb3</groupId>
<artifactId>jboss-ejb3-ext-api</artifactId>
<version>2.1.0</version>
<scope>provided</scope>
</dependency>
With all this set up, you can enjoy durable subscriptions with a "star" topology of your servers.