ActiveMQ Artemis + queue filter

I am using ActiveMQ Artemis 2.15.0 and have a master/slave configuration with the following address:
<address name="TestTopic">
<multicast>
<queue name="TestQueue" />
</multicast>
</address>
After starting the master/slave pair I added a filter as below:
<address name="TestTopic">
<multicast>
<queue name="TestQueue">
<filter string="archive IS NULL OR archive &lt;&gt; true" />
</queue>
</multicast>
</address>
After adding the filter, when I process a message with the JMS header archive set to true, the filter does not work as expected. I have also executed the following CLI command to update the filter, but still no luck:
artemis queue update --user admin --password admin --url tcp://localhost:61617 --filter "archive IS NULL OR archive <> true"
To resolve the problem I currently need to restart the ActiveMQ Artemis server. Is it possible to get the filter to take effect without restarting? You can find my brokers on GitHub.
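For reference, the same update can also target the queue by name; a sketch of that variant (the --name option is an assumption about the queue update command, based on the queue defined above, and is not verified here):
artemis queue update --user admin --password admin --url tcp://localhost:61617 --name TestQueue --filter "archive IS NULL OR archive <> true"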

Related

How can I set up and test the embedded ActiveMQ Artemis server in WildFly?

I have a standalone WildFly server running and would like to set up the embedded instance of ActiveMQ Artemis, but I'm not sure if I've done it correctly. Here are the relevant parts from my standalone-full.xml:
<server>
...
<profile>
...
<subsystem xmlns="urn:jboss:domain:messaging-activemq:13.1">
<server name="default">
...
<http-connector name="http-connector" socket-binding="activemq" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="activemq" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<http-acceptor name="http-acceptor" http-listener="activemq"/>
<http-acceptor name="http-acceptor-throughput" http-listener="activemq">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
...
</server>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:undertow:12.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other" statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">
<server name="default-server">
...
<http-listener name="activemq" socket-binding="activemq" enable-http2="true"/>
...
</server>
</subsystem>
...
</profile>
...
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
...
<interface name="management">
<inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
</interface>
<socket-binding name="managemnet" interface="activemq-interface" port="${jboss.activemq.port:8081}"/>
...
</socket-binding-group>
</server>
When I try to connect to the server at tcp://localhost:8081 nothing seems to happen. Is there some tool out there that can help me examine the issue or do you guys know what might be wrong?
EDIT: Sorry guys, I forgot to add a few things. I do have standalone-full.xml; that was a typo. However, I was receiving an error when using the standard configuration:
AMQ122005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector.
So I assumed something was badly configured and that this was the reason I could not reach the embedded Artemis instance. I'm unsure what the standard port is for Artemis. Is it localhost:9990?
Regarding versions:
Artemis: 2.19.1
WildFly: 26.1
I'm trying to connect with the Quarkus JMS example described here:
https://quarkus.io/guides/jms
The AMQ122005 message is warning you that the "activemq" socket-binding used by the "http-connector" http-connector is bound to 0.0.0.0, which is not valid. A remote client looking up any JMS ConnectionFactory configured to use that connector will receive a stub pointing to 0.0.0.0, which won't work.
The only thing you need to do here is bind the server to a concrete, remotely accessible interface rather than 0.0.0.0. Therefore, you don't need the extra http-listener, etc.
If you are using JNDI then you can connect to the embedded broker using a URL like this, as demonstrated here:
http-remoting://host:8080
If you aren't using JNDI then you can connect to the embedded broker using a URL like:
tcp://host:8080?httpUpgradeEnabled=true
This is what you'd configure in Quarkus' application.properties in which case you can just ignore the AMQ122005 message since you're not using JNDI.
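For example, assuming the quarkus-artemis-jms flavour of that guide, the relevant application.properties entries might look like this (property names come from that extension; host and credentials are placeholders):
quarkus.artemis.url=tcp://host:8080?httpUpgradeEnabled=true
quarkus.artemis.username=someUser
quarkus.artemis.password=somePassword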
Why don't you use standalone-full.xml, which has a complete working embedded Artemis broker?
Another solution with WildFly 27 is to use Galleon and provision the embedded-activemq layer.
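For example, provisioning such a server with the Galleon CLI might look roughly like this (a sketch; the feature-pack coordinates, target directory, and companion layers are assumptions):
galleon.sh install wildfly:current --dir=my-wildfly --layers=cloud-server,embedded-activemq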

ActiveMQ Artemis Kubernetes multi broker setup

I am trying to set up a multi-broker ActiveMQ Artemis deployment in a Kubernetes environment. I can run single-pod deployments with persistence enabled successfully. I used the Artemis Docker image built from the official repo.
But if I try to set up a multi-pod deployment with the same persistence volume attached (a shared PV), the pods get deployed but only one pod succeeds; the other pods crash because the first Artemis container holds a file lock on the directory. So I am unable to bring up multiple pods with shared storage.
I also tried the JGroups and broadcast concepts to create a cluster so that each broker has its own storage and the brokers communicate with each other internally, but I was not able to configure it successfully.
Has anyone been able to successfully deploy multi-broker Artemis in Kubernetes? There is no issue if each pod has its own storage, but there should be high availability for the Artemis brokers, and the brokers should communicate as a cluster so that we do not lose messages.
It would be really helpful if anyone can share resources or steps about how to achieve this.
Edit
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>${name}</name>
${jdbc}
<persistence-enabled>${persistence-enabled}</persistence-enabled>
<connectors>
<connector name="netty-connector">tcp://${ipv4addr:localhost}:61618</connector>
</connectors>
<broadcast-groups>
<broadcast-group name="cluster-broadcast-group">
<broadcast-period>5000</broadcast-period>
<jgroups-file>jgroups.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="cluster-discovery-group">
<jgroups-file>jgroups.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="artemis-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<!-- <address>jms</address> -->
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="cluster-discovery-group"/>
<!-- <forward-when-no-consumers>true</forward-when-no-consumers> -->
</cluster-connection>
</cluster-connections>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>${journal.settings}</journal-type>
<paging-directory>${data.dir}/paging</paging-directory>
<bindings-directory>${data.dir}/bindings</bindings-directory>
<journal-directory>${data.dir}/journal</journal-directory>
<large-messages-directory>${data.dir}/large-messages</large-messages-directory>
${journal-retention}
<journal-datasync>${fsync}</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>${device-block-size}</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
${journal-buffer.settings}${ping-config.settings}${connector-config.settings}
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
${page-sync.settings}
${global-max-section}
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://${host}:${default.port}?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=${support-advisory};suppressInternalManagementObjects=${suppress-internal-management-objects}</acceptor>
${amqp-acceptor}${stomp-acceptor}${hornetq-acceptor}${mqtt-acceptor}
</acceptors>
${cluster-security.settings}${cluster.settings}${replicated.settings}${shared-store.settings}
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="${role}"/>
<permission type="deleteNonDurableQueue" roles="${role}"/>
<permission type="createDurableQueue" roles="${role}"/>
<permission type="deleteDurableQueue" roles="${role}"/>
<permission type="createAddress" roles="${role}"/>
<permission type="deleteAddress" roles="${role}"/>
<permission type="consume" roles="${role}"/>
<permission type="browse" roles="${role}"/>
<permission type="send" roles="${role}"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="${role}"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>${full-policy}</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>${full-policy}</address-full-policy>
<auto-create-queues>${auto-create}</auto-create-queues>
<auto-create-addresses>${auto-create}</auto-create-addresses>
<auto-create-jms-queues>${auto-create}</auto-create-jms-queues>
<auto-create-jms-topics>${auto-create}</auto-create-jms-topics>
<auto-delete-queues>${auto-delete}</auto-delete-queues>
<auto-delete-addresses>${auto-delete}</auto-delete-addresses>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>${address-queue.settings}
</addresses>
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
</core>
</configuration>
This is my broker.xml configuration.
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
<TCP
enable_diagnostics="true"
bind_addr="match-interface:eth0,lo"
bind_port="7800"
recv_buf_size="20000000"
send_buf_size="640000"
max_bundle_size="64000"
max_bundle_timeout="30"
sock_conn_timeout="300"
thread_pool.enabled="true"
thread_pool.min_threads="1"
thread_pool.max_threads="10"
thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="false"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="run"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="1"
oob_thread_pool.max_threads="8"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="true"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="run"
/>
<!-- <TRACE/> -->
<org.jgroups.protocols.kubernetes.KUBE_PING
namespace="${KUBERNETES_NAMESPACE:default}"
labels="${KUBERNETES_LABELS:app=custom-artemis-service}"
/>
<MERGE3 min_interval="10000" max_interval="30000"/>
<FD_SOCK/>
<FD timeout="10000" max_tries="5" />
<VERIFY_SUSPECT timeout="1500" />
<BARRIER />
<pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
<UNICAST3
xmit_table_num_rows="100"
xmit_table_msgs_per_row="1000"
xmit_table_max_compaction_time="30000"
/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
<FC max_credits="2000000" min_threshold="0.10"/>
<FRAG2 frag_size="60000" />
<pbcast.STATE_TRANSFER/>
<pbcast.FLUSH timeout="0"/>
</config>
This is the jgroups.xml I used.
I used this config to set up a multi-pod deployment in k8s. I added the relevant KUBE_PING jars to the lib folder. Although two pods were up, when I tried to access the Artemis UI there was inconsistent behaviour. After login, the user lands on a UI page that asks to add connections. Sometimes even after a successful login, the user is redirected back to the login page. The user does not get the UI they usually get when there is a single broker. I do not see any error logs either. Can anyone recommend the broker.xml changes needed for a Kubernetes deployment?
ArtemisCloud.io proposes a solution with an operator to deploy an ActiveMQ Artemis Kubernetes multi-broker setup; see https://artemiscloud.io/blog/using_operator/ and
https://artemiscloud.io/documentation/operator/deploying-brokers-operator.html
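As a rough illustration of that approach, the operator is driven by an ActiveMQArtemis custom resource; a minimal sketch for a two-broker deployment (field names based on the ArtemisCloud operator's examples, values assumed) could look like:
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: my-artemis
spec:
  deploymentPlan:
    size: 2                    # number of broker pods
    persistenceEnabled: true   # each broker gets its own storage
    messageMigration: true     # drain messages from a broker when it scales down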

ActiveMQ Artemis consumer does not reconnect to the same cluster node after crash

I am setting up a cluster of Artemis in Kubernetes with 3 master/slave groups:
activemq-artemis-master-0 1/1 Running
activemq-artemis-master-1 1/1 Running
activemq-artemis-master-2 1/1 Running
activemq-artemis-slave-0 0/1 Running
activemq-artemis-slave-1 0/1 Running
activemq-artemis-slave-2 0/1 Running
The Artemis version is 2.17.0. Here is my cluster config in master-0 broker.xml. The configs are the same for other brokers except the connector-ref is changed to match the broker:
<?xml version="1.0"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
<name>activemq-artemis-master-0</name>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 1.1 writes per millisecond
on the current journal configuration.
That translates as a sync write every 911999 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>100000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>2244000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic.-->
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ"/>
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue"/>
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the Standard LoggingActiveMQServerPlugin pluging to log in events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
<cluster-user>clusterUser</cluster-user>
<cluster-password>aShortclusterPassword</cluster-password>
<connectors>
<connector name="activemq-artemis-master-0">tcp://activemq-artemis-master-0.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-0">tcp://activemq-artemis-slave-0.activemq-artemis-slave.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-1">tcp://activemq-artemis-master-1.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-1">tcp://activemq-artemis-slave-1.activemq-artemis-slave.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-2">tcp://activemq-artemis-master-2.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-2">tcp://activemq-artemis-slave-2.activemq-artemis-slave.svc.cluster.local:61616</connector>
</connectors>
<cluster-connections>
<cluster-connection name="activemq-artemis">
<connector-ref>activemq-artemis-master-0</connector-ref>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.1</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<!-- scale-down>true</scale-down -->
<static-connectors>
<connector-ref>activemq-artemis-master-0</connector-ref>
<connector-ref>activemq-artemis-slave-0</connector-ref>
<connector-ref>activemq-artemis-master-1</connector-ref>
<connector-ref>activemq-artemis-slave-1</connector-ref>
<connector-ref>activemq-artemis-master-2</connector-ref>
<connector-ref>activemq-artemis-slave-2</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<group-name>activemq-artemis-0</group-name>
<quorum-vote-wait>12</quorum-vote-wait>
<vote-on-replication-failure>true</vote-on-replication-failure>
<!--we need this for auto failback-->
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
</core>
<core xmlns="urn:activemq:core">
<jmx-management-enabled>true</jmx-management-enabled>
</core>
</configuration>
My consumer is defined as a JmsListener in a Spring Boot app. During consumption of the messages in a queue, the Spring Boot app crashed, which resulted in Kubernetes deleting the pod and creating a new one. However, I noticed that the new pod did not connect to the same Artemis node, so the leftover messages from the previous connection were never consumed.
I thought the whole point of using the cluster is to have all the Artemis nodes act as one unit to deliver messages to the consumer regardless of which node it connects to. Am I wrong? If the cluster cannot reroute the consumer connection to the correct node (which holds the leftover messages from the previous consumer), then what is the recommended way to deal with this situation?
First, it's important to note that there's no feature to make a client reconnect to the broker from which it disconnected after the client crashes/restarts. Generally speaking the client shouldn't really care about what broker it connects to; that's one of the main goals of horizontal scalability.
It's also worth noting that if the number of messages on the brokers and the number of connected clients is low enough that this condition arises frequently, that almost certainly means you have too many brokers in your cluster.
That said, I believe the reason your client isn't getting the messages it expects is that you're using the default redistribution-delay (i.e. -1), which means messages will not be redistributed to other nodes in the cluster. If you want to enable redistribution (which it seems like you do) then you should set it to >= 0, e.g.:
<address-setting match="#">
...
<redistribution-delay>0</redistribution-delay>
...
</address-setting>
You can read more about redistribution in the documentation.
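Applied to the broker.xml above, that means adding the element to the existing catch-all address-setting, e.g. (a sketch based on the settings already shown):
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<redistribution-delay>0</redistribution-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
</address-setting>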
Aside from that you may want to reconsider your topology in general. Typically if you're in a cloud-like environment (e.g. one using Kubernetes) where the infrastructure itself will restart failed pods then you wouldn't use a master/slave configuration. You'd simply mount the journal on a separate pod (e.g. using NFSv4) such that when a node fails it would be restarted and then reconnect back to its persistent storage. This effectively provides broker high availability (which is what master/slave is designed for outside of cloud environments).
Also, a single instance of ActiveMQ Artemis can handle millions of messages per second depending on the use-case so you may not actually need 3 live nodes for your expected load.
Note, these are general recommendations about your overall architecture and are not directly related to your question.

Configure JMS Divert JBoss7 (HornetQ)

I would like to configure a divert for my HornetQ topics/queues. I am using JBoss 7.
I configure HornetQ in the messaging subsystem in standalone.xml:
<subsystem xmlns="urn:jboss:domain:messaging:1.3">
<hornetq-server>
I configure queues and topics here too
<jms-queue name="topic1">
<entry name="queue/queue1"/>
<entry name="java:jboss/exported/jms/queue/queue1"/>
</jms-queue>
<jms-topic name="topic1">
<entry name="topic/topic1"/>
<entry name="java:jboss/exported/jms/topic/topic1"/>
</jms-topic>
I would like to configure a divert, to divert the topic onto the queue like so:
<!-- Attempting divert-->
<divert name="my-divert">
<address>jms.topic.topic1</address>
<forwarding-address>jms.queue.topic1</forwarding-address>
<exclusive>true</exclusive>
</divert>
<!-- end divert-->
</hornetq-server>
If I place it in standalone.xml in the messaging subsystem, JBoss will not parse it on startup. Where should I place this config? Can it live in standalone.xml?
Thanks
I was placing the divert in the wrong part of the subsystem; this can be easily achieved by running the following command using jboss-cli:
/subsystem=messaging/hornetq-server=default/divert=my-divert:add(divert-address=jms.topic.topic1,forwarding-address=jms.queue.queue1,exclusive=true)

Connect to HornetQ from another system

I run HornetQ in standalone mode with its default configuration and I can connect to it from the local system. If I want to connect from another system, which configurations must be changed to make this possible?
You first need to define what you mean by another system: did you mean another HornetQ instance, or another JMS server?
What's the connection medium? Do you want a bridge between HornetQ and other JMS systems? Look at the JMS bridge in the HornetQ documentation.
Do you want a client to connect to different message servers? Look at the STOMP protocol and the several clients available from the Apache/ActiveMQ folks. HornetQ supports STOMP natively on the server side.
You need to configure the transports, specifically the Netty transports.
Take a look at http://docs.jboss.org/hornetq/2.2.14.Final/user-manual/en/html_single/index.html#configuring-transports
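For example, a Netty connector/acceptor pair that remote clients can actually reach might look roughly like this (the factory classes and port mirror the configuration shown below; the connector's host value is a placeholder for the server's remotely reachable address):
<connectors>
<connector name="netty-connector">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="host" value="192.168.1.10"/>
<param key="port" value="5446"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="host" value="0.0.0.0"/>
<param key="port" value="5446"/>
</acceptor>
</acceptors>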
These are my configurations
hornetq-configuration.xml:
<configuration xmlns="urn:hornetq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<connectors>
<connector name="netty-connector">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="port" value="5446"/>
</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="port" value="5446"/>
<param key="host" value="0.0.0.0"/>
</acceptor>
</acceptors>
</configuration>
hornetq-beans.xml:
<bean name="Naming" class="org.jnp.server.NamingBeanImpl"/>
<bean name="JNDIServer" class="org.jnp.server.Main">
<property name="namingInfo">
<inject bean="Naming"/>
</property>
<property name="port">1099</property>
<property name="bindAddress">0.0.0.0</property>
<property name="rmiPort">1098</property>
<property name="rmiBindAddress">0.0.0.0</property>
</bean>