WildFly 14 + Artemis pooled-connection-factory JmsXA issue

Artemis is configured with the following HA policy:
Machine1: artemis-server1-master
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
Machine2: artemis-server2-master
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
server1-slave & server2-slave
<ha-policy>
<shared-store>
<slave>
<failover-on-shutdown>true</failover-on-shutdown>
<restart-backup>true</restart-backup>
<allow-failback>true</allow-failback>
</slave>
</shared-store>
</ha-policy>
In WildFly, the messaging-activemq subsystem is configured with the following Artemis settings:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
<server name="default">
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<remote-connector name="remote-artemis1" socket-binding="remote-artemis1"/>
<remote-connector name="remote-artemis2" socket-binding="remote-artemis2"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="remote-artemis1 remote-artemis2" client-failure-check-period="1000" reconnect-attempts="-1" retry-interval="1000" ha="true"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="remote-artemis1 remote-artemis2" client-id="Wildfly-14" transaction="xa" user="admin" password="admin" client-failure-check-period="1000" reconnect-attempts="-1" retry-interval="1000" ha="true"/>
</server>
</subsystem>
In the EAR application there is an MDB that receives messages, and java:/JmsXA is used to look up the connection factory for sending.
With the above configuration, WildFly starts and the application successfully creates connections via the JmsXA connection factory and sends messages to the queue. However, if one of the Artemis masters is shut down while files are being processed, the MDB still receives messages, but sending via the java:/JmsXA connection factory no longer works, even though the other master is still up.
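For reference, the sending side looks roughly like this (a minimal sketch, not the actual application code; the bean and queue names are hypothetical):
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class FileMessageSender {

    // Pooled XA connection factory defined by the pooled-connection-factory above
    @Resource(lookup = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    // Hypothetical queue binding; the real application uses its own destination
    @Resource(lookup = "java:/jms/queue/FilesQueue")
    private Queue queue;

    public void send(String payload) {
        // Because the pooled-connection-factory is configured with transaction="xa",
        // this JMSContext enlists in the caller's JTA transaction.
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(queue, payload);
        }
    }
}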

Related

Example multiple servers with scale down

We are trying to use ActiveMQ Artemis within our Docker containers, but there is one scenario I am not able to get working. This is probably due to some misconfiguration. Any help is appreciated (e.g. an example configuration).
Installation:
Docker instance containing an embedded ActiveMQ Artemis broker and a web application
The broker has clustering, HA, and a shared store defined
We start 3 Docker instances
Scenario:
Add 200 messages to the queue in one of the web applications (roughly as in the producer sketch after the questions below)
I can see in the logging that all Docker instances are handling the messages (this is as expected)
Kill one of the Docker instances
Outcome of scenario:
Not all messages are processed (every message on the queue should result in an item in the database)
Restarting the killed Docker instance does not result in every message being processed.
Expected outcome:
When a node is down, another node picks up its messages
When a node comes back online, it helps pick up the messages
Questions:
HA scale-down probably does not work because I kill the server rather than stopping it cleanly.
Does this only work with persistence on the file system or should this also work in an RDBMS?
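For context, step 1 of the scenario above is done with plain JMS, roughly as follows (a minimal sketch; the broker URL, credentials, and queue name are assumptions, not the real application code):
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class LoadTestProducer {
    public static void main(String[] args) {
        // Hypothetical broker URL and credentials (security-enabled is true in the config below)
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://project-two:61617");
        try (JMSContext context = cf.createContext("admin", "admin")) {
            Queue queue = context.createQueue("example.queue"); // hypothetical queue name
            for (int i = 0; i < 200; i++) {
                context.createProducer().send(queue, "message-" + i);
            }
        }
    }
}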
Configuration:
Below is the configuration used in every Docker instance; only the host name (project-two) and the HA settings (master/slave) differ per instance. There could be a typo below because I removed the customer-specific names from the configuration.
<configuration
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:activemq"
xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd">
<core xmlns="urn:activemq:core">
<security-enabled>true</security-enabled>
<jmx-management-enabled>true</jmx-management-enabled>
<management-address>activemq.management</management-address>
<persistence-enabled>true</persistence-enabled>
<store>
<database-store>
<jdbc-driver-class-name>${artemis.databaseDriverClass}</jdbc-driver-class-name>
<jdbc-connection-url>${artemis.databaseConnectionUrl}</jdbc-connection-url>
<jdbc-user>${artemis.databaseUsername}</jdbc-user>
<jdbc-password>${artemis.databasePassword}</jdbc-password>
<bindings-table-name>ARTEMIS_BINDINGS</bindings-table-name>
<message-table-name>ARTEMIS_MESSAGE</message-table-name>
<page-store-table-name>ARTEMIS_PS</page-store-table-name>
<large-message-table-name>ARTEMIS_LARGE_MESSAGES</large-message-table-name>
<node-manager-store-table-name>ARTEMIS_NODE_MANAGER</node-manager-store-table-name>
</database-store>
</store>
<connectors>
<connector name="netty-connector">tcp://project-two:61617</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://project-two:61617</acceptor>
</acceptors>
<!-- cluster information -->
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>2000</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
<security-settings>
</security-settings>
<!-- Settings for the redelivery -->
<address-settings>
<address-setting match="#">
<redelivery-delay>5000</redelivery-delay>
<max-delivery-attempts>2</max-delivery-attempts>
</address-setting>
</address-settings>
<addresses>
</addresses>
<ha-policy>
<shared-store>
<slave/>
</shared-store>
</ha-policy>
</core>
</configuration>
Docker makes deploying microservice applications very easy, but it has some limitations in a production environment. I would take a look at the ArtemisCloud.io operator, which provides a way to deploy the Apache ActiveMQ Artemis broker on Kubernetes.
The ArtemisCloud.io operator also supports message migration when brokers are scaled down; see https://artemiscloud.io/docs/tutorials/scaleup_and_scaledown/

ActiveMQ Artemis master slave error when backup becomes live

I have a master/slave setup with 1 master and 2 slaves. When I kill the master, one of the slaves tries to become master but fails with the following exception:
2022/03/08 16:13:28.746 | mb | ERROR | 1-156 | o.a.a.a.c.server | | AMQ224000: Failure in initialisation: java.lang.IndexOutOfBoundsException: length(32634) exceeds src.readableBytes(32500) where src is: UnpooledHeapByteBuf(ridx: 78, widx: 32578, cap: 32578/32578)
at io.netty.buffer.AbstractByteBuf.checkReadableBounds(AbstractByteBuf.java:643)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1095)
at org.apache.activemq.artemis.core.message.impl.CoreMessage.reloadPersistence(CoreMessage.java:1207)
at org.apache.activemq.artemis.core.message.impl.CoreMessagePersister.decode(CoreMessagePersister.java:85)
at org.apache.activemq.artemis.core.message.impl.CoreMessagePersister.decode(CoreMessagePersister.java:28)
at org.apache.activemq.artemis.spi.core.protocol.MessagePersister.decode(MessagePersister.java:120)
at org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.decodeMessage(AbstractJournalStorageManager.java:1336)
at org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.lambda$loadMessageJournal$1(AbstractJournalStorageManager.java:1035)
at org.apache.activemq.artemis.utils.collections.SparseArrayLinkedList$SparseArray.clear(SparseArrayLinkedList.java:114)
at org.apache.activemq.artemis.utils.collections.SparseArrayLinkedList.clearSparseArrayList(SparseArrayLinkedList.java:173)
at org.apache.activemq.artemis.utils.collections.SparseArrayLinkedList.clear(SparseArrayLinkedList.java:227)
at org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.loadMessageJournal(AbstractJournalStorageManager.java:990)
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.loadJournals(ActiveMQServerImpl.java:3484)
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.initialisePart2(ActiveMQServerImpl.java:3149)
at org.apache.activemq.artemis.core.server.impl.SharedNothingBackupActivation.run(SharedNothingBackupActivation.java:325)
at org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$ActivationThread.run(ActiveMQServerImpl.java:4170)
I'm also observing a lot of messages like this one:
2022/03/08 16:13:28.745 | AMQ224009: Cannot find message 36,887,402,768
2022/03/08 16:13:28.745 | AMQ224009: Cannot find message 36,887,402,768
Master setup:
<ha-policy>
<replication>
<master>
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
<connectors>
<connector name="connector-server-0">tcp://172.16.134.51:62616</connector>
<connector name="connector-server-1">tcp://172.16.134.52:62616</connector>
<connector name="connector-server-2">tcp://172.16.134.28:62616</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://172.16.134.51:62616</acceptor>
<acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>connector-server-0</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>connector-server-1</connector-ref>
<connector-ref>connector-server-2</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
Slave 1 setup:
<ha-policy>
<replication>
<slave>
<allow-failback>true</allow-failback>
</slave>
</replication>
</ha-policy>
<connectors>
<connector name="connector-server-0">tcp://172.16.134.51:62616</connector>
<connector name="connector-server-1">tcp://172.16.134.52:62616</connector>
<connector name="connector-server-2">tcp://172.16.134.28:62616</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://172.16.134.52:62616</acceptor>
<acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
<cluster-connection name="cluster">
<connector-ref>connector-server-1</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>connector-server-0</connector-ref>
<connector-ref>connector-server-2</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
Slave 2 setup:
<ha-policy>
<replication>
<slave>
<allow-failback>true</allow-failback>
</slave>
</replication>
</ha-policy>
<connectors>
<connector name="connector-server-0">tcp://172.16.134.51:62616</connector>
<connector name="connector-server-1">tcp://172.16.134.52:62616</connector>
<connector name="connector-server-2">tcp://172.16.134.28:62616</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://172.16.134.28:62616</acceptor>
<acceptor name="invm">vm://0</acceptor>
</acceptors>
<cluster-connections>
<cluster-connection name="cluster">
<connector-ref>connector-server-2</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>connector-server-0</connector-ref>
<connector-ref>connector-server-1</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
Could you please tell me what is not correct in my setup?
I'm using ActiveMQ Artemis version 2.17.0.
I recommend you upgrade to the latest release and retry.
Also, I recommend simplifying your configuration to use just a single live/backup pair. The broker will only ever replicate data to one other broker. The second backup will be completely idle until either the master or current backup fails.
Lastly, using a single live/backup pair with the replication ha-policy is very dangerous due to the possibility of split-brain. I strongly recommend that you use shared storage or, once you move to the latest release, configure pluggable quorum voting with ZooKeeper to mitigate the risk of split-brain.

Artemis cluster throws ClassNotFoundException: org.hornetq.core.remoting.impl.netty.NettyConnectorFactory

I prepared a Java application with an embedded Artemis broker. It is a single jar that contains all dependencies.
I use org.apache.activemq:artemis-server:2.10.1 and I start the broker in the following way:
new EmbeddedActiveMQ().setConfigResourcePath(config).start();
where config is a URL to the configuration file.
I use one config file for master and one config file for slave.
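Roughly, the full startup code looks like this (a minimal sketch; the command-line handling and the shutdown hook are assumptions, not the exact application code):
import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

public class BrokerMain {
    public static void main(String[] args) throws Exception {
        // args[0] points at either the master or the slave configuration file
        EmbeddedActiveMQ embedded = new EmbeddedActiveMQ()
                .setConfigResourcePath(args[0]);
        embedded.start();

        // Keep the broker running until the JVM is asked to shut down
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                embedded.stop();
            } catch (Exception ignored) {
                // best-effort shutdown
            }
        }));
    }
}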
master:
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd">
<core xmlns="urn:activemq:core">
<persistence-enabled>true</persistence-enabled>
<security-enabled>false</security-enabled>
<acceptors>
<acceptor name="netty-acceptor">tcp://127.0.0.1:61617</acceptor>
</acceptors>
<bindings-directory>artemis/bindings</bindings-directory>
<journal-directory>artemis/journal</journal-directory>
<address-settings>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ"/>
</anycast>
</address>
</addresses>
<connectors>
<!-- Connector used to be announced through cluster connections and notifications -->
<connector name="artemis">tcp://127.0.0.1:61617</connector>
</connectors>
<cluster-user>artemis-cluster</cluster-user>
<cluster-password>artemis-cluster</cluster-password>
<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>artemis</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address></address>
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master/>
</replication>
</ha-policy>
</core>
</configuration>
Slave:
<core xmlns="urn:activemq:core">
<persistence-enabled>true</persistence-enabled>
<security-enabled>false</security-enabled>
<acceptors>
<acceptor name="netty-acceptor">tcp://127.0.0.1:61618</acceptor>
</acceptors>
<bindings-directory>artemis/bindings</bindings-directory>
<journal-directory>artemis/journal</journal-directory>
<address-settings>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ"/>
</anycast>
</address>
</addresses>
<connectors>
<!-- Connector used to be announced through cluster connections and notifications -->
<connector name="artemis">tcp://127.0.0.1:61618</connector>
</connectors>
<cluster-user>artemis-cluster</cluster-user>
<cluster-password>artemis-cluster</cluster-password>
<broadcast-groups>
<broadcast-group name="bg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>5000</broadcast-period>
<connector-ref>artemis</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address></address>
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<slave/>
</replication>
</ha-policy>
</core>
It works properly on Windows 10 and CentOS 7 but does not work on Oracle Linux 7. All machines use the same Java version:
java -version
openjdk version "11.0.9" 2020-10-20
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.9+11)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.9+11, mixed mode)
There is no exception on Windows and CentOS, and I see the following logs:
11:14:30.867 [Thread-0] INFO org.apache.activemq.artemis.core.server - AMQ221025: Replication: sending NIOSequentialFile...
and
11:14:33.911 [Thread-1 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6#2a3c96e3)] INFO org.apache.activemq.artemis.core.server - AMQ221031: backup announced
Exactly the same jar and exactly the same configuration file throw an exception on Oracle Linux:
11:31:56.074 [Thread-0 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6#63b1d4fa)] WARN org.apache.activemq.artemis.core.client - AMQ212025: did not connect the cluster connection to other nodes
java.lang.IllegalStateException: java.lang.ClassNotFoundException: org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
at org.apache.activemq.artemis.utils.ClassloadingUtil.newInstanceFromClassLoader(ClassloadingUtil.java:59) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$2.run(ClientSessionFactoryImpl.java:996) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$2.run(ClientSessionFactoryImpl.java:993) ~[broker.jar:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.instantiateConnectorFactory(ClientSessionFactoryImpl.java:993) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.<init>(ClientSessionFactoryImpl.java:181) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:798) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:655) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:637) ~[broker.jar:?]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:595) ~[broker.jar:?]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) ~[broker.jar:?]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) ~[broker.jar:?]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) ~[broker.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [broker.jar:?]
Caused by: java.lang.ClassNotFoundException: org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
at jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581) ~[?:?]
at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) ~[?:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:522) ~[?:?]
at org.apache.activemq.artemis.utils.ClassloadingUtil.newInstanceFromClassLoader(ClassloadingUtil.java:55) ~[broker.jar:?]
... 15 more
Both master and slave throw the same exception. Why does Artemis want to use org.hornetq.core.remoting.impl.netty.NettyConnectorFactory on Oracle Linux? I debugged my application on Windows, and org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory is used there. Where can this be configured? Maybe there is some configuration file in the user directory or something like that? I could probably work around the issue by adding HornetQ to my dependencies, but I would prefer to use the NettyConnectorFactory from Artemis. I do not understand why one machine uses a different NettyConnectorFactory than the other two.
This is almost certainly being caused by a HornetQ broker on your network which is using the same group-address and group-port (i.e. 231.7.7.7:9876) to broadcast its connector information. You're using the default values, so this isn't terribly surprising. You should either find this HornetQ broker and stop it or change your Artemis cluster configuration to use a different group-address and/or group-port.

JBoss 7 ActiveMQ Integration Resource Adapter Configuration

I am trying to set up an embedded ActiveMQ with JBoss EAP 7.3.0 and activemq-rar-5.6.0.
The embedded broker should be accessible from inside JBoss and also from outside (via TCP from another application).
I am facing the following exception when I start JBoss with 'standalone.bat -c standalone-full.xml':
Caused by: org.jboss.msc.service.DuplicateServiceException: Service jboss.ra.activemq-ra is already registered
I would really appreciate any guidance on why I am getting this exception.
Below are my configurations.
I added a resource adapter inside the resource-adapters subsystem in the standalone-full.xml file:
<subsystem xmlns="urn:jboss:domain:resource-adapters:5.0">
<resource-adapters>
<resource-adapter id="activemq-ra.rar">
<archive>
activemq-ra.rar
</archive>
<transaction-support>XATransaction</transaction-support>
<!-- <config-property name="ServerUrl">tcp://localhost:61616</config-property> -->
<config-property name="ServerUrl">vm://localhost</config-property>
<connection-definitions>
<connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/activemq/ConnectionFactory" enabled="true" use-java-context="true" pool-name="ActiveMQConnectionFactoryPool" use-ccm="true">
<xa-pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>20</max-pool-size>
</xa-pool>
</connection-definition>
</connection-definitions>
<admin-objects>
<admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name="java:/queue/HELLOWORLDMDBQueue" use-java-context="true" pool-name="HELLOWORLDMDBQueue">
<config-property name="PhysicalName">HELLOWORLDMDBQueue</config-property>
</admin-object>
<admin-object class-name="org.apache.activemq.command.ActiveMQTopic" jndi-name="java:/topic/HELLOWORLDMDBTopic" use-java-context="true" pool-name="HELLOWORLDMDBTopic">
<config-property name="PhysicalName">HELLOWORLDMDBTopic</config-property>
</admin-object>
</admin-objects>
</resource-adapter>
</resource-adapters>
</subsystem>
I updated the resource-adapter-ref inside the mdb tag in the standalone-full.xml file:
<subsystem xmlns="urn:jboss:domain:ejb3:6.0">
<session-bean>
<stateless>
<bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
</stateless>
<stateful default-access-timeout="5000" cache-ref="simple" passivation-disabled-cache-ref="simple"/>
<singleton default-access-timeout="5000"/>
</session-bean>
<mdb>
<!--<resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/> -->
<resource-adapter-ref resource-adapter-name="activemq-ra.rar"/>
<bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
</mdb>
The messaging subsystem inside standalone-full.xml is
<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
<server name="default">
<statistics enabled="${wildfly.messaging-activemq.statistics-enabled:${wildfly.statistics-enabled:false}}"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="myTestQ" entries="java:jboss/exported/jms/queue/myTestQ"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
</subsystem>
I updated the ra.xml file and changed the ServerUrl to tcp://localhost:61616:
<resourceadapter-class>org.apache.activemq.ra.ActiveMQResourceAdapter</resourceadapter-class>
<config-property>
<description>
The URL to the ActiveMQ server that you want this connection to connect to. If using
an embedded broker, this value should be 'vm://localhost'.
</description>
<config-property-name>ServerUrl</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>tcp://localhost:61616</config-property-value>
<!--<config-property-value>vm://localhost</config-property-value> -->
</config-property>
I also set the config-property-value of the BrokerXmlConfig config-property in META-INF/ra.xml to xbean:broker-config.xml:
<config-property>
<description>
Sets the XML configuration file used to configure the embedded ActiveMQ broker via
Spring if using embedded mode.
BrokerXmlConfig is the filename which is assumed to be on the classpath unless
a URL is specified. So a value of foo/bar.xml would be assumed to be on the
classpath whereas file:dir/file.xml would use the file system.
Any valid URL string is supported.
</description>
<config-property-name>BrokerXmlConfig</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value></config-property-value>
<description>To use the broker-config.xml from the root for the RAR </description>
<config-property-value>xbean:broker-config.xml</config-property-value>
<!-- To use an external file or url location
<config-property-value>xbean:file:///amq/config/jee/broker-config.xml</config-property-value>
-->
</config-property>
I created a new activemq-ra.rar file and placed it in the jboss-eap-7.3\standalone\deployments folder.
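For reference, the MDB that should consume from the queue defined above looks roughly like this (a minimal sketch; the class name is hypothetical, and the optional @ResourceAdapter value has to match the deployed archive name):
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import org.jboss.ejb3.annotation.ResourceAdapter;

@MessageDriven(activationConfig = {
    // "destination" and "destinationType" are ActiveMQ 5.x activation spec properties
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "HELLOWORLDMDBQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
@ResourceAdapter("activemq-ra.rar") // optional here, since the ejb3 subsystem default already points at it
public class HelloWorldQueueMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}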
Try using a different name for your resource adapter archive. For example, use activemq5-ra.rar instead of activemq-ra.rar. I believe you're conflicting with the embedded JCA RA used for ActiveMQ Artemis integration.
Alternatively you could remove the messaging subsystem from your server's XML configuration.

Artemis Replication In Kubernetes Issues

I am deploying a simple master/slave replication configuration within a Kubernetes environment. The brokers use static connectors. When I delete the master pod the slave successfully takes over duties, but when the master pod comes back up the slave pod does not terminate as live, so I end up with two live servers. When this happens I notice they also form an internal bridge. I've run the exact same configurations locally, outside of Kubernetes, and the slave successfully terminates and goes back to being a slave when the master comes back up. Any ideas as to why this is happening? I am using Artemis version 2.6.4.
master broker.xml:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<jms xmlns="urn:activemq:jms">
<queue name="jms.queue.acmadapter_to_acm_design">
<durable>true</durable>
</queue>
</jms>
<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>
</acceptors>
<connectors>
<connector name="netty-connector-master">tcp://artemis-service-0.artemis-service.falconx.svc.cluster.local:61618</connector>
<connector name="netty-connector-backup">tcp://artemis-service2-0.artemis-service.falconx.svc.cluster.local:61618</connector>
</connectors>
<ha-policy>
<replication>
<master>
<!--we need this for auto failback-->
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector-master</connector-ref>
<static-connectors>
<connector-ref>netty-connector-master</connector-ref>
<connector-ref>netty-connector-backup</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
</core>
</configuration>
slave broker.xml:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>
</acceptors>
<connectors>
<connector name="netty-connector-backup">tcp://artemis-service2-0.artemis-service.test.svc.cluster.local:61618</connector>
<connector name="netty-connector-master">tcp://artemis-service-0.artemis-service.test.svc.cluster.local:61618</connector>
</connectors>
<ha-policy>
<replication>
<slave>
<allow-failback>true</allow-failback>
<!-- not needed but tells the backup not to restart after failback as there will be > 0 backups saved -->
<max-saved-replicated-journals-size>0</max-saved-replicated-journals-size>
</slave>
</replication>
</ha-policy>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector-backup</connector-ref>
<static-connectors>
<connector-ref>netty-connector-master</connector-ref>
<connector-ref>netty-connector-backup</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
</core>
</configuration>
It's likely that the master's journal is being lost when it is stopped. The journal (specifically the server.lock file) holds the node's unique identifier (which is shared by the replicated slave). If the journal is lost when the node is dropped, then it has no way to pair with its slave when it comes back up, which would explain the behavior you're observing. Make sure the journal is on a persistent volume claim.
Also, it's worth noting that a single master/slave pair is not recommended due to the risk of split-brain. In general, it's recommended that you have 3 master/slave pairs to establish a proper quorum.