I would like to expose my WildFly server to STOMP clients, but I have not found any recent samples. As I understand it, all communication in recent WildFly versions goes through a single socket (listening on 8080 by default). Do I need to change any configuration, or is it supported out of the box? Any pointers are appreciated.
I am on a different version of WildFly (10.0.CR1). For that version, the smallest possible change looks like this:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
...
<acceptor name="stomp-acceptor" factory-class="org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory">
<param name="protocols" value="STOMP"/>
<param name="port" value="61613"/>
</acceptor>
...
</server>
</subsystem>
It's true that WildFly only listens on port 8080 by default (plus port 9990 for management), using HTTP protocol upgrade to switch to other protocols.
However, you can still define additional acceptors on other ports. I don't know whether it is possible to use STOMP with protocol upgrade over port 8080, but here's how to configure an additional Netty acceptor on port 5445:
<extension module="org.jboss.as.messaging"/>
<subsystem xmlns="urn:jboss:domain:messaging:2.0">
<hornetq-server>
<journal-file-size>102400</journal-file-size>
<connectors>
<http-connector name="http-connector" socket-binding="http">
<param key="http-upgrade-endpoint" value="http-acceptor"/>
</http-connector>
<http-connector name="http-connector-throughput" socket-binding="http">
<param key="http-upgrade-endpoint" value="http-acceptor-throughput"/>
<param key="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<acceptors>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</http-acceptor>
<netty-acceptor name="stomp-acceptor" socket-binding="messaging-stomp">
<param key="protocols" value="STOMP"/>
<param key="connection-ttl" value="30000"/>
</netty-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="send" roles="guest"/>
<permission type="consume" roles="guest"/>
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
</security-setting>
</security-settings>
<address-settings>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<max-size-bytes>10485760</max-size-bytes>
<page-size-bytes>2097152</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
</address-setting>
</address-settings>
<jms-connection-factories>
<connection-factory name="InVmConnectionFactory">
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/ConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="RemoteConnectionFactory">
<connectors>
<connector-ref connector-name="http-connector"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
</entries>
</connection-factory>
<pooled-connection-factory name="hornetq-ra">
<transaction mode="xa"/>
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/JmsXA"/>
<entry name="java:jboss/DefaultJMSConnectionFactory"/>
</entries>
</pooled-connection-factory>
</jms-connection-factories>
<jms-destinations>
<jms-queue name="ExpiryQueue">
<entry name="java:/jms/queue/ExpiryQueue"/>
</jms-queue>
<jms-queue name="DLQ">
<entry name="java:/jms/queue/DLQ"/>
</jms-queue>
</jms-destinations>
</hornetq-server>
</subsystem>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="messaging-stomp" port="5445"/>
</socket-binding-group>
In addition, you'll have to create a user account with the guest role via add-user.sh. This account will be used by the STOMP client.
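For example (a sketch; the user name and password are placeholders, and the -a switch creates the account in the application realm, which is the one messaging uses):
$ ./bin/add-user.sh -a -u stompuser -p 'stomp-Passw0rd1' -g guest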
Tested on WildFly 8.2.0.Final.
The following configuration did the trick in my case (WildFly 10.0.0.Final):
<remote-acceptor name="stomp-acceptor" socket-binding="messaging-stomp">
<param name="protocols" value="STOMP"/>
<param name="connection-ttl" value="30000"/>
<param name="stomp-enable-message-id" value="true"/>
</remote-acceptor>
...
<socket-binding name="messaging-stomp" port="61613"/>
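To verify the acceptor quickly, you can hand-type a raw STOMP CONNECT frame, e.g. with nc (a sketch; the credentials are whatever application-realm account you created via add-user.sh, and the frame must end with a NUL byte, entered as Ctrl+@ in most terminals and shown here as ^@):
$ nc localhost 61613
CONNECT
accept-version:1.2
host:localhost
login:stompuser
passcode:stomp-Passw0rd1

^@
A CONNECTED frame in the response confirms the STOMP acceptor is working.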
Related
I am trying to set up an ActiveMQ Artemis multi-broker deployment in a Kubernetes environment. I am able to run single-pod deployments with persistence enabled successfully, using the Artemis Docker image built from the official repo.
But if I try a multi-pod deployment with the same persistent volume attached (a shared PV), the pods get deployed but only one succeeds; the others crash because the first Artemis container holds a file lock on the directory. So I am unable to bring up multiple pods with shared storage.
I also tried JGroups and broadcast groups to create a cluster in which each broker has its own storage and the brokers communicate with each other internally, but I was not able to configure it successfully.
Has anyone been able to successfully deploy multi-broker Artemis in Kubernetes? There is no issue if each pod has its own storage, but the Artemis brokers should be highly available and should communicate as a cluster so that we do not lose messages.
It would be really helpful if anyone could share resources or steps on how to achieve this.
Edit
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>${name}</name>
${jdbc}
<persistence-enabled>${persistence-enabled}</persistence-enabled>
<connectors>
<connector name="netty-connector">tcp://${ipv4addr:localhost}:61618</connector>
</connectors>
<broadcast-groups>
<broadcast-group name="cluster-broadcast-group">
<broadcast-period>5000</broadcast-period>
<jgroups-file>jgroups.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="cluster-discovery-group">
<jgroups-file>jgroups.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="artemis-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<!-- <address>jms</address> -->
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="cluster-discovery-group"/>
<!-- <forward-when-no-consumers>true</forward-when-no-consumers> -->
</cluster-connection>
</cluster-connections>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>${journal.settings}</journal-type>
<paging-directory>${data.dir}/paging</paging-directory>
<bindings-directory>${data.dir}/bindings</bindings-directory>
<journal-directory>${data.dir}/journal</journal-directory>
<large-messages-directory>${data.dir}/large-messages</large-messages-directory>
${journal-retention}
<journal-datasync>${fsync}</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>${device-block-size}</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
${journal-buffer.settings}${ping-config.settings}${connector-config.settings}
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
${page-sync.settings}
${global-max-section}
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61618</acceptor>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://${host}:${default.port}?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=${support-advisory};suppressInternalManagementObjects=${suppress-internal-management-objects}</acceptor>
${amqp-acceptor}${stomp-acceptor}${hornetq-acceptor}${mqtt-acceptor}
</acceptors>
${cluster-security.settings}${cluster.settings}${replicated.settings}${shared-store.settings}
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="${role}"/>
<permission type="deleteNonDurableQueue" roles="${role}"/>
<permission type="createDurableQueue" roles="${role}"/>
<permission type="deleteDurableQueue" roles="${role}"/>
<permission type="createAddress" roles="${role}"/>
<permission type="deleteAddress" roles="${role}"/>
<permission type="consume" roles="${role}"/>
<permission type="browse" roles="${role}"/>
<permission type="send" roles="${role}"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="${role}"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>${full-policy}</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>${full-policy}</address-full-policy>
<auto-create-queues>${auto-create}</auto-create-queues>
<auto-create-addresses>${auto-create}</auto-create-addresses>
<auto-create-jms-queues>${auto-create}</auto-create-jms-queues>
<auto-create-jms-topics>${auto-create}</auto-create-jms-topics>
<auto-delete-queues>${auto-delete}</auto-delete-queues>
<auto-delete-addresses>${auto-delete}</auto-delete-addresses>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>${address-queue.settings}
</addresses>
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
</core>
</configuration>
This is my broker.xml configuration.
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
<TCP
enable_diagnostics="true"
bind_addr="match-interface:eth0,lo"
bind_port="7800"
recv_buf_size="20000000"
send_buf_size="640000"
max_bundle_size="64000"
max_bundle_timeout="30"
sock_conn_timeout="300"
thread_pool.enabled="true"
thread_pool.min_threads="1"
thread_pool.max_threads="10"
thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="false"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="run"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="1"
oob_thread_pool.max_threads="8"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="true"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="run"
/>
<!-- <TRACE/> -->
<org.jgroups.protocols.kubernetes.KUBE_PING
namespace="${KUBERNETES_NAMESPACE:default}"
labels="${KUBERNETES_LABELS:app=custom-artemis-service}"
/>
<MERGE3 min_interval="10000" max_interval="30000"/>
<FD_SOCK/>
<FD timeout="10000" max_tries="5" />
<VERIFY_SUSPECT timeout="1500" />
<BARRIER />
<pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true"/>
<UNICAST3
xmit_table_num_rows="100"
xmit_table_msgs_per_row="1000"
xmit_table_max_compaction_time="30000"
/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true"/>
<FC max_credits="2000000" min_threshold="0.10"/>
<FRAG2 frag_size="60000" />
<pbcast.STATE_TRANSFER/>
<pbcast.FLUSH timeout="0"/>
</config>
This is the jgroups.xml I used.
I used this config for a multi-pod setup in Kubernetes, and I added the relevant KUBE_PING jars to the lib folder. Although two pods came up, the Artemis UI behaved inconsistently when I tried to access it: after logging in, the user lands on a page asking to add connections, and sometimes even after a successful login the user is redirected back to the login page. The UI you usually get with a single broker never appears, and I do not see any error logs either. Can anyone recommend the broker.xml changes needed for a Kubernetes deployment?
ArtemisCloud.io provides an operator-based solution for deploying a multi-broker ActiveMQ Artemis setup in Kubernetes; see https://artemiscloud.io/blog/using_operator/ and
https://artemiscloud.io/documentation/operator/deploying-brokers-operator.html
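As a rough idea of the operator workflow, deploying a two-broker cluster comes down to applying a custom resource along these lines (a sketch assuming the operator's broker.amq.io/v1beta1 CRD; the name is a placeholder):
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
spec:
  deploymentPlan:
    size: 2                   # two broker pods
    persistenceEnabled: true  # each broker gets its own storage
The operator provisions a separate persistent volume claim per broker, which avoids the shared-storage file-lock problem described in the question.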
We have a WildFly 10 instance configured with ActiveMQ Artemis.
After the server has been running for weeks, we have found this error when putting a message on a queue:
WARN [com.arjuna.ats.jta] (default task-39) ARJUNA016061: TransactionImple.enlistResource - XAResource.start returned: XAException.XAER_RMFAIL for < formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffff0a4809a7:12ad14f2:613b6c2b:8c8976c, node_name=1, branch_uid=0:ffff0a4809a7:12ad14f2:613b6c2b:8c89779, subordinatenodename=null, eis_name=java:/JmsXA NodeId:560e8de4-ccea-11eb-a9ec-014175cacf74 >: javax.transaction.xa.XAException
at org.apache.activemq.artemis.ra.ActiveMQRAXAResource.start(ActiveMQRAXAResource.java:85)
at org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.start(ActiveMQXAResourceWrapperImpl.java:121)
at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:662)
at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.enlistResource(TransactionImple.java:423)
...
Caused by: ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ119014: Timed out after waiting 30,000 ms for response when sending packet 44]
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:398)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:304)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext.simpleRollback(ActiveMQSessionContext.java:299)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.rollback(ClientSessionImpl.java:542)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.rollback(ClientSessionImpl.java:513)
at org.apache.activemq.artemis.core.client.impl.ClientSessionImpl.resetIfNeeded(ClientSessionImpl.java:594)
at org.apache.activemq.artemis.ra.ActiveMQRAXAResource.start(ActiveMQRAXAResource.java:80)
... 131 more
This error appears on each operation on the queues until the server is restarted.
The exception was logged by a local client (a WAR deployed in the application server itself). The local client sends messages, and there are remote consumers consuming them. When the error occurred the application server load was not higher than usual. I did not collect a thread dump; I can try to collect one if the error occurs again.
The subsystem configuration is:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<management jmx-enabled="true" />
<security enabled="false" />
<bindings-directory
path="/data/activemq/bindings" />
<journal-directory path="/data/activemq/journal" />
<large-messages-directory
path="/data/activemq/largemessages" />
<paging-directory path="/data/activemq/pages" />
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true"
create-non-durable-queue="true" consume="true" send="true" />
</security-setting>
<address-setting name="#"
message-counter-history-day-limit="10" page-size-bytes="2097152"
max-size-bytes="104857600" max-delivery-attempts="-1"
redelivery-delay="300000" expiry-address="jms.queue.ExpiryQueue"
dead-letter-address="jms.queue.DLQ" />
<address-setting
name="jms.queue.queue1" max-delivery-attempts="-1"
expiry-address="jms.queue.ExpiryQueue"
dead-letter-address="jms.queue.DLQ" redelivery-delay="120000" />
<http-connector name="http-connector"
endpoint="http-acceptor" socket-binding="messaging" />
<http-connector name="http-connector-throughput"
endpoint="http-acceptor-throughput" socket-binding="messaging">
<param name="batch-delay" value="50" />
</http-connector>
<in-vm-connector name="in-vm" server-id="0" />
<http-acceptor name="http-acceptor"
http-listener="default" />
<http-acceptor name="http-acceptor-throughput"
http-listener="default">
<param name="batch-delay" value="50" />
<param name="direct-deliver" value="false" />
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0" />
<jms-queue name="ExpiryQueue"
entries="java:/jms/queue/ExpiryQueue" />
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ" />
<jms-queue name="queue1"
entries="queue1 queue/queue1 jms/queue/queue1 java:jboss/exported/queue1" />
<!--
...
-->
<jms-queue name="queueN"
entries="queueN queue/queueN jms/queue/queueN java:jboss/exported/queueN" />
<connection-factory name="InVmConnectionFactory"
entries="java:/ConnectionFactory" connectors="in-vm" />
<connection-factory
name="RemoteConnectionFactory"
failover-on-initial-connection="true" reconnect-attempts="-1"
block-on-acknowledge="true" consumer-window-size="0"
client-failure-check-period="10000" ha="true"
entries="java:jboss/exported/jms/RemoteConnectionFactory"
connectors="http-connector" />
<pooled-connection-factory
name="activemq-ra" transaction="xa"
entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"
connectors="in-vm" />
</server>
</subsystem>
Is there any configuration error? Can anyone suggest a solution for this error?
WildFly 10.1.0 was released in August 2016, and it uses ActiveMQ Artemis 1.1.0, which was released in October 2015, nearly 6 years ago now. WildFly is up to version 25 and ActiveMQ Artemis is at 2.18.0. I strongly encourage you to upgrade to a later release of WildFly, or even use the latest release of ActiveMQ Artemis standalone (i.e. not embedded in WildFly).
Even if a bug is identified in 1.1.0, there will be no bug-fix release for that version. To get any kind of fix you will be forced to upgrade to the latest version, unless you back-port the fix and build it yourself. It's worth noting that many hundreds of bugs have been fixed in ActiveMQ Artemis since 1.1.0 was released; your issue could very well be among them.
I am trying to set up an embedded ActiveMQ broker with JBoss 7.3.0 and activemq-rar-5.6.0.
The embedded broker should be accessible from inside JBoss and also from outside (via TCP from another application).
I am facing the following exception when I start JBoss with 'standalone.bat -c standalone-full.xml':
**Caused by: org.jboss.msc.service.DuplicateServiceException: Service jboss.ra.activemq-ra is already registered**
I would really appreciate any guidance on why I am getting this exception. I have attached an image from the log below.
Below are my configurations.
I added a resource adapter inside the resource-adapters subsystem in the standalone-full.xml file:
<subsystem xmlns="urn:jboss:domain:resource-adapters:5.0">
<resource-adapters>
<resource-adapter id="activemq-ra.rar">
<archive>
activemq-ra.rar
</archive>
<transaction-support>XATransaction</transaction-support>
<!-- <config-property name="ServerUrl">tcp://localhost:61616</config-property> -->
<config-property name="ServerUrl">vm://localhost</config-property>
<connection-definitions>
<connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/activemq/ConnectionFactory" enabled="true" use-java-context="true" pool-name="ActiveMQConnectionFactoryPool" use-ccm="true">
<xa-pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>20</max-pool-size>
</xa-pool>
</connection-definition>
</connection-definitions>
<admin-objects>
<admin-object class-name="org.apache.activemq.command.ActiveMQQueue" jndi-name="java:/queue/HELLOWORLDMDBQueue" use-java-context="true" pool-name="HELLOWORLDMDBQueue">
<config-property name="PhysicalName">HELLOWORLDMDBQueue</config-property>
</admin-object>
<admin-object class-name="org.apache.activemq.command.ActiveMQTopic" jndi-name="java:/topic/HELLOWORLDMDBTopic" use-java-context="true" pool-name="HELLOWORLDMDBTopic">
<config-property name="PhysicalName">HELLOWORLDMDBTopic</config-property>
</admin-object>
</admin-objects>
</resource-adapter>
</resource-adapters>
</subsystem>
I updated the resource-adapter-ref inside the mdb tag in the standalone-full.xml file:
<subsystem xmlns="urn:jboss:domain:ejb3:6.0">
<session-bean>
<stateless>
<bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
</stateless>
<stateful default-access-timeout="5000" cache-ref="simple" passivation-disabled-cache-ref="simple"/>
<singleton default-access-timeout="5000"/>
</session-bean>
<mdb>
<!--<resource-adapter-ref resource-adapter-name="${ejb.resource-adapter-name:activemq-ra.rar}"/> -->
<resource-adapter-ref resource-adapter-name="activemq-ra.rar"/>
<bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
</mdb>
The messaging subsystem inside standalone-full.xml is
<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
<server name="default">
<statistics enabled="${wildfly.messaging-activemq.statistics-enabled:${wildfly.statistics-enabled:false}}"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="myTestQ" entries="java:jboss/exported/jms/queue/myTestQ"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
</subsystem>
I updated the ra.xml file, changing the ServerUrl to tcp://localhost:61616:
<resourceadapter-class>org.apache.activemq.ra.ActiveMQResourceAdapter</resourceadapter-class>
<config-property>
<description>
The URL to the ActiveMQ server that you want this connection to connect to. If using
an embedded broker, this value should be 'vm://localhost'.
</description>
<config-property-name>ServerUrl</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value>tcp://localhost:61616</config-property-value>
<!--<config-property-value>vm://localhost</config-property-value> -->
</config-property>
And I set the config-property-value of the BrokerXmlConfig config-property in META-INF/ra.xml to xbean:broker-config.xml:
<config-property>
<description>
Sets the XML configuration file used to configure the embedded ActiveMQ broker via
Spring if using embedded mode.
BrokerXmlConfig is the filename which is assumed to be on the classpath unless
a URL is specified. So a value of foo/bar.xml would be assumed to be on the
classpath whereas file:dir/file.xml would use the file system.
Any valid URL string is supported.
</description>
<config-property-name>BrokerXmlConfig</config-property-name>
<config-property-type>java.lang.String</config-property-type>
<config-property-value></config-property-value>
<description>To use the broker-config.xml from the root for the RAR </description>
<config-property-value>xbean:broker-config.xml</config-property-value>
<!-- To use an external file or url location
<config-property-value>xbean:file:///amq/config/jee/broker-config.xml</config-property-value>
-->
</config-property>
I created a new activemq-ra.rar file and placed it in the jboss-eap-7.3\standalone\deployments folder.
Try using a different name for your resource adapter archive. For example, use activemq5-ra.rar instead of activemq-ra.rar. I believe you're conflicting with the embedded JCA RA used for ActiveMQ Artemis integration.
Alternatively you could remove the messaging subsystem from your server's XML configuration.
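For example, with the JBoss CLI (a sketch; removing the subsystem also removes the built-in activemq-ra pooled connection factory it defines, so make sure nothing else depends on it):
$ ./bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=messaging-activemq:remove
[standalone@localhost:9990 /] :reload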
I was upgrading an ATG application from 10.x to 11.x, and I also upgraded JBoss EAP from 5.1 to 7.2. I have faced various JBoss issues, and many of them were fixed.
As of now we are getting the following error while starting the ATG fulfillment server, and it seems to be a JBoss JMS issue:
07:56:43,346 ERROR [nucleusNamespace.atg.dynamo.messaging.MessagingManager] (ServerService Thread Pool -- 81) PatchBay failed to startup properly : a Scheduler job will be registered to continue trying to bring PatchBay up : note this may result in further errors: atg.nucleus.ServiceException: An error occurred trying to resolve JNDI name "java:/XAConnectionFactory" for the "xa-topic-connection-factory-name" in provider "Hornet" in definition file "/atg/dynamo/messaging/dynamoMessagingSystem.xml": javax.naming.NameNotFoundException: XAConnectionFactory -- service jboss.naming.context.java.XAConnectionFactory
at atg.dms.patchbay.Provider.initializeTopicConnection(Provider.java:364)
at atg.dms.patchbay.PatchBayManager.createInputDestination(PatchBayManager.java:1811)
at atg.dms.patchbay.PatchBayManager.createInputPorts(PatchBayManager.java:1446)
at atg.dms.patchbay.PatchBayManager.createElementManager(PatchBayManager.java:1477)
at atg.dms.patchbay.PatchBayManager.createMessageFilters(PatchBayManager.java:1338)
In JBoss 5, there were the following configuration files:
ls jboss-eap-5.1/seam/bootstrap/deploy/messaging/
connection-factories-service.xml hsqldb-persistence-service.xml legacy-service.xml remoting-service.xml
destinations-service.xml jms-ds.xml messaging-service.xml
In JBoss 7.2 we have the following messaging config in the standalone.xml file:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
<server name="default">
<journal pool-files="10"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
</subsystem>
The following are the contents of atg/dynamo/messaging/dynamoMessagingSystem.xml in the code:
<?xml version="1.0" encoding="UTF-8"?>
<dynamo-message-system>
<patchbay>
<!-- JBoss Hornet provider -->
<provider>
<provider-name>Hornet</provider-name>
<xa-topic-connection-factory-name>
java:/XAConnectionFactory
</xa-topic-connection-factory-name>
<xa-queue-connection-factory-name>
java:/XAConnectionFactory
</xa-queue-connection-factory-name>
<supports-transactions>
true
</supports-transactions>
<supports-xa-transactions>
true
</supports-xa-transactions>
<username>***</username>
<password>***</password>
<initial-context-factory>
/abcd/common/services/HornetQ
</initial-context-factory>
</provider>
<!-- Reporting order message source -->
<message-source>
<nucleus-name>/abcd/commerce/fulfillment/processor/SendReportingSubmitOrderMessage</nucleus-name>
<output-port>
<port-name>ReportingOrderSubmit</port-name>
<output-destination>
<provider-name>local</provider-name>
<destination-name>localdms:/local/Fulfillment/LocalSubmitOrder</destination-name>
<destination-type>Topic</destination-type>
</output-destination>
</output-port>
</message-source>
<!-- Split order message source -->
<message-source>
<nucleus-name>/abcd/commerce/fulfillment/processor/SendSplitMessages/</nucleus-name>
<output-port>
<port-name>DEFAULT</port-name>
</output-port>
<output-port>
<port-name>FulfillmentOrderSubmitPort</port-name>
<output-destination>
<destination-name>patchbay:/Fulfillment/SubmitOrder</destination-name>
<destination-type>Topic</destination-type>
</output-destination>
</output-port>
</message-source>
<!-- Custom source/sink will take fulfillment failures and forward them, perhaps to multiple queues or none -->
<message-source>
<nucleus-name>
/abcd/commerce/fulfillment/FailureMessageSink
</nucleus-name>
<output-port>
<port-name>
FulfillmentFailureNotifications
</port-name>
<output-destination>
<destination-name>
patchbay:/Fulfillment/FulfillmentFailureNotifications
</destination-name>
<destination-type>
Topic
</destination-type>
</output-destination>
</output-port>
</message-source>
<!-- Custom source/sink will take fulfillment failures and forward them, perhaps to multiple queues or none -->
<message-sink>
<nucleus-name>
/abcd/commerce/fulfillment/FailureMessageSink
</nucleus-name>
<input-port>
<port-name>FulfillmentError</port-name>
<input-destination>
<destination-name>patchbay:/Fulfillment/ErrorNotification</destination-name>
<destination-type>Queue</destination-type>
</input-destination>
</input-port>
</message-sink>
I'm new to both JBoss and ATG. Could anyone help me resolve this issue?
java:/XAConnectionFactory is not defined in WildFly. You need to configure WildFly to properly create and expose those connection factories like this:
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory java:/XAConnectionFactory" connectors="in-vm" transaction="xa"/>
Please note also that you are now on Apache ActiveMQ Artemis and no longer on HornetQ.
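If you prefer the JBoss CLI over editing the XML by hand, the equivalent change would be something like this (a sketch; the server and factory names match the configuration shown above):
/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:list-add(name=entries, value=java:/XAConnectionFactory)
:reload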
I have the following queue setup in a cluster:
server1 running on 61616
server2 running on 61617
I start two consumers against the two available master servers:
java -jar AmqJmsConsume.jar -duration 5 -queue IT-InputQueue -stats -log
/tmp/artemis/4 -verify -commitdelay 300 -url 'tcp://localhost:61616'
java -jar AmqJmsConsume.jar -duration 5 -queue IT-InputQueue -stats -log
/tmp/artemis/4 -verify -commitdelay 300 -url 'tcp://localhost:61617'
and run a producer against server1 as below:
java -jar AmqJmsProducer.jar -topic RFTopic -stats -log /tmp/artemis/4 -id
-count 500 -n 500 -ttl 3600000 -url 'tcp://localhost:61616' -outliers 100
-outliersize 500k
Both consumers get a few messages, but after the exception below server1 clears the queue with almost 50% of the messages; the rest of the messages are stuck in the internal.sf.cluster queue and are not received by any consumer.
java.lang.IllegalStateException: no queueIDs defined
at org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectionBridge.beforeForward(ClusterConnectionBridge.java:182) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.cluster.impl.BridgeImpl.handle(BridgeImpl.java:653) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl.handle(QueueImpl.java:3346) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:2606) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl.access$2300(QueueImpl.java:117) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:3613) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) [artemis-commons-2.9.0.jar:2.9.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_211]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.9.0.jar:2.9.0]
2019-07-29 10:21:38,503 WARN [org.apache.activemq.artemis.core.server.impl.QueueImpl] null: java.util.NoSuchElementException
at org.apache.activemq.artemis.utils.collections.PriorityLinkedListImpl$PriorityLinkedListIterator.repeat(PriorityLinkedListImpl.java:172) [artemis-commons-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:2627) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl.access$2300(QueueImpl.java:117) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:3613) [artemis-server-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.9.0.jar:2.9.0]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) [artemis-commons-2.9.0.jar:2.9.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_211]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.9.0.jar:2.9.0]
server1-master broker.xml:
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<persistence-enabled>true</persistence-enabled>
<thread-pool-max-size>400</thread-pool-max-size>
<journal-type>NIO</journal-type>
<paging-directory>${data.dir}/paging</paging-directory>
<bindings-directory>${data.dir}/bindings</bindings-directory>
<journal-directory>${data.dir}/journal</journal-directory>
<large-messages-directory>${data.dir}/large-messages
</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>20</journal-min-files>
<journal-pool-files>20</journal-pool-files>
<journal-file-size>10M</journal-file-size>
<journal-compact-min-files>30</journal-compact-min-files>
<journal-buffer-timeout>23480000</journal-buffer-timeout>
<!-- When using ASYNCIO, this will determine the writing queue depth for
libaio. -->
<journal-max-io>1</journal-max-io>
<!-- You can verify the network health of a particular NIC by specifying
the <network-check-NIC> element. <network-check-NIC>theNicName</network-check-NIC> -->
<!-- Use this to use an HTTP server to validate the network <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs it should
accept IPV6 Warning: Make sure you understand your network topology as this
is meant to validate if your network is valid. Using IPs that could eventually
disappear or be partially visible may defeat the purpose. You can use a list
of multiple IPs, and if any successful ping will make the server OK to continue
running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the
disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the
connection in certain protocols that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>LOG</critical-analyzer-policy>
<transaction-timeout>1800000</transaction-timeout>
<!-- the system will enter into page mode once you hit this limit. This
is an estimate in bytes of how much the messages are using in memory The
system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your
needs. <global-max-size>100Mb</global-max-size> -->
<connectors>
<connector name="netty-connector">tcp://localhost:61616</connector>
</connectors>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">tcp://localhost:61616?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.</acceptor>
</acceptors>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<broadcast-period>5000</broadcast-period>
<jgroups-file>idsk-jgroups.xml</jgroups-file>
<jgroups-channel>persistence-fs</jgroups-channel>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<jgroups-file>idsk-jgroups.xml</jgroups-file>
<jgroups-channel>persistence-fs</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group" />
</cluster-connection>
</cluster-connections>
<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>
<diverts>
<divert name="RF-Transform">
<routing-name>RFeeds-Transform</routing-name>
<address>RFTopic</address>
<forwarding-address>IT-InputQueue</forwarding-address>
<exclusive>false</exclusive>
</divert>
<divert name="RF-Output">
<routing-name>RFeeds-Output</routing-name>
<address>RFTopic</address>
<forwarding-address>T1-InputQueue</forwarding-address>
<exclusive>false</exclusive>
</divert>
</diverts>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq" />
<permission type="deleteNonDurableQueue" roles="amq" />
<permission type="createDurableQueue" roles="amq" />
<permission type="deleteDurableQueue" roles="amq" />
<permission type="createAddress" roles="amq" />
<permission type="deleteAddress" roles="amq" />
<permission type="consume" roles="amq" />
<permission type="browse" roles="amq" />
<permission type="send" roles="amq" />
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq" />
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be
auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>60000</redelivery-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>50485760</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>false</auto-create-addresses>
</address-setting>
<!--default for catch all -->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>60000</redelivery-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<redistribution-delay>10000</redistribution-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>50485760</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>false</auto-create-addresses>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
<address name="IT-InputQueue">
<anycast>
<queue name="IT-InputQueue" />
</anycast>
</address>
<address name="T1-InputQueue">
<anycast>
<queue name="T1-InputQueue" />
</anycast>
</address>
<address name="RFTopic">
<multicast />
</address>
</addresses>
</core>
</configuration>
server2-master broker.xml:
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<persistence-enabled>true</persistence-enabled>
<thread-pool-max-size>400</thread-pool-max-size>
<journal-type>NIO</journal-type>
<paging-directory>${data.dir}/paging</paging-directory>
<bindings-directory>${data.dir}/bindings</bindings-directory>
<journal-directory>${data.dir}/journal</journal-directory>
<large-messages-directory>${data.dir}/large-messages
</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>20</journal-min-files>
<journal-pool-files>20</journal-pool-files>
<journal-file-size>10M</journal-file-size>
<journal-compact-min-files>30</journal-compact-min-files>
<journal-buffer-timeout>23480000</journal-buffer-timeout>
<!-- When using ASYNCIO, this will determine the writing queue depth for
libaio. -->
<journal-max-io>1</journal-max-io>
<!-- You can verify the network health of a particular NIC by specifying
the <network-check-NIC> element. <network-check-NIC>theNicName</network-check-NIC> -->
<!-- Use this to use an HTTP server to validate the network <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs it should
accept IPV6 Warning: Make sure you understand your network topology as this
is meant to validate if your network is valid. Using IPs that could eventually
disappear or be partially visible may defeat the purpose. You can use a list
of multiple IPs, and if any successful ping will make the server OK to continue
running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the
disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the
connection in certain protocols that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>LOG</critical-analyzer-policy>
<transaction-timeout>1800000</transaction-timeout>
<!-- the system will enter into page mode once you hit this limit. This
is an estimate in bytes of how much the messages are using in memory The
system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your
needs. <global-max-size>100Mb</global-max-size> -->
<connectors>
<connector name="netty-connector">tcp://localhost:61617</connector>
</connectors>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">tcp://localhost:61617?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.</acceptor>
</acceptors>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<broadcast-period>5000</broadcast-period>
<jgroups-file>idsk-jgroups.xml</jgroups-file>
<jgroups-channel>persistence-fs</jgroups-channel>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<jgroups-file>idsk-jgroups.xml</jgroups-file>
<jgroups-channel>persistence-fs</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group" />
</cluster-connection>
</cluster-connections>
<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>
<diverts>
<divert name="RF-Transform">
<routing-name>RFeeds-Transform</routing-name>
<address>RFTopic</address>
<forwarding-address>IT-InputQueue</forwarding-address>
<exclusive>false</exclusive>
</divert>
<divert name="RF-Output">
<routing-name>RFeeds-Output</routing-name>
<address>RFTopic</address>
<forwarding-address>T1-InputQueue</forwarding-address>
<exclusive>false</exclusive>
</divert>
</diverts>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq" />
<permission type="deleteNonDurableQueue" roles="amq" />
<permission type="createDurableQueue" roles="amq" />
<permission type="deleteDurableQueue" roles="amq" />
<permission type="createAddress" roles="amq" />
<permission type="deleteAddress" roles="amq" />
<permission type="consume" roles="amq" />
<permission type="browse" roles="amq" />
<permission type="send" roles="amq" />
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq" />
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be
auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>60000</redelivery-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>50485760</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>false</auto-create-addresses>
</address-setting>
<!--default for catch all -->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>60000</redelivery-delay>
<max-delivery-attempts>5</max-delivery-attempts>
<redistribution-delay>10000</redistribution-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>50485760</max-size-bytes>
<page-size-bytes>10485760</page-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>false</auto-create-addresses>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
<address name="IT-InputQueue">
<anycast>
<queue name="IT-InputQueue" />
</anycast>
</address>
<address name="T1-InputQueue">
<anycast>
<queue name="T1-InputQueue" />
</anycast>
</address>
<address name="RFTopic">
<multicast />
</address>
</addresses>
</core>
</configuration>
My Analysis:
I have gone through the code of the artemis-server module, and below are my findings.
The org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl.route(message, context, direct, rejectDuplicates, bindingMove) method cleans the internal properties from the Message. Is there any reason to clean up those properties? I am getting the above error because the property "_AMQ_ROUTE_TO$.artemis.internal.sf.my-cluster.68d83229-be54-11e9-824e-f8b156cb82da" is not present in the message properties.
The internal properties are cleaned up based on the following condition:
package org.apache.activemq.artemis.core.message.impl;
public class CoreMessage extends RefCountMessage implements ICoreMessage {
private static final Predicate<SimpleString> INTERNAL_PROPERTY_NAMES_PREDICATE =
name -> (name.startsWith(Message.HDR_ROUTE_TO_IDS) && !name.equals(Message.HDR_ROUTE_TO_IDS)) ||
(name.startsWith(Message.HDR_ROUTE_TO_ACK_IDS) && !name.equals(Message.HDR_ROUTE_TO_ACK_IDS));
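For illustration, here is a minimal, self-contained sketch of how that predicate matches the property from the error above (the constant values are assumed from the Artemis Message interface, where HDR_ROUTE_TO_IDS is "_AMQ_ROUTE_TO" and HDR_ROUTE_TO_ACK_IDS is "_AMQ_ACK_ROUTE_TO"; plain String stands in for SimpleString):
import java.util.function.Predicate;

public class InternalPropertyCheck {
    // Values assumed from the Artemis Message interface
    static final String HDR_ROUTE_TO_IDS = "_AMQ_ROUTE_TO";
    static final String HDR_ROUTE_TO_ACK_IDS = "_AMQ_ACK_ROUTE_TO";

    static final Predicate<String> INTERNAL_PROPERTY_NAMES_PREDICATE =
            name -> (name.startsWith(HDR_ROUTE_TO_IDS) && !name.equals(HDR_ROUTE_TO_IDS)) ||
                    (name.startsWith(HDR_ROUTE_TO_ACK_IDS) && !name.equals(HDR_ROUTE_TO_ACK_IDS));

    public static void main(String[] args) {
        // The cluster-routing property from the error starts with _AMQ_ROUTE_TO but is
        // not equal to it, so the predicate matches and the property is stripped.
        String prop = "_AMQ_ROUTE_TO$.artemis.internal.sf.my-cluster.68d83229-be54-11e9-824e-f8b156cb82da";
        System.out.println(INTERNAL_PROPERTY_NAMES_PREDICATE.test(prop)); // prints: true
    }
}
That matching is consistent with the property being gone by the time ClusterConnectionBridge.beforeForward looks for it.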