HornetQ role-based security implementation - JBoss

I am trying to secure HornetQ using role-based security.
I am using FSW 6.0, which uses JBoss EAP 6.1, with a standalone.xml configuration:
<security-settings>
   <security-setting match="#">
      <permission type="send" roles="guest"/>
      <permission type="consume" roles="guest"/>
      <permission type="createNonDurableQueue" roles="guest"/>
      <permission type="deleteNonDurableQueue" roles="guest"/>
   </security-setting>
   <security-setting match="Pricing.Eu.In.#">
      <permission type="send" roles="pricing"/>
      <permission type="consume" roles="pricing"/>
   </security-setting>
</security-settings>
I have created a new user in the ApplicationRealm using add-user.bat and assigned a role to it.
application-roles.properties:
#
# Properties declaration of users roles for the realm 'ApplicationRealm'.
#
# This includes the following protocols: remote ejb, remote jndi, web, remote jms
#
# Users can be added to this properties file at any time, updates after the server has started
# will be automatically detected.
#
# The format of this file is as follows: -
# username=role1,role2,role3
#
# A utility script is provided which can be executed from the bin folder to add the users: -
# - Linux
# bin/add-user.sh
#
# - Windows
# bin\add-user.bat
#
# The following illustrates how an admin user could be defined.
#
#admin=PowerUser,BillingAdmin,
#guest=guest
fswAdmin=overlorduser,admin.sramp,dev,qa,stage,prod,manager,arch,ba
dtgovworkflows=overlorduser,admin.sramp
guest=guest
cubehpr=pricing
When I try to send messages to the Pricing.Eu.In.Deferred JMS queue using a client application I get the error below. Am I missing anything?
Exception in thread "main" javax.jms.JMSSecurityException: HQ119032: User: cubehpr doesnt have permission=SEND on address {2}
at org.hornetq.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:388)
at org.hornetq.core.client.impl.ClientProducerImpl.sendRegularMessage(ClientProducerImpl.java:318)
at org.hornetq.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:288)
at org.hornetq.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:140)
at org.hornetq.jms.client.HornetQMessageProducer.doSend(HornetQMessageProducer.java:438)
at org.hornetq.jms.client.HornetQMessageProducer.send(HornetQMessageProducer.java:194)
at com.agcs.bih.api.pricing.eu.dispatcher.HornetQClient.main(HornetQClient.java:63)
Caused by: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119032: User: cubehpr doesnt have permission=SEND on address {2}]
... 7 more
Can you please help me?

The HornetQ JMS layer prefixes JMS queue names with jms.queue. when it maps them to core addresses, so the security-setting match must include that prefix. Try the following:
<security-settings>
   <security-setting match="jms.queue.Pricing.Eu.In.#">
      <permission type="send" roles="pricing"/>
      <permission type="consume" roles="pricing"/>
   </security-setting>
   <security-setting match="#">
      <permission type="send" roles="guest"/>
      <permission type="consume" roles="guest"/>
      <permission type="createNonDurableQueue" roles="guest"/>
      <permission type="deleteNonDurableQueue" roles="guest"/>
   </security-setting>
</security-settings>
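Once the match is fixed, you can verify the permission with a minimal client sketch like the one below, which authenticates as cubehpr. It assumes the standard EAP 6.x remote JNDI setup; the host, port, password, and the queue's JNDI binding are placeholders you will need to adapt to your environment.
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class PricingSendTest {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://localhost:4447"); // placeholder host/port
        env.put(Context.SECURITY_PRINCIPAL, "cubehpr");
        env.put(Context.SECURITY_CREDENTIALS, "cubehpr-password"); // placeholder password
        Context ctx = new InitialContext(env);
        try {
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/queue/Pricing.Eu.In.Deferred"); // assumed JNDI binding
            // Authenticate the JMS connection itself as the user holding the 'pricing' role.
            Connection connection = cf.createConnection("cubehpr", "cubehpr-password");
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("ping")); // should now pass the SEND permission check
            } finally {
                connection.close();
            }
        } finally {
            ctx.close();
        }
    }
}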

Related

How to force ActiveMQ client to set client id and subscription name

My MQ server is built with ActiveMQ Artemis 2.17.0.
Recently I realized that some clients are connecting to my ActiveMQ Artemis without setting a client ID or subscription name and subscribing to some topics. Their queues get UUID-based names like the ones in my web-console screenshot.
So, how can I force ActiveMQ clients to set a client ID and subscription name? And if they don't configure a client ID and subscription name, can ActiveMQ Artemis kick them out?
You can't exactly force clients to set their client ID and subscription name, but you can change the security authorization configuration to prevent clients from creating non-durable subscriptions.
Non-durable subscribers create non-durable queues which use the UUID naming convention shown in your web-console screenshot. If you wish to prevent this then modify your security-settings in broker.xml and remove the permission whose type is createNonDurableQueue. Here's the default security-settings:
<security-settings>
   <security-setting match="#">
      <permission type="createNonDurableQueue" roles="amq"/>
      <permission type="deleteNonDurableQueue" roles="amq"/>
      <permission type="createDurableQueue" roles="amq"/>
      <permission type="deleteDurableQueue" roles="amq"/>
      <permission type="createAddress" roles="amq"/>
      <permission type="deleteAddress" roles="amq"/>
      <permission type="consume" roles="amq"/>
      <permission type="browse" roles="amq"/>
      <permission type="send" roles="amq"/>
      <!-- we need this otherwise ./artemis data imp wouldn't work -->
      <permission type="manage" roles="amq"/>
   </security-setting>
</security-settings>
This is the permission I'm referring to:
<permission type="createNonDurableQueue" roles="amq"/>
If you remove that permission or remove the user's role from the permission then that user will no longer be able to create a non-durable queue on the matching address. However, as long as they have the createDurableQueue and deleteDurableQueue permissions they will be able to create & delete a durable subscription.
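For example, with those two permissions removed the security-setting would look like this (a sketch based on the default configuration above):
<security-settings>
   <security-setting match="#">
      <!-- createNonDurableQueue and deleteNonDurableQueue removed so the 'amq'
           role can no longer create or delete non-durable subscription queues -->
      <permission type="createDurableQueue" roles="amq"/>
      <permission type="deleteDurableQueue" roles="amq"/>
      <permission type="createAddress" roles="amq"/>
      <permission type="deleteAddress" roles="amq"/>
      <permission type="consume" roles="amq"/>
      <permission type="browse" roles="amq"/>
      <permission type="send" roles="amq"/>
      <permission type="manage" roles="amq"/>
   </security-setting>
</security-settings>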

Apache ActiveMQ Artemis how to investigate if messages were lost?

I'm using ActiveMQ Artemis 2.16.0 as my broker and artemis-jms-client-2.16.0.jar as my JMS client. It feels like I'm losing a few messages at random for reasons unknown to me. I've investigated my Java code and found nothing unusual yet.
I have a listener method:
@JmsListener(destination = "${myQueue}", containerFactory = "jmsListenerContainerFactory")
@Override
public void process(Message message) {
    try {
        processMessage(message);
    } catch (Exception ex) {
        LOG.error("Error[...]", ex);
        responseSender.send(otherQueue, message, ex);
    }
}
The processMessage(Message message) method looks like this:
public void processMessage(Message message) throws MyBusinessException {
    try {
        byte[] request = message.getBody(byte[].class);
        [...]
        if (!condition) {
            throw new MyBusinessException("error happened");
        }
        [...]
    } finally {
        MDC.remove(ID);
    }
}
@Bean(name = "jmsListenerContainerFactoryTest")
@Primary
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setSessionTransacted(true);
    factory.setConnectionFactory(cachingConnectionFactory());
    return factory;
}
public class MyBusinessException extends Exception {
    private int code;
    [...]
}
broker.xml:
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<persistence-enabled>true</persistence-enabled>
<journal-type>NIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 2.17 writes per millisecond
on the current journal configuration.
That translates as a sync write every 490000 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>490000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>1</journal-max-io>
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>460000</page-sync-timeout>
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic.-->
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq, admin"/>
</security-setting>
</security-settings>
<connection-ttl-override>60000</connection-ttl-override>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<!-- <config-delete-queues>FORCE</config-delete-queues>
<config-delete-addresses>FORCE</config-delete-addresses>-->
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<auto-delete-queues>false</auto-delete-queues>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<auto-delete-queues>false</auto-delete-queues>
</address-setting>
</address-settings>
<addresses>
<address name="MyQueue">
<anycast>
<queue name="MyQueue">
</queue>
</anycast>
</address>
<address name="MyOtherQueue">
<anycast>
<queue name="MyOtherQueue" />
</anycast>
</address>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
</core>
</configuration>
If MyBusinessException is thrown, the idea is to catch it and send that very same message to myOtherQueue. If sending that message fails (i.e. an exception happens), then it is redelivered, up to 10 times, and after that sent to the DLQ. In essence that is what I see most of the time, but at random moments in my logs I see only one attempt to redeliver the message, no message in the DLQ, and the receiving side complaining about the absence of a message. It feels like the message went missing. I have looked at myOtherQueue with a magnifying glass, so to speak, using both the Artemis Console and JMSToolBox, but I see nothing but an empty queue. I have no consumers on these queues.
The purpose is not to get a failing message onto the DLQ, but onto that other queue (myOtherQueue) for later investigation. Only if the message cannot be delivered to that queue should it be placed on the DLQ. That is how I've thought about it.
At the end of the day a very small number of messages goes missing at random, and that is what I'm trying to understand. How should I investigate Artemis to see whether any message loss has happened? Where should I start? What tools should I use?
I would start by putting a property in each message that will allow it to be uniquely identified and then logging that value so you can correlate client and broker logs later. If you're using JMS then you can use something like this:
String uuid = java.util.UUID.randomUUID().toString();
message.setStringProperty("UUID", uuid);
logger.info("Sending message with UUID: " + uuid);
Then of course you'll want to log this on the consumer as well, e.g.:
Message message = consumer.receive();
String uuid = message.getStringProperty("UUID");
logger.info("Received message with UUID: " + uuid);
On the broker you should then activate audit logging or perhaps use the LoggingActiveMQServerPlugin.
Once you have all the logging in place you simply have to wait until you think you've lost a message and then go through the logs to find the ID of the message which was sent but not received. Once you know that then you can go through the broker logs to see if it was received by the broker properly, dispatched to the consumer, etc. That will help you narrow down where the issue lies.
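For reference, enabling the LoggingActiveMQServerPlugin is a small addition to the <core> section of broker.xml, along these lines (a sketch; LOG_ALL_EVENTS is the most verbose option and you may want a narrower one):
<broker-plugins>
   <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
      <!-- logs every supported event; narrower keys such as LOG_SENDING_EVENTS
           and LOG_DELIVERING_EVENTS are also available -->
      <property key="LOG_ALL_EVENTS" value="true"/>
   </broker-plugin>
</broker-plugins>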

Artemis HA configuration for core bridges

We have 4 servers in two shared-disk HA pairs, with core bridges between them. The core bridge configuration and the connectors they use (sms1 and sms1b) are identical on all 4 servers, the only differences being master vs. slave ha-policy and the host names in the other fields (the acceptors, the artemis and node0 connectors, and the name).
In testing we found the bridge works perfectly when the two lives are up, but sometimes when shutting down a live server, the backup never opens a consumer for the bridge.
Is this the intended way to configure a pair of HA servers with a core bridge, or is the backup server configured wrong?
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>ba-artms3.example.com</name>
<security-enabled>false</security-enabled>
<persistence-enabled>true</persistence-enabled>
<paging-directory>/data/ba_artemis/msg-sms1/paging</paging-directory>
<bindings-directory>/data/ba_artemis/msg-sms1/bindings</bindings-directory>
<journal-directory>/data/ba_artemis/msg-sms1/journal</journal-directory>
<large-messages-directory>/data/ba_artemis/msg-sms1/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>132000</journal-buffer-timeout>
<journal-max-io>4096</journal-max-io>
<connectors>
<connector name="artemis">tcp://ba-artms3.example.com:2539</connector>
<connector name = "node0">tcp://ba-artms4.example.com:2539</connector>
<connector name="sms1">(tcp://ba-artms3.example.com:61616,tcp://ba-artms4.example.com:61616)</connector>
<connector name="sms1b">(tcp://ba-artms9.example.com:61616,tcp://ba-artms10.example.com:61616)</connector>
</connectors>
<disk-scan-period>5000</disk-scan-period>
<max-disk-usage>90</max-disk-usage>
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>620000</page-sync-timeout>
<acceptors>
<acceptor name="artemis">tcp://ba-artms3.example.com:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;</acceptor>
<acceptor name="cluster">tcp://ba-artms3.example.com:2539?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE;useEpoll=true</acceptor>
<acceptor name="amqp">tcp://ba-artms3.example.com:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
</acceptors>
<cluster-user>msg-sms1-cluster</cluster-user>
<cluster-password>redacted</cluster-password>
<cluster-connections>
<cluster-connection name="msg-sms1">
<connector-ref>artemis</connector-ref>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>0</max-hops>
<static-connectors>
<connector-ref>node0</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<xi:include href="${configDir}/addresses.xml"/>
<bridges>
<bridge name="sms1_forwarder">
<queue-name>UpdateOutboundForward_0</queue-name>
<forwarding-address>UpdateOutbound</forwarding-address>
<ha>true</ha>
<failover-on-server-shutdown>true</failover-on-server-shutdown>
<user>rave</user>
<password>redacted</password>
<static-connectors>
<connector-ref>sms1</connector-ref>
</static-connectors>
</bridge>
<bridge name="sms1b_forwarder">
<queue-name>UpdateOutboundForward_1</queue-name>
<forwarding-address>UpdateOutbound</forwarding-address>
<ha>true</ha>
<failover-on-server-shutdown>true</failover-on-server-shutdown>
<user>rave</user>
<password>redacted</password>
<static-connectors>
<connector-ref>sms1b</connector-ref>
</static-connectors>
</bridge>
</bridges>
</core>
</configuration>
Keep in mind that the acceptor on port 2539 is specifically used for clustering. There are 4 servers total: ba-artms3 (live), ba-artms4 (slave) & ba-artms9 (live), ba-artms10 (slave).
Your configuration looks essentially correct, but it's hard to tell with so many moving pieces, especially the extra acceptor for clustering. I've seen folks do that before, but it's not something I've ever tested or recommended, so I'm not sure how it would function in practice. Theoretically it would be fine, but there are always complicating factors, many of which are subtle.
It would be worth simplifying your configuration to only what's absolutely necessary to reproduce the problem. For example, configure just 1 bridge on 1 server connecting to a live/backup pair, with all the brokers on your local machine on unique ports (i.e. no Docker). Once you have that working you can keep adding complexity and testing as you go to see where things break down (assuming they do).
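To make that concrete, the stripped-down test broker would contain little more than one bridge and the connectors it targets, along these lines (a sketch with placeholder names, ports, and queue):
<connectors>
   <connector name="target-live">tcp://localhost:61617</connector>
   <connector name="target-backup">tcp://localhost:61618</connector>
</connectors>
<bridges>
   <bridge name="test_forwarder">
      <queue-name>TestForward</queue-name>
      <forwarding-address>TestInbound</forwarding-address>
      <ha>true</ha>
      <static-connectors>
         <connector-ref>target-live</connector-ref>
         <connector-ref>target-backup</connector-ref>
      </static-connectors>
   </bridge>
</bridges>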
The solution that worked for me was using one connector per server instead of the grouped parenthesis-and-comma syntax:
<connectors>
   <connector name="artemis">tcp://ba-artms3.example.com:2539</connector>
   <connector name="node0">tcp://ba-artms4.example.com:2539</connector>
   <connector name="sms1_1">tcp://ba-artms3.example.com:2539</connector>
   <connector name="sms1_2">tcp://ba-artms4.example.com:2539</connector>
   <connector name="sms1b_1">tcp://ba-artms9.example.com:2539</connector>
   <connector name="sms1b_2">tcp://ba-artms10.example.com:2539</connector>
</connectors>
And then listing both connectors for each bridge in its static-connectors:
<bridge name="sms1b_forwarder">
<queue-name>UpdateOutboundForward_1</queue-name>
<forwarding-address>UpdateOutbound</forwarding-address>
<ha>true</ha>
<failover-on-server-shutdown>true</failover-on-server-shutdown>
<user>rave</user>
<password>redacted</password>
<static-connectors>
<connector-ref>sms1b_1</connector-ref>
<connector-ref>sms1b_2</connector-ref>
</static-connectors>
</bridge>
After that, instead of seeing this in the log:
2021-07-20 09:13:51,809 WARN [org.apache.activemq.artemis.core.server] AMQ224091: Bridge BridgeImpl#5886a172 [name=sms1b_forwarder, queue=QueueImpl[name=UpdateOutboundForward_1, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=13be52e3-cb0f-11eb-a851-000c29d5fa03], temp=false]#56414412 targetConnector=ServerLocatorImpl (identity=Bridge sms1b_forwarder) [initialConnectors=[TransportConfiguration(name=sms1b, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=ba-artms9-example-com], discoveryGroupConfiguration=null]] is unable to connect to destination. Retrying
2021-07-20 09:13:51,880 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge BridgeImpl#983e222 [name=sms1_forwarder, queue=QueueImpl[name=UpdateOutboundForward_0, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=13be52e3-cb0f-11eb-a851-000c29d5fa03], temp=false]#7440d62 targetConnector=ServerLocatorImpl (identity=Bridge sms1_forwarder) [initialConnectors=[TransportConfiguration(name=sms1, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=ba-artms3-example-com], discoveryGroupConfiguration=null]] is connected
I now saw:
2021-07-20 09:30:40,333 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge BridgeImpl#339a572d [name=sms1b_forwarder, queue=QueueImpl[name=UpdateOutboundForward_1, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=13be52e3-cb0f-11eb-a851-000c29d5fa03], temp=false]#3d5daf2e targetConnector=ServerLocatorImpl (identity=Bridge sms1b_forwarder) [initialConnectors=[TransportConfiguration(name=sms1b_1, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=ba-artms9-example-com, TransportConfiguration(name=sms1b_2, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=ba-artms10-example-com], discoveryGroupConfiguration=null]] is connected
2021-07-20 09:30:42,206 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge BridgeImpl#47c22520 [name=sms1_forwarder, queue=QueueImpl[name=UpdateOutboundForward_0, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=13be52e3-cb0f-11eb-a851-000c29d5fa03], temp=false]#4905c3a8 targetConnector=ServerLocatorImpl (identity=Bridge sms1_forwarder) [initialConnectors=[TransportConfiguration(name=sms1_1, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=2539&host=ba-artms3-example-com, TransportConfiguration(name=sms1_2, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=2539&host=ba-artms4-example-com], discoveryGroupConfiguration=null]] is connected
Note that both servers now show up in initialConnectors, so the bridge has a second target to reconnect to when the live server goes down.

Apache ActiveMQ Artemis Slow performance after paging starts

We have an ActiveMQ Artemis test deployment, and we noticed very slow performance once the broker holds a large number of messages, which is when paging starts. I assume this is normal. To mitigate it we doubled the broker's -Xmx after testing, which delays paging (and the performance drop). My question is: are there any other parameters besides memory which can address this?
My broker.xml is:
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core">
<ha-policy>
<replication>
<master>
<group-name>master</group-name>
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
<global-max-size>-1</global-max-size>
<bindings-directory>/opt/broker/broker-data/bindings</bindings-directory>
<journal-directory>/opt/broker/broker-data/journal</journal-directory>
<large-messages-directory>/opt/broker/broker-data/largemessages</large-messages-directory>
<paging-directory>/opt/broker-data/paging</paging-directory>
<journal-min-files>25</journal-min-files>
<journal-type>ASYNCIO</journal-type>
<journal-max-io>5000</journal-max-io>
<journal-sync-transactional>false</journal-sync-transactional>
<journal-sync-non-transactional>false</journal-sync-non-transactional>
<journal-buffer-timeout>750000</journal-buffer-timeout>
<connectors>
<connector name="netty-connector">tcp://node1:61616?tcpSendBufferSize=307200;tcpReceiveBufferSize=307200;writeBufferHighWaterMark=1228800;useEpoll=true;useNio=true</connector>
</connectors>
<acceptors>
<acceptor name="netty-acceptor">tcp://node1:61616?tcpSendBufferSize=307200;tcpReceiveBufferSize=307200;writeBufferHighWaterMark=1228800;useEpoll=true;useNio=true</acceptor>
</acceptors>
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<group-address>${udp-address:231.7.7.7}</group-address>
<group-port>9875</group-port>
<broadcast-period>100</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<group-address>${udp-address:231.7.7.7}</group-address>
<group-port>9875</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<connection-ttl>130000</connection-ttl>
<call-timeout>120000</call-timeout>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
<auto-delete-addresses>false</auto-delete-addresses>
</address-setting>
</address-settings>
<!-- address section -->
</core>
</configuration>
EDIT:
The most critical issue is that once paging starts the broker won't recover its original performance, even though the majority of messages have been consumed.
Consider that paged messages need to be synchronized to disk just like durable ones, and the parameter that controls the frequency of those flushes is page-sync-timeout. If no value is set, the default is used (see the documentation for an explanation of that setting).
Judging by your journal-buffer-timeout (and assuming it is set correctly), your disk seems quite slow, so it's expected that paged messages won't perform well: the disk doesn't have enough IOPS.
I would first check the expected IOPS for random writes on your disk and set page-sync-timeout accordingly (1/IOPS, in nanoseconds), but don't expect any improvement if the disk simply isn't fast enough.
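For example, if your disk sustained roughly 2,000 random writes per second (a number you would have to measure for your own hardware), 1/2000 of a second is 500,000 nanoseconds, so in the <core> section of broker.xml you would set:
<!-- illustrative value only: assumes ~2,000 random writes/sec, i.e. 1/2000 s per sync -->
<page-sync-timeout>500000</page-sync-timeout>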
An additional note: if you don't care about durability across power failures, you can also disable journal-datasync; disk writes will then survive process failures but not power failures. That should be OK if you are using shared-nothing replication, given that a backup is able to take over in case of failure.

Share config in cluster

How can I share configuration files in a cluster which is in failover mode?
I don't want to edit the artemis-users.properties, artemis-roles.properties, and broker.xml files on every server in the cluster.
Cluster settings:
security-enabled: true
persistence-enabled: true
paging-directory, bindings-directory, journal-directory, and large-messages-directory are used from the master server
The security settings look like this:
<security-setting match="clusterQueue">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
Is it possible?
ActiveMQ Artemis doesn't provide any automated way to share configuration between cluster members. However, since all the configuration you referenced is text-based it should be fairly simple to replicate it to your brokers with standard tools and/or infrastructure. For example, you could use SCP to copy the files, create a shared NFS mount with them, etc.
Even in a homogeneous cluster it's common to have small differences in the configuration files (e.g. for the cluster-connection, an acceptor, etc.). In that case you can use system property substitution (which is referenced in the documentation) to pull out the bits from each broker which need to be customized and then set those in artemis.profile, e.g.:
JAVA_ARGS="$JAVA_ARGS -DmyAcceptor=tcp://192.168.1.10:61616"
Then reference that system property in your broker.xml, e.g.:
<acceptor name="netty-acceptor">${myAcceptor}</acceptor>
In this way you can share the same broker.xml among all the brokers, while each has its own artemis.profile with the unique values it needs.
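The same pattern works for any per-broker value; for instance, the broker name and a connector could be pulled out the same way (the property names here are arbitrary):
<!-- artemis.profile on each broker, e.g.:
     JAVA_ARGS="$JAVA_ARGS -DbrokerName=broker1 -DmyConnector=tcp://192.168.1.10:61616" -->
<name>${brokerName}</name>
<connectors>
   <connector name="netty-connector">${myConnector}</connector>
</connectors>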