Consumer with wildcard syntax - activemq-artemis

I'm using ActiveMQ Artemis 2.17.0. I want to create a consumer that uses wildcard syntax to consume messages from multiple addresses. I wrote the consumer below, but it consumes only from the address news.europe.# itself, not from the addresses matching the wildcard (news.europe.sport, news.europe.politics, etc.). What am I doing wrong?
Scenario:
Start Artemis broker
Send 2 messages with the producer to news.europe.sport and news.europe.politics
Start the consumer
Expected behavior:
2 messages received by the consumer
Observed behavior:
messages remain in the queues
the address news.europe.# has an active consumer
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

import javax.jms.*;

public class ArtemisConsumer {
    public static void main(String[] args) throws JMSException, InterruptedException {
        String brokerURL = "tcp://localhost:61716";
        String queueName = "news.europe.#";
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerURL);
        connectionFactory.setUser("user");
        connectionFactory.setPassword("pass");
        Connection connection = connectionFactory.createConnection();
        connection.start();
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Destination destination = session.createQueue(queueName);
        MessageConsumer consumer = session.createConsumer(destination);
        consumer.setMessageListener(new ConsumerMessageListener("Consumer"));
        Thread.sleep(60000);
        session.commit();
        session.close();
        connection.close();
    }
}
broker.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns="urn:activemq">
<core xmlns="urn:activemq:core">
<name>QMA</name>
<max-disk-usage>100</max-disk-usage>
<configuration-file-refresh-period>9223372036854775807</configuration-file-refresh-period>
<bindings-directory>${ARTEMISMQ_DATA}/bindings</bindings-directory>
<journal-directory>${ARTEMISMQ_DATA}/journal</journal-directory>
<large-messages-directory>${ARTEMISMQ_DATA}/largemessages</large-messages-directory>
<paging-directory>.${ARTEMISMQ_DATA}/paging</paging-directory>
<cluster-user>user</cluster-user>
<cluster-password>password</cluster-password>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61716</acceptor>
<acceptor name="in-vm">vm://0</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission roles="user-group" type="createNonDurableQueue"/>
<permission roles="user-group" type="deleteNonDurableQueue"/>
<permission roles="user-group" type="createDurableQueue"/>
<permission roles="user-group" type="deleteDurableQueue"/>
<permission roles="user-group" type="createAddress"/>
<permission roles="user-group" type="deleteAddress"/>
<permission roles="user-group" type="consume"/>
<permission roles="user-group" type="browse"/>
<permission roles="user-group" type="send"/>
<permission roles="user-group" type="manage"/>
</security-setting>
</security-settings>
</core>
</configuration>
producer
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

import javax.jms.*;

public class ArtemisProducer {
    public static void main(final String[] args) throws Exception {
        String brokerURL = "tcp://localhost:61716";
        ActiveMQConnectionFactory connFactory = new ActiveMQConnectionFactory(brokerURL);
        connFactory.setUser("user");
        connFactory.setPassword("password");
        final Connection conn = connFactory.createConnection();
        conn.start();
        final Session sess = conn.createSession(true, Session.SESSION_TRANSACTED);
        final Destination dest = sess.createQueue("news.europe.politics");
        final MessageProducer prod = sess.createProducer(dest);
        final Message msg = sess.createTextMessage("Sample message");
        prod.send(msg);
        sess.commit();
        conn.close();
    }
}

You are seeing the expected behavior. The feature you're using is a wildcard address: in short, any message sent to a matching address is also routed to the wildcard address (and to any queues bound to that address, according to their routing semantics, i.e. anycast or multicast).
In your case the wildcard address hadn't been created yet when you sent your messages, so there was no way for those messages to be routed to it.
FWIW, you can see this feature in action in the topic-hierarchies example, which ships with the broker in the examples/features/standard directory.
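To make the ordering concrete, here is a minimal sketch (the class name and the lambda listener are mine; the URL, credentials, and addresses are taken from the question): the wildcard consumer is created first, so the wildcard address exists by the time the messages are sent.

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

import javax.jms.*;

public class WildcardOrderingSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61716");
        factory.setUser("user");
        factory.setPassword("pass");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            // Separate sessions for consuming and producing: a JMS session
            // must not be used concurrently by the listener thread and main.
            Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // 1. Creating the wildcard consumer is what creates the wildcard
            //    address news.europe.# on the broker.
            MessageConsumer consumer = consumerSession.createConsumer(consumerSession.createQueue("news.europe.#"));
            consumer.setMessageListener(m -> System.out.println("received: " + m));
            // 2. Only now send to a concrete address; the broker also routes
            //    this message to the already-existing wildcard address.
            Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = producerSession.createProducer(producerSession.createQueue("news.europe.politics"));
            producer.send(producerSession.createTextMessage("Sample message"));
            Thread.sleep(5000); // give the asynchronous listener time to run
        }
    }
}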

Related

How to properly configure multicast message redistribution around the Artemis cluster

I'm using Artemis 2.8.0.
I've started two standalone servers in symmetric cluster mode and deployed an address with type 'multicast' on both of them; I've also created a couple of predefined queues attached to this address. When I wrote messages to the address on the first server, they were successfully written to all the queues attached to the address. After that I connected to the second server and created a consumer for one of the queues, but the messages from the first server were not redistributed to the second.
I can't tell whether this is expected behaviour or not.
I also tried connecting the consumer by FQQN, but the result was the same.
The documentation has no specific information about 'multicast' redistribution.
My broker.xml looks like this:
<?xml version='1.0'?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
      <name>server1</name>
      <cluster-user>artemis</cluster-user>
      <cluster-password>artemis</cluster-password>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>ASYNCIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>20000</journal-buffer-timeout>
      <journal-max-io>4096</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
      <acceptors>
         <acceptor name="artemis">tcp://0.0.0.0:61716?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
         <acceptor name="cluster-acceptor">tcp://0.0.0.0:61717</acceptor>
      </acceptors>
      <connectors>
         <connector name="netty-connector">tcp://localhost:61616</connector>
         <connector name="cluster-connector">tcp://localhost:61617</connector>
      </connectors>
      <cluster-connections>
         <cluster-connection name="k24-artemis-cluster">
            <address></address>
            <connector-ref>netty-connector</connector-ref>
            <check-period>5000</check-period>
            <retry-interval>500</retry-interval>
            <use-duplicate-detection>true</use-duplicate-detection>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <static-connectors>
               <connector-ref>cluster-connector</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>
      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>
      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!-- default for catch all -->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <address-setting match="k24.#">
            <redistribution-delay>0</redistribution-delay>
            <max-delivery-attempts>100</max-delivery-attempts>
            <redelivery-delay-multiplier>1.5</redelivery-delay-multiplier>
            <redelivery-delay>5000</redelivery-delay>
            <max-redelivery-delay>50000</max-redelivery-delay>
            <send-to-dla-on-no-route>true</send-to-dla-on-no-route>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-delete-addresses>true</auto-delete-addresses>
            <auto-create-queues>true</auto-create-queues>
            <auto-delete-queues>true</auto-delete-queues>
            <default-purge-on-no-consumers>false</default-purge-on-no-consumers>
            <max-size-bytes>104857600</max-size-bytes><!-- 100 MB -->
            <page-size-bytes>20971520</page-size-bytes><!-- 20 MB -->
            <address-full-policy>PAGE</address-full-policy>
         </address-setting>
      </address-settings>
      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>
         <address name="k24.payment">
            <multicast>
               <queue name="k24.payment.bossbi">
                  <durable>true</durable>
               </queue>
               <queue name="k24.payment.other">
                  <durable>true</durable>
               </queue>
            </multicast>
         </address>
      </addresses>
   </core>
</configuration>
startup logs on the first server
2019-05-23 11:12:33,188 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address k24.payment supporting [MULTICAST]
2019-05-23 11:12:33,188 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying MULTICAST queue k24.payment.bossbi on address k24.payment
2019-05-23 11:12:33,188 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying MULTICAST queue k24.payment.other on address k24.payment
2019-05-23 11:12:33,366 INFO [org.apache.activemq.audit.base] AMQ601019: User anonymous is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.AcceptorControlImpl#1b45c0e []
2019-05-23 11:12:33,373 INFO [org.apache.activemq.audit.base] AMQ601019: User anonymous is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.AcceptorControlImpl#73a8da0f []
2019-05-23 11:12:33,391 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61616 for protocols [CORE,MQTT,AMQP,STOMP,HORNETQ,OPENWIRE]
2019-05-23 11:12:33,400 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61617 for protocols [CORE,MQTT,AMQP,HORNETQ,STOMP,OPENWIRE]
2019-05-23 11:12:33,403 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2019-05-23 11:12:33,403 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.8.0 [server0, nodeID=d6998d86-7d25-11e9-88ce-7c76357cb366]
2019-05-23 11:12:33,432 INFO [org.apache.activemq.audit.base] AMQ601267: User anonymous is creating a core session on target resource ActiveMQServerImpl::serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366 [with parameters: [229b6b91-7d2a-11e9-94a2-106530a6cae3, artemis, ****, 102400, RemotingConnectionImpl [ID=3ded146a, clientID=null, nodeID=d6998d86-7d25-11e9-88ce-7c76357cb366, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection#3e514577[ID=3ded146a, local= /127.0.0.1:61616, remote=/127.0.0.1:52806]], true, true, true, false, null, org.apache.activemq.artemis.core.protocol.core.impl.CoreSessionCallback#39407896, true, OperationContextImpl [1688453060] [minimalStore=9223372036854775807, storeLineUp=0, stored=0, minimalReplicated=9223372036854775807, replicationLineUp=0, replicated=0, paged=0, minimalPage=9223372036854775807, pageLineUp=0, errorCode=-1, errorMessage=null, executorsPending=0, executor=OrderedExecutor(tasks=[])], {}]]
2019-05-23 11:12:33,461 INFO [org.apache.activemq.audit.base] AMQ601267: User anonymous is creating a core session on target resource ActiveMQServerImpl::serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366 [with parameters: [22a110e2-7d2a-11e9-94a2-106530a6cae3, artemis, ****, 102400, RemotingConnectionImpl [ID=3ded146a, clientID=null, nodeID=d6998d86-7d25-11e9-88ce-7c76357cb366, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection#3e514577[ID=3ded146a, local= /127.0.0.1:61616, remote=/127.0.0.1:52806]], true, true, true, false, null, org.apache.activemq.artemis.core.protocol.core.impl.CoreSessionCallback#200d654a, true, OperationContextImpl [964255832] [minimalStore=9223372036854775807, storeLineUp=0, stored=0, minimalReplicated=9223372036854775807, replicationLineUp=0, replicated=0, paged=0, minimalPage=9223372036854775807, pageLineUp=0, errorCode=-1, errorMessage=null, executorsPending=0, executor=OrderedExecutor(tasks=[])], {}]]
2019-05-23 11:12:33,470 INFO [org.apache.activemq.audit.base] AMQ601065: User artemis is creating a queue on target resource: ServerSessionImpl() [with parameters: [Address [name=activemq.notifications, id=0, routingTypes={MULTICAST}, autoCreated=false], notif.22a22253-7d2a-11e9-94a2-106530a6cae3.ActiveMQServerImpl_serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366, _AMQ_Binding_Type<>2 AND _AMQ_NotifType IN ('SESSION_CREATED','BINDING_ADDED','BINDING_REMOVED','CONSUMER_CREATED','CONSUMER_CLOSED','PROPOSAL','PROPOSAL_RESPONSE','UNPROPOSAL') AND _AMQ_Distance<1 AND (((_AMQ_Address NOT LIKE '$.artemis.internal.sf.%') AND (_AMQ_Address NOT LIKE 'activemq.management%'))) AND (_AMQ_NotifType = 'SESSION_CREATED' OR (_AMQ_Address NOT LIKE 'activemq.notifications%')), true, false, -1, false, false, false, -1, null, false, null, false, 0, -1, false, 0, 0, false]]
2019-05-23 11:12:33,494 INFO [org.apache.activemq.audit.base] AMQ601019: User anonymous is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.QueueControlImpl#28cb8c95 []
2019-05-23 11:12:33,507 INFO [org.apache.activemq.audit.base] AMQ601265: User artemis is creating a core consumer on target resource ServerSessionImpl() [with parameters: [0, notif.22a22253-7d2a-11e9-94a2-106530a6cae3.ActiveMQServerImpl_serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366, null, 0, false, true, null]]
2019-05-23 11:12:33,542 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge#9fe7fc4 [name=$.artemis.internal.sf.k24-artemis-cluster.d8938882-7d25-11e9-9b96-106530a6cae3, queue=QueueImpl[name=$.artemis.internal.sf.k24-artemis-cluster.d8938882-7d25-11e9-9b96-106530a6cae3, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366], temp=false]#66c0982c targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge#9fe7fc4 [name=$.artemis.internal.sf.k24-artemis-cluster.d8938882-7d25-11e9-9b96-106530a6cae3, queue=QueueImpl[name=$.artemis.internal.sf.k24-artemis-cluster.d8938882-7d25-11e9-9b96-106530a6cae3, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366], temp=false]#66c0982c targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl#1676605578[nodeUUID=d6998d86-7d25-11e9-88ce-7c76357cb366, connector=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61716&host=localhost, address=, server=ActiveMQServerImpl::serverUUID=d6998d86-7d25-11e9-88ce-7c76357cb366])) [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=localhost], discoveryGroupConfiguration=null]] is connected
2019-05-23 11:12:33,548 INFO [org.apache.activemq.audit.message] AMQ601500: User artemis is sending a core message on target resource: ServerSessionImpl() [with parameters: [TransactionImpl [xid=null, txID=378, xid=null, state=ACTIVE, createTime=1558595553462(Thu May 23 11:12:33 SAMT 2019), timeoutSeconds=300, nr operations = 0]#5c8669b1, CoreMessage[messageID=0,durable=false,userID=null,priority=4, timestamp=Thu May 23 11:12:33 SAMT 2019,expiration=0, durable=false, address=activemq.management,size=457,properties=TypedProperties[_AMQ_OperationName=sendQueueInfoToQueue,_AMQ_ResourceName=broker]]#1649442918, true, false, RoutingContextImpl(Address=null, routingType=null, PreviousAddress=null previousRoute:null, reusable=null, version=0)
..................................................
]]
2019-05-23 11:12:33,552 INFO [org.apache.activemq.audit.base] AMQ601263: User artemis is handling a management message on target resource 22a110e2-7d2a-11e9-94a2-106530a6cae3 [with parameters: [TransactionImpl [xid=null, txID=378, xid=null, state=ACTIVE, createTime=1558595553462(Thu May 23 11:12:33 SAMT 2019), timeoutSeconds=300, nr operations = 0]#5c8669b1, CoreMessage[messageID=385,durable=false,userID=null,priority=4, timestamp=Thu May 23 11:12:33 SAMT 2019,expiration=0, durable=false, address=activemq.management,size=457,properties=TypedProperties[_AMQ_OperationName=sendQueueInfoToQueue,_AMQ_ResourceName=broker]]#1649442918, true]]
And on the second
2019-05-23 11:12:27,556 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address k24.payment supporting [MULTICAST]
2019-05-23 11:12:27,557 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying MULTICAST queue k24.payment.bossbi on address k24.payment
2019-05-23 11:12:27,557 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying MULTICAST queue k24.payment.other on address k24.payment
2019-05-23 11:12:27,734 INFO [org.apache.activemq.audit.base] AMQ601019: User anonymous is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.AcceptorControlImpl#75d2da2d []
2019-05-23 11:12:27,736 INFO [org.apache.activemq.audit.base] AMQ601019: User anonymous is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.AcceptorControlImpl#3370f42 []
2019-05-23 11:12:27,753 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61716 for protocols [CORE,MQTT,AMQP,STOMP,HORNETQ,OPENWIRE]
2019-05-23 11:12:27,762 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61717 for protocols [CORE,MQTT,AMQP,HORNETQ,STOMP,OPENWIRE]
2019-05-23 11:12:27,766 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
2019-05-23 11:12:27,766 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.8.0 [server1, nodeID=d8938882-7d25-11e9-9b96-106530a6cae3]
2019-05-23 11:12:27,993 INFO [org.apache.activemq.hawtio.branding.PluginContextListener] Initialized activemq-branding plugin
2019-05-23 11:12:28,044 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin
2019-05-23 11:12:28,390 INFO [io.hawt.HawtioContextListener] Initialising hawtio services
2019-05-23 11:12:28,401 INFO [io.hawt.system.ConfigManager] Configuration will be discovered via system properties
2019-05-23 11:12:28,402 INFO [io.hawt.jmx.JmxTreeWatcher] Welcome to hawtio 1.5.5 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
2019-05-23 11:12:28,404 INFO [io.hawt.jmx.UploadManager] Using file upload directory: /home/dmitry/work/tools/apache-artemis-2.6.2/broker1/tmp/uploads
2019-05-23 11:12:28,414 INFO [io.hawt.web.AuthenticationFilter] Starting hawtio authentication filter, JAAS realm: "activemq" authorized role(s): "amq" role principal classes: "org.apache.activemq.artemis.spi.core.security.jaas.RolePrincipal"
2019-05-23 11:12:28,441 INFO [io.hawt.web.JolokiaConfiguredAgentServlet] Jolokia overridden property: [key=policyLocation, value=file:/home/dmitry/work/tools/apache-artemis-2.6.2/broker1/etc/jolokia-access.xml]
2019-05-23 11:12:28,461 INFO [io.hawt.web.RBACMBeanInvoker] Using MBean [hawtio:type=security,area=jmx,rank=0,name=HawtioDummyJMXSecurity] for role based access control
2019-05-23 11:12:28,552 INFO [io.hawt.system.ProxyWhitelist] Initial proxy whitelist: [localhost, 127.0.0.1, 10.255.100.22, 10.255.100.32, bdk-laptop.lan.itecos.com]
2019-05-23 11:12:28,761 INFO [org.apache.activemq.artemis] AMQ241001: HTTP Server started at http://localhost:8261
2019-05-23 11:12:28,761 INFO [org.apache.activemq.artemis] AMQ241002: Artemis Jolokia REST API available at http://localhost:8261/console/jolokia
2019-05-23 11:12:28,762 INFO [org.apache.activemq.artemis] AMQ241004: Artemis Console available at http://localhost:8261/console
2019-05-23 11:12:33,417 INFO [org.apache.activemq.audit.base] AMQ601267: User anonymous is creating a core session on target resource ActiveMQServerImpl::serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3 [with parameters: [2298850f-7d2a-11e9-bf72-106530a6cae3, artemis, ****, 102400, RemotingConnectionImpl [ID=05d224cf, clientID=null, nodeID=d8938882-7d25-11e9-9b96-106530a6cae3, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection#27357d98[ID=05d224cf, local= /127.0.0.1:61716, remote=/127.0.0.1:57184]], true, true, true, false, null, org.apache.activemq.artemis.core.protocol.core.impl.CoreSessionCallback#1c98f1c2, true, OperationContextImpl [92544794] [minimalStore=9223372036854775807, storeLineUp=0, stored=0, minimalReplicated=9223372036854775807, replicationLineUp=0, replicated=0, paged=0, minimalPage=9223372036854775807, pageLineUp=0, errorCode=-1, errorMessage=null, executorsPending=0, executor=OrderedExecutor(tasks=[])], {}]]
2019-05-23 11:12:33,450 INFO [org.apache.activemq.audit.base] AMQ601267: User anonymous is creating a core session on target resource ActiveMQServerImpl::serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3 [with parameters: [229f89f0-7d2a-11e9-bf72-106530a6cae3, artemis, ****, 102400, RemotingConnectionImpl [ID=05d224cf, clientID=null, nodeID=d8938882-7d25-11e9-9b96-106530a6cae3, transportConnection=org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection#27357d98[ID=05d224cf, local= /127.0.0.1:61716, remote=/127.0.0.1:57184]], true, true, true, false, null, org.apache.activemq.artemis.core.protocol.core.impl.CoreSessionCallback#6e34b6c6, true, OperationContextImpl [191090876] [minimalStore=9223372036854775807, storeLineUp=0, stored=0, minimalReplicated=9223372036854775807, replicationLineUp=0, replicated=0, paged=0, minimalPage=9223372036854775807, pageLineUp=0, errorCode=-1, errorMessage=null, executorsPending=0, executor=OrderedExecutor(tasks=[])], {}]]
2019-05-23 11:12:33,489 INFO [org.apache.activemq.audit.base] AMQ601065: User artemis is creating a queue on target resource: ServerSessionImpl() [with parameters: [Address [name=activemq.notifications, id=0, routingTypes={MULTICAST}, autoCreated=false], notif.22a4ba11-7d2a-11e9-bf72-106530a6cae3.ActiveMQServerImpl_serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3, _AMQ_Binding_Type<>2 AND _AMQ_NotifType IN ('SESSION_CREATED','BINDING_ADDED','BINDING_REMOVED','CONSUMER_CREATED','CONSUMER_CLOSED','PROPOSAL','PROPOSAL_RESPONSE','UNPROPOSAL') AND _AMQ_Distance<1 AND (((_AMQ_Address NOT LIKE '$.artemis.internal.sf.%') AND (_AMQ_Address NOT LIKE 'activemq.management%'))) AND (_AMQ_NotifType = 'SESSION_CREATED' OR (_AMQ_Address NOT LIKE 'activemq.notifications%')), true, false, -1, false, false, false, -1, null, false, null, false, 0, -1, false, 0, 0, false]]
2019-05-23 11:12:33,537 INFO [org.apache.activemq.audit.base] AMQ601019: User anonymous is getting mbean info on target resource: org.apache.activemq.artemis.core.management.impl.QueueControlImpl#7865d90f []
2019-05-23 11:12:33,547 INFO [org.apache.activemq.audit.base] AMQ601265: User artemis is creating a core consumer on target resource ServerSessionImpl() [with parameters: [0, notif.22a4ba11-7d2a-11e9-bf72-106530a6cae3.ActiveMQServerImpl_serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3, null, 0, false, true, null]]
2019-05-23 11:12:33,582 INFO [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge#5c52e5b7 [name=$.artemis.internal.sf.k24-artemis-cluster.d6998d86-7d25-11e9-88ce-7c76357cb366, queue=QueueImpl[name=$.artemis.internal.sf.k24-artemis-cluster.d6998d86-7d25-11e9-88ce-7c76357cb366, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3], temp=false]#3bb7d355 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge#5c52e5b7 [name=$.artemis.internal.sf.k24-artemis-cluster.d6998d86-7d25-11e9-88ce-7c76357cb366, queue=QueueImpl[name=$.artemis.internal.sf.k24-artemis-cluster.d6998d86-7d25-11e9-88ce-7c76357cb366, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3], temp=false]#3bb7d355 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61716&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl#1052212904[nodeUUID=d8938882-7d25-11e9-9b96-106530a6cae3, connector=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=localhost, address=, server=ActiveMQServerImpl::serverUUID=d8938882-7d25-11e9-9b96-106530a6cae3])) [initialConnectors=[TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61716&host=localhost], discoveryGroupConfiguration=null]] is connected
2019-05-23 11:12:33,589 INFO [org.apache.activemq.audit.message] AMQ601500: User artemis is sending a core message on target resource: ServerSessionImpl() [with parameters: [TransactionImpl [xid=null, txID=97, xid=null, state=ACTIVE, createTime=1558595553475(Thu May 23 11:12:33 SAMT 2019), timeoutSeconds=300, nr operations = 0]#d735c32, CoreMessage[messageID=0,durable=false,userID=null,priority=4, timestamp=Thu May 23 11:12:33 SAMT 2019,expiration=0, durable=false, address=activemq.management,size=457,properties=TypedProperties[_AMQ_OperationName=sendQueueInfoToQueue,_AMQ_ResourceName=broker]]#24312193, true, false, RoutingContextImpl(Address=null, routingType=null, PreviousAddress=null previousRoute:null, reusable=null, version=0)
..................................................
]]
2019-05-23 11:12:33,592 INFO [org.apache.activemq.audit.base] AMQ601263: User artemis is handling a management message on target resource 229f89f0-7d2a-11e9-bf72-106530a6cae3 [with parameters: [TransactionImpl [xid=null, txID=97, xid=null, state=ACTIVE, createTime=1558595553475(Thu May 23 11:12:33 SAMT 2019), timeoutSeconds=300, nr operations = 0]#d735c32, CoreMessage[messageID=104,durable=false,userID=null,priority=4, timestamp=Thu May 23 11:12:33 SAMT 2019,expiration=0, durable=false, address=activemq.management,size=457,properties=TypedProperties[_AMQ_OperationName=sendQueueInfoToQueue,_AMQ_ResourceName=broker]]#24312193, true]]
My client code is written in Scala; however, I think it should be clear.
import java.util

import org.apache.activemq.artemis.api.core.TransportConfiguration
import org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy
import org.apache.activemq.artemis.api.core.client._
import org.apache.activemq.artemis.core.remoting.impl.netty.{NettyConnectorFactory, TransportConstants}
import org.slf4j.LoggerFactory
import org.testng.Assert
import org.testng.annotations.Test

import scala.collection.JavaConverters._

class MulticastClusterMsgRedistributionTest {
  private val log = LoggerFactory.getLogger(getClass)

  private val ADDRESS_NAME = "k24.payment"
  private val QUEUE_NAME1 = "k24.payment.bossbi"
  private val QUEUE_NAME2 = "k24.payment.other"

  @Test
  def write(): Unit = {
    val sessionFactory = createSessionFactory(61616)
    val session = sessionFactory.createSession(false, false, false)
    val producer = session.createProducer(ADDRESS_NAME)
    session.start()
    1.to(50).foreach { i =>
      val msg = session.createMessage(true)
      msg.writeBodyBufferString(s"msg $i")
      producer.send(msg)
      session.commit()
    }
    session.close()
    sessionFactory.close()
  }

  @Test
  def read(): Unit = {
    val sessionFactory = createSessionFactory(61716)
    val session = sessionFactory.createSession(false, false, false)
    val consumer = session.createConsumer(s"$ADDRESS_NAME::$QUEUE_NAME1")
    session.start()
    1.to(50).foreach { i =>
      val msg = consumer.receive(2000)
      Assert.assertNotNull(msg)
      val body = msg.getBodyBuffer.readString()
      Assert.assertEquals(body, s"msg $i")
      log.info(s"-> $body")
      msg.acknowledge()
      session.commit()
    }
  }

  private def createSessionFactory(targetServerPort: Int): ClientSessionFactory = {
    val server = transportConfiguration(targetServerPort)
    val loc = ActiveMQClient.createServerLocatorWithHA(Seq(server): _*)
    loc.setUseTopologyForLoadBalancing(true)
    loc.setConnectionLoadBalancingPolicyClassName(classOf[RoundRobinConnectionLoadBalancingPolicy].getName)
    loc.addClusterTopologyListener(topologyListener(loc))
    val sessionFactory = loc.createSessionFactory()
    log.info(s"Session factory attached to ${sessionFactory.getConnection.getRemoteAddress}")
    sessionFactory
  }

  private def transportConfiguration(port: Integer): TransportConfiguration = {
    val srvParams = new util.HashMap[String, Object]()
    srvParams.put(TransportConstants.HOST_PROP_NAME, "localhost")
    srvParams.put(TransportConstants.PORT_PROP_NAME, port)
    new TransportConfiguration(classOf[NettyConnectorFactory].getName, srvParams, s"srv_localhost_$port")
  }

  private def topologyListener(loc: ServerLocator) = {
    new ClusterTopologyListener {
      override def nodeUP(m: TopologyMember, last: Boolean): Unit = {
        log.info(s"\t artemis member up = ${memberState(m)}")
        log.info(s"cluster state:\n ${loc.getTopology.getMembers.asScala.map(memberState).mkString("\n")}")
      }

      override def nodeDown(eventUID: Long, nodeID: String): Unit = {
        log.info(s"\t artemis member down = $nodeID")
        log.info(s"cluster state:\n ${loc.getTopology.getMembers.asScala.map(memberState).mkString("\n")}")
      }
    }
  }

  private def memberState(m: TopologyMember): String = {
    val live = Option(m.getLive)
    val backup = Option(m.getBackup)
    val backupState = backup.map { b =>
      s"${b.getParams.get(TransportConstants.HOST_PROP_NAME)}:${b.getParams.get(TransportConstants.PORT_PROP_NAME)}"
    }
    val liveState = live.map { l =>
      s"${l.getParams.get(TransportConstants.HOST_PROP_NAME)}:${l.getParams.get(TransportConstants.PORT_PROP_NAME)}"
    }
    s"$liveState backup to $backupState"
  }
}
I would expect Artemis to redistribute messages from all the queues attached to the multicast address from the first server to the second when there are consumers on the second server.

Artemis HA and cluster not working

Below are the settings of the Artemis cluster (3 servers) in broker.xml:
<!-- Clustering configuration -->
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <broadcast-period>5000</broadcast-period>
      <jgroups-file>test-jgroups-file_ping.xml</jgroups-file>
      <jgroups-channel>active_broadcast_channel</jgroups-channel>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>
<discovery-groups>
   <discovery-group name="my-discovery-group">
      <jgroups-file>test-jgroups-file_ping.xml</jgroups-file>
      <jgroups-channel>active_broadcast_channel</jgroups-channel>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>STRICT</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
<ha-policy>
   <shared-store>
      <colocated>
         <backup-port-offset>100</backup-port-offset>
         <backup-request-retries>-1</backup-request-retries>
         <backup-request-retry-interval>2000</backup-request-retry-interval>
         <max-backups>2</max-backups>
         <request-backup>true</request-backup>
         <master>
            <failover-on-shutdown>true</failover-on-shutdown>
         </master>
         <slave>
            <scale-down/>
         </slave>
      </colocated>
   </shared-store>
</ha-policy>
The cluster and HA configuration are the same on all servers. The failover scenario which I am trying to understand and execute is as follows.
Start broker1, broker2, broker3 in the sequence mentioned. Here I can see from the admin UI that broker1 is backing up broker2 and broker3, broker2 is backing up broker1, and broker3 does not have any backup.
I wrote the program below to connect to the server:
public static void main(final String[] args) throws Exception {
    Connection connection = null;
    InitialContext initialContext = null;
    try {
        Properties properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
        properties.put("connectionFactory.ConnectionFactory",
                "(tcp://localhost:61616,tcp://localhost:61617,tcp://localhost:61618)?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=-1");
        properties.put("queue.queue/exampleQueue", "exampleQueue");
        // Step 1. Create an initial context to perform the JNDI lookup.
        initialContext = new InitialContext(properties);
        ConnectionFactory cf = (ConnectionFactory) initialContext.lookup("ConnectionFactory");
        // Step 2. Look up the JMS Queue object from JNDI
        Queue queue = (Queue) initialContext.lookup("queue/exampleQueue");
        // Step 3. Create a JMS Connection
        connection = cf.createConnection("admin", "admin");
        // Step 4. Start the connection
        connection.start();
        // Step 5. Create a JMS session with AUTO_ACKNOWLEDGE mode
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Step 6. Create a bytes message
        BytesMessage message = session.createBytesMessage();
        message.setStringProperty(InfoSearchEngine.QUERY_ID_HEADER_PARAM, "123");
        MessageConsumer consumer0 = session.createConsumer(queue);
        // Step 7. Send the message to the queue repeatedly
        while (true) {
            try {
                Thread.sleep(500);
                MessageProducer messageProducer = session.createProducer(queue);
                messageProducer.send(message);
                System.out.println("Sent message: " + message.getBodyLength());
            } catch (Exception e) {
                System.out.println("Exception - " + e.getLocalizedMessage());
            }
        }
    } finally {
        if (connection != null) {
            // Be sure to close our JMS resources!
            connection.close();
        }
        if (initialContext != null) {
            // Also close the initialContext!
            initialContext.close();
        }
    }
}
If I shut down broker1, the program diverts to broker2 and runs fine. If I shut down broker2, the program does not connect to broker3.
I expected broker3 to start taking up the requests since it was in the cluster.
I can see from the admin UI that broker1 is backing up broker2 and broker3, broker2 is backing up broker1, and broker3 does not have any backup.
Failover in Artemis only works between a live and a backup. In your scenario broker1 is backing up broker2, so when you shut down broker1, broker2 no longer has a backup; then, when you shut down broker2, no failover happens. You should specify <group-name> in your master and slave configurations so that your backups form in a more organized way and this kind of situation doesn't happen.
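As a sketch of what that pairing could look like: the <group-name> element is documented for the replication HA policy, so the sketch below uses replication rather than the shared-store policy from the question, and the group names are illustrative. The idea is that each colocated backup only pairs with the live server whose group it names.

<!-- Sketch for broker1 (group names are illustrative): its live server
     advertises itself as group-broker1, and its colocated backup will
     only pair with the live server in group-broker2. -->
<ha-policy>
   <replication>
      <colocated>
         <request-backup>true</request-backup>
         <master>
            <group-name>group-broker1</group-name>
         </master>
         <slave>
            <group-name>group-broker2</group-name>
         </slave>
      </colocated>
   </replication>
</ha-policy>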

Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)

Could someone provide an example of a messaging application working under a Wildfly 10 cluster (domain)? We are struggling with it, and given that it is a new technology, there is a terrible lack of resources.
Currently we have the following:
A domain consisting of two hosts (nodes) and three groups on each, i.e. six separate servers in the domain.
A relevant part of server configuration (in domain.xml):
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password}"/>
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
</security-setting>
<address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-siz
<http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
<http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
<broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<discovery-group name="dg-group1" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="TestQ" entries="java:jboss/exported/jms/queue/testq"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java
<pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" co
</server>
</subsystem>
The configuration is more or less default, except for the added TestQ queue.
The tcphq stack is defined in the JGroups configuration as follows:
<stack name="tcphq">
<transport type="TCP" socket-binding="jgroups-tcp-hq"/>
<protocol type="TCPPING">
<property name="initial_hosts">
dev1[7660],dev1[7810],dev1[7960],dev2[7660],dev2[7810],dev2[7960]
</property>
<property name="port_range">
0
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-hq-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
I have written a testing application consisting of a simple "server" (an MDB) and a client, as follows:
Server (MDB):
@MessageDriven(mappedName = "test", activationConfig = {
    @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/exported/jms/queue/testq"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class MessageServer implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            ObjectMessage msg = null;
            if (message instanceof ObjectMessage) {
                msg = (ObjectMessage) message;
            }
            System.out.print("The number in the message: " + msg.getIntProperty("count"));
        } catch (JMSException ex) {
            Logger.getLogger(MessageServer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
Client:
@Singleton
@Startup
public class ClientBean implements ClientBeanLocal {
    @Resource(mappedName = "java:jboss/exported/jms/RemoteConnectionFactory")
    private ConnectionFactory factory;

    @Resource(mappedName = "java:jboss/exported/jms/queue/testq")
    private Queue queue;

    @PostConstruct
    public void sendMessage() {
        Connection connection = null;
        try {
            connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            Message message = session.createObjectMessage();
            message.setIntProperty("count", 1);
            producer.send(message);
            System.out.println("Message sent.");
        } catch (JMSException ex) {
            Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
        } finally {
            try {
                if (connection != null) connection.close();
            } catch (JMSException ex) {
                Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
}
It actually works well if both the client and the server reside in the same group; in that case it even seems to communicate between hosts (nodes). However, if the server and client are in different groups, the MDB is not invoked. Moreover, it even seems that the MDB is invoked only if it resides in the group with 0 offset. When I moved the server MDB into a different group, it was not responding even if the client was in the same group.
I am a bit confused about JMS in Wildfly 10. There are a lot of examples and materials for older versions with HornetQ, but very few for Artemis. Could someone help? Many thanks.
Since I came across the same question, I'll post the answer that works for me.
Actually, as Miroslav answered on developer.jboss.org, the first thing to check is the socket-binding for "jgroups-tcp-hq" and the port-offset config on each server.
It should be <socket-binding name="jgroups-tcp-hq" ... port="7600"/>, with the port-offset set (e.g. with the jboss.socket.binding.port-offset property) to 60 on the dev1[7660] server, 210 on dev1[7810], and 360 on dev1[7960]. The same goes for the dev2 servers.
The second thing is the jboss.bind.address.private property.
Usually the default jgroups socket-binding refers to the "private" interface, e.g.
<socket-binding name="jgroups-tcp-hq" interface="private" port="7600"/>
So the "private" interface address must be provided with the jboss.bind.address.private property (e.g. jboss.bind.address.private=dev1), otherwise the ClusterConnectionBridge will not be established between the nodes!
See also this post for more details.
If communication between the ActiveMQ server instances is established, a log entry like this must appear in server.log: AMQ221027: Bridge ClusterConnectionBridge#63549ead [name=sf.my-cluster ...] is connected.
See also this answer.

Client-Side Thread Management ActiveMQ

I'm trying to set up client-side thread management on a Wildfly 10 AS for JMS using ActiveMQ. I have a queue, demoQueue, set up in standalone-full.xml. Currently the AS is creating endless threads, eating up memory until it eventually crashes.
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
</security-setting>
<address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
<http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
<http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="demoQueue" entries="java:/jms/queue/demoQueue java:jboss/exported/jms/queue/demoQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
<pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>
</server>
</subsystem>
I have it working with server-side thread management.
I've been trying to follow the instructions found here, so currently I'm using:
DEFAULT_CONNECTION_FACTORY=jms/RemoteConnectionFactory
DEFAULT_DESTINATION=java:/jms/queue/demoQueue
DEFAULT_USERNAME=mUserName
DEFAULT_PASSWORD=myPassword
INITIAL_CONTEXT_FACTORY=org.jboss.naming.remote.client.InitialContextFactory
PROVIDER_URL=http-remoting://myURL.com:8082
/** Lookup the queue object */
Queue queue = (Queue) context.lookup(props.getProperty("DEFAULT_DESTINATION"));
/** Lookup the queue connection factory */
ConnectionFactory connFactory = (ConnectionFactory) context.lookup(props.getProperty("DEFAULT_CONNECTION_FACTORY"));
try (javax.jms.Connection connection = connFactory.createConnection(props.getProperty("DEFAULT_USERNAME"), props.getProperty("DEFAULT_PASSWORD"));
     /** Create a queue session */
     Session queueSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
     /** Create a queue consumer */
     MessageConsumer msgConsumer = queueSession.createConsumer(queue)) {
    /** Set an asynchronous message listener */
    msgConsumer.setMessageListener(asyncReceiver);
    /** Set an asynchronous exception listener on the connection */
    connection.setExceptionListener(asyncReceiver);
    /** Start connection */
    connection.start();
}
Do I need to add the ClientSessionFactory configuration to my standalone-full.xml for client-side thread management?
I can't access .setUseGlobalPools(false) from the RemoteConnectionFactory.
I've tried adding:
ConnectionFactory myConnectionFactory = ActiveMQJMSClient.createConnectionFactory(myFactory);
but I can't seem to access the needed methods from my code.
useGlobalPools=false
scheduledThreadPoolMaxSize=10
I was using Wildfly 9, which implemented HornetQ, so some of my configuration may need changing to work properly with ActiveMQ.
I was shown a solution to this by a helpful user over on the JBoss forums: I used server-side thread management by modifying my XML configuration:
<connection-factory name="RemoteConnectionFactory"
entries="java:jboss/exported/jms/RemoteConnectionFactory"
connectors="http-connector" use-global-pools="false"
thread-pool-max-size="10"/>
Another Stack user pointed out, on another question I had, that there may be an issue with this on other Wildfly versions where this setting will not solve the problem. It did solve it for me, but there is another workaround: passing in the setting as a param during launch:
sh standalone.sh -c standalone-full.xml -Dactivemq.artemis.client.global.thread.pool.max.size=30
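On the client side, the equivalent settings are reachable if the factory is constructed directly instead of being looked up via JNDI; a minimal sketch using the Artemis JMS client (the URL and pool sizes are illustrative, not from the original setup):

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

import javax.jms.ConnectionFactory;

public class ClientThreadPoolSketch {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://myURL.com:61616"); // illustrative URL
        factory.setUseGlobalPools(false);          // per-factory pools instead of the global ones
        factory.setScheduledThreadPoolMaxSize(10); // bound the scheduled thread pool
        factory.setThreadPoolMaxSize(30);          // bound the general client thread pool
        ConnectionFactory cf = factory;            // then use it as a plain JMS ConnectionFactory
    }
}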

ActiveMQ producer XA transaction

I am trying to configure my custom ActiveMQ producer to use XA transactions. Unfortunately it doesn't work as expected: messages are sent to the queue immediately instead of waiting for the transaction to commit.
Here is the producer:
public class MyProducer {
    @Autowired
    @Qualifier("myTemplate")
    private JmsTemplate template;

    @Transactional
    public void sendMessage(final Order order) {
        template.send(new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                ObjectMessage message = new ActiveMQObjectMessage();
                message.setObject(order);
                return message;
            }
        });
    }
}
And this is template and connection factory configuration:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:/activemq/ConnectionFactory" />
</bean>
<bean id="myTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="jmsConnectionFactory"
p:defaultDestination-ref="myDestination"
p:sessionTransacted="true"
p:sessionAcknowledgeModeName="SESSION_TRANSACTED" />
As you can see I am using ConnectionFactory initiated via JNDI. It is configured on JBoss EAP 6.3:
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
<resource-adapters>
<resource-adapter id="activemq-rar.rar">
<module slot="main" id="org.apache.activemq.ra"/>
<transaction-support>XATransaction</transaction-support>
<config-property name="ServerUrl">
tcp://localhost:61616
</config-property>
<connection-definitions>
<connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/activemq/ConnectionFactory" enabled="true" use-java-context="true" pool-name="ActiveMQConnectionFactoryPool" use-ccm="true">
<xa-pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>20</max-pool-size>
</xa-pool>
</connection-definition>
</connection-definitions>
</resource-adapter>
</resource-adapters>
</subsystem>
When I debug, I can see that the JmsTemplate is configured properly:
it has a reference to a valid connection factory, org.apache.activemq.ra.ActiveMQConnectionFactory
the connection factory has a reference to a valid transaction manager: org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl
session transacted is set to true
session acknowledge mode is set to SESSION_TRANSACTED (0)
Do you have any idea why these messages are pushed to the queue immediately, and why they are not removed when the transaction is rolled back (e.g. when I throw an exception at the end of the sendMessage method)?
You need to show the rest of your configuration (transaction manager etc.).
It looks like you don't have transactions enabled in the application context, so the template is committing the transaction itself.
Do you have <tx:annotation-driven/> in the context?
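For reference, a minimal sketch of the missing piece the answer is hinting at (the bean id is illustrative, and the tx namespace must be declared in the context; on JBoss, Spring's JtaTransactionManager auto-detects the container's JTA transaction manager):

<!-- Sketch: drive @Transactional through the container's JTA transaction manager -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager"/>
<tx:annotation-driven transaction-manager="transactionManager"/>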