Artemis HA and cluster not working - activemq-artemis

Below are the settings of the Artemis cluster (3 servers) in broker.xml:
<!-- Clustering configuration -->
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<broadcast-period>5000</broadcast-period>
<jgroups-file>test-jgroups-file_ping.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<jgroups-file>test-jgroups-file_ping.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<colocated>
<backup-port-offset>100</backup-port-offset>
<backup-request-retries>-1</backup-request-retries>
<backup-request-retry-interval>2000</backup-request-retry-interval>
<max-backups>2</max-backups>
<request-backup>true</request-backup>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
<slave>
<scale-down/>
</slave>
</colocated>
</shared-store>
</ha-policy>
The cluster and HA configuration is the same on all servers. The failover scenario I am trying to understand and execute is as follows.
Start broker1, broker2, and broker3 in that order. From the admin UI I can see that broker1 is backing up broker2 and broker3, broker2 is backing up broker1, and broker3 does not have any backup.
I wrote the program below to connect to the server:
public static void main(final String[] args) throws Exception {
Connection connection = null;
InitialContext initialContext = null;
try {
Properties properties = new Properties();
properties.put(Context.INITIAL_CONTEXT_FACTORY,
"org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
properties.put("connectionFactory.ConnectionFactory",
"(tcp://localhost:61616,tcp://localhost:61617,tcp://localhost:61618)?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=-1");
properties.put("queue.queue/exampleQueue", "exampleQueue");
// Step 1. Create an initial context to perform the JNDI lookup.
initialContext = new InitialContext(properties);
ConnectionFactory cf = (ConnectionFactory) initialContext.lookup("ConnectionFactory");
// Step 2. Look-up the JMS Queue object from JNDI
Queue queue = (Queue) initialContext.lookup("queue/exampleQueue");
// Step 3. Create a JMS Connection
connection = cf.createConnection("admin", "admin");
// Step 4. Start the connection
connection.start();
// Step 5. Create a JMS session with AUTO_ACKNOWLEDGE mode
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Step 6. Create a bytes message
BytesMessage message = session.createBytesMessage();
message.setStringProperty(InfoSearchEngine.QUERY_ID_HEADER_PARAM, "123");
MessageConsumer consumer0 = session.createConsumer(queue);
// Step 7. Create a JMS message producer (once, outside the send loop)
MessageProducer messageProducer = session.createProducer(queue);
// Step 8. Send the message to the queue repeatedly
while (true) {
try {
Thread.sleep(500);
messageProducer.send(message);
System.out.println("Sent message: " + message.getBodyLength());
} catch (Exception e) {
System.out.println("Exception - " + e.getLocalizedMessage());
}
}
} finally {
if (connection != null) {
// Be sure to close our JMS resources!
connection.close();
}
if (initialContext != null) {
// Also close the initialContext!
initialContext.close();
}
}
}
If I shut down broker1, the program fails over to broker2 and runs fine. If I then shut down broker2, the program does not connect to broker3.
I expected broker3 to start taking the requests since it is in the cluster.

I can see from the admin UI that broker1 is backing up broker2 and broker3, broker2 is backing up broker1, and broker3 does not have any backup.
Failover in Artemis only works between a live broker and its backup. In your scenario broker1 is backing up broker2, so when you shut down broker1, broker2 no longer has a backup; when you then shut down broker2, no failover can happen. You should specify <group-name> in your master and slave configurations so that your backups form in a more organized way and this kind of situation doesn't arise.
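For illustration, a hedged sketch of how <group-name> pairs lives and backups. Note that this sketch uses the replication flavour of the colocated policy, where <group-name> is documented, and the group name "purple" is just a placeholder; check the HA documentation and broker XSD for your Artemis version before adapting it to a shared-store setup:
<ha-policy>
   <replication>
      <colocated>
         <backup-port-offset>100</backup-port-offset>
         <request-backup>true</request-backup>
         <max-backups>2</max-backups>
         <master>
            <!-- a backup only pairs with a live that declares the same group-name -->
            <group-name>purple</group-name>
         </master>
         <slave>
            <group-name>purple</group-name>
            <scale-down/>
         </slave>
      </colocated>
   </replication>
</ha-policy>
Each set of brokers that should back each other up would share one group name (e.g. "purple" on broker1 and broker2, another name elsewhere), so backups form predictably instead of on a first-come basis.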

Related

Consumer with wildcard syntax

I'm using ActiveMQ Artemis 2.17.0. I want to create a consumer with the wildcard syntax that consumes messages from multiple addresses. I wrote the consumer below, but it only consumes from the address news.europe.# itself, not from the addresses matching the wildcard syntax (news.europe.sport, news.europe.politics, etc.). What am I doing wrong?
Scenario:
Start Artemis broker
Send 2 messages with the producer in news.europe.sport, news.europe.politics
Start the consumer
Expected behavior:
2 messages received by the consumer
Observed behavior
messages remain in queues
the address news.europe.# has an active consumer
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import javax.jms.*;
public class ArtemisConsumer {
public static void main(String[] args) throws JMSException, InterruptedException {
String brokerURL = "tcp://localhost:61716";
String queueName = "news.europe.#";
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerURL);
connectionFactory.setUser("user");
connectionFactory.setPassword("pass");
Connection connection = connectionFactory.createConnection();
connection.start();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
Destination destination = session.createQueue(queueName);
MessageConsumer consumer = session.createConsumer(destination);
consumer.setMessageListener(new ConsumerMessageListener("Consumer"));
Thread.sleep(60000);
session.commit();
session.close();
connection.close();
}
}
broker.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration xmlns="urn:activemq">
<core xmlns="urn:activemq:core">
<name>QMA</name>
<max-disk-usage>100</max-disk-usage>
<configuration-file-refresh-period>9223372036854775807</configuration-file-refresh-period>
<bindings-directory>${ARTEMISMQ_DATA}/bindings</bindings-directory>
<journal-directory>${ARTEMISMQ_DATA}/journal</journal-directory>
<large-messages-directory>${ARTEMISMQ_DATA}/largemessages</large-messages-directory>
<paging-directory>.${ARTEMISMQ_DATA}/paging</paging-directory>
<cluster-user>user</cluster-user>
<cluster-password>password</cluster-password>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">tcp://0.0.0.0:61716</acceptor>
<acceptor name="in-vm">vm://0</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission roles="user-group" type="createNonDurableQueue"/>
<permission roles="user-group" type="deleteNonDurableQueue"/>
<permission roles="user-group" type="createDurableQueue"/>
<permission roles="user-group" type="deleteDurableQueue"/>
<permission roles="user-group" type="createAddress"/>
<permission roles="user-group" type="deleteAddress"/>
<permission roles="user-group" type="consume"/>
<permission roles="user-group" type="browse"/>
<permission roles="user-group" type="send"/>
<permission roles="user-group" type="manage"/>
</security-setting>
</security-settings>
</core>
</configuration>
producer
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import javax.jms.*;
public class ArtemisProducer {
public static void main(final String[] args) throws Exception {
String brokerURL = "tcp://localhost:61716";
ActiveMQConnectionFactory connFactory = new ActiveMQConnectionFactory(brokerURL);
connFactory.setUser("user");
connFactory.setPassword("password");
final Connection conn = connFactory.createConnection();
conn.start();
final Session sess = conn.createSession(true, Session.SESSION_TRANSACTED);
final Destination dest = sess.createQueue("news.europe.politics");
final MessageProducer prod = sess.createProducer(dest);
final Message msg = sess.createTextMessage("Sample message");
prod.send(msg);
sess.commit();
conn.close();
}
}
You are seeing the expected behavior. The feature you're using is a wildcard address. In short, any message sent to a matching address will also be routed to the wildcard address (and to any queues bound to that address, according to their routing semantics, i.e. anycast or multicast).
In your case the wildcard address hasn't been created yet when you send your messages, so there is no way for those messages to be routed to it.
FWIW, you can see this feature in action in the topic-hierarchies example which ships with the broker in the examples/features/standard directory.
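For illustration, a minimal sketch (not a definitive fix) of the ordering that makes the wildcard binding exist before anything is sent; the broker URL and credentials are copied from the question, and the point is simply that the wildcard consumer is created before the producer sends:
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import javax.jms.*;
public class WildcardFirstExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61716");
        cf.setUser("user");
        cf.setPassword("pass");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // 1. Create the wildcard consumer first so the news.europe.# binding exists on the broker.
        MessageConsumer consumer = session.createConsumer(session.createQueue("news.europe.#"));
        // 2. Only then send to a concrete matching address; this message can now also be
        //    routed to the wildcard binding.
        MessageProducer producer = session.createProducer(session.createQueue("news.europe.politics"));
        producer.send(session.createTextMessage("Sample message"));
        Message received = consumer.receive(5000);
        System.out.println("Received: " + received);
        connection.close();
    }
}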

Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)

Could someone provide an example of a messaging application working under a WildFly 10 cluster (domain)? We are struggling with it, and given that it is a new technology, there is a terrible lack of resources.
Currently we have the following:
A domain consisting of two hosts (nodes) and three groups on each, i.e. six separate servers in the domain.
A relevant part of server configuration (in domain.xml):
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password}"/>
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
</security-setting>
<address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-siz
<http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
<http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
<broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<discovery-group name="dg-group1" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="TestQ" entries="java:jboss/exported/jms/queue/testq"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java
<pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" co
</server>
</subsystem>
The configuration is more or less the default, except for the added TestQ queue.
The tcphq stack is defined in the JGroups configuration as follows:
<stack name="tcphq">
<transport type="TCP" socket-binding="jgroups-tcp-hq"/>
<protocol type="TCPPING">
<property name="initial_hosts">
dev1[7660],dev1[7810],dev1[7960],dev2[7660],dev2[7810],dev2[7960]
</property>
<property name="port_range">
0
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-hq-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
I have written a test application consisting of a simple "server" (an MDB) and a client, as follows:
Server (MDB):
@MessageDriven(mappedName = "test", activationConfig = {
@ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/exported/jms/queue/testq"),
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class MessageServer implements MessageListener {
@Override
public void onMessage(Message message) {
try {
ObjectMessage msg = null;
if (message instanceof ObjectMessage) {
msg = (ObjectMessage) message;
}
System.out.print("The number in the message: "+ msg.getIntProperty("count"));
} catch (JMSException ex) {
Logger.getLogger(MessageServer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
Client:
@Singleton
@Startup
public class ClientBean implements ClientBeanLocal {
@Resource(mappedName = "java:jboss/exported/jms/RemoteConnectionFactory")
private ConnectionFactory factory;
@Resource(mappedName = "java:jboss/exported/jms/queue/testq")
private Queue queue;
@PostConstruct
public void sendMessage() {
Connection connection = null;
try {
connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(queue);
Message message = session.createObjectMessage();
message.setIntProperty("count", 1);
producer.send(message);
System.out.println("Message sent.");
} catch (JMSException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
} catch (NamingException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
} finally {
try {
if (connection != null) connection.close();
} catch (JMSException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
It actually works well if both the client and the server reside in the same group; in that case it even seems to communicate between hosts (nodes). However, if the server and the client are in different groups, the MDB is not invoked. Moreover, it seems the MDB is invoked only if it resides in the group with offset 0. When I moved the server MDB into a different group, it was not responding even when the client was in the same group.
I am a bit confused about JMS in WildFly 10. There are a lot of examples and materials for older versions with HornetQ, but very few for Artemis. Could someone help? Many thanks.
Since I came here with the same question, here is the answer that works for me.
As Miroslav answered on developer.jboss.org, the first thing to check is the socket-binding for "jgroups-tcp-hq" and the port-offset configuration on each server.
It should be <socket-binding name="jgroups-tcp-hq" ... port="7600"/>, with the port-offset (set e.g. with the jboss.socket.binding.port-offset property) being 60 on the dev1[7660] server, 210 on dev1[7810], and 360 on dev1[7960]. The same applies to the dev2 servers.
The second thing is the jboss.bind.address.private property.
The default jgroups socket-binding usually refers to the "private" interface, e.g.
<socket-binding name="jgroups-tcp-hq" interface="private" port="7600"/>
So the "private" interface address must be provided via the jboss.bind.address.private property (e.g. jboss.bind.address.private=dev1); otherwise the ClusterConnectionBridge will not be established between the nodes!
See also this post for more details.
If communication between the ActiveMQ server instances is established, an entry like this appears in server.log: AMQ221027: Bridge ClusterConnectionBridge#63549ead [name=sf.my-cluster ...] is connected.
See also this answer.

Is it possible to explicitly dictate which EJB Receiver is used within JBoss EAP 6?

I am trying to make remote calls to multiple servers running on one instance of JBoss EAP 6 from a client server running on a separate instance of JBoss EAP 6. I have configured JBoss-to-JBoss remote communication and have read about scoped EJB client contexts, but the two do not appear to be compatible. Currently I have two EJB receivers configured (one for each remote server), but when I make a remote call, the initialized Context appears to select the EJB receiver it uses at random. It seems reasonable that I could force which EJB receiver is used when the Context is initialized, given the remote IP and port or the remote connection name, but alas, I don't know the secret handshake.
host.xml:
<security-realm name="ejb-security-realm">
<server-identities>
<secret value="ZWpiUEBzc3cwcmQ="/>
</server-identities>
</security-realm>
domain.xml:
<subsystem xmlns="urn:jboss:domain:remoting:1.2">
<connector name="remoting-connector" socket binding="remoting" security-realm="ApplicationRealm"/>
<outbound-connections>
<remote-outbound-connection name="remote-ejb-connection" outbound-socket-binding-ref="mpg1-app1" username="ejbuser" security-realm="ejb-security-realm">
<properties>
<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
<remote-outbound-connection name="remote-ejb-connection2" outbound-socket-binding-ref="mpg2-app1" username="ejbuser" security-realm="ejb-security-realm">
<properties>
<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
</outbound-connections>
</subsystem>
...
<socket-binding-group name="full-sockets" default-interface="public">
...
<socket-binding name="remoting" port="44447"/>
<outbound-socket-binding name="mpg1-app1">
<remote-destination host="localhost" port="44452"/>
</outbound-socket-binding>
<outbound-socket-binding name="mpg2-app1">
<remote-destination host="localhost" port="44453"/>
</outbound-socket-binding>
</socket-binding-group>
jboss-ejb-client.xml
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.0">
<client-context>
<ejb-receivers>
<remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection"/>
<remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection2"/>
</ejb-receivers>
</client-context>
</jboss-ejb-client>
The remote call:
Context ctx = null;
final Properties props = new Properties();
props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
try {
ctx = new InitialContext(props);
MyInterfaceObject ourInterface = (MyInterfaceObject) ctx.lookup("ejb:" + appName + "/" + moduleName + "/" + beanName + "!"
+ viewClassName);
ourInterface.refreshProperties(); // remote method call
}
Any help would be greatly appreciated!
Have you tried a cluster-node-selector?
jboss-ejb-client.xml
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.2"> <!-- descriptor version 1.2 assumed for <clusters> support -->
<client-context>
<!-- If an outbound connection connects to a cluster, a list of members is provided after a successful connection.
To connect to those nodes this cluster element must be defined.
-->
<clusters>
<!-- cluster of remote-ejb-connection-1 -->
<cluster name="ejb" security-realm="ejb-security-realm-1" username="test" cluster-node-selector="org.jboss.as.quickstarts.ejb.clients.selector.AllClusterNodeSelector">
<connection-creation-options>
<property name="org.xnio.Options.SSL_ENABLED" value="false" />
<property name="org.xnio.Options.SASL_POLICY_NOANONYMOUS" value="false" />
</connection-creation-options>
</cluster>
</clusters>
</client-context>
</jboss-ejb-client>
Selector Implementation
@Override
public String selectNode(final String clusterName, final String[] connectedNodes, final String[] availableNodes) {
if (availableNodes.length == 1) {
return availableNodes[0];
}
// Go through all the nodes and point to the one you want
for (int i = 0; i < availableNodes.length; i++) {
if (availableNodes[i].contains("someoneYouInterestIn")) {
return availableNodes[i];
}
}
final Random random = new Random();
final int randomSelection = random.nextInt(availableNodes.length);
return availableNodes[randomSelection];
}
For more information you can check
https://access.redhat.com/documentation/en/red-hat-jboss-enterprise-application-platform/7.0/developing-ejb-applications/chapter-8-clustered-enterprise-javab

How to configure HornetQ client with standalone server cluster (configured using JGroups TCP)

I have configured 2 HornetQ standalone servers in clustered mode using JGroups (TCP), as I can't use the default UDP. Below is the configuration.
hornetq-configuration.xml:
<broadcast-groups>
<broadcast-group name="bg-group1">
<jgroups-file>jgroups-tcp.xml</jgroups-file>
<jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<jgroups-file>jgroups-tcp.xml</jgroups-file>
<jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
jgroups-tcp.xml:
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:org:jgroups"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<TCP bind_port="7800"
recv_buf_size="${tcp.recv_buf_size:5M}"
send_buf_size="${tcp.send_buf_size:5M}"
max_bundle_size="64K"
max_bundle_timeout="30"
use_send_queues="true"
sock_conn_timeout="300"
timer_type="new3"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="500"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="8"
thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="10000"
thread_pool.rejection_policy="discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="1"
oob_thread_pool.max_threads="8"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="discard"/>
<TCPPING
initial_hosts="${jgroups.tcpping.initial_hosts:hornetq-server1-ip[7800], hornetq-server1-ip[7900], hornetq-server2-ip[7800], hornetq-server2-ip[7900]}"
port_range="1"/>
<MERGE3 min_interval="10000"
max_interval="30000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="3" />
<VERIFY_SUSPECT timeout="1500" />
<BARRIER />
<pbcast.NAKACK2 use_mcast_xmit="false"
discard_delivered_msgs="true"/>
<UNICAST3 />
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="4M"/>
<pbcast.GMS print_local_addr="true" join_timeout="2000"
view_bundling="true"/>
<MFC max_credits="2M"
min_threshold="0.4"/>
<FRAG2 frag_size="60K" />
<!--RSVP resend_interval="2000" timeout="10000"/-->
<pbcast.STATE_TRANSFER/>
</config>
The servers work fine, i.e. if the live goes down, the backup takes its place.
Client producer:
TransportConfiguration[] servers = new TransportConfiguration[2];
List<Configuration> configurations = ... // user defined class
for (int i = 0; i < configurations.size(); i++) {
Map<String, Object> map = new HashMap<>();
map.put("host", configurations.get(i).getHost());
map.put("port", configurations.get(i).getPort());
servers[i] = new TransportConfiguration(NettyConnectorFactory.class.getName(), map);
}
ServerLocator locator = HornetQClient.createServerLocatorWithHA(servers);
locator.setReconnectAttempts(5);
factory = locator.createSessionFactory();
session = factory.createSession();
producer = session.createProducer(queueName);
Client Consumer:
ClientSessionFactory factory = locator.createSessionFactory();
for (int i = 1; i <= nReceivers; i++) {
ClientSession session = factory.createSession(true, true, 1);
sessions.add(session);
if (i == 1) {
Thread.sleep(10000); // waiting to download cluster information
}
session.start();
ClientConsumer consumer = session.createConsumer(queueName);
consumer.setMessageHandler(handler);
}
Issue:
The client (producer) doesn't automatically fail over to another server if the server it is connected to goes down while it is sending messages.
The sessions created using the same client factory always connect to one server (contrary to the documentation: http://docs.jboss.org/hornetq/2.3.0.beta1/docs/user-manual/html/clusters.html#clusters.client.loadbalancing).
So it seems the client never gets the cluster information. I also can't find any documentation on configuring a client to use JGroups (is that needed?) to connect to a HornetQ cluster.
Any help is appreciated.
I figured out that I can use JGroups on the client side too.
A detailed solution can be found here.
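For illustration, a rough sketch of client-side discovery over JGroups. The class names and constructor signatures below are an assumption based on the HornetQ 2.4.x client API and should be verified against the HornetQ jars in use; the jgroups-tcp.xml file and channel name are the ones from the broker configuration above:
import org.hornetq.api.core.DiscoveryGroupConfiguration;
import org.hornetq.api.core.JGroupsBroadcastGroupConfiguration;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;
public class JGroupsDiscoveryClient {
    public static void main(String[] args) throws Exception {
        // The client joins the same JGroups channel the brokers broadcast on,
        // so it can discover the cluster topology instead of using a static server list.
        JGroupsBroadcastGroupConfiguration endpoint =
                new JGroupsBroadcastGroupConfiguration("jgroups-tcp.xml", "hornetq_broadcast_channel");
        DiscoveryGroupConfiguration discovery =
                new DiscoveryGroupConfiguration(10000, 10000, endpoint);
        ServerLocator locator = HornetQClient.createServerLocatorWithHA(discovery);
        locator.setReconnectAttempts(5);
        ClientSessionFactory factory = locator.createSessionFactory();
        // ... create sessions, producers and consumers as in the snippets above ...
        factory.close();
        locator.close();
    }
}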

ActiveMQ producer XA transaction

I am trying to configure my custom ActiveMQ producer to use XA transactions. Unfortunately it doesn't work as expected: messages are sent to the queue immediately instead of waiting for the transaction to commit.
Here is the producer:
public class MyProducer {
@Autowired
@Qualifier("myTemplate")
private JmsTemplate template;
@Transactional
public void sendMessage(final Order order) {
template.send(new MessageCreator() {
public Message createMessage(Session session) throws JMSException {
ObjectMessage message = new ActiveMQObjectMessage();
message.setObject(order);
return message;
}
});
}
}
And this is the template and connection factory configuration:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:/activemq/ConnectionFactory" />
</bean>
<bean id="myTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="jmsConnectionFactory"
p:defaultDestination-ref="myDestination"
p:sessionTransacted="true"
p:sessionAcknowledgeModeName="SESSION_TRANSACTED" />
As you can see, I am using a ConnectionFactory obtained via JNDI. It is configured on JBoss EAP 6.3:
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
<resource-adapters>
<resource-adapter id="activemq-rar.rar">
<module slot="main" id="org.apache.activemq.ra"/>
<transaction-support>XATransaction</transaction-support>
<config-property name="ServerUrl">
tcp://localhost:61616
</config-property>
<connection-definitions>
<connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/activemq/ConnectionFactory" enabled="true" use-java-context="true" pool-name="ActiveMQConnectionFactoryPool" use-ccm="true">
<xa-pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>20</max-pool-size>
</xa-pool>
</connection-definition>
</connection-definitions>
</resource-adapter>
</resource-adapters>
</subsystem>
When I debug I can see that the JmsTemplate is configured properly:
it has a reference to a valid connection factory, org.apache.activemq.ra.ActiveMQConnectionFactory
the connection factory has a reference to a valid transaction manager: org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl
session transacted is set to true
session acknowledge mode is set to SESSION_TRANSACTED (0)
Do you have any idea why the messages are pushed to the queue immediately and are not removed when the transaction is rolled back (e.g. when I throw an exception at the end of the "sendMessage" method)?
You need to show the rest of your configuration (transaction manager, etc.).
It looks like you don't have transactions enabled in the application context, so the template is committing the transaction itself.
Do you have <tx:annotation-driven/> in the context?
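For reference, a minimal sketch of what enabling annotation-driven transactions against the container's JTA transaction manager could look like in the Spring context; the bean name is an illustrative assumption:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/tx
                           http://www.springframework.org/schema/tx/spring-tx.xsd">
    <!-- Delegate to the container's JTA transaction manager so the XA-capable
         connection factory enlists in the same global transaction. -->
    <bean id="transactionManager"
          class="org.springframework.transaction.jta.JtaTransactionManager"/>
    <!-- Makes @Transactional on MyProducer.sendMessage() actually start and commit a transaction. -->
    <tx:annotation-driven transaction-manager="transactionManager"/>
</beans>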