I'm trying to set up client-side thread management on a Wildfly 10 AS for JMS using ActiveMQ. I have a queue, DemoQueue, set up in standalone-full.xml. Currently the AS is creating endless threads, eating up memory until it eventually crashes.
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
</security-setting>
<address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
<http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
<http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="demoQueue" entries="java:/jms/queue/demoQueue java:jboss/exported/jms/queue/demoQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
<pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>
</server>
</subsystem>
I have it working with server-side thread management.
I've been trying to follow the instructions found here, so currently I'm using:
DEFAULT_CONNECTION_FACTORY=jms/RemoteConnectionFactory
DEFAULT_DESTINATION=java:/jms/queue/demoQueue
DEFAULT_USERNAME=mUserName
DEFAULT_PASSWORD=myPassword
INITIAL_CONTEXT_FACTORY=org.jboss.naming.remote.client.InitialContextFactory
PROVIDER_URL=http-remoting://myURL.com:8082
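For completeness, the context used in the lookups below is built from those properties roughly like this (a sketch; it assumes the key/value pairs above have been loaded into props):
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

// Build the JNDI environment from the jndi.properties values above
Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, props.getProperty("INITIAL_CONTEXT_FACTORY"));
env.put(Context.PROVIDER_URL, props.getProperty("PROVIDER_URL"));
env.put(Context.SECURITY_PRINCIPAL, props.getProperty("DEFAULT_USERNAME"));
env.put(Context.SECURITY_CREDENTIALS, props.getProperty("DEFAULT_PASSWORD"));
Context context = new InitialContext(env);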
/** Lookup the queue object */
Queue queue = (Queue) context.lookup(props.getProperty("DEFAULT_DESTINATION"));
/** Lookup the queue connection factory */
ConnectionFactory connFactory = (ConnectionFactory) context.lookup(props.getProperty("DEFAULT_CONNECTION_FACTORY"));
try (javax.jms.Connection connection = connFactory.createConnection(props.getProperty("DEFAULT_USERNAME"), props.getProperty("DEFAULT_PASSWORD"));
/** Create a queue session */
Session queueSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
/** Create a queue consumer */
MessageConsumer msgConsumer = queueSession.createConsumer(queue)) {
/** Set an asynchronous message listener */
msgConsumer.setMessageListener(asyncReceiver);
/** Set an asynchronous exception listener on the connection */
connection.setExceptionListener(asyncReceiver);
/** Start connection */
connection.start();
}
Do I need to add the ClientSessionFactory configuration to my "standalone-full.xml" for client-side thread management?
I can't access .setUseGlobalPools(false) from the RemoteConnectionFactory.
I've tried adding:
ConnectionFactory myConnectionFactory = ActiveMQJMSClient.createConnectionFactory(myFactory);
I can't seem to access the needed methods from my code:
useGlobalPools=false
scheduledThreadPoolMaxSize=10
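If it helps, one way to reach those setters from client code is to build the Artemis connection factory programmatically instead of looking it up via JNDI. A rough sketch (assumptions: the Artemis client jars are on the classpath, and the host/port/httpUpgradeEnabled transport params must match the server's HTTP acceptor):
import java.util.HashMap;
import java.util.Map;
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.jms.ActiveMQJMSClient;
import org.apache.activemq.artemis.api.jms.JMSFactoryType;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Transport params pointing at the server's HTTP acceptor (values are assumptions)
Map<String, Object> params = new HashMap<>();
params.put("host", "myURL.com");
params.put("port", 8082);
params.put("httpUpgradeEnabled", true); // needed when connecting through WildFly's HTTP port

TransportConfiguration transport =
        new TransportConfiguration(NettyConnectorFactory.class.getName(), params);

// The concrete Artemis factory type exposes the client-side thread pool setters
ActiveMQConnectionFactory cf =
        ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transport);
cf.setUseGlobalPools(false);
cf.setScheduledThreadPoolMaxSize(10);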
I was using Wildfly 9, which implemented HornetQ, so some of my configuration may need changing to work properly with ActiveMQ.
I was shown a solution to this by a helpful user over on the JBoss forums: I used server-side thread management by modifying my XML configuration:
<connection-factory name="RemoteConnectionFactory"
entries="java:jboss/exported/jms/RemoteConnectionFactory"
connectors="http-connector" use-global-pools="false"
thread-pool-max-size="10"/>
Another Stack user pointed out on another question of mine that there may be an issue with this on other Wildfly versions, where this setting will not solve the problem. It did solve it for me, but there is another workaround: passing the setting in as a param during launch:
sh standalone.sh -c standalone-full.xml -Dactivemq.artemis.client.global.thread.pool.max.size=30
Related
I have a deployed queue on JBoss EAP 7.1 (defined in standalone.xml) that is managed by ActiveMQ Artemis (module xmlns="urn:jboss:domain:messaging-activemq:2.0"). At the same time, I want to connect to that queue with the JMS plugin in Logstash (via JNDI) to consume the messages sent by the app deployed on my JBoss server, but when I try I get a NameNotFoundException for the connection factory (the jndi_name property in the Logstash conf file).
I tried to find the default connection factories' JNDI entries, but even then it didn't work.
So I want to know: are the connection factories created by the message broker, or do they exist by default for the client? If I'm not mistaken (and correct me if I'm wrong), the connection factory is the only way to connect to the broker and the queue, so they have to exist by default for the client.
I hope you can help me, guys. Here is my Logstash conf file:
input {
jms {
# Logstash Configuration Settings.
include_header => false
include_properties => false
include_body => true
use_jms_timestamp => false
destination => "AuditTrailMDB"
pub_sub => false
# JNDI Settings
jndi_name => 'queueConnectionFactory'
jndi_context => {
'java.naming.factory.initial' => 'org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory'
'java.naming.security.principal' => 'admin'
'java.naming.provider.url' => 'tcp://localhost:5445?type=QUEUE_CF'
'java.naming.security.credentials' => 'admin'
}
# Jar files to be imported
require_jars=> ['/home/Alternant/logstash/dependencies/jboss-client.jar',
'/home/Alternant/logstash/dependencies/artemis-ra.jar',
'/home/Alternant/logstash/dependencies/ironjacamar-core-impl.jar',
'/home/Alternant/logstash/dependencies/ironjacamar-core-api.jar',
'/home/Alternant/logstash/dependencies/ironjacamar-common-api.jar']
}
}
output {
stdout {}
}
and here is my queue definition in the standalone.xml:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:2.0">
<server name="default">
<security-setting name="#">
<role consume="true" create-non-durable-queue="true" delete-non-durable-queue="true" name="guest" send="true"/>
</security-setting>
<address-setting dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" message-counter-history-day-limit="10" name="#" page-size-bytes="2097152"/>
<http-connector endpoint="http-acceptor" name="http-connector" socket-binding="http"/>
<http-connector endpoint="http-acceptor-throughput" name="http-connector-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<remote-connector name="netty" socket-binding="remote-messaging"/>
<http-acceptor http-listener="default" name="http-acceptor"/>
<http-acceptor http-listener="default" name="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<remote-acceptor name="netty" socket-binding="messaging"/>
<jms-queue entries="java:/jms/queue/ExpiryQueue" name="ExpiryQueue"/>
<jms-queue entries="java:/jms/queue/DLQ" name="DLQ"/>
<jms-queue entries="queue/clientPending" name="clientPending"/>
<jms-queue name="AuditTrailMDB" entries="queue/AuditTrailMDB"/>
<connection-factory connectors="in-vm" entries="java:/ConnectionFactory" name="InVmConnectionFactory"/>
<pooled-connection-factory connectors="netty" entries="java:jboss/exported/jms/RemoteConnectionFactory" name="RemoteConnectionFactory" user="admin" password="admin"/>
<pooled-connection-factory connectors="in-vm" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" name="activemq-ra" transaction="xa"/>
<connection-factory connectors="in-vm" entries="/ApplicationsQueueConnectionFactory" name="ApplicationsQueueConnectionFactory"/>
</server>
</subsystem>
...
<socket-binding name="messaging" port="5445"/>
...
log:
[WARN ][logstash.inputs.jms ][main] JMS Consumer Died {:exception=>"Java::JavaxNaming::NameNotFoundException", :exception_message=>"queueConnectionFactory", :backtrace=>["org.apache.activemq.artemis.jndi.ReadOnlyContext.lookup(org/apache/activemq/artemis/jndi/ReadOnlyContext.java:236)", "javax.naming.InitialContext.lookup(javax/naming/InitialContext.java:417)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:455)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:316)", "home.Alternant.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.jruby_minus_jms_minus_1_dot_3_dot_0_minus_java.lib.jms.connection.initialize
There's a handful of things wrong with your configuration, both for JBoss EAP and for Logstash.
Let's start with JBoss EAP...
First, you changed the default configuration of RemoteConnectionFactory to this:
<pooled-connection-factory connectors="netty" entries="java:jboss/exported/jms/RemoteConnectionFactory" name="RemoteConnectionFactory" user="admin" password="admin"/>
This is incorrect. A remote client cannot use a pooled-connection-factory; only a client in the same JVM as the application server can (e.g. an MDB which needs to send a message). You should use the default configuration instead:
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
Second, your AuditTrailMDB queue will not be available to remote clients. Here's its configuration:
<jms-queue name="AuditTrailMDB" entries="queue/AuditTrailMDB"/>
It needs a new JNDI entry in the java:jboss/exported/ namespace in order to be available to remote clients (e.g. like RemoteConnectionFactory has). Therefore you should use this:
<jms-queue name="AuditTrailMDB" entries="queue/AuditTrailMDB java:jboss/exported/AuditTrailMDB"/>
Now for Logstash...
First, you're using the wrong JNDI properties. The properties you're using are for the JNDI implementation of standalone ActiveMQ Artemis, not for Artemis embedded in EAP. Here's your current configuration:
jndi_context => {
'java.naming.factory.initial' => 'org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory'
'java.naming.security.principal' => 'admin'
'java.naming.provider.url' => 'tcp://localhost:5445?type=QUEUE_CF'
'java.naming.security.credentials' => 'admin'
}
This is incorrect. When ActiveMQ Artemis is embedded in JBoss EAP, EAP itself handles all the JNDI lookups. Therefore you should be using this configuration instead:
jndi_context => {
'java.naming.factory.initial' => 'org.wildfly.naming.client.WildFlyInitialContextFactory'
'java.naming.security.principal' => 'admin'
'java.naming.provider.url' => 'http-remoting://127.0.0.1:8080'
'java.naming.security.credentials' => 'admin'
}
This assumes, of course, that you've added the proper admin user to EAP.
Second, your connection factory JNDI name is incorrect. You're currently using this:
jndi_name => 'queueConnectionFactory'
You should be using this instead:
jndi_name => 'jms/RemoteConnectionFactory'
Third, the jars you're using are incorrect. Here's your current configuration:
require_jars=> ['/home/Alternant/logstash/dependencies/jboss-client.jar',
'/home/Alternant/logstash/dependencies/artemis-ra.jar',
'/home/Alternant/logstash/dependencies/ironjacamar-core-impl.jar',
'/home/Alternant/logstash/dependencies/ironjacamar-core-api.jar',
'/home/Alternant/logstash/dependencies/ironjacamar-common-api.jar']
You don't need most of these at all. You can simplify your configuration by using the wildfly-client-all "uber" jar, which is available here. Then your configuration would look like this:
require_jars=> ['/home/Alternant/logstash/dependencies/wildfly-client-all-7.1.0.GA-redhat-11.jar']
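Putting those three fixes together, the jms input would look roughly like this (paths and credentials carried over from above; adjust them for your environment):
input {
  jms {
    include_header => false
    include_properties => false
    include_body => true
    use_jms_timestamp => false
    destination => "AuditTrailMDB"
    pub_sub => false
    # Look the factory up through EAP's own naming service
    jndi_name => 'jms/RemoteConnectionFactory'
    jndi_context => {
      'java.naming.factory.initial' => 'org.wildfly.naming.client.WildFlyInitialContextFactory'
      'java.naming.security.principal' => 'admin'
      'java.naming.provider.url' => 'http-remoting://127.0.0.1:8080'
      'java.naming.security.credentials' => 'admin'
    }
    require_jars => ['/home/Alternant/logstash/dependencies/wildfly-client-all-7.1.0.GA-redhat-11.jar']
  }
}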
Unable to look up invm queue through ConnectionFactory
Hashtable<String, Object> properties = new Hashtable<>();
properties.put("connectionFactory.ConnectionFactory", "(tcp://localhost:8080)?httpUpgradeEnabled=true&retryInterval=3000&reconnectAttempts=-1&initialConnectAttempts=10&maxRetryInterval=3000&clientFailureCheckPeriod=1000");
properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
InitialContext jndiContext = new InitialContext(properties);
ConnectionFactory connFactory = (ConnectionFactory) jndiContext.lookup("ConnectionFactory");
Connection connection = connFactory.createConnection(userName, password);
session = connection.createSession(true, javax.jms.Session.AUTO_ACKNOWLEDGE);
Hashtable<String, Object> properties = new Hashtable<>();
properties.put(Context.INITIAL_CONTEXT_FACTORY, factoryInitial);
InitialContext ctx = new InitialContext(properties);
destination = (Destination) ctx.lookup("dynamicQueues/TestQueue"); //I can't put queue name in jndi.properties
MessageProducer producer = session.createProducer(destination);
producer.send(message, Message.DEFAULT_DELIVERY_MODE, Message.DEFAULT_PRIORITY, msgTTL);
if (session.getTransacted() && session.getAcknowledgeMode() == Session.SESSION_TRANSACTED) {
session.commit();
}
When I execute the above code, it throws an error saying that queue "TestQueue" does not exist. I have tried looking up the queue with both dynamicQueues/TestQueue and jms/TestQueue, but in both cases I got the same error.
Can you please let me know what is wrong with this code?
Please find below Wildfly ActiveMQ Artemis configuration
<server name="default" persistence-enabled="true">
<cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
<bindings-directory path="/opt/shared/messaging/live/bindings"/>
<journal-directory path="/opt/shared/messaging/live/journal"/>
<large-messages-directory path="/opt/shared/messaging/live/largemessages"/>
<paging-directory path="/opt/shared/messaging/live/paging"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-durable-queue="true" delete-durable-queue="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" redelivery-delay="60000" max-delivery-attempts="5" max-size-bytes="50485760" page-size-bytes="10485760" address-full-policy="PAGE" redistribution-delay="1000"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="TestQueue" entries="java:/jms/TestQueue java:jboss/exported/jms/TestQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
I just want to share some links with you for further reading.
The quickstarts are a good entry point if you are starting to develop with Wildfly.
Here is an external client example:
https://github.com/wildfly/quickstart/tree/14.x/helloworld-jms
Here is one where everything runs inside the Wildfly container:
https://github.com/wildfly/quickstart/tree/14.x/helloworld-mdb
Here you have general documentation about messaging in wildfly 14:
https://docs.wildfly.org/14/Admin_Guide.html#Messaging
The initial context factory you're using (i.e. org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory) is a client-side only JNDI implementation for use with standalone ActiveMQ Artemis. Since you are using Wildfly you should be using their JNDI implementation (i.e. org.wildfly.naming.client.WildFlyInitialContextFactory). Then you can lookup both the connection factory and the destination from the Wildfly server and you won't need to specify the connection factory URL in your code.
Also, there is no such thing as an "invm queue".
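For illustration, a standalone client lookup against Wildfly would then look something like this (a sketch; the provider URL and credentials are assumptions, and the JNDI names are relative to the java:jboss/exported/ entries in the configuration above):
import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
props.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory");
props.put(Context.PROVIDER_URL, "http-remoting://localhost:8080"); // assumption: default HTTP port
props.put(Context.SECURITY_PRINCIPAL, "userName"); // placeholder credentials
props.put(Context.SECURITY_CREDENTIALS, "password");
InitialContext ctx = new InitialContext(props);

// Names are relative to java:jboss/exported/ as defined in the subsystem config
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
Destination queue = (Destination) ctx.lookup("jms/TestQueue");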
Could someone provide an example of messaging application working under Wildfly 10 cluster (domain)? We are struggling with it and given that it is a new technology, there is a terrible lack of resources.
Currently we have the following:
A domain consisting of two hosts (nodes) and three groups on each, i.e. six separate servers in the domain.
A relevant part of server configuration (in domain.xml):
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password}"/>
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
</security-setting>
<address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-siz
<http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
<http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
<broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<discovery-group name="dg-group1" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="TestQ" entries="java:jboss/exported/jms/queue/testq"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java
<pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" co
</server>
</subsystem>
The configuration is more or less default, except added TestQ queue.
tcphq stack is defined in the JGroups configuration as follows:
<stack name="tcphq">
<transport type="TCP" socket-binding="jgroups-tcp-hq"/>
<protocol type="TCPPING">
<property name="initial_hosts">
dev1[7660],dev1[7810],dev1[7960],dev2[7660],dev2[7810],dev2[7960]
</property>
<property name="port_range">
0
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-hq-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
I have written a testing application consisting from a simple "server", meaning MDB and a client as follows:
Server (MDB):
@MessageDriven(mappedName = "test", activationConfig = {
@ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/exported/jms/queue/testq"),
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class MessageServer implements MessageListener {
@Override
public void onMessage(Message message) {
try {
ObjectMessage msg = null;
if (message instanceof ObjectMessage) {
msg = (ObjectMessage) message;
}
System.out.print("The number in the message: "+ msg.getIntProperty("count"));
} catch (JMSException ex) {
Logger.getLogger(MessageServer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
Client:
@Singleton
@Startup
public class ClientBean implements ClientBeanLocal {
@Resource(mappedName = "java:jboss/exported/jms/RemoteConnectionFactory")
private ConnectionFactory factory;
@Resource(mappedName = "java:jboss/exported/jms/queue/testq")
private Queue queue;
@PostConstruct
public void sendMessage() {
Connection connection = null;
try {
connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(queue);
Message message = session.createObjectMessage();
message.setIntProperty("count", 1);
producer.send(message);
System.out.println("Message sent.");
} catch (JMSException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
} catch (NamingException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
} finally {
try {
if (connection != null) connection.close();
} catch (JMSException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
It actually works well if both the client and the server reside in the same group; in that case it even seems to communicate between hosts (nodes). However, if the server and client are in different groups, the MDB is not invoked. Moreover, it seems the MDB is invoked only if it resides in the group with offset 0. When I moved the server MDB into a different group, it did not respond even when the client was in the same group.
I am a bit confused about JMS in Wildfly 10. There are a lot of examples and materials for older versions with HornetQ, but very few for Artemis. Could someone help? Many thanks.
As I came here with the same question, I'm posting the answer that works for me.
Actually, as Miroslav answered on developer.jboss.org, the first thing to check is the socket-binding for "jgroups-tcp-hq" and the port-offset config on each server.
It should be <socket-binding name="jgroups-tcp-hq" ... port="7600"/>, with port-offset set (e.g. via the jboss.socket.binding.port-offset property) to 60 on the dev1[7660] server, 210 on dev1[7810], and 360 on dev1[7960]. The same applies to the dev2 servers.
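For illustration, in domain mode the port-offset is typically set per server in host.xml, along these lines (server and group names here are hypothetical):
<servers>
<server name="server-one" group="main-server-group">
<!-- 7600 + 60 = 7660, matching dev1[7660] in the TCPPING initial_hosts list -->
<socket-bindings port-offset="60"/>
</server>
<server name="server-two" group="other-server-group">
<socket-bindings port-offset="210"/>
</server>
<server name="server-three" group="third-server-group">
<socket-bindings port-offset="360"/>
</server>
</servers>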
And the second thing to check is the jboss.bind.address.private property.
Usually the default jgroups socket-binding refers to the "private" interface, e.g.
<socket-binding name="jgroups-tcp-hq" interface="private" port="7600"/>
So "private" interface address must be provided with the jboss.bind.address.private property (e.g. jboss.bind.address.private=dev1 ) - otherwise ClusterConnectionBridge will not be established between nodes!
See also this post for more details.
If communication between the ActiveMQ server instances is established, this log entry will appear in server.log: AMQ221027: Bridge ClusterConnectionBridge#63549ead [name=sf.my-cluster ...] is connected.
See also this answer.
I am trying to make remote calls to multiple servers running on one instance of JBoss EAP 6 from a client server running on a separate instance of JBoss EAP 6. I have configured JBoss-to-JBoss remote communication and have read about scoped EJB client contexts, but the two do not appear to be compatible. Currently I have two EJB receivers configured (one for each remote server), but it appears that when I make a remote call, the initialized Context randomly selects the EJB receiver it will use. It would seem reasonable that I could force which EJB receiver is used when the Context is initialized, given the remote IP and port or the remote connection name, but alas, I don't know the secret handshake.
host.xml:
<security-realm name="ejb-security-realm">
<server-identities>
<secret value="ZWpiUEBzc3cwcmQ="/>
</server-identities>
</security-realm>
domain.xml:
<subsystem xmlns="urn:jboss:domain:remoting:1.2">
<connector name="remoting-connector" socket binding="remoting" security-realm="ApplicationRealm"/>
<outbound-connections>
<remote-outbound-connection name="remote-ejb-connection" outbound-socket-binding-ref="mpg1-app1" username="ejbuser" security-realm="ejb-security-realm">
<properties>
<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
<remote-outbound-connection name="remote-ejb-connection2" outbound-socket-binding-ref="mpg2-app1" username="ejbuser" security-realm="ejb-security-realm">
<properties>
<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
</outbound-connections>
</subsystem>
...
<socket-binding-group name="full-sockets" default-interface="public">
...
<socket-binding name="remoting" port="44447"/>
<outbound-socket-binding name="mpg1-app1">
<remote-destination host="localhost" port="44452"/>
</outbound-socket-binding>
<outbound-socket-binding name="mpg2-app1">
<remote-destination host="localhost" port="44453"/>
</outbound-socket-binding>
</socket-binding-group>
jboss-ejb-client.xml
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.0">
<client-context>
<ejb-receivers>
<remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection"/>
<remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection2"/>
</ejb-receivers>
</client-context>
</jboss-ejb-client>
The remote call:
Context ctx = null;
final Properties props = new Properties();
props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
try {
ctx = new InitialContext(props);
MyInterfaceObject ourInterface = (MyInterfaceObject) ctx.lookup("ejb:" + appName + "/" + moduleName + "/" + beanName + "!"
+ viewClassName);
ourInterface.refreshProperties(); // remote method call
}
Any Help would be greatly appreciated!
Have you tried a cluster-node-selector?
jboss-ejb-client.xml
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.2">
<client-context>
<!-- If an outbound connection connects to a cluster, a list of members is provided after a successful connection.
To connect to those nodes, this cluster element must be defined.
-->
<clusters>
<!-- cluster of remote-ejb-connection-1 -->
<cluster name="ejb" security-realm="ejb-security-realm-1" username="test" cluster-node-selector="org.jboss.as.quickstarts.ejb.clients.selector.AllClusterNodeSelector">
<connection-creation-options>
<property name="org.xnio.Options.SSL_ENABLED" value="false" />
<property name="org.xnio.Options.SASL_POLICY_NOANONYMOUS" value="false" />
</connection-creation-options>
</cluster>
</clusters>
</client-context>
</jboss-ejb-client>
Selector Implementation
package org.jboss.as.quickstarts.ejb.clients.selector;

import java.util.Random;

import org.jboss.ejb.client.ClusterNodeSelector;

public class AllClusterNodeSelector implements ClusterNodeSelector {

    @Override
    public String selectNode(final String clusterName, final String[] connectedNodes, final String[] availableNodes) {
        if (availableNodes.length == 1) {
            return availableNodes[0];
        }
        // Go through all the nodes and return the one you are interested in
        for (int i = 0; i < availableNodes.length; i++) {
            if (availableNodes[i].contains("someoneYouInterestIn")) {
                return availableNodes[i];
            }
        }
        // Otherwise fall back to a random node
        final Random random = new Random();
        final int randomSelection = random.nextInt(availableNodes.length);
        return availableNodes[randomSelection];
    }
}
For more information you can check
https://access.redhat.com/documentation/en/red-hat-jboss-enterprise-application-platform/7.0/developing-ejb-applications/chapter-8-clustered-enterprise-javab
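Since the question also mentions scoped EJB client contexts, here is a rough sketch of pinning a lookup to a single receiver that way (the remote.connection.* property names are the EAP 6 scoped-context ones; host, port, and credentials below are placeholders matching the outbound-socket-binding you want to target):
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
props.put("org.jboss.ejb.client.scoped.context", "true"); // this context uses only the connections below
props.put("remote.connections", "mpg1"); // hypothetical connection name
props.put("remote.connection.mpg1.host", "localhost");
props.put("remote.connection.mpg1.port", "44452");
props.put("remote.connection.mpg1.username", "ejbuser");
props.put("remote.connection.mpg1.password", "changeMe"); // placeholder
props.put("remote.connection.mpg1.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS", "false");
Context ctx = new InitialContext(props);
// Lookups through this scoped context will only use the mpg1 connection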
I am trying to configure my custom ActiveMQ producer to use XA transactions. Unfortunately, it doesn't work as expected: messages are sent to the queue immediately instead of waiting for the transaction to commit.
Here is the producer:
public class MyProducer {
@Autowired
@Qualifier("myTemplate")
private JmsTemplate template;
@Transactional
public void sendMessage(final Order order) {
template.send(new MessageCreator() {
public Message createMessage(Session session) throws JMSException {
ObjectMessage message = new ActiveMQObjectMessage();
message.setObject(order);
return message;
}
});
}
}
And this is template and connection factory configuration:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:/activemq/ConnectionFactory" />
</bean>
<bean id="myTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="jmsConnectionFactory"
p:defaultDestination-ref="myDestination"
p:sessionTransacted="true"
p:sessionAcknowledgeModeName="SESSION_TRANSACTED" />
As you can see, I am using a ConnectionFactory obtained via JNDI. It is configured on JBoss EAP 6.3:
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
<resource-adapters>
<resource-adapter id="activemq-rar.rar">
<module slot="main" id="org.apache.activemq.ra"/>
<transaction-support>XATransaction</transaction-support>
<config-property name="ServerUrl">
tcp://localhost:61616
</config-property>
<connection-definitions>
<connection-definition class-name="org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name="java:/activemq/ConnectionFactory" enabled="true" use-java-context="true" pool-name="ActiveMQConnectionFactoryPool" use-ccm="true">
<xa-pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>20</max-pool-size>
</xa-pool>
</connection-definition>
</connection-definitions>
</resource-adapter>
</resource-adapters>
</subsystem>
When I debug, I can see that the JmsTemplate is configured properly:
it has a reference to a valid connection factory: org.apache.activemq.ra.ActiveMQConnectionFactory
the connection factory has a reference to a valid transaction manager: org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl
session transacted is set to true
session acknowledge mode is set to SESSION_TRANSACTED(0)
Do you have any idea why these messages are pushed to the queue immediately, and why they are not removed when the transaction is rolled back (e.g. when I throw an exception at the end of the sendMessage method)?
You need to show the rest of your configuration (transaction manager, etc.).
It looks like you don't have transactions enabled in the application context, so the template is committing the transaction itself.
Do you have <tx:annotation-driven/> in the context?
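If it is missing, the usual pieces look something like this (a sketch; the bean name is arbitrary, the tx namespace must be declared in the context file, and Spring's JtaTransactionManager will auto-detect JBoss's JTA transaction manager):
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>

<!-- Enables @Transactional processing and enlists the JmsTemplate's session in the JTA transaction -->
<tx:annotation-driven transaction-manager="transactionManager"/>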