How to configure HornetQ client with standalone server cluster (configured using JGroups TCP)

I have configured 2 HornetQ standalone servers in clustered mode using JGroups (TCP), since I can't use the default UDP. Below is the configuration.
hornetq-configuration.xml:
<broadcast-groups>
<broadcast-group name="bg-group1">
<jgroups-file>jgroups-tcp.xml</jgroups-file>
<jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<jgroups-file>jgroups-tcp.xml</jgroups-file>
<jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
jgroups-tcp.xml:
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:org:jgroups"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<TCP bind_port="7800"
recv_buf_size="${tcp.recv_buf_size:5M}"
send_buf_size="${tcp.send_buf_size:5M}"
max_bundle_size="64K"
max_bundle_timeout="30"
use_send_queues="true"
sock_conn_timeout="300"
timer_type="new3"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="500"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="8"
thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="10000"
thread_pool.rejection_policy="discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="1"
oob_thread_pool.max_threads="8"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="discard"/>
<TCPPING
initial_hosts="${jgroups.tcpping.initial_hosts:hornetq-server1-ip[7800], hornetq-server1-ip[7900], hornetq-server2-ip[7800], hornetq-server2-ip[7900]}"
port_range="1"/>
<MERGE3 min_interval="10000"
max_interval="30000"/>
<FD_SOCK/>
<FD timeout="3000" max_tries="3" />
<VERIFY_SUSPECT timeout="1500" />
<BARRIER />
<pbcast.NAKACK2 use_mcast_xmit="false"
discard_delivered_msgs="true"/>
<UNICAST3 />
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="4M"/>
<pbcast.GMS print_local_addr="true" join_timeout="2000"
view_bundling="true"/>
<MFC max_credits="2M"
min_threshold="0.4"/>
<FRAG2 frag_size="60K" />
<!--RSVP resend_interval="2000" timeout="10000"/-->
<pbcast.STATE_TRANSFER/>
</config>
The servers work fine, i.e. if the live server goes down, the backup takes its place.
Client producer:
TransportConfiguration[] servers = new TransportConfiguration[2];
List<Configuration> configurations = ... // user defined class
for (int i = 0; i < configurations.size(); i++) {
Map<String, Object> map = new HashMap<>();
map.put("host", configurations.get(i).getHost());
map.put("port", configurations.get(i).getPort());
servers[i] = new TransportConfiguration(NettyConnectorFactory.class.getName(), map);
}
ServerLocator locator = HornetQClient.createServerLocatorWithHA(servers);
locator.setReconnectAttempts(5);
factory = locator.createSessionFactory();
session = factory.createSession();
producer = session.createProducer(queueName);
Client Consumer:
ClientSessionFactory factory = locator.createSessionFactory();
for (int i = 1; i <= nReceivers; i++) {
ClientSession session = factory.createSession(true, true, 1);
sessions.add(session);
if (i == 1) {
Thread.sleep(10000); // waiting to download cluster information
}
session.start();
ClientConsumer consumer = session.createConsumer(queueName);
consumer.setMessageHandler(handler);
}
Issue:
The client (producer) doesn't automatically fail over if the server it is connected to goes down while it is sending messages.
The sessions created from the same client factory always connect to one server (as opposed to the documentation: http://docs.jboss.org/hornetq/2.3.0.beta1/docs/user-manual/html/clusters.html#clusters.client.loadbalancing).
So it seems the client never gets the cluster information. I also can't find any documentation on configuring a client to use JGroups (is that needed?) to connect to a HornetQ cluster.
Any help is appreciated.

Figured out that I can use JGroups on the client side too.
The detailed solution can be found here.
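For reference, the client-side equivalent is to build the ServerLocator from a DiscoveryGroupConfiguration backed by the same JGroups file and channel the servers broadcast on, instead of a static list of TransportConfigurations. Below is a minimal sketch of that idea; the jgroups-tcp.xml file must be on the client classpath, and the exact DiscoveryGroupConfiguration/JGroupsBroadcastGroupConfiguration constructor signatures vary between HornetQ 2.x releases, so check the Javadoc for your client version.
import org.hornetq.api.core.DiscoveryGroupConfiguration;
import org.hornetq.api.core.JGroupsBroadcastGroupConfiguration;
import org.hornetq.api.core.client.ClientSessionFactory;
import org.hornetq.api.core.client.HornetQClient;
import org.hornetq.api.core.client.ServerLocator;

// Point the client at the same JGroups channel the brokers broadcast on
// (file name, channel name and refresh timeout taken from the server config above).
JGroupsBroadcastGroupConfiguration endpoint =
        new JGroupsBroadcastGroupConfiguration("jgroups-tcp.xml", "hornetq_broadcast_channel");
DiscoveryGroupConfiguration discovery =
        new DiscoveryGroupConfiguration(10000, 10000, endpoint);

// The locator now learns the cluster topology from the broadcast channel,
// so sessions can be load balanced and fail over between the live servers.
ServerLocator locator = HornetQClient.createServerLocatorWithHA(discovery);
locator.setReconnectAttempts(5);
ClientSessionFactory factory = locator.createSessionFactory();
With discovery in place, the producer and consumer code above can stay the same; only the locator construction changes.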

Related

Wildfly14 + Unable to lookup invm queue

Unable to look up an invm queue through a ConnectionFactory
Hashtable<String, Object> properties = new Hashtable<>();
properties.put("connectionFactory.ConnectionFactory", "(tcp://localhost:8080)?httpUpgradeEnabled=true&retryInterval=3000&reconnectAttempts=-1&initialConnectAttempts=10&maxRetryInterval=3000&clientFailureCheckPeriod=1000");
properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
InitialContext jndiContext = new InitialContext(properties);
ConnectionFactory connFactory = (ConnectionFactory) jndiContext.lookup("ConnectionFactory");
Connection connection = connFactory.createConnection(userName, password);
session = connection.createSession(true, javax.jms.Session.AUTO_ACKNOWLEDGE);
Hashtable<String, Object> properties = new Hashtable<>();
properties.put(Context.INITIAL_CONTEXT_FACTORY, factoryInitial);
InitialContext ctx = new InitialContext(properties);
destination = (Destination) ctx.lookup("dynamicQueues/TestQueue"); //I can't put queue name in jndi.properties
MessageProducer producer = session.createProducer(destination);
producer.send(message, Message.DEFAULT_DELIVERY_MODE, Message.DEFAULT_PRIORITY, msgTTL);
if (session.getTransacted() && session.getAcknowledgeMode() == Session.SESSION_TRANSACTED) {
session.commit();
}
When I execute the above code, it throws an error saying that queue "TestQueue" does not exist. I have tried looking up the queue with both dynamicQueues/TestQueue and jms/TestQueue, but in both cases I got the same error.
Can you please let me know what is wrong with this code?
Please find the WildFly ActiveMQ Artemis configuration below:
<server name="default" persistence-enabled="true">
<cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
<bindings-directory path="/opt/shared/messaging/live/bindings"/>
<journal-directory path="/opt/shared/messaging/live/journal"/>
<large-messages-directory path="/opt/shared/messaging/live/largemessages"/>
<paging-directory path="/opt/shared/messaging/live/paging"/>
<security-setting name="#">
<role name="guest" send="true" consume="true" create-durable-queue="true" delete-durable-queue="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
</security-setting>
<address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" redelivery-delay="60000" max-delivery-attempts="5" max-size-bytes="50485760" page-size-bytes="10485760" address-full-policy="PAGE" redistribution-delay="1000"/>
<http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
<http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-connector>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0">
<param name="buffer-pooling" value="false"/>
</in-vm-acceptor>
<broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="TestQueue" entries="java:/jms/TestQueue java:jboss/exported/jms/TestQueue"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
</server>
I just want to share some links with you for further reading.
The quickstarts are a good entry point if you are starting to develop with WildFly.
Here you have an external client example:
https://github.com/wildfly/quickstart/tree/14.x/helloworld-jms
Here is one where everything runs inside the WildFly container:
https://github.com/wildfly/quickstart/tree/14.x/helloworld-mdb
Here you have the general documentation about messaging in WildFly 14:
https://docs.wildfly.org/14/Admin_Guide.html#Messaging
The initial context factory you're using (i.e. org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory) is a client-side-only JNDI implementation for use with standalone ActiveMQ Artemis. Since you are using WildFly, you should be using its JNDI implementation (i.e. org.wildfly.naming.client.WildFlyInitialContextFactory). Then you can look up both the connection factory and the destination from the WildFly server, and you won't need to specify the connection factory URL in your code.
Also, there is no such thing as an "invm queue".
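For completeness, here is a minimal sketch of such a lookup against the configuration shown above, assuming the WildFly client libraries (e.g. the org.wildfly:wildfly-jms-client-bom dependencies) are on the classpath; the provider URL, user name and password are placeholders to adjust to your environment.
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
props.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory");
// Goes through the same HTTP port (8080) as the http-connector/http-acceptor above
props.put(Context.PROVIDER_URL, "remote+http://localhost:8080");
props.put(Context.SECURITY_PRINCIPAL, "appUser");       // placeholder application user
props.put(Context.SECURITY_CREDENTIALS, "appPassword"); // placeholder password

InitialContext ctx = new InitialContext(props);
// Names are relative to java:jboss/exported/, matching the entries in the server config above
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
Queue queue = (Queue) ctx.lookup("jms/TestQueue");

try (Connection connection = cf.createConnection("appUser", "appPassword")) {
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    session.createProducer(queue).send(session.createTextMessage("hello"));
}
ctx.close();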

WCF ServiceHost REST API goes idle after some time, returns timeout error

We are facing a production issue with a WCF REST API hosted in a Windows service.
We have clients making GET and PUT requests, as well as regular Ping() requests to the service every 30 seconds.
All the GET and PUT requests work well for some time (2 or 3 days), and then at some point no web API requests are served any more. We have to restart the Windows service to bring the REST service back into a working state.
Client error message: status code 503, Service Unavailable.
We are able to reproduce the issue in the local dev environment with the following scenario:
We simulated continuous REST calls to the service locally with a sample test client, making a Ping request every 2 seconds and a PUT request every 4 seconds; we were able to reproduce the issue within 5 minutes, after 68 PUT requests and 152 GET requests. No errors were logged in the service. The status code is 503, Service Unavailable.
Here is the server configuration for WCF REST service.
WCF REST Service Configuration:
var restURL = string.Format("{0}{1}/v{2}", (isHttps ? WsSprotocol : WsProtocol), Config.Server, Config.Version);
var webServiceHost = new WebServiceHost(typeof(EngageWebServiceHostREST), new Uri(restURL));
var webHttpBinding = new WebHttpBinding
{
Security = new WebHttpSecurity { Mode = isHttps ? WebHttpSecurityMode.Transport : WebHttpSecurityMode.None },
MaxReceivedMessageSize = int.MaxValue,
ReaderQuotas = { MaxArrayLength = int.MaxValue },
OpenTimeout = new TimeSpan(0, 01, 00),
CloseTimeout = new TimeSpan(0, 10, 00),
SendTimeout = new TimeSpan(0, 10, 00),
CrossDomainScriptAccessEnabled = true,
TransferMode = TransferMode.StreamedResponse
};
if (isHttps)
{
bindHttpCertificate(webServiceHost);
if (webServiceHost.Credentials.ServiceCertificate != null && webServiceHost.Credentials.ServiceCertificate.Certificate != null)
{
webHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;
Log.Info(string.Format("Https Certificate {0} binded to {1}", webServiceHost.Credentials.ServiceCertificate.Certificate.SubjectName.Name, restURL));
}
}
var customBinding = new CustomBinding(webHttpBinding);
for (int counter = 0; counter < customBinding.Elements.Count; counter++)
{
if (customBinding.Elements[counter] is WebMessageEncodingBindingElement)
{
WebMessageEncodingBindingElement webBE = (WebMessageEncodingBindingElement)customBinding.Elements[counter];
customBinding.Elements[counter] = new GZipMessageEncodingBindingElement(webBE);
}
else if (customBinding.Elements[counter] is TransportBindingElement)
{
((TransportBindingElement)customBinding.Elements[counter]).MaxReceivedMessageSize = int.MaxValue;
}
}
ServiceEndpoint endpoint = webServiceHost.AddServiceEndpoint(typeof(IEngageWebServiceREST), customBinding, "");
endpoint.Behaviors.Add(new WebHttpBehavior() { AutomaticFormatSelectionEnabled = true, DefaultOutgoingResponseFormat = WebMessageFormat.Json });
endpoint.Behaviors.Add(new EnableCrossOriginResourceSharingBehavior());
endpoint.Behaviors.Add(new HelpPageEndPointBehavior("Product Suite"));
var serviceDebugBehaviorLocal = webServiceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
if (serviceDebugBehaviorLocal == null)
{
webServiceHost.Description.Behaviors.Add(new ServiceDebugBehavior
{
IncludeExceptionDetailInFaults = true
});
}
else
{
if (!serviceDebugBehaviorLocal.IncludeExceptionDetailInFaults)
serviceDebugBehaviorLocal.IncludeExceptionDetailInFaults = true;
}
I would appreciate any ideas and thoughts on how to troubleshoot and resolve this issue.
Thanks,
Dileep
I suggest you refer to the following configuration.
<system.serviceModel>
<services>
<service behaviorConfiguration="Service1Behavior" name="VM1.MyService">
<endpoint address="" binding="webHttpBinding" contract="VM1.IService" behaviorConfiguration="rest" bindingConfiguration="mybinding" >
</endpoint>
<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
</service>
</services>
<bindings>
<webHttpBinding>
<binding name="mybinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" sendTimeout="00:10:00" receiveTimeout="00:10:00">
<readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" />
<security mode="Transport">
<transport clientCredentialType="None"></transport>
</security>
</binding>
</webHttpBinding>
</bindings>
<behaviors>
<serviceBehaviors>
<behavior name="Service1Behavior">
<serviceMetadata httpGetEnabled="true"/>
<serviceDebug includeExceptionDetailInFaults="False"/>
</behavior>
</serviceBehaviors>
<endpointBehaviors>
<behavior name="rest">
<webHttp/>
<dataContractSerializer maxItemsInObjectGraph="2147483647"/>
</behavior>
</endpointBehaviors>
</behaviors>
</system.serviceModel>
Feel free to let me know if the problem still exists.

Artemis HA and cluster not working

Below are the settings of the Artemis cluster (3 servers) in broker.xml:
<!-- Clustering configuration -->
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<broadcast-period>5000</broadcast-period>
<jgroups-file>test-jgroups-file_ping.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<jgroups-file>test-jgroups-file_ping.xml</jgroups-file>
<jgroups-channel>active_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>STRICT</message-load-balancing>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<colocated>
<backup-port-offset>100</backup-port-offset>
<backup-request-retries>-1</backup-request-retries>
<backup-request-retry-interval>2000</backup-request-retry-interval>
<max-backups>2</max-backups>
<request-backup>true</request-backup>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
<slave>
<scale-down/>
</slave>
</colocated>
</shared-store>
</ha-policy>
The cluster and HA configuration are the same on all servers. The failover scenario which I am trying to understand and execute is as follows.
Start broker1, broker2 and broker3 in the sequence mentioned. Here I can see from the admin UI that broker1 is backing up broker2 and broker3, broker2 is backing up broker1, and broker3 does not have any backup.
I wrote the program below to connect to the server:
public static void main(final String[] args) throws Exception {
Connection connection = null;
InitialContext initialContext = null;
try {
Properties properties = new Properties();
properties.put(Context.INITIAL_CONTEXT_FACTORY,
"org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
properties.put("connectionFactory.ConnectionFactory",
"(tcp://localhost:61616,tcp://localhost:61617,tcp://localhost:61618)?ha=true&retryInterval=1000&retryIntervalMultiplier=1.0&reconnectAttempts=-1");
properties.put("queue.queue/exampleQueue", "exampleQueue");
// Step 1. Create an initial context to perform the JNDI lookup.
initialContext = new InitialContext(properties);
ConnectionFactory cf = (ConnectionFactory) initialContext.lookup("ConnectionFactory");
// Step 2. Look-up the JMS Queue object from JNDI
Queue queue = (Queue) initialContext.lookup("queue/exampleQueue");
// Step 3. Create a JMS Connection
connection = cf.createConnection("admin", "admin");
// Step 4. Start the connection
connection.start();
// Step 5. Create a JMS session with AUTO_ACKNOWLEDGE mode
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Step 8. Create a text message
BytesMessage message = session.createBytesMessage();
message.setStringProperty(InfoSearchEngine.QUERY_ID_HEADER_PARAM, "123");
MessageConsumer consumer0 = session.createConsumer(queue);
// Step 9. Send the text message to the queue
while (true) {
try {
Thread.sleep(500);
// Step 7. Create a JMS message producer
MessageProducer messageProducer = session.createProducer(queue);
messageProducer.send(message);
System.out.println("Sent message: " + message.getBodyLength());
} catch (Exception e) {
System.out.println("Exception - " + e.getLocalizedMessage());
}
}
} finally {
if (connection != null) {
// Step 20. Be sure to close our JMS resources!
connection.close();
}
if (initialContext != null) {
// Step 21. Also close the initialContext!
initialContext.close();
}
}
}
If I shut down broker1, the program diverts to broker2 and runs fine. If I shut down broker2, the program does not connect to broker3.
I expected that broker3 should have started taking up the requests since it was in the cluster.
I can see from the admin UI that broker1 is backing up broker2 and broker3, broker2 is backing up broker1, and broker3 does not have any backup.
Failover in Artemis only works between a live and a backup. In your scenario broker1 is backing up broker2, so when you shut down broker1 that means broker2 no longer has a backup, and therefore when you shut down broker2 no failover happens. You should specify <group-name> in your master and slave configurations so that your backups form in a more organized way and this kind of situation doesn't happen.

Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)

Could someone provide an example of a messaging application working under a WildFly 10 cluster (domain)? We are struggling with it and, given that it is a new technology, there is a terrible lack of resources.
Currently we have the following:
A domain consisting of two hosts (nodes) and three groups on each, i.e. six separate servers in the domain.
A relevant part of server configuration (in domain.xml):
<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
<server name="default">
<security enabled="false"/>
<cluster password="${jboss.messaging.cluster.password}"/>
<security-setting name="#">
<role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
</security-setting>
<address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-siz
<http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
<http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
<param name="batch-delay" value="50"/>
</http-connector>
<in-vm-connector name="in-vm" server-id="0"/>
<http-acceptor name="http-acceptor" http-listener="default"/>
<http-acceptor name="http-acceptor-throughput" http-listener="default">
<param name="batch-delay" value="50"/>
<param name="direct-deliver" value="false"/>
</http-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
<broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<discovery-group name="dg-group1" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
<cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>
<jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
<jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
<jms-queue name="TestQ" entries="java:jboss/exported/jms/queue/testq"/>
<connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
<connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java
<pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" co
</server>
</subsystem>
The configuration is more or less the default, except for the added TestQ queue.
The tcphq stack is defined in the JGroups configuration as follows:
<stack name="tcphq">
<transport type="TCP" socket-binding="jgroups-tcp-hq"/>
<protocol type="TCPPING">
<property name="initial_hosts">
dev1[7660],dev1[7810],dev1[7960],dev2[7660],dev2[7810],dev2[7960]
</property>
<property name="port_range">
0
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-hq-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
I have written a test application consisting of a simple "server" (an MDB) and a client, as follows:
Server (MDB):
@MessageDriven(mappedName = "test", activationConfig = {
@ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/exported/jms/queue/testq"),
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class MessageServer implements MessageListener {
@Override
public void onMessage(Message message) {
try {
ObjectMessage msg = null;
if (message instanceof ObjectMessage) {
msg = (ObjectMessage) message;
}
System.out.print("The number in the message: "+ msg.getIntProperty("count"));
} catch (JMSException ex) {
Logger.getLogger(MessageServer.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
Client:
@Singleton
@Startup
public class ClientBean implements ClientBeanLocal {
@Resource(mappedName = "java:jboss/exported/jms/RemoteConnectionFactory")
private ConnectionFactory factory;
@Resource(mappedName = "java:jboss/exported/jms/queue/testq")
private Queue queue;
@PostConstruct
public void sendMessage() {
Connection connection = null;
try {
connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(queue);
Message message = session.createObjectMessage();
message.setIntProperty("count", 1);
producer.send(message);
System.out.println("Message sent.");
} catch (JMSException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
} catch (NamingException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
} finally {
try {
if (connection != null) connection.close();
} catch (JMSException ex) {
Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
}
It actually works well if both the client and the server reside in the same group. In such a case it even seems to communicate between hosts (nodes). However, if the server and the client are in different groups, the MDB is not invoked. Moreover, it even seems that the MDB is invoked only if it resides in the group with offset 0. When I moved the server MDB into a different group, it was not responding even if the client was in the same group.
I am a bit confused about JMS in WildFly 10. There are a lot of examples and materials for older versions with HornetQ, but very few for Artemis. Could someone help? Many thanks.
As I came across the same question, I am posting the answer that works for me.
Actually, as Miroslav answered on developer.jboss.org, the first thing to check is the socket-binding for "jgroups-tcp-hq" and the port-offset configuration on each server.
It should be <socket-binding name="jgroups-tcp-hq" ... port="7600"/>, with the port-offset (e.g. via the jboss.socket.binding.port-offset property) set to 60 on the dev1[7660] server, 210 on dev1[7810], and 360 on dev1[7960]. Same for the dev2 servers.
The second thing is the jboss.bind.address.private property.
Usually the default jgroups socket-binding refers to the "private" interface, e.g.
<socket-binding name="jgroups-tcp-hq" interface="private" port="7600"/>
So the "private" interface address must be provided with the jboss.bind.address.private property (e.g. jboss.bind.address.private=dev1), otherwise the ClusterConnectionBridge will not be established between the nodes!
See also this post for more details.
If communication between ActiveMQ server instances is established then the log entry must appear in server.log: AMQ221027: Bridge ClusterConnectionBridge#63549ead [name=sf.my-cluster ...] is connected.
See also this answer.

Is it possible to explicitly dictate which EJB Receiver is used within JBoss EAP 6?

I am trying to make remote calls to multiple servers running on one instance of JBoss EAP 6 from a client server running on a separate instance of JBoss EAP 6. I have configured JBoss-to-JBoss remote communication and have read about scoped EJB client contexts, but the two do not appear to be compatible. Currently I have two EJB receivers configured (one for each remote server), but it appears that when I make a remote call, the initialized Context randomly selects the EJB receiver it will use. It would seem reasonable that I could force which EJB receiver is used when the Context is initialized, given the remote IP and port or the remote connection name, but alas, I don't know the secret handshake.
host.xml:
<security-realm name="ejb-security-realm">
<server-identities>
<secret value="ZWpiUEBzc3cwcmQ="/>
</server-identities>
</security-realm>
domain.xml:
<subsystem xmlns="urn:jboss:domain:remoting:1.2">
<connector name="remoting-connector" socket binding="remoting" security-realm="ApplicationRealm"/>
<outbound-connections>
<remote-outbound-connection name="remote-ejb-connection" outbound-socket-binding-ref="mpg1-app1" username="ejbuser" security-realm="ejb-security-realm">
<properties>
<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
<remote-outbound-connection name="remote-ejb-connection2" outbound-socket-binding-ref="mpg2-app1" username="ejbuser" security-realm="ejb-security-realm">
<properties>
<property name="SASL_POLICY_NOANONYMOUS" value="false"/>
<property name="SSL_ENABLED" value="false"/>
</properties>
</remote-outbound-connection>
</outbound-connections>
</subsystem>
...
<socket-binding-group name="full-sockets" default-interface="public">
...
<socket-binding name="remoting" port="44447"/>
<outbound-socket-binding name="mpg1-app1">
<remote-destination host="localhost" port="44452"/>
</outbound-socket-binding>
<outbound-socket-binding name="mpg2-app1">
<remote-destination host="localhost" port="44453"/>
</outbound-socket-binding>
</socket-binding-group>
jboss-ejb-client.xml
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.0">
<client-context>
<ejb-receivers>
<remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection"/>
<remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection2"/>
</ejb-receivers>
</client-context>
</jboss-ejb-client>
The remote call:
Context ctx = null;
final Properties props = new Properties();
props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
try {
ctx = new InitialContext(props);
MyInterfaceObject ourInterface = (MyInterfaceObject) ctx.lookup("ejb:" + appName + "/" + moduleName + "/" + beanName + "!"
+ viewClassName);
ourInterface.refreshProperties(); // remote method call
}
Any Help would be greatly appreciated!
Have you tried a cluster-node-selector?
jboss-ejb-client.xml
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.2">
<client-context>
<!-- If an outbound connection connects to a cluster, a list of members is provided after a successful connection.
To connect to a node of that cluster, this cluster element must be defined.
-->
<clusters>
<!-- cluster of remote-ejb-connection-1 -->
<cluster name="ejb" security-realm="ejb-security-realm-1" username="test" cluster-node-selector="org.jboss.as.quickstarts.ejb.clients.selector.AllClusterNodeSelector">
<connection-creation-options>
<property name="org.xnio.Options.SSL_ENABLED" value="false" />
<property name="org.xnio.Options.SASL_POLICY_NOANONYMOUS" value="false" />
</connection-creation-options>
</cluster>
</clusters>
</client-context>
</jboss-ejb-client>
Selector Implementation
package org.jboss.as.quickstarts.ejb.clients.selector;

import java.util.Random;
import org.jboss.ejb.client.ClusterNodeSelector;

// The selector class referenced from jboss-ejb-client.xml above; it must implement
// org.jboss.ejb.client.ClusterNodeSelector.
public class AllClusterNodeSelector implements ClusterNodeSelector {
    @Override
    public String selectNode(final String clusterName, final String[] connectedNodes, final String[] availableNodes) {
        if (availableNodes.length == 1) {
            return availableNodes[0];
        }
        // Go through all the nodes and return the one you want
        for (int i = 0; i < availableNodes.length; i++) {
            if (availableNodes[i].contains("someoneYouInterestIn")) {
                return availableNodes[i];
            }
        }
        // Otherwise pick a random available node
        final Random random = new Random();
        final int randomSelection = random.nextInt(availableNodes.length);
        return availableNodes[randomSelection];
    }
}
For more information you can check
https://access.redhat.com/documentation/en/red-hat-jboss-enterprise-application-platform/7.0/developing-ejb-applications/chapter-8-clustered-enterprise-javab