WildFly 10, JGroups and EC2

I'm trying to run WildFly 10 with the HA profile in EC2, but am getting the following errors:
05:03:28,308 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=jgroups/stack=tcp/protocol=FD_SOCK' are not available:
[Server:server-one] org.wildfly.network.socket-binding.jgroups-tcp-fd; There are no known registration points which can provide this capability.
[Server:server-one] 05:03:28,310 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=jgroups/stack=tcp/transport=TCP' are not available:
[Server:server-one] org.wildfly.network.socket-binding.jgroups-tcp; There are no known registration points which can provide this capability.
My JGroups config looks like this:
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
    <channels default="ee">
        <channel name="ee" stack="tcp"/>
    </channels>
    <stacks>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <protocol type="S3_PING">
                <property name="access_key">accesskey</property>
                <property name="secret_access_key">secretkey</property>
                <property name="location">bucketname</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2">
                <property name="use_mcast_xmit">false</property>
                <property name="use_mcast_xmit_req">false</property>
            </protocol>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
            <protocol type="RSVP"/>
        </stack>
    </stacks>
</subsystem>
Does anyone know what "There are no known registration points which can provide this capability" means?

It turns out that I had mixed up my socket bindings. I was using the ha profile with the full-ha-sockets socket-binding group, like this:
<server-groups>
    <server-group name="main-server-group" profile="ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/> <!-- THIS IS BROKEN -->
        <deployments>
            <deployment name="activemq-rar" runtime-name="activemq-rar"/>
            <deployment name="hawtio.war" runtime-name="hawtio.war"/>
        </deployments>
    </server-group>
    <server-group name="other-server-group" profile="full-ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/>
    </server-group>
</server-groups>
Once I had fixed the socket-binding group reference, the errors went away:
<server-groups>
    <server-group name="main-server-group" profile="ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="ha-sockets"/> <!-- THIS IS FIXED -->
        <deployments>
            <deployment name="activemq-rar" runtime-name="activemq-rar"/>
            <deployment name="hawtio.war" runtime-name="hawtio.war"/>
        </deployments>
    </server-group>
    <server-group name="other-server-group" profile="full-ha">
        <jvm name="default">
            <heap size="64m" max-size="512m"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/>
    </server-group>
</server-groups>
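A quick way to catch this kind of mismatch is to ask the domain controller which bindings a socket-binding group actually defines and which group each server group references; a sketch using the management CLI, with resource names taken from the stock domain.xml:
# list the bindings provided by a socket-binding group
/socket-binding-group=ha-sockets:read-children-names(child-type=socket-binding)
# confirm which socket-binding group a server group references
/server-group=main-server-group:read-attribute(name=socket-binding-group)
If jgroups-tcp and jgroups-tcp-fd are missing from the first list, the JGroups subsystem has nothing to register its socket-binding capabilities against, which is exactly what the WFLYCTL0362 message is complaining about.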

I had a similar problem, but instead of being in the <server-group />, my problem was in my host.
I had created an initial host that used the full-ha profile and full-ha-sockets in an existing server group. After that, I created a new server group using the ha profile and ha-sockets and moved the host to this new server group.
The problem? My host was using the ha profile but with full-ha-sockets instead of ha-sockets. I had set up remote EJB using only ha-sockets and got this same error when trying to call the remote EJB method over the remote outbound connection:
There are no known registration points which can provide this capability
I assumed my host was using ha-sockets. Once I actually pointed the host at ha-sockets, the error was gone. I lost a lot of time discovering this mistake.
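For anyone hitting the same thing in domain mode, the assignment is easy to check from the CLI; a sketch, where the host and server names are placeholders for your own:
# which server group is the server in, and does it override the socket-binding group?
/host=master/server-config=server-one:read-attribute(name=group)
/host=master/server-config=server-one:read-attribute(name=socket-binding-group)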

Related

ParseError using JGroups for HornetQ in WildFly 9

Currently I am using WildFly 9. We are trying to configure TCP-based load sharing with HornetQ. We use JGroups for dynamic discovery, and we have added the following settings to our standalone configuration file:
<broadcast-groups>
    <broadcast-group name="my-broadcast-group">
        <jgroups-file>jgroups-stacks.xml</jgroups-file>
        <jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
        <broadcast-period>2000</broadcast-period>
        <connector-ref connector-name="netty-connector"/>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <jgroups-file>jgroups-stacks.xml</jgroups-file>
        <jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>
<cluster-connections>
    <cluster-connection name="tcp-based-cluster-node1-to-node2">
        <address>jms</address>
        <connector-ref>netty</connector-ref>
        <retry-interval>500</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <forward-when-no-consumers>true</forward-when-no-consumers>
        <max-hops>1</max-hops>
        <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
</cluster-connections>
I did the configuration from this documentation, but I still face an issue with jgroups-file. It gives the following error:
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[393,21]
Message: WFLYCTL0198: Unexpected element '{urn:jboss:domain:messaging:3.0}jgroups-file' encountered
at org.jboss.as.controller.parsing.ParseUtils.unexpectedElement(ParseUtils.java:89)
at org.jboss.as.messaging.MessagingSubsystemParser.handleUnknownBroadcastGroupAttribute(MessagingSubsystemParser.java:739)
at org.jboss.as.messaging.Messaging13SubsystemParser.handleUnknownBroadcastGroupAttribute(Messaging13SubsystemParser.java:182)
The issue here is that the file is not found at server runtime. According to the document we have to put it in the resource folder, but a HornetQ-on-WildFly server has no resource folder, so where can we put the file so we can share load across multiple queues?
So my question is: are we on the right track, or do we need to do some other configuration for this?
The documentation you're looking at is for HornetQ, not for WildFly. WildFly has a unique XML format which is 100% independent of HornetQ's format, although they are very similar in many places in early versions of WildFly. However, WildFly doesn't support the jgroups-file configuration element you're attempting to use. Only standalone HornetQ supports that.
You should look at the standalone-full-ha.xml file that ships with WildFly 9. It has an example of JGroups configuration, e.g.:
...
<subsystem xmlns="urn:jboss:domain:jgroups:3.0">
    <channels default="ee">
        <channel name="ee"/>
    </channels>
    <stacks default="udp">
        <stack name="udp">
            <transport type="UDP" socket-binding="jgroups-udp"/>
            <protocol type="PING"/>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="UFC"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
            <protocol type="RSVP"/>
        </stack>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <protocol type="MPING" socket-binding="jgroups-mping"/>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
            <protocol type="RSVP"/>
        </stack>
    </stacks>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:messaging:3.0">
    <hornetq-server>
        ...
        <broadcast-groups>
            <broadcast-group name="bg-group1">
                <jgroups-stack>udp</jgroups-stack>
                <jgroups-channel>hq-cluster</jgroups-channel>
                <connector-ref>http-connector</connector-ref>
            </broadcast-group>
        </broadcast-groups>
        <discovery-groups>
            <discovery-group name="dg-group1">
                <jgroups-stack>udp</jgroups-stack>
                <jgroups-channel>hq-cluster</jgroups-channel>
            </discovery-group>
        </discovery-groups>
        <cluster-connections>
            <cluster-connection name="my-cluster">
                <address>jms</address>
                <connector-ref>http-connector</connector-ref>
                <discovery-group-ref discovery-group-name="dg-group1"/>
            </cluster-connection>
        </cluster-connections>
        ...
    </hornetq-server>
</subsystem>
...
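If you'd rather not hand-edit the XML, the same change can be made through the management CLI; a sketch against the WildFly 9 messaging subsystem, assuming the default hornetq-server name and the bg-group1/dg-group1 names from the example above:
/subsystem=messaging/hornetq-server=default/broadcast-group=bg-group1:write-attribute(name=jgroups-stack,value=udp)
/subsystem=messaging/hornetq-server=default/discovery-group=dg-group1:write-attribute(name=jgroups-stack,value=udp)
reload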
It's worth noting that WildFly 9 was released in July of 2015, almost 7 years ago now. The current version is 26.1.0.Final. I strongly encourage you to upgrade to a more recent version.

Timeout message logging, and retry storm, in infinispan cache after upgrading from Wildfly 20 to 23.0.1

We just upgraded from WildFly 20 to 23, and are now seeing issues with Infinispan erroring out and getting into log-and-retry loops. The issue happens hundreds of times per second once it starts, and only stops when one node of the cluster is turned off.
We get the error below, which retries indefinitely, using about 30 Mb/s of bandwidth between the servers when normally it is ~10-30 Kb/s. The confusing part of the error is that node 1 receives an error from node 2, and node 2's error is a timeout waiting for node 1. I have tried moving from the udp to the tcp stack and am still seeing the same issue (it is a 2-node cluster).
I increased the remote timeout from the default of 10 seconds to 30 and almost immediately saw the same error.
Is there a new setting needed in WildFly 23, is there some other miss on my side, or am I hitting a new bug?
Here is the jgroups config:
<stack name="udp" statistics-enabled="true">
    <transport type="UDP" shared="false" socket-binding="jgroups-udp" statistics-enabled="true">
        <property name="log_discard_msgs">false</property>
        <property name="port_range">50</property>
    </transport>
    <protocol type="PING" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="MERGE3" module="org.jgroups" statistics-enabled="true"/>
    <socket-protocol type="FD_SOCK" module="org.jgroups" socket-binding="jgroups-udp-fd" statistics-enabled="true"/>
    <protocol type="FD_ALL" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="VERIFY_SUSPECT" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="pbcast.NAKACK2" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="UNICAST3" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="pbcast.STABLE" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="pbcast.GMS" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="UFC" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="MFC" module="org.jgroups" statistics-enabled="true"/>
    <protocol type="FRAG3"/>
</stack>
and the Infinispan config:
<cache-container name="localsite-cachecontainer" default-cache="epi-localsite-default" statistics-enabled="true">
    <transport lock-timeout="60000" channel="localsite-appCache"/>
    <replicated-cache name="bServiceCache" statistics-enabled="true">
        <locking isolation="NONE"/>
        <transaction mode="NONE"/>
        <expiration lifespan="1800000"/>
    </replicated-cache>
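For reference, this is how the remote-timeout increase mentioned above was applied, sketched as CLI commands; it assumes the clustered cache in WildFly 23 still exposes a remote-timeout attribute, so check the resource description first:
# list the attributes this cache resource supports in your version
/subsystem=infinispan/cache-container=localsite-cachecontainer/replicated-cache=bServiceCache:read-resource-description
# raise the remote timeout from the 10 s default to 30 s (value in milliseconds)
/subsystem=infinispan/cache-container=localsite-cachecontainer/replicated-cache=bServiceCache:write-attribute(name=remote-timeout,value=30000)
The error that keeps repeating looks like this: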
22:47:52,823 WARN [org.infinispan.CLUSTER] (thread-223,application-localsite,node1) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='application-bServiceCache',
command=PutKeyValueCommand{key=SimpleKey [XXXX,2021-05-06,1412.0,75.0,null], value=[YYYY[pp=4 Pay,PaymentDue=2021-05-28], ppAvaliablity[firstPaymentDue=2021-05-28], ppAvaliablity[firstPaymentDue=2021-05-28]], flags=[], commandInvocationId=CommandInvocation:node2:537,
putIfAbsent=true, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{version=null, lifespan=1800000, maxIdle=-1}, successful=true, topologyId=18}}: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan.remoting.transport.impl.MapResponseCollector.addException(MapResponseCollector.java:64)
at org.infinispan.remoting.transport.impl.MapResponseCollector$IgnoreLeavers.addException(MapResponseCollector.java:102)
at org.infinispan.remoting.transport.ValidResponseCollector.addResponse(ValidResponseCollector.java:29)
at org.infinispan.remoting.transport.impl.MultiTargetRequest.onResponse(MultiTargetRequest.java:93)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:135)
at org.jgroups.stack.Protocol.up(Protocol.java:309)
at org.jgroups.protocols.FORK.up(FORK.java:142)
at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FD.up(FD.java:227)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
at org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4485 from node1
at sun.reflect.GeneratedConstructorAccessor551.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.infinispan.marshall.exts.ThrowableExternalizer.readGenericThrowable(ThrowableExternalizer.java:282)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:259)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 28 more
Can you also attach the configuration for your "localsite-appCache" channel?
Can you also attach a code snippet that demonstrates how you reference the cache in your application?
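Both can be pulled straight from the management CLI, which may be easier than digging through the XML; a sketch, assuming the channel is defined under that name in the jgroups subsystem:
/subsystem=jgroups/channel=localsite-appCache:read-resource(recursive=true,include-runtime=true)
/subsystem=infinispan/cache-container=localsite-cachecontainer:read-resource(recursive=true)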

Jboss7 : Error trying to resolve JNDI name "java:/XAConnectionFactory" : javax.naming.NameNotFoundException: XAConnectionFactory

I was upgrading an ATG application from 10.x to 11.x. I also upgraded JBoss EAP from 5.1 to 7.2. I have faced various JBoss issues, and many of them were fixed.
As of now we are getting the following error while starting the ATG fulfillment server, and it seems to be a JBoss JMS issue:
07:56:43,346 ERROR [nucleusNamespace.atg.dynamo.messaging.MessagingManager] (ServerService Thread Pool -- 81) PatchBay failed to startup properly : a Scheduler job will be registered to continue trying to bring PatchBay up : note this may result in further errors: atg.nucleus.ServiceException: An error occurred trying to resolve JNDI name "java:/XAConnectionFactory" for the "xa-topic-connection-factory-name" in provider "Hornet" in definition file "/atg/dynamo/messaging/dynamoMessagingSystem.xml": javax.naming.NameNotFoundException: XAConnectionFactory -- service jboss.naming.context.java.XAConnectionFactory
at atg.dms.patchbay.Provider.initializeTopicConnection(Provider.java:364)
at atg.dms.patchbay.PatchBayManager.createInputDestination(PatchBayManager.java:1811)
at atg.dms.patchbay.PatchBayManager.createInputPorts(PatchBayManager.java:1446)
at atg.dms.patchbay.PatchBayManager.createElementManager(PatchBayManager.java:1477)
at atg.dms.patchbay.PatchBayManager.createMessageFilters(PatchBayManager.java:1338)
In JBoss 5, there were the following configuration files:
ls jboss-eap-5.1/seam/bootstrap/deploy/messaging/
connection-factories-service.xml hsqldb-persistence-service.xml legacy-service.xml remoting-service.xml
destinations-service.xml jms-ds.xml messaging-service.xml
In JBoss 7.2 we have the following messaging config in the standalone.xml file:
<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
    <server name="default">
        <journal pool-files="10"/>
        <security-setting name="#">
            <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
        </security-setting>
        <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10"/>
        <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
        <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
            <param name="batch-delay" value="50"/>
        </http-connector>
        <in-vm-connector name="in-vm" server-id="0">
            <param name="buffer-pooling" value="false"/>
        </in-vm-connector>
        <http-acceptor name="http-acceptor" http-listener="default"/>
        <http-acceptor name="http-acceptor-throughput" http-listener="default">
            <param name="batch-delay" value="50"/>
            <param name="direct-deliver" value="false"/>
        </http-acceptor>
        <in-vm-acceptor name="in-vm" server-id="0">
            <param name="buffer-pooling" value="false"/>
        </in-vm-acceptor>
        <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
        <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
        <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
        <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
        <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
    </server>
</subsystem>
The following are the contents of atg/dynamo/messaging/dynamoMessagingSystem.xml in the code:
<?xml version="1.0" encoding="UTF-8"?>
<dynamo-message-system>
    <patchbay>
        <!-- JBoss Hornet provider -->
        <provider>
            <provider-name>Hornet</provider-name>
            <xa-topic-connection-factory-name>java:/XAConnectionFactory</xa-topic-connection-factory-name>
            <xa-queue-connection-factory-name>java:/XAConnectionFactory</xa-queue-connection-factory-name>
            <supports-transactions>true</supports-transactions>
            <supports-xa-transactions>true</supports-xa-transactions>
            <username>***</username>
            <password>***</password>
            <initial-context-factory>/abcd/common/services/HornetQ</initial-context-factory>
        </provider>
        <!-- Reporting order message source -->
        <message-source>
            <nucleus-name>/abcd/commerce/fulfillment/processor/SendReportingSubmitOrderMessage</nucleus-name>
            <output-port>
                <port-name>ReportingOrderSubmit</port-name>
                <output-destination>
                    <provider-name>local</provider-name>
                    <destination-name>localdms:/local/Fulfillment/LocalSubmitOrder</destination-name>
                    <destination-type>Topic</destination-type>
                </output-destination>
            </output-port>
        </message-source>
        <!-- Split order message source -->
        <message-source>
            <nucleus-name>/abcd/commerce/fulfillment/processor/SendSplitMessages/</nucleus-name>
            <output-port>
                <port-name>DEFAULT</port-name>
            </output-port>
            <output-port>
                <port-name>FulfillmentOrderSubmitPort</port-name>
                <output-destination>
                    <destination-name>patchbay:/Fulfillment/SubmitOrder</destination-name>
                    <destination-type>Topic</destination-type>
                </output-destination>
            </output-port>
        </message-source>
        <!-- Custom source/sink will take fulfillment failures and forward them, perhaps to multiple queues or none -->
        <message-source>
            <nucleus-name>/abcd/commerce/fulfillment/FailureMessageSink</nucleus-name>
            <output-port>
                <port-name>FulfillmentFailureNotifications</port-name>
                <output-destination>
                    <destination-name>patchbay:/Fulfillment/FulfillmentFailureNotifications</destination-name>
                    <destination-type>Topic</destination-type>
                </output-destination>
            </output-port>
        </message-source>
        <!-- Custom source/sink will take fulfillment failures and forward them, perhaps to multiple queues or none -->
        <message-sink>
            <nucleus-name>/abcd/commerce/fulfillment/FailureMessageSink</nucleus-name>
            <input-port>
                <port-name>FulfillmentError</port-name>
                <input-destination>
                    <destination-name>patchbay:/Fulfillment/ErrorNotification</destination-name>
                    <destination-type>Queue</destination-type>
                </input-destination>
            </input-port>
        </message-sink>
I'm new to both JBoss and ATG. Could anyone help me resolve this issue?
java:/XAConnectionFactory is not defined in WildFly. You need to configure WildFly to properly create and expose those connection factories like this:
<pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory java:/XAConnectionFactory" connectors="in-vm" transaction="xa"/>
Please note also that you are now on Apache ActiveMQ Artemis and no longer on HornetQ.
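The same entry can also be added through the CLI instead of editing standalone.xml by hand; a sketch, assuming standalone mode and the default resource names shown above:
/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:write-attribute(name=entries,value=["java:/JmsXA","java:jboss/DefaultJMSConnectionFactory","java:/XAConnectionFactory"])
reload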

KeyCloak HA on AWS EC2 with docker - cluster is up but login fails

We are trying to set up KeyCloak 1.9.3 with HA on AWS EC2 with Docker. The cluster comes up without errors; however, the login fails with the error below:
WARN [org.keycloak.events] (default task-10) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=172.30.200.171, error=invalid_code
We have followed this post (http://lists.jboss.org/pipermail/keycloak-user/2016-February/004940.html) but used S3_PING instead of JDBC_PING.
It seems that the nodes detect each other:
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,ee,6dbce1e2a05a) ISPN000094: Received new cluster view for channel keycloak: [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
We suspect that the nodes don't communicate with each other. When we queried the JBoss MBean "jboss.as.expr:subsystem=jgroups,channel=ee", the result for the first node was:
jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee receivedMessages = 0
jgroups,channel=ee sentMessages = 0
And for the second node:
jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee receivedMessages = 0
jgroups,channel=ee sentMessages = 5
We also verified that the TCP ports 57600 and 7600 are open.
Any idea what might cause this?
Here is the relevant standalone-ha.xml configuration, and below that is the startup command:
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
    <channels default="ee">
        <channel name="ee" stack="tcp"/>
    </channels>
    <stacks>
        <stack name="udp">
            <transport type="UDP" socket-binding="jgroups-udp"/>
            <protocol type="PING"/>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="UFC"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="external_addr">200.129.4.189</property>
            </transport>
            <protocol type="S3_PING">
                <property name="access_key">AAAAAAAAAAAAAA</property>
                <property name="secret_access_key">BBBBBBBBBBBBBB</property>
                <property name="location">CCCCCCCCCCCCCCCCCCCC</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
                <property name="external_addr">200.129.4.189</property>
            </protocol>
            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
</subsystem>
<socket-binding name="jgroups-tcp" interface="public" port="7600"/>
<socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>
And we start the server using the command below (INTERNAL_HOST_IP is the container's internal IP address):
standalone.sh -c=standalone-ha.xml -b=$INTERNAL_HOST_IP -bmanagement=$INTERNAL_HOST_IP -bprivate=$INTERNAL_HOST_IP
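For what it's worth, the per-channel counters we pulled from JMX can also be read through the WildFly management CLI; a sketch (include-runtime is needed to see the metrics):
/subsystem=jgroups/channel=ee:read-resource(include-runtime=true,recursive=true)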
Any help will be appreciated.
Apparently there was no problem with the cluster setup; the issue was with the database. We had accidentally configured an in-memory DB for each instance instead of our shared DB.
You have to enable stickiness on the AWS load balancer to get a successful login.

jboss with s3ping in ec2

I am trying to get the S3_PING discovery method working in JBoss AS 7. I have deployed a sample web app which is clustered. As of now I have a single node, but in the near future I will be adding more nodes to the cluster.
I have modified the standalone-ha.xml file with the required S3 credentials and bucket details.
<stack name="s3ping">
    <transport type="TCP" socket-binding="jgroups-tcp" diagnostics-socket-binding="jgroups-diagnostics"/>
    <protocol type="S3_PING">
        <property name="access_key">XXXXXXXXXXXXXXX</property>
        <property name="secret_access_key">XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</property>
        <property name="prefix">MyjbossBucket</property>
        <property name="timeout">6000</property>
    </protocol>
    <protocol type="MERGE2"/>
    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="BARRIER"/>
    <protocol type="pbcast.NAKACK"/>
    <protocol type="UNICAST2"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="UFC"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>
And I start this JBoss instance using the command below:
./standalone.sh -b 10.1.137.250 -bmanagement=10.1.137.250 -c standalone-ha.xml -Djboss.default.jgroups.stack=s3ping -Djgroups.bind.address=10.1.137.250 -Djboss.node.name=node1
It starts successfully, but I am unable to see any node information file created inside the S3 bucket. Could you please guide me through the correct way to get this done, or am I making a mistake in the configs?
Have you changed the default-stack?
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="s3ping">
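Equivalently, the default stack can be switched with the management CLI and the server reloaded; a sketch against the AS 7 jgroups subsystem:
/subsystem=jgroups:write-attribute(name=default-stack,value=s3ping)
:reload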