ParseError using JGroups for HornetQ in WildFly 9 - wildfly

Currently I am using WildFly 9. We are trying to configure TCP-based load sharing with HornetQ. We use JGroups for dynamic discovery, and we have added the following settings to our standalone configuration file:
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<jgroups-file>jgroups-stacks.xml</jgroups-file>
<jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
<broadcast-period>2000</broadcast-period>
<connector-ref connector-name="netty-connector"/>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<jgroups-file>jgroups-stacks.xml</jgroups-file>
<jgroups-channel>hornetq_broadcast_channel</jgroups-channel>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="tcp-based-cluster-node1-to-node2">
<address>jms</address>
<connector-ref>netty</connector-ref>
<retry-interval>500</retry-interval>
<use-duplicate-detection>true</use-duplicate-detection>
<forward-when-no-consumers>true</forward-when-no-consumers>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
I have done the configuration from this documentation, but I still face an issue with the jgroups-file element. It gives the following error:
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[393,21]
Message: WFLYCTL0198: Unexpected element '{urn:jboss:domain:messaging:3.0}jgroups-file' encountered
at org.jboss.as.controller.parsing.ParseUtils.unexpectedElement(ParseUtils.java:89)
at org.jboss.as.messaging.MessagingSubsystemParser.handleUnknownBroadcastGroupAttribute(MessagingSubsystemParser.java:739)
at org.jboss.as.messaging.Messaging13SubsystemParser.handleUnknownBroadcastGroupAttribute(Messaging13SubsystemParser.java:182)
The issue here is that the file is not found at server runtime. According to the documentation we have to put it in a resources folder, but the HornetQ server inside WildFly has no resources folder, so where can we place the file so that load can be shared across multiple queues?
So my question is: are we on the right track, or do we need to do some other configuration for this?

The documentation you're looking at is for standalone HornetQ, not for WildFly. WildFly has its own XML format which is independent of HornetQ's, although the two are very similar in many places in early versions of WildFly. However, WildFly doesn't support the jgroups-file configuration element you're attempting to use; only standalone HornetQ supports that.
You should look at the standalone-full-ha.xml file that ships with WildFly 9. It has an example of JGroups configuration, e.g.:
...
<subsystem xmlns="urn:jboss:domain:jgroups:3.0">
<channels default="ee">
<channel name="ee"/>
</channels>
<stacks default="udp">
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="MPING" socket-binding="jgroups-mping"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
</stacks>
</subsystem>
...
<subsystem xmlns="urn:jboss:domain:messaging:3.0">
<hornetq-server>
...
<broadcast-groups>
<broadcast-group name="bg-group1">
<jgroups-stack>udp</jgroups-stack>
<jgroups-channel>hq-cluster</jgroups-channel>
<connector-ref>http-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<jgroups-stack>udp</jgroups-stack>
<jgroups-channel>hq-cluster</jgroups-channel>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>http-connector</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
...
</hornetq-server>
</subsystem>
...
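If you prefer not to edit the XML by hand, the same attributes can also be set through the JBoss CLI. A rough sketch, assuming the hornetq-server is named "default", reusing the group names from the example above, and pointing the groups at the tcp stack since you want TCP-based clustering:
# connect first with bin/jboss-cli.sh --connect
/subsystem=messaging/hornetq-server=default/broadcast-group=bg-group1:write-attribute(name=jgroups-stack,value=tcp)
/subsystem=messaging/hornetq-server=default/broadcast-group=bg-group1:write-attribute(name=jgroups-channel,value=hq-cluster)
/subsystem=messaging/hornetq-server=default/discovery-group=dg-group1:write-attribute(name=jgroups-stack,value=tcp)
/subsystem=messaging/hornetq-server=default/discovery-group=dg-group1:write-attribute(name=jgroups-channel,value=hq-cluster)
# the new attributes only take effect after a reload
:reload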
It's worth noting that WildFly 9 was released in July of 2015, almost 7 years ago now. The current version is 26.1.0.Final. I strongly encourage you to upgrade to a more recent version.
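If you do upgrade, keep in mind that newer WildFly versions replace the legacy messaging (HornetQ) subsystem with messaging-activemq (Artemis), so this configuration will change again. As far as I recall the legacy subsystem offers a CLI migrate operation when the server is started in admin-only mode; a rough sketch (check the migration guide for your target version, you may need to step through an intermediate release):
# start in admin-only mode, then connect with the CLI
bin/standalone.sh --admin-only
bin/jboss-cli.sh --connect
# preview the changes, then perform the migration
/subsystem=messaging:describe-migration
/subsystem=messaging:migrate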

Related

Timeout message logging, and retry storm, in infinispan cache after upgrading from Wildfly 20 to 23.0.1

We just upgraded from WildFly 20 to 23 and are now seeing issues with Infinispan erroring out and getting into log-and-retry loops. The issue happens hundreds of times per second after it starts, and only stops when one node of the cluster is turned off.
We get the error below, which retries indefinitely, using about 30 Mb/s of bandwidth between the servers when normally it is ~10-30 Kb/s. The confusing part of the error is that node 1 reports an exception received from node 2, while node 2's exception is a timeout waiting for node 1. I have tried moving from the udp to the tcp stack and am still seeing the same issue (it is a 2-node cluster).
I increased the remote timeout from the default of 10 seconds to 30 and almost immediately saw the same error.
Is there a new setting needed in WildFly 23, is there some other mistake on my side, or am I hitting a new bug?
Here is the jgroups config:
<stack name="udp" statistics-enabled="true">
<transport type="UDP" shared="false" socket-binding="jgroups-udp" statistics-enabled="true">
<property name="log_discard_msgs">
false
</property>
<property name="port_range">
50
</property>
</transport>
<protocol type="PING" module="org.jgroups" statistics-enabled="true"/>
<protocol type="MERGE3" module="org.jgroups" statistics-enabled="true"/>
<socket-protocol type="FD_SOCK" module="org.jgroups" socket-binding="jgroups-udp-fd" statistics-enabled="true"/>
<protocol type="FD_ALL" module="org.jgroups" statistics-enabled="true"/>
<protocol type="VERIFY_SUSPECT" module="org.jgroups" statistics-enabled="true"/>
<protocol type="pbcast.NAKACK2" module="org.jgroups" statistics-enabled="true"/>
<protocol type="UNICAST3" module="org.jgroups" statistics-enabled="true"/>
<protocol type="pbcast.STABLE" module="org.jgroups" statistics-enabled="true"/>
<protocol type="pbcast.GMS" module="org.jgroups" statistics-enabled="true"/>
<protocol type="UFC" module="org.jgroups" statistics-enabled="true"/>
<protocol type="MFC" module="org.jgroups" statistics-enabled="true"/>
<protocol type="FRAG3"/>
</stack>
and the Infinispan config, followed by the error we see:
<cache-container name="localsite-cachecontainer" default-cache="epi-localsite-default" statistics-enabled="true">
<transport lock-timeout="60000" channel="localsite-appCache"/>
<replicated-cache name="bServiceCache" statistics-enabled="true">
<locking isolation="NONE"/>
<transaction mode="NONE"/>
<expiration lifespan="1800000"/>
</replicated-cache>
22:47:52,823 WARN [org.infinispan.CLUSTER] (thread-223,application-localsite,node1) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='application-bServiceCache',
command=PutKeyValueCommand{key=SimpleKey [XXXX,2021-05-06,1412.0,75.0,null], value=[YYYY[pp=4 Pay,PaymentDue=2021-05-28], ppAvaliablity[firstPaymentDue=2021-05-28], ppAvaliablity[firstPaymentDue=2021-05-28]], flags=[], commandInvocationId=CommandInvocation:node2:537,
putIfAbsent=true, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedExpirableMetadata{version=null, lifespan=1800000, maxIdle=-1}, successful=true, topologyId=18}}: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from node2, see cause for remote stack trace
at org.infinispan.remoting.transport.ResponseCollectors.wrapRemoteException(ResponseCollectors.java:25)
at org.infinispan.remoting.transport.impl.MapResponseCollector.addException(MapResponseCollector.java:64)
at org.infinispan.remoting.transport.impl.MapResponseCollector$IgnoreLeavers.addException(MapResponseCollector.java:102)
at org.infinispan.remoting.transport.ValidResponseCollector.addResponse(ValidResponseCollector.java:29)
at org.infinispan.remoting.transport.impl.MultiTargetRequest.onResponse(MultiTargetRequest.java:93)
at org.infinispan.remoting.transport.impl.RequestRepository.addResponse(RequestRepository.java:52)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1402)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1305)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:131)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1445)
at org.jgroups.JChannel.up(JChannel.java:784)
at org.jgroups.fork.ForkProtocolStack.up(ForkProtocolStack.java:135)
at org.jgroups.stack.Protocol.up(Protocol.java:309)
at org.jgroups.protocols.FORK.up(FORK.java:142)
at org.jgroups.protocols.FRAG3.up(FRAG3.java:165)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:343)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:876)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:243)
at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1049)
at org.jgroups.protocols.UNICAST3.addMessage(UNICAST3.java:772)
at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:753)
at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:405)
at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:592)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
at org.jgroups.protocols.FD.up(FD.java:227)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:254)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:281)
at org.jgroups.protocols.Discovery.up(Discovery.java:300)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1396)
at org.jgroups.util.SubmitToThreadPool$SingleMessageHandler.run(SubmitToThreadPool.java:87)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.jboss.as.clustering.context.ContextReferenceExecutor.execute(ContextReferenceExecutor.java:49)
at org.jboss.as.clustering.context.ContextualExecutor$1.run(ContextualExecutor.java:70)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 4485 from node1
at sun.reflect.GeneratedConstructorAccessor551.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.infinispan.marshall.exts.ThrowableExternalizer.readGenericThrowable(ThrowableExternalizer.java:282)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:259)
at org.infinispan.marshall.exts.ThrowableExternalizer.readObject(ThrowableExternalizer.java:42)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.BytesObjectInput.readObject(BytesObjectInput.java:32)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:49)
at org.infinispan.remoting.responses.ExceptionResponse$Externalizer.readObject(ExceptionResponse.java:41)
at org.infinispan.marshall.core.GlobalMarshaller.readWithExternalizer(GlobalMarshaller.java:728)
at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:709)
at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192)
at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processResponse(JGroupsTransport.java:1394)
... 28 more
Can you also attach the configuration for your "localsite-appCache" channel?
Can you also attach a code snippet that demonstrates how you reference the cache in your application?
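For context, in WildFly such a channel is normally declared in the jgroups subsystem and referenced from the cache-container's <transport>. It usually looks roughly like this (the stack and cluster values below are placeholders, not taken from your configuration):
<channels default="ee">
<channel name="ee" stack="udp" cluster="ejb"/>
<!-- the channel referenced by <transport channel="localsite-appCache"/> -->
<channel name="localsite-appCache" stack="udp"/>
</channels>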

KeyCloak HA on AWS EC2 with docker - cluster is up but login fails

We are trying to set up Keycloak 1.9.3 with HA on AWS EC2 with Docker. The cluster comes up without errors; however, the login fails with the error below:
WARN [org.keycloak.events] (default task-10) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=172.30.200.171, error=invalid_code
We have followed this post (http://lists.jboss.org/pipermail/keycloak-user/2016-February/004940.html) but used S3_PING instead of JDBC_PING.
It seems that the nodes detect each other:
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,ee,6dbce1e2a05a) ISPN000094: Received new cluster view for channel keycloak: [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
We suspect that the nodes don't communicate with each other. When we queried the JBoss MBean "jboss.as.expr:subsystem=jgroups,channel=ee", the result for the first node was:
jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee receivedMessages = 0
jgroups,channel=ee sentMessages = 0
And for the second node:
jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee receivedMessages = 0
jgroups,channel=ee sentMessages = 5
We also verified that the TCP ports 57600 and 7600 are open.
Any idea what might cause it?
Here is the relevant standalone-ha.xml configuration, and below it is the startup command:
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
<channels default="ee">
<channel name="ee" stack="tcp"/>
</channels>
<stacks>
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp">
<property name="external_addr">200.129.4.189</property>
</transport>
<protocol type="S3_PING">
<property name="access_key">AAAAAAAAAAAAAA</property>
<property name="secret_access_key">BBBBBBBBBBBBBB</property>
<property name="location">CCCCCCCCCCCCCCCCCCCC</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
<property name="external_addr">200.129.4.189</property>
</protocol>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
</stacks>
</subsystem>
<socket-binding name="jgroups-tcp" interface="public" port="7600"/>
<socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>
And we start the server using the command below (INTERNAL_HOST_IP is the container's internal IP address):
standalone.sh -c=standalone-ha.xml -b=$INTERNAL_HOST_IP -bmanagement=$INTERNAL_HOST_IP -bprivate=$INTERNAL_HOST_IP
Any help will be appreciated.
Apparently there was no problem with the setup; the issue was with the DB: we had accidentally configured an in-memory DB for each instance instead of our shared DB.
You have to enable stickiness on the AWS load balancer to get a successful login.
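For reference, one way to turn on sticky sessions for a Classic ELB with the AWS CLI looks roughly like this (load balancer name, policy name and port are placeholders; an Application Load Balancer uses target-group stickiness attributes instead):
# create a duration-based cookie stickiness policy (1 hour) and attach it to the HTTPS listener
aws elb create-lb-cookie-stickiness-policy --load-balancer-name my-keycloak-elb --policy-name keycloak-sticky --cookie-expiration-period 3600
aws elb set-load-balancer-policies-of-listener --load-balancer-name my-keycloak-elb --load-balancer-port 443 --policy-names keycloak-sticky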

WildFly 10, JGroups and EC2

I'm trying to run WildFly 10 with the HA profile in EC2, but am getting the following errors:
05:03:28,308 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=jgroups/stack=tcp/protocol=FD_SOCK' are not available:
[Server:server-one] org.wildfly.network.socket-binding.jgroups-tcp-fd; There are no known registration points which can provide this capability.
[Server:server-one] 05:03:28,310 ERROR [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0362: Capabilities required by resource '/subsystem=jgroups/stack=tcp/transport=TCP' are not available:
[Server:server-one] org.wildfly.network.socket-binding.jgroups-tcp; There are no known registration points which can provide this capability.
My JGroups config looks like this:
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
<channels default="ee">
<channel name="ee" stack="tcp"/>
</channels>
<stacks>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="S3_PING">
<property name="access_key">accesskey</property>
<property name="secret_access_key">secretkey</property>
<property name="location">bucketname</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2">
<property name="use_mcast_xmit">false</property>
<property name="use_mcast_xmit_req">false</property>
</protocol>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
</stacks>
</subsystem>
Does anyone know what "There are no known registration points which can provide this capability" means?
Turns out that I had mixed up my socket bindings. I was using the ha profile with the full-ha-sockets socket binding group, like this:
<server-groups>
<server-group name="main-server-group" profile="ha">
<jvm name="default">
<heap size="64m" max-size="512m"/>
</jvm>
<socket-binding-group ref="full-ha-sockets"/> <!-- THIS IS BROKEN -->
<deployments>
<deployment name="activemq-rar" runtime-name="activemq-rar"/>
<deployment name="hawtio.war" runtime-name="hawtio.war"/>
</deployments>
</server-group>
<server-group name="other-server-group" profile="full-ha">
<jvm name="default">
<heap size="64m" max-size="512m"/>
</jvm>
<socket-binding-group ref="full-ha-sockets"/>
</server-group>
</server-groups>
Once I had fixed the socket-bindings, the errors went away:
<server-groups>
<server-group name="main-server-group" profile="ha">
<jvm name="default">
<heap size="64m" max-size="512m"/>
</jvm>
<socket-binding-group ref="ha-sockets"/> <!-- THIS IS FIXED -->
<deployments>
<deployment name="activemq-rar" runtime-name="activemq-rar"/>
<deployment name="hawtio.war" runtime-name="hawtio.war"/>
</deployments>
</server-group>
<server-group name="other-server-group" profile="full-ha">
<jvm name="default">
<heap size="64m" max-size="512m"/>
</jvm>
<socket-binding-group ref="full-ha-sockets"/>
</server-group>
</server-groups>
I had a similar problem too, but instead of the problem being in the <server-group/>, mine was in my host.
I had created an initial host using the full-ha profile and full-ha-sockets in an already existing server group. After that, I created a new server group using the ha profile and ha-sockets and moved this host to the new server group.
The problem? My host was using the ha profile but with full-ha-sockets instead of ha-sockets. I had set up remote EJB using only ha-sockets and got this same error when trying to call the remote method on the EJB over the remote outbound connection:
There are no known registration points which can provide this capability
I had assumed my host was using ha-sockets. Once I switched the host to ha-sockets, the error was gone. I lost a lot of time discovering this mistake.
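For what it's worth, a quick way to check this from the CLI in domain mode (the server-group, host and server names here are placeholders):
# which socket-binding-group does the server group reference?
/server-group=main-server-group:read-attribute(name=socket-binding-group)
# does the individual server config override it? (undefined means it inherits from the server group)
/host=master/server-config=server-one:read-attribute(name=socket-binding-group)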

Infinispan server ignores jgroups bind_addr

I'm trying to get a simple Infinispan Server cluster with two nodes working. The problem is that Infinispan ignores my bind_addr JGroups setting in the clustered.xml file. I can specify this setting using -Djgroups.bind_addr=GLOBAL and it works, but that isn't convenient. I start the cluster using the bin/clustered.sh script, use the TCP protocol stack, and use MPING for node autodiscovery.
The part of the configuration file standalone/configuration/clustered.xml related to JGroups:
<subsystem xmlns="urn:jboss:domain:jgroups:1.2" default-stack="${jboss.default.jgroups.stack:tcp}">
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="pbcast.NAKACK"/>
<protocol type="UNICAST2"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="MPING" socket-binding="jgroups-mping">
<property name="bind_addr">GLOBAL</property>
</protocol>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK">
<property name="use_mcast_xmit">false</property>
</protocol>
<protocol type="UNICAST2"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
</subsystem>
I also tried the -Djgroups.ignore.bind_addr=true option to stop Infinispan from taking the bind_addr setting from system properties (whoever might set them) instead of from the XML -- it didn't help.
Infinispan version 6.0.
Update: socket-binding-group and interfaces elements:
<interfaces>
<interface name="management">
<!-- <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> -->
<any-address/>
</interface>
<interface name="public">
<!-- <inet-address value="${jboss.bind.address:127.0.0.1}"/> -->
<any-address/>
</interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
<socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
<socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
<socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
<socket-binding name="ajp" port="8009"/>
<socket-binding name="hotrod" port="11222"/>
<socket-binding name="http" port="8080"/>
<socket-binding name="https" port="8443"/>
<socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>
<socket-binding name="jgroups-tcp" port="7600"/>
<socket-binding name="jgroups-tcp-fd" port="57600"/>
<socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45688"/>
<socket-binding name="jgroups-udp-fd" port="54200"/>
<socket-binding name="memcached" port="11211"/>
<socket-binding name="modcluster" port="0" multicast-address="224.0.1.115" multicast-port="23364"/>
<socket-binding name="remoting" port="4447"/>
<socket-binding name="txn-recovery-environment" port="4712"/>
<socket-binding name="txn-status-manager" port="4713"/>
<socket-binding name="websocket" port="8181"/>
</socket-binding-group>
</server>
Any help would be greatly appreciated!
I think you have to define the interface in a <socket-binding-group> or <interfaces> element, so either on jgroups-udp or jgroups-tcp. Those are defined at the end of the config, and you can try whether JGroups variable substitution works there, e.g. "${my.interface:GLOBAL}".
I have completely removed the socket-binding attributes from the JGroups settings and left only the bind_addr properties, and now it works. I'm very curious what the difference between them is.
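For reference, based on that description the stack ends up looking roughly like this (a sketch, not the poster's exact configuration; GLOBAL is the JGroups keyword already used in the question):
<stack name="tcp">
<!-- no socket-binding attribute on the transport; only the bind_addr property -->
<transport type="TCP">
<property name="bind_addr">GLOBAL</property>
</transport>
<protocol type="MPING">
<property name="bind_addr">GLOBAL</property>
</protocol>
...
</stack>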
I had the same issue and found the solution in the JGroups documentation:
https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/ch19s07s07.html
Running with -Djgroups.ignore.bind_addr=true forces JGroups to ignore the jgroups.bind_addr system property and use the bind_addr specified in the XML instead.
The document says:
"This setting tells JGroups to ignore the jgroups.bind_addr system property, and instead use whatever is specified in XML"
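With the launcher from the question, that would be something like:
# keep bind_addr in clustered.xml and ignore any jgroups.bind_addr system property
bin/clustered.sh -Djgroups.ignore.bind_addr=true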

jboss with s3ping in ec2

I'm trying to get the S3_PING discovery method working in JBoss AS 7. I have deployed a sample web app which is clustered. As of now I have a single node, but in the near future I will be adding more nodes to the cluster.
I have modified the standalone-ha.xml file with the required S3 credentials and bucket details.
<stack name="s3ping">
<transport type="TCP" socket-binding="jgroups-tcp" diagnostics-socket-binding="jgroups-diagnostics"/>
<protocol type="S3_PING">
<property name="access_key">
XXXXXXXXXXXXXXX
</property>
<property name="secret_access_key">
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
</property>
<property name="prefix">
MyjbossBucket
</property>
<property name="timeout">
6000
</property>
</protocol>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="BARRIER"/>
<protocol type="pbcast.NAKACK"/>
<protocol type="UNICAST2"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
And I start this JBoss instance using the command below:
./standalone.sh -b 10.1.137.250 -bmanagement=10.1.137.250 -c standalone-ha.xml -Djboss.default.jgroups.stack=s3ping -Djgroups.bind.address=10.1.137.250 -Djboss.node.name=node1
It starts successfully, but I am unable to see any node information file created inside the S3 bucket. Please guide me through the correct method to get this done, or am I making some mistake in the configs?
Regards
Have you changed the default-stack?
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="s3ping">
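If you'd rather not edit the XML or pass -Djboss.default.jgroups.stack, the attribute can also be changed through the CLI on a standalone server, roughly:
# point the jgroups subsystem at the s3ping stack, then reload
/subsystem=jgroups:write-attribute(name=default-stack,value=s3ping)
:reload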