Infinispan server ignores jgroups bind_addr - jboss

I'm trying to get a simple Infinispan Server cluster with two nodes to work. The problem is that Infinispan ignores my bind_addr JGroups setting in the clustered.xml file. I can specify this setting using -Djgroups.bind_addr=GLOBAL -- that works, but it isn't handy. I start the cluster using the bin/clustered.sh script, and I use the TCP protocol stack with MPING for node auto-discovery.
The JGroups-related part of the configuration file standalone/configuration/clustered.xml:
<subsystem xmlns="urn:jboss:domain:jgroups:1.2" default-stack="${jboss.default.jgroups.stack:tcp}">
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="pbcast.NAKACK"/>
<protocol type="UNICAST2"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="MPING" socket-binding="jgroups-mping">
<property name="bind_addr">GLOBAL</property>
</protocol>
<protocol type="MERGE2"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK">
<property name="use_mcast_xmit">false</property>
</protocol>
<protocol type="UNICAST2"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
</subsystem>
I also tried the -Djgroups.ignore.bind_addr=true option to prevent Infinispan from deriving the bind_addr setting from system properties (whoever might set them) instead of from the XML -- it didn't help.
Infinispan version 6.0.
Update: socket-binding-group and interfaces elements:
<interfaces>
    <interface name="management">
        <!-- <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> -->
        <any-address/>
    </interface>
    <interface name="public">
        <!-- <inet-address value="${jboss.bind.address:127.0.0.1}"/> -->
        <any-address/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
    <socket-binding name="ajp" port="8009"/>
    <socket-binding name="hotrod" port="11222"/>
    <socket-binding name="http" port="8080"/>
    <socket-binding name="https" port="8443"/>
    <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>
    <socket-binding name="jgroups-tcp" port="7600"/>
    <socket-binding name="jgroups-tcp-fd" port="57600"/>
    <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45688"/>
    <socket-binding name="jgroups-udp-fd" port="54200"/>
    <socket-binding name="memcached" port="11211"/>
    <socket-binding name="modcluster" port="0" multicast-address="224.0.1.115" multicast-port="23364"/>
    <socket-binding name="remoting" port="4447"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <socket-binding name="websocket" port="8181"/>
</socket-binding-group>
</server>
Any help would be greatly appreciated!

I think you have to define the interface via the <interfaces> and <socket-binding-group> elements, i.e. on the jgroups-udp or jgroups-tcp binding. Those are defined at the end of the config, and you can check whether JGroups-style variable substitution works there, e.g. "${my.interface:GLOBAL}".
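A sketch of what that could look like, assuming a dedicated interface named "clustering" and a placeholder address (both are made up for illustration):
<interfaces>
    <interface name="clustering">
        <inet-address value="${my.jgroups.addr:192.168.1.10}"/>
    </interface>
</interfaces>
...
<socket-binding name="jgroups-tcp" interface="clustering" port="7600"/>
<socket-binding name="jgroups-mping" interface="clustering" port="0" multicast-address="${jboss.default.multicast.address:234.99.54.14}" multicast-port="45700"/>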

I have completely removed the socket-binding attributes from the JGroups settings and left only the bind_addr properties -- and now it works. I'm very curious what the difference between them is.
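To illustrate the change (the address and ports below are placeholders, not my real ones), the stack ends up looking something like this:
<stack name="tcp">
    <transport type="TCP">
        <property name="bind_addr">192.168.1.10</property>
        <property name="bind_port">7600</property>
    </transport>
    <protocol type="MPING">
        <property name="bind_addr">192.168.1.10</property>
        <property name="mcast_addr">234.99.54.14</property>
        <property name="mcast_port">45700</property>
    </protocol>
    ...
</stack>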

I had the same issue and found the solution in the JGroups documentation:
https://docs.jboss.org/jbossas/docs/Server_Configuration_Guide/4/html/ch19s07s07.html
Running with -Djgroups.ignore.bind_addr=true forces JGroups to ignore the jgroups.bind_addr system property and use the bind_addr from the XML instead.
The documentation says:
"This setting tells JGroups to ignore the jgroups.bind_addr system property, and instead use whatever is specified in XML"
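In the clustered.xml subsystem format, "specified in XML" corresponds to a bind_addr property on the transport, for example (the address here is just a placeholder):
<transport type="TCP" socket-binding="jgroups-tcp">
    <property name="bind_addr">192.168.1.10</property>
</transport>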

Related

How can I get the port in my thorntail application from a java class?

How can I get, from a Java class, the port on which my application is running?
For example, with JBoss, in the standalone.xml I used it is:
standalone.xml
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
    <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
    <socket-binding name="http" port="${jboss.http.port:8080}"/>
    <socket-binding name="https" port="${jboss.https.port:8443}"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="localhost" port="25"/>
    </outbound-socket-binding>
</socket-binding-group>
Java class:
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;
// read the "port" attribute of the http socket binding from the platform MBean server
Integer port = (Integer) ManagementFactory.getPlatformMBeanServer()
        .getAttribute(new ObjectName("jboss.as:socket-binding-group=standard-sockets,socket-binding=http"),
                      "port");

JBoss EAP is up and running but isn't accessible through web browser

I am running a deployed application on localhost and am accessing it from the browser, but I am not able to reach it even though port 8080 is listening.
Sometimes it shows "Refused to connect" or "Webpage not found".
This is my standalone.xml:
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:0.0.0.0}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:0.0.0.0}"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="${jboss.bind.address.unsecure:0.0.0.0}"/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
    <socket-binding name="ajp" port="8009"/>
    <socket-binding name="http" port="8080"/>
    <socket-binding name="https" port="8443"/>
    <socket-binding name="jacorb" interface="unsecure" port="3528"/>
    <socket-binding name="jacorb-ssl" interface="unsecure" port="3529"/>
    <socket-binding name="messaging" port="5445"/>
    <socket-binding name="messaging-group" port="0" multicast-address="${jboss.messaging.group.address:231.7.7.7}" multicast-port="${jboss.messaging.group.port:9876}"/>
    <socket-binding name="messaging-throughput" port="5455"/>
    <socket-binding name="remoting" port="4447"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="localhost" port="25"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="remote-ejb-connection1">
        <remote-destination host="localhost" port="4689"/>
    </outbound-socket-binding>
</socket-binding-group>
<deployments>
    <deployment name="ace-ear-1.0.11-SNAPSHOT.ear" runtime-name="ace-ear-1.0.11-SNAPSHOT.ear">
        <content sha1="213bc2a0282e8488d75711d9c49fbdb2c607e84b"/>
    </deployment>
    <deployment name="ace-admin-ear-1.0.11-SNAPSHOT-LOCAL.ear" runtime-name="ace-admin-ear-1.0.11-SNAPSHOT-LOCAL.ear">
        <content sha1="d609bf2cc5284b06229579c64eed2570ebc3b7ca"/>
    </deployment>
</deployments>
As you have mentioned that you sometimes get "Refused to connect" or "Webpage not found", can you clarify whether it works most of the time and fails only occasionally, or whether it fails every time?
Can you also check in your server log whether port 8080 is listening on your configured IP? For example:
INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0006: Undertow HTTP listener http listening on 127.0.0.1:8080
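If the log shows the listener bound only to 127.0.0.1 rather than 0.0.0.0 or the machine's address, then something (for example a -b argument setting jboss.bind.address at startup) is overriding the 0.0.0.0 default in your interface definition. One way to rule that out is to pin the public interface to a fixed address instead of the property expression (the address below is just a placeholder):
<interface name="public">
    <inet-address value="192.168.1.20"/>
</interface>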

WildFly 11 - JGroups initialization delay

We have a web-based application running on WildFly 11 (recently migrated from WildFly 9), and we are facing this weird issue when all the nodes in the cluster are started up.
Here is how our application is designed to log in and show the home page:
1. Entering the URL of our application brings up the login page.
2. The user provides valid credentials and clicks on login.
3. A backend servlet validates these credentials and, on successful login, the browser sends a redirect request (HTTP 302) to the home page URL.
So here is the problem: the very first user trying to log in to the application (i.e. steps 1-3 above) is redirected back to the login page, even though the user entered valid credentials. In the back end, our home page servlet cannot find the session that was just created during the login process, so the user is sent back to the login page. Any login attempt after this works fine.
We tried the same steps (steps 1-3 above) through VPN (which is a slower network) and did not see the issue occurring there. We also did a couple of other tests and found that giving the redirect on the very first login a little more time makes it work, so we concluded that it could be a JGroups initialization issue, as this happens only for the very first login attempt.
<channels default="ee">
    <channel name="ee" stack="tcp" cluster="repl"/>
</channels>
.
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="TCPPING">
<property name="initial_hosts">
10.0.99.11[7600],10.0.99.12[7600]
</property>
<property name="num_initial_members">
2
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
.
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="10.0.99.12"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
    <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
    <socket-binding name="http" port="${jboss.http.port:8080}"/>
    <socket-binding name="https" port="${jboss.https.port:8443}"/>
    <socket-binding name="jgroups-mping" interface="public" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
    <socket-binding name="jgroups-tcp" interface="public" port="7600"/>
    <socket-binding name="jgroups-udp" interface="public" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
    <socket-binding name="modcluster" port="0" multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="localhost" port="25"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="10_0_99_11">
        <remote-destination host="10.0.99.11" port="6666"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="10_0_99_12">
        <remote-destination host="10.0.99.12" port="6666"/>
    </outbound-socket-binding>
</socket-binding-group>
Please suggest ideas on how we can fix this, or enlighten me if I'm doing something wrong here.

KeyCloak HA on AWS EC2 with docker - cluster is up but login fails

We are trying to set up KeyCloak 1.9.3 with HA on AWS EC2 with Docker. The cluster comes up without errors, however the login fails with the error below:
WARN [org.keycloak.events] (default task-10) type=LOGIN_ERROR, realmId=master, clientId=null, userId=null, ipAddress=172.30.200.171, error=invalid_code
We followed this post (http://lists.jboss.org/pipermail/keycloak-user/2016-February/004940.html) but used S3_PING instead of JDBC_PING.
It seems that the nodes detect each other:
INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,ee,6dbce1e2a05a) ISPN000094: Received new cluster view for channel keycloak: [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
We suspect that the nodes don't communicate with each other. When we queried the JBoss MBean "jboss.as.expr:subsystem=jgroups,channel=ee", the result for the first node was:
jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee receivedMessages = 0
jgroups,channel=ee sentMessages = 0
And for the second node:
jgroups,channel=ee = [6dbce1e2a05a|1] (2) [6dbce1e2a05a, 75f2b2e98cfd]
jgroups,channel=ee receivedMessages = 0
jgroups,channel=ee sentMessages = 5
We also verified that the TCP ports 57600 and 7600 are open.
Any idea what might cause it?
Here is the relevant standalone-ha.xml configuration, and below that the startup command:
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
<channels default="ee">
<channel name="ee" stack="tcp"/>
</channels>
<stacks>
<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp">
<property name="external_addr">200.129.4.189</property>
</transport>
<protocol type="S3_PING">
<property name="access_key">AAAAAAAAAAAAAA</property>
<property name="secret_access_key">BBBBBBBBBBBBBB</property>
<property name="location">CCCCCCCCCCCCCCCCCCCC</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd">
<property name="external_addr">200.129.4.189</property>
</protocol>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
</stack>
</stacks>
</subsystem>
<socket-binding name="jgroups-tcp" interface="public" port="7600"/>
<socket-binding name="jgroups-tcp-fd" interface="public" port="57600"/>
And we start the server using the command below (INTERNAL_HOST_IP is the container's internal IP address):
standalone.sh -c=standalone-ha.xml -b=$INTERNAL_HOST_IP -bmanagement=$INTERNAL_HOST_IP -bprivate=$INTERNAL_HOST_IP
Any help will be appreciated.
Apparently there was no problem with the setup; the issue was with the DB: we had accidentally configured an in-memory DB for each instance instead of our shared DB.
You have to enable stickiness on the AWS load balancer to get a successful login.

Integration test with Arquillian doesn't work on remote JBoss EAP 6 on Linux

When I try to execute an integration test with Arquillian against a remote JBoss EAP 6 on Linux, it returns:
org.jboss.arquillian.container.spi.client.container.DeploymentException: Could not deploy to container: Authentication failed: all available authentication mechanisms failed
On Windows it works fine, both against localhost and against another machine.
This is my configuration:
arquillian.xml:
<defaultProtocol type="Servlet 3.0"/>
<container qualifier="jboss7" default="true">
    <configuration>
        <property name="managementAddress">127.0.0.1</property>
        <property name="managementPort">9999</property>
        <property name="username">deploy</property>
        <property name="password">xxxx</property>
    </configuration>
</container>
pom.xml:
<profile>
    <id>test-int</id>
    <dependencies>
        <dependency>
            <groupId>org.jboss.arquillian.junit</groupId>
            <artifactId>arquillian-junit-container</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.as</groupId>
            <artifactId>jboss-as-arquillian-container-remote</artifactId>
            <version>7.1.2.Final</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.arquillian.protocol</groupId>
            <artifactId>arquillian-protocol-servlet</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</profile>
standalone.xml (JBoss EAP 6.0):
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:0.0.0.0}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:0.0.0.0}"/>
    </interface>
    <!-- TODO - only show this if the jacorb subsystem is added -->
    <interface name="unsecure">
        <!--
          ~ Used for IIOP sockets in the standard configuration.
          ~ To secure JacORB you need to setup SSL
        -->
        <inet-address value="${jboss.bind.address.unsecure:0.0.0.0}"/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/>
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/>
    <socket-binding name="ajp" port="8009"/>
    <socket-binding name="http" port="8080"/>
    <socket-binding name="https" port="8443"/>
    <socket-binding name="osgi-http" interface="management" port="8090"/>
    <socket-binding name="remoting" port="4447"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="localhost" port="25"/>
    </outbound-socket-binding>
</socket-binding-group>
Can anyone help me?
Try adding a management user to your remote instance (for example with the server's bin/add-user.sh script), in your case:
user deploy/