Error when authenticating with a Keycloak cluster

I've implemented a test environment to verify a clustered Keycloak authentication server with two Java web applications that need single sign-on. There are two Keycloak nodes in the cluster, with an Apache2 mod_proxy load balancer in front of them. I've followed the guidelines in the Keycloak documentation and everything seems to work fine; the Keycloak logs report that the caches are started properly and synchronized:
[Server:server-one] 11:28:04,298 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,306 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,318 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,319 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,321 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
The problem is that when authenticating from the webapp, using the Keycloak Java adapter for Tomcat, I get a 403 Forbidden, and in the Keycloak log I see this error message:
[Server:server-one] 11:33:30,700 WARN [org.keycloak.events] (default task-3) type=CODE_TO_TOKEN_ERROR, realmId=test, clientId=customer-portal, userId=null, ipAddress=192.168.10.111, error=user_not_found, grant_type=authorization_code, code_id=889ab790-0c3a-44ea-a1df-247ba501260f, client_auth_method=client-secret
The problem seems to be related to the clustering mode, since everything works fine in standalone mode.
Is there anyone who can provide an example of a clustered Keycloak installation with an external load balancer like mod_proxy?

Setting session affinity with the nginx ingress fixes the issue, but I suspect that's just a band-aid over broken functionality. My guess is that AuthenticationSessions replication is not working correctly, but I don't see any indication that it's failing, and I'm not sure where to look or how to confirm it's the root cause.
Here are the nginx affinity settings I used on the ingress:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
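For the mod_proxy setup in the original question, the equivalent stickiness can be sketched with mod_proxy_balancer roughly as follows. This is only a sketch: the member addresses and balancer name are illustrative, and it assumes each Keycloak node has jboss.node.name set to the corresponding route so that sticky routing can key on Keycloak's AUTH_SESSION_ID cookie:
<Proxy balancer://keycloak>
    # route= should match each node's jboss.node.name (illustrative hostnames)
    BalancerMember http://node1.example.com:8080 route=node1
    BalancerMember http://node2.example.com:8080 route=node2
    ProxySet stickysession=AUTH_SESSION_ID
</Proxy>
ProxyPass /auth balancer://keycloak/auth
ProxyPassReverse /auth balancer://keycloak/auth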

Related

com.hazelcast.client.AuthenticationException: Invalid credentials! Principal :null

I have configured my multi-cluster Hazelcast server on Kubernetes via the Kubernetes API discovery strategy (please see Two separate hazelcast clusters in kubernetes), and the members of each cluster are successfully discovering each other.
My client project is running on the same k8s cluster as my Hazelcast server.
I have added the following dependency to my client project pom:
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast-kubernetes</artifactId>
    <version>1.3.1</version>
</dependency>
I have configured my Hazelcast client as given in the official documentation:
clientConfig.getNetworkConfig().getKubernetesConfig()
        .setEnabled(true)
        .setProperty("namespace", "default")
        .setProperty("service-name", "xyz");
(I have a namespace called "default" and a k8s service object named "xyz".)
These are the logs on client startup. Although it recognized the Hazelcast server pod, it threw an AuthenticationException (expanded below). I also want to point out that it did not try to connect to the correct port.
2019-09-18 12:59:36,699 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.client.HazelcastClient (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] A non-empty group password is configured for the Hazelcast client. Starting with Hazelcast version 3.11, clients with the same group name, but with different group passwords (that do not use authentication) will be accepted to a cluster. The group password configuration will be removed completely in a future release.
2019-09-18 12:59:36,709 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.core.LifecycleService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is STARTING
2019-09-18 12:59:36,977 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.spi.discovery.integration.DiscoveryService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: xyz, service-port: 0, service-label: null, service-label-value: true, namespace: default, resolve-not-ready-addresses: false, kubernetes-master: https://kubernetes.default.svc}
2019-09-18 12:59:36,980 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.spi.discovery.integration.DiscoveryService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Kubernetes Discovery activated resolver: ServiceEndpointResolver
2019-09-18 12:59:36,999 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.client.spi.ClientInvocationService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Running with 2 response threads
2019-09-18 12:59:37,060 [instance=local-service_01.devciny-dock] [localhost-startStop-1] INFO com.hazelcast.core.LifecycleService (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] HazelcastClient 3.11.1 (20181218 - d294f31) is STARTED
2019-09-18 12:59:37,390 [instance=local-service_01.devciny-dock] [local-service_01.devciny-dock.cluster-] INFO com.hazelcast.client.connection.ClientConnectionManager (Slf4jFactory.java:65) - local-service_01.devciny-dock [instance_identifier] [3.11.1] Trying to connect to [10.42.1.111]:5701 as owner member
2019-09-18 12:59:37,432 [instance=local-service_01.devciny-dock] [local-service_01.devciny-dock.internal-3] WARN com.hazelcast.client.connection.nio.ClientConnection (Slf4jFactory.java:67) - local-service_01.devciny-dock [instance_identifier] [3.11.1] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/10.42.1.121:39003->/10.42.1.111:5701}, remoteEndpoint=null, lastReadTime=2019-09-18 12:59:37.426, lastWriteTime=2019-09-18 12:59:37.425, closedTime=2019-09-18 12:59:37.431, connected server version=null} closed. Reason: com.hazelcast.client.AuthenticationException[Invalid credentials! Principal: null]
com.hazelcast.client.AuthenticationException: Invalid credentials! Principal: null
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl$AuthCallback.onResponse(ClientConnectionManagerImpl.java:747)
at com.hazelcast.client.connection.nio.ClientConnectionManagerImpl$AuthCallback.onResponse(ClientConnectionManagerImpl.java:702)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:130)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:118)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:130)
at com.hazelcast.client.spi.impl.ClientInvocationFuture$InternalDelegatingExecutionCallback.onResponse(ClientInvocationFuture.java:118)
at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:255)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
Your Hazelcast client tries to connect to 10.42.1.111:5701, and the port looks correct: it does find a Hazelcast server there.
What happens next is that it cannot authenticate with the server, which probably means that you didn't specify the cluster password in your Hazelcast configuration. You can read more on how to do it in this StackOverflow question.
You didn't share the most important part of the configuration related to client authentication: the group config of the clusters and the group config of your client. I suspect the problem is rooted there.
The default authentication compares the group name on the member with the username coming from the client; by default, the client fills in its own group name as the username.
Check the AuthenticationBaseMessageTask code:
private AuthenticationStatus authenticate(UsernamePasswordCredentials credentials) {
    GroupConfig groupConfig = nodeEngine.getConfig().getGroupConfig();
    String nodeGroupName = groupConfig.getName();
    // The member's group name must equal the username sent by the client
    boolean usernameMatch = nodeGroupName.equals(credentials.getUsername());
    return usernameMatch ? AuthenticationStatus.AUTHENTICATED : AuthenticationStatus.CREDENTIALS_FAILED;
}
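So the fix is to make the client's group name match the members'. A minimal sketch of the client side, assuming the members' group name is "dev" (substitute whatever <group><name> your cluster members actually use; the Kubernetes discovery calls mirror the snippet from the question):
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientAuthExample {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        // Sent as the username during authentication; must equal the members' group name
        clientConfig.getGroupConfig().setName("dev");
        clientConfig.getNetworkConfig().getKubernetesConfig()
                .setEnabled(true)
                .setProperty("namespace", "default")
                .setProperty("service-name", "xyz");
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
    }
}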

Wildfly 10 - Authentication failure in domain mode

I've run into a situation I don't understand.
I'm trying to set up WildFly 10.1.0 on Ubuntu 16.04 to work in domain mode. For testing I have an additional virtual machine.
Base system: Domain Controller
Virtual Machine: Host Controller
For the configuration I'm generally following the WildFly documentation, but it doesn't work correctly.
Without authentication the Host Controller can connect to the Domain Controller, but the problem occurs when I want to use authentication; there is some strange behavior which I don't understand.
On the Domain Controller:
1. Set everything in host-master.xml.
2. Create a management user with the following options:
   user: test
   password: test
   Is this new user going to be used for one AS process to connect to another AS process?
   e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
   yes/no? yes
   To represent the user add the following to the server-identities definition: <secret value="dGVzdA==" />
3. The server starts without problems using domain.sh --host-config=host-master.xml.
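As a side note, the secret value generated by add-user.sh is simply the Base64 encoding of the password, so "test" becomes dGVzdA==. On a typical Linux shell this can be reproduced with:
echo -n 'test' | base64   # prints dGVzdA==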
On the Host Controller:
1. Set everything in host-slave.xml with the secret value:
<security-realm name="ManagementRealm">
    <server-identities>
        <secret value="dGVzdA==" />
    </server-identities>
    <authentication>
        <local default-user="$local" skip-group-loading="true"/>
        <properties path="mgmt-users.properties" relative-to="jboss.domain.config.dir"/>
    </authentication>
    <authorization map-groups-to-roles="false">
        <properties path="mgmt-groups.properties" relative-to="jboss.domain.config.dir"/>
    </authorization>
</security-realm>
2. Start the server using domain.sh --host-config=host-slave.xml.
When I start the server, I get the following error:
[Host Controller] 22:23:03,553 WARN [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0001: Could not connect to remote domain controller remote://192.168.56.1:9999 -- java.lang.IllegalStateException: WFLYHC0043: Unable to connect due to authentication failure.
./domain.sh --host-config=host-slave.xml
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /home/test1/Warsztat/wildfly
JAVA: java
JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
22:22:59,931 INFO [org.jboss.modules] (main) JBoss Modules version 1.5.2.Final
22:23:00,212 INFO [org.jboss.as.process.Host Controller.status] (main) WFLYPC0018: Starting process 'Host Controller'
[Host Controller] 22:23:01,207 INFO [org.jboss.modules] (main) JBoss Modules version 1.5.2.Final
[Host Controller] 22:23:01,521 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
[Host Controller] 22:23:01,586 INFO [org.jboss.as] (MSC service thread 1-1) WFLYSRV0049: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) starting
[Host Controller] 22:23:02,624 INFO [org.xnio] (MSC service thread 1-1) XNIO version 3.4.0.Final
[Host Controller] 22:23:02,634 INFO [org.xnio.nio] (MSC service thread 1-1) XNIO NIO Implementation Version 3.4.0.Final
[Host Controller] 22:23:02,741 WARN [org.jboss.as.domain.management.security] (MSC service thread 1-2) WFLYDM0111: Keystore /home/test1/Warsztat/wildfly/domain/configuration/application.keystore not found, it will be auto generated on first use with a self signed certificate for host localhost
[Host Controller] 22:23:02,752 INFO [org.jboss.remoting] (MSC service thread 1-1) JBoss Remoting version 4.0.21.Final
[Host Controller] 22:23:02,834 INFO [org.jboss.as.remoting] (MSC service thread 1-1) WFLYRMT0001: Listening on 192.168.56.111:9999
[Host Controller] 22:23:03,553 WARN [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0001: Could not connect to remote domain controller remote://192.168.56.1:9999 -- java.lang.IllegalStateException: WFLYHC0043: Unable to connect due to authentication failure.
[Host Controller] 22:23:03,554 WARN [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0147: No domain controller discovery options remain.
[Host Controller] 22:23:03,555 ERROR [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0002: Could not connect to master. Aborting. Error was: java.lang.IllegalStateException: WFLYHC0120: Tried all domain controller discovery option(s) but unable to connect
[Host Controller] 22:23:03,556 FATAL [org.jboss.as.host.controller] (Controller Boot Thread) WFLYHC0178: Aborting with exit code 99
[Host Controller] 22:23:03,603 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0050: WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final) stopped in 22ms
[Host Controller]
22:23:04,063 INFO [org.jboss.as.process.Host Controller.status] (reaper for Host Controller) WFLYPC0011: Process 'Host Controller' finished with an exit status of 99
22:23:04,066 INFO [org.jboss.as.process] (Thread-8) WFLYPC0017: Shutting down process controller
22:23:04,066 INFO [org.jboss.as.process] (Thread-8) WFLYPC0016: All processes finished; exiting
But if I add name="test" to the host-slave.xml file on the Host Controller, like below (the name must be the same as the management user created on the Domain Controller), it works!
<host xmlns="urn:jboss:domain:4.2" name="test">
I completely don't understand this and I can't find any explanation for it.
Does anybody know why I have to add name="test"?
OK, I found the explanation.
The Security Realms documentation describes how to define your own username for authentication:
By default when a slave host controller authenticates against the master domain controller it uses its configured name as its username. If you want to override the username used for authentication, a username attribute can be added to the <remote /> element.
In my case I have to add the username like below:
<domain-controller>
    <remote security-realm="ManagementRealm" username="atest">
        <discovery-options>
            <static-discovery name="primary" protocol="${jboss.domain.master.protocol:remote}" host="${jboss.domain.master.address:192.168.56.1}" port="${jboss.domain.master.port:9999}"/>
        </discovery-options>
    </remote>
</domain-controller>
And now I can set the host name freely.
This is expected behavior. The name in host-slave.xml needs to be the same as the user name created on the master instance; that is how the master is able to authenticate the slave instance.
The WildFly documentation does the same: it creates a user named slave and uses that same name in the host-slave.xml file.

Openshift - ROOT.war deployed but no app

That's what I see in my logs:
[jbossas-DOMAIN.rhcloud.com SHA]> tail_all
2014/05/26 09:08:35,527 INFO [org.jboss.as.messaging] (MSC service thread 1-4) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory
2014/05/26 09:08:35,599 INFO [org.jboss.as.deployment.connector] (MSC service thread 1-2) JBAS010406: Registered connection factory java:/JmsXA
2014/05/26 09:08:35,620 INFO [org.hornetq.ra.HornetQResourceAdapter] (MSC service thread 1-2) HornetQ resource adaptor started
2014/05/26 09:08:35,621 INFO [org.jboss.as.connector.services.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-2) IJ020002: Deployed: file://RaActivatorhornetq-ra
2014/05/26 09:08:35,623 INFO [org.jboss.as.deployment.connector] (MSC service thread 1-4) JBAS010401: Bound JCA ConnectionFactory [java:/JmsXA]
2014/05/26 09:08:35,785 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015876: Starting deployment of "ROOT.war"
2014/05/26 09:08:38,512 INFO [org.jboss.web] (MSC service thread 1-4) JBAS018210: Registering web context:
2014/05/26 09:08:38,532 INFO [org.jboss.as] (MSC service thread 1-1) JBAS015951: Admin console listening on http://127.2.148.129:9990
2014/05/26 09:08:38,532 INFO [org.jboss.as] (MSC service thread 1-1) JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 19348ms - Started 211 of 330 services (116 services are passive or on-demand)
2014/05/26 09:08:38,785 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "ROOT.war"
[jbossas-DOMAIN.rhcloud.com SHA]> find . -name ROOT.war
find: `./.ssh': Permission denied
find: `./.sandbox': Permission denied
./app-root/runtime/dependencies/jbossas/deployments/ROOT.war
find: `./.tmp': Permission denied
./app-deployments/2014-05-26_08-49-32.024/dependencies/jbossas/deployments/ROOT.war
[jbossas-DOMAIN.rhcloud.com SHA]>
But when I go to the root URL I see the default "Welcome to your JBoss AS application on OpenShift" page. I am not using Maven. I tried to follow the instructions for JBoss (jboss-as-7.1.1.Final) from this page, though I did not delete the src folder. What do I need to do? Do I need to delete the src folder and add a deployments folder?
Build output (eclipse Java EE Luna M7)
Stopping jbossas cartridge
Sending SIGTERM to jboss:301473 ...
Building git ref 'master', commit e48bedf
Skipping Maven build due to absence of pom.xml
Preparing build for deployment
Deployment id is 72c749b6
Activating deployment
Deploying JBoss
Starting jbossas cartridge
Found 127.2.148.129:8080 listening port
Found 127.2.148.129:9999 listening port
/var/lib/openshift/538267574382ece7950004a4/jbossas/standalone/deployments /var/lib/openshift/SHA/jbossas
/var/lib/openshift/538267574382ece7950004a4/jbossas
Artifacts deployed: ./ROOT.war
-------------------------
Git Post-Receive Result: success
Activation status: success
Deployment completed with status: success
Repository already uptodate.
EDIT: hmm, the WAR is 24K, which is small; the project is a regular Java web application and runs fine locally on both GlassFish and JBoss.
I would accept an answer telling me how to do it from Eclipse (edit: asked here), but until then, producing the WAR manually (right-click on the project, then Export as WAR) and then scp'ing it straight from Git Bash (msysgit, MinGW):
$ scp "~/ROOT.war" SHA@APP-DOMAIN.rhcloud.com:/var/lib/\
openshift/SHA/app-root/dependencies/jbossas/deployments
as detailed here ("detailed" is a euphemism) solved it.
For the gory details of finally setting this up, see here.
This question is pretty old so I am sure the problem has been solved; however, I was having the same problem and I am sure someone else will as well. In my case, git wasn't tracking the src folder in my local repository.
You just need to open up a terminal in your git repo folder and type git status to show which folders are not being tracked, then git add [file/foldername], and finally republish, as sketched below.
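A minimal sketch of those steps (src/ is the untracked folder from this case; substitute whatever git status reports):
git status                        # lists untracked files/folders
git add src/                      # start tracking the missing folder
git commit -m "Track src folder"
git push                          # republish; OpenShift redeploys on push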

Web app won't join Infinispan cluster

I've been playing around with Infinispan lately, having had no previous experience with it.
I've come across an interesting problem and I wondered if anyone might be able to shed some light on it.
I have a standalone Java application (GridGrabber.jar) that bundles the Infinispan jars and has a class to add/remove and list items from the grid. Within the application I create the CacheManager as follows:
DefaultCacheManager m = new DefaultCacheManager("cluster.xml");
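For context, a slightly fuller sketch of what that involves (the key/value types and printed diagnostics are illustrative, not from the original application):
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class GridExample {
    public static void main(String[] args) throws java.io.IOException {
        // Parses cluster.xml and starts the JGroups transport, joining the cluster
        DefaultCacheManager m = new DefaultCacheManager("cluster.xml");
        Cache<String, String> cache = m.getCache(); // the default cache defined in cluster.xml
        cache.put("hello", "world");
        System.out.println("Cluster members: " + m.getMembers());
    }
}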
The application runs on the command line with java -jar GridGrabber.jar and works fine, and when I start two instances at the terminal, they both cluster together and I can see the contents of the grid from both nodes:
Sam#macbook $ java -jar GridGrabber.jar -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
Apr 16, 2012 9:50:04 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport start
INFO: ISPN000078: Starting JGroups Channel
Apr 16, 2012 9:50:04 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport buildChannel
INFO: ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
Apr 16, 2012 9:50:05 PM org.jgroups.logging.JDKLogImpl warn
WARNING: receive buffer of socket java.net.DatagramSocket@bc794cc was set to 20MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
Apr 16, 2012 9:50:05 PM org.jgroups.logging.JDKLogImpl warn
WARNING: receive buffer of socket java.net.MulticastSocket@731de9b was set to 25MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
Apr 16, 2012 9:50:06 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000094: Received new cluster view: [Samuel-Weavers-MacBook-Pro-62704|1] [Samuel-Weavers-MacBook-Pro-62704, Samuel-Weavers-MacBook-Pro-3360]
Apr 16, 2012 9:50:06 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport startJGroupsChannelIfNeeded
INFO: ISPN000079: Cache local address is Samuel-Weavers-MacBook-Pro-3360, physical addresses are [fe80:0:0:0:e2f8:47ff:fe06:109a%5:51422]
Apr 16, 2012 9:50:06 PM org.jgroups.logging.JDKLogImpl warn
WARNING: Samuel-Weavers-MacBook-Pro-3360: not member of view [Samuel-Weavers-MacBook-Pro-62704|2]; discarding it
Apr 16, 2012 9:50:06 PM org.infinispan.factories.GlobalComponentRegistry start
INFO: ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.2.FINAL
Apr 16, 2012 9:50:06 PM org.infinispan.jmx.CacheJmxRegistration start
INFO: ISPN000031: MBeans were successfully registered to the platform mbean server.
Apr 16, 2012 9:50:12 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000093: Received new, MERGED cluster view: MergeView::[Samuel-Weavers-MacBook-Pro-3360|3] [Samuel-Weavers-MacBook-Pro-3360, Samuel-Weavers-MacBook-Pro-62704], subgroups=[Samuel-Weavers-MacBook-Pro-62704|1] [Samuel-Weavers-MacBook-Pro-3360], [Samuel-Weavers-MacBook-Pro-62704|2] [Samuel-Weavers-MacBook-Pro-62704]
Now, here is where it gets interesting. I also have a web application (servlet + JSP) deployed on AS7, whose aim is to provide a real-time updating web page of data entering the grid. The servlet calls essentially the same code as the GridGrabber and uses the same cluster.xml.
When it starts up everything deploys OK, but for some reason it doesn't pick up the other nodes in the grid, although they are all running on the same machine and see each other without problems. The deployment messages are:
17:06:16,889 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA
17:06:17,107 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA
17:06:17,155 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting
17:06:17,972 INFO [org.xnio] XNIO Version 3.0.3.GA
17:06:17,972 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http)
17:06:17,981 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA
17:06:17,990 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA
17:06:18,006 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers
17:06:18,013 INFO [org.jboss.as.configadmin] (ServerService Thread Pool -- 26) JBAS016200: Activating ConfigAdmin Subsystem
17:06:18,017 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 31) JBAS010280: Activating Infinispan subsystem.
17:06:18,038 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 38) JBAS011800: Activating Naming Subsystem
17:06:18,062 INFO [org.jboss.as.osgi] (ServerService Thread Pool -- 39) JBAS011940: Activating OSGi Subsystem
17:06:18,070 INFO [org.jboss.as.security] (ServerService Thread Pool -- 44) JBAS013101: Activating Security Subsystem
17:06:18,077 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
17:06:18,103 INFO [org.jboss.as.security] (MSC service thread 1-8) JBAS013100: Current PicketBox version=4.0.7.Final
17:06:18,121 INFO [org.jboss.as.connector] (MSC service thread 1-3) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final)
17:06:18,134 INFO [org.jboss.as.naming] (MSC service thread 1-4) JBAS011802: Starting Naming Service
17:06:18,173 INFO [org.jboss.as.mail.extension] (MSC service thread 1-4) JBAS015400: Bound mail session [java:jboss/mail/Default]
17:06:18,241 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-7) JBoss Web Services - Stack CXF Server 4.0.2.GA
17:06:18,290 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
17:06:18,407 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-7) Starting Coyote HTTP/1.1 on http-localhost-127.0.0.1-8080
17:06:18,802 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-7) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
17:06:18,885 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-7) JBAS015012: Started FileSystemDeploymentService for directory /Users/Sam/Downloads/jboss-as-7.1.1.Final/standalone/deployments
17:06:18,893 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found TwitterWebApp.war in deployment directory. To trigger deployment create a file called TwitterWebApp.war.dodeploy
17:06:18,894 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found SBTwitterStreamGui.war in deployment directory. To trigger deployment create a file called SBTwitterStreamGui.war.dodeploy
17:06:18,920 INFO [org.jboss.as.remoting] (MSC service thread 1-5) JBAS017100: Listening on localhost/127.0.0.1:4447
17:06:18,920 INFO [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on /127.0.0.1:9999
17:06:19,018 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
17:06:19,019 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 2479ms - Started 133 of 208 services (74 services are passive or on-demand)
17:06:19,031 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015876: Starting deployment of "TwitterWebApp.war"
17:06:19,031 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015876: Starting deployment of "SBTwitterStreamGui.war"
17:06:19,426 INFO [org.jboss.web] (MSC service thread 1-7) JBAS018210: Registering web context: /SBTwitterStreamGui
17:06:20,267 INFO [org.jboss.web] (MSC service thread 1-3) JBAS018210: Registering web context: /TwitterWebApp
17:06:20,313 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "TwitterWebApp.war"
17:06:20,313 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "SBTwitterStreamGui.war"
17:06:23,399 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000078: Starting JGroups Channel
17:06:23,400 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
17:06:23,805 WARN [org.jgroups.protocols.UDP] (http-localhost-127.0.0.1-8080-1) receive buffer of socket java.net.DatagramSocket@4ad6be89 was set to 20MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
17:06:23,807 WARN [org.jgroups.protocols.UDP] (http-localhost-127.0.0.1-8080-1) receive buffer of socket java.net.MulticastSocket@7bb28246 was set to 25MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
17:06:26,824 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000094: Received new cluster view: [Samuel-Weavers-MacBook-Pro-102|0] [Samuel-Weavers-MacBook-Pro-102]
17:06:26,946 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000079: Cache local address is Samuel-Weavers-MacBook-Pro-102, physical addresses are [10.36.5.166:64952]
17:06:26,955 INFO [org.infinispan.factories.GlobalComponentRegistry] (http-localhost-127.0.0.1-8080-1) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.2.FINAL
17:06:26,956 WARN [org.infinispan.config.TimeoutConfigurationValidatingVisitor] (http-localhost-127.0.0.1-8080-1) ISPN000148: Invalid <transport>: distributedSyncTimout value of 240000. It can not be higher than <stateRetrieval>:timeout which is 120000
17:06:27,045 INFO [org.infinispan.jmx.CacheJmxRegistration] (http-localhost-127.0.0.1-8080-1) ISPN000031: MBeans were successfully registered to the platform mbean server.
17:06:27,061 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cache Size: 0
17:06:27,062 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cluster Name: demoCluster
17:06:27,063 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cluster Size: 1
17:06:27,063 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cluster Members: [Samuel-Weavers-MacBook-Pro-102]
17:06:27,064 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cache Manager: org.infinispan.manager.DefaultCacheManager@3c818c4@Address:Samuel-Weavers-MacBook-Pro-102
Does anyone know of a reason why the AS7 deployment wouldn't see the other nodes in the grid? It seems to me that it should work, as they are all looking for the same cluster name (demoCluster) and they all use the same cluster.xml. The cluster.xml is as follows:
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
    xmlns="urn:infinispan:config:5.1">
    <global>
        <transport clusterName="demoCluster"/>
        <globalJmxStatistics enabled="true"/>
    </global>
    <default>
        <jmxStatistics enabled="true"/>
        <clustering mode="distribution">
            <hash numOwners="2" rehashRpcTimeout="120000"/>
            <sync/>
        </clustering>
    </default>
</infinispan>
One thing I have noticed from the deployment messages is that the web app seems to use an IPv4 address while the jar seems to use an IPv6 address with JGroups, but I'm not sure whether this could be the root cause. Any help appreciated.
I seem to have resolved this issue by doing the following:
I was along the right lines about the JGroups IPv4/IPv6 address, so I forced the standalone application to use IPv4 by giving it the following JVM argument: -Djava.net.preferIPv4Stack=true
One thing to note here: I was running the standalone jar from the terminal, and the web app was being deployed onto AS7 from within Eclipse. Passing the JVM argument when running the jar from the terminal was unsuccessful. However, running the standalone application from inside Eclipse as well, and giving it the JVM argument in the Run Configuration ("Run" menu -> "Run Configurations" -> add a new entry under Java Application, or edit an existing one -> "Arguments" tab -> "VM arguments" -> paste the preferIPv4Stack argument above into the box), worked.
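A likely reason the terminal run ignored the flag, judging from the command shown earlier: the -D options were placed after -jar GridGrabber.jar, so the JVM passes them to main() as program arguments instead of treating them as system properties. JVM options must come before -jar:
java -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 -jar GridGrabber.jar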
Now when running both the standalone application and the web application from within Eclipse, they should join the same cluster and start sending messages to each other.
On a side note: I had a JGroups error after this saying "Message Dropped - sender not in window". After some further investigation, it turned out that being connected to a (corporate) VPN was causing the messages to be dropped. As soon as I turned the VPN off it started working perfectly.
Hopefully this helps someone with the same issue.
If you want to deploy Infinispan instances in AS7, I'd strongly recommend you follow the guidance here. That way, if you end up using other clustering services in AS7, you won't need to maintain a separate communications layer, with the burden that entails.

Unable to start JBoss from within Eclipse

I am unable to start JBoss 5.1.0.GA from Eclipse Indigo.
Eclipse shows me a message box saying 'Server JBoss v5.0 at localhost was unable to start within 500 seconds. If the server requires more time, try increasing the timeout in the server editor.' But in the console window I can see that JBoss has actually started.
Here is part of the log which I can see in the console window of Eclipse:
SecureDeploymentManager/remote - EJB3.x Default Remote Business Interface
SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager - EJB3.x Remote Business Interface
15:14:20,212 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
15:14:20,212 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureManagementView ejbName: SecureManagementView
15:14:20,222 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureManagementView/remote - EJB3.x Default Remote Business Interface
SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView - EJB3.x Remote Business Interface
15:14:20,252 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3
15:14:20,262 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureProfileServiceBean ejbName: SecureProfileService
15:14:20,272 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureProfileService/remote - EJB3.x Default Remote Business Interface
SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService - EJB3.x Remote Business Interface
15:14:20,362 INFO [TomcatDeployment] deploy, ctxPath=/admin-console
15:14:20,412 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/admin-console'
15:14:23,486 INFO [TomcatDeployment] deploy, ctxPath=/BannedListSearch
15:14:27,532 INFO [TomcatDeployment] deploy, ctxPath=/IWorkWebApp
15:14:27,813 INFO [TomcatDeployment] deploy, ctxPath=/
15:14:29,155 INFO [TomcatDeployment] deploy, ctxPath=/TestWebProject
15:14:30,036 INFO [TomcatDeployment] deploy, ctxPath=/displaytag-examples-1.2
15:14:30,136 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console
15:14:30,276 INFO [TomcatDeployment] deploy, ctxPath=/HelloWebService
15:14:30,407 ERROR [EngineConfigurationFactoryServlet] Unable to find config file. Creating new servlet engine config file: /WEB-INF/server-config.wsdd
15:14:30,687 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8081
15:14:30,707 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
15:14:30,707 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)] Started in 48s:110ms
I have increased the server's start timeout to 500 seconds, but I still get the same error. I have not changed anything else.
I am able to start JBoss from the command prompt successfully, but the same server does not start from Eclipse.
Please help me to start the JBoss server.
Sounds to me like the HTTP port you have configured in JBoss is different from the port you have in the Eclipse configuration for JBoss.
Eclipse uses the port configuration to listen on JBoss's port so that it can determine that JBoss has actually started. If they differ, Eclipse thinks JBoss never started, although according to the console log it actually has. Make the ports match and it will probably work.
Updated: According to your log, JBoss is using port 8081 for HTTP:
Starting Coyote HTTP/1.1 on http-127.0.0.1-8081
Now you have to tell Eclipse to listen on that port so that it can figure out whether JBoss has started (the default is 8080, so Eclipse will never become aware of it!). Go to your Servers view and double-click on your JBoss server; the configuration screen will come up.
Edit the HTTP port (in the 'Port' box) and set it to 8081 so that it matches your server's.