I configured a Kubernetes init container that imports an existing realm, overriding the one that is already in the environment.
I'm using this command:
/opt/keycloak/bin/kc.sh import --file=/opt/keycloak/data/import/tyk-realm-export.json
The problem I'm having is that when the existing realm is replaced, all of the users in it are deleted.
Is there any way to import a new realm configuration without losing the users?
In particular, my DB is expected to hold hundreds of thousands of users.
PS: I'm using Keycloak >= 18.0.0.
Here is a log:
Appending additional Java properties to JAVA_OPTS: -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
2022-06-17 10:17:30,048 INFO [org.keycloak.common.Profile] (main) Preview feature enabled: scripts
2022-06-17 10:17:30,198 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: <MyHostname>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: true
2022-06-17 10:17:32,225 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-06-17 10:17:32,505 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-06-17 10:17:32,559 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-06-17 10:17:33,004 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-06-17 10:17:33,311 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2022-06-17 10:17:33,312 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2022-06-17 10:17:33,599 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:35,614 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) sb-keycloak-bd4778849-n8jh5-3122: no members discovered after 2004 ms: creating cluster as coordinator
2022-06-17 10:17:35,636 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [sb-keycloak-bd4778849-n8jh5-3122|0] (1) [sb-keycloak-bd4778849-n8jh5-3122]
2022-06-17 10:17:35,647 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `sb-keycloak-bd4778849-n8jh5-3122`, physical addresses are `[10.2.0.74:41912]`
2022-06-17 10:17:36,678 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: sb-keycloak-bd4778849-n8jh5-3122, Site name: null
2022-06-17 10:17:37,972 INFO [org.keycloak.services] (main) KC-SERVICES0030: Full model import requested. Strategy: OVERWRITE_EXISTING
2022-06-17 10:17:37,983 INFO [org.keycloak.exportimport.singlefile.SingleFileImportProvider] (main) Full importing from file /opt/keycloak/data/import/tyk-realm-export.json
2022-06-17 10:17:38,388 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'tyk' already exists. Removing it before import
2022-06-17 10:17:49,348 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'tyk' imported
2022-06-17 10:17:49,540 INFO [org.keycloak.services] (main) KC-SERVICES0032: Import finished successfully
2022-06-17 10:17:49,832 INFO [io.quarkus] (main) Keycloak 18.0.1 on JVM (powered by Quarkus 2.7.5.Final) started in 25.524s. Listening on: http://0.0.0.0:8080
2022-06-17 10:17:49,834 INFO [io.quarkus] (main) Profile import_export activated.
2022-06-17 10:17:49,834 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2022-06-17 10:17:49,922 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
2022-06-17 10:17:50,012 INFO [io.quarkus] (main) Keycloak stopped in 0.165s
Done
Maybe you could export both realms and stitch the dumps together.
I don't know your exact use-case.
But the question I'd ask is: is it mandatory to import the realm again, or do you just need an update?
The first time you import the realm, it's perfectly fine. When importing, you have to choose between two strategies: OVERWRITE_EXISTING and IGNORE_EXISTING.
However, neither fits the use-case of updating particular items of your realm, like the SMTP server settings.
Let's say you have three environments: development, release, production.
Your configuration evolves and runs through each stage.
With IGNORE_EXISTING, no import will happen.
With OVERWRITE_EXISTING, all your users will be removed, since OVERWRITE_EXISTING works this way: delete the existing realm, then create a completely new one. Needless to say, this is not wanted in a production environment.
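For reference, the strategy is selected via the keycloak.migration.strategy system property, which is exactly what your log shows being appended to JAVA_OPTS. A minimal sketch of passing the non-destructive strategy instead, assuming your image appends JAVA_OPTS_APPEND the way that log line suggests:
# IGNORE_EXISTING skips realms that already exist (and so keeps their users)
export JAVA_OPTS_APPEND="-Dkeycloak.migration.strategy=IGNORE_EXISTING"
/opt/keycloak/bin/kc.sh import --file=/opt/keycloak/data/import/tyk-realm-export.json
But, as described above, that only helps on first import; it won't apply any changes to an existing realm.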
What you need in this case is just an update via the REST API. (Note that this link points to a specific version, and please note that the path specified in the documentation is wrong, which is why it differs in my curl command.)
E.g.:
Let's say you get the requirement that the emails sent by Keycloak should have a new "from" address. You develop it, it gets tested, and then it runs in production. In this case you can run curl scripts like this:
# First initialize your variables
export KEYCLOAK_HOST="http://localhost:8471"
export REALM_NAME="myrealm"
export CLIENT_SECRET="client-secret-from-your-admin-cli-user-in-the-myrealm"
export CLIENT_ID="admin-cli"
# get the token (mandatory for any action as an admin)
export TOKEN=$( \
curl -s \
-d "client_id=$CLIENT_ID" \
-d "client_secret=$CLIENT_SECRET" \
-d 'grant_type=client_credentials' \
"$KEYCLOAK_HOST/auth/realms/$REALM_NAME/protocol/openid-connect/token" \
| jq -j '.access_token')
# update your specific resource; in this case we're updating the smtpServer attribute with the corresponding values
curl -X PUT \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"smtpServer" : { "replyToDisplayName" : "my Example Display Name", "starttls" : "false", "auth" : "", "port" : "12345", "host" : "my-host.local", "replyTo" : "my-new-address-requested@supermail.com", "from" : "my-new-address-requested@supermail.com", "fromDisplayName" : "", "ssl" : ""} }' \
"$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME"
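To double-check that the update landed, you can read the realm representation back. A minimal sketch, reusing $TOKEN, $KEYCLOAK_HOST, $REALM_NAME and jq from above:
# fetch the realm representation and print only the smtpServer block
curl -s \
-H "Authorization: Bearer $TOKEN" \
"$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME" \
| jq '.smtpServer'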
With this approach you can update your realm and let it evolve according to its stage.
As I said, I don't know if it solves your problem, but if so I am happy I could help.
Related
As the title says, I am unable to connect to PostgreSQL 12 on a VM, but PostgreSQL deployed on K8s connects normally.
Here is my info:
Docker Desktop 4.16.3 (96739)
Keycloak v20.0.3 (v20.0.1 also got the same error)
VM: Postgresql 12 - Ubuntu 18.04
My env:
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
KC_DB=postgres
KC_DB_URL_DATABASE=web-oidc-v20
KC_DB_URL_HOST=192.168.201.23
KC_DB_URL_PORT=5432
KC_DB_USERNAME=web-oidc
KC_DB_PASSWORD=Abcd#1234
KC_HOSTNAME_ADMIN_URL=http://localhost:8080
KC_HOSTNAME_URL=http://localhost:8080
(I also tried to use KC_DB_URL=jdbc:postgresql://192.168.201.23:5432/web-oidc-v20 as replacement for KC_DB_URL_DATABASE, KC_DB_URL_HOST, KC_DB_URL_PORT but it didn't work either)
Here is the docker run command:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 -e KC_HOSTNAME_ADMIN_URL=http://localhost:8080 -e KC_HOSTNAME_URL=http://localhost:8080 web-oidc-v20
Container logs:
Changes detected in configuration. Updating the server image.
Updating the configuration and installing your custom providers, if any. Please wait.
2023-02-08 08:55:41,942 WARN [org.keycloak.services] (build-26) KC-SERVICES0047: load-digo-user (vn.vnpt.digo.iscs.keycloak.spi.authenticator.LoadDigoUserAuthenticatorFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
2023-02-08 08:55:46,242 WARN [io.quarkus.deployment.steps.ReflectiveHierarchyStep] (build-104) Unable to properly register the hierarchy of the following classes for reflection as they are not in the Jandex index:
- javax.servlet.http.Cookie (source: JacksonProcessor > org.springframework.security.web.savedrequest.DefaultSavedRequest$Builder)
Consider adding them to the index either by creating a Jandex index for your dependency via the Maven plugin, an empty META-INF/beans.xml or quarkus.index-dependency properties.
2023-02-08 08:55:53,256 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 14734ms
Server configuration updated and persisted. Run the following command to review the configuration:
kc.sh show-config
Next time you run the server, just run:
kc.sh start --optimized
2023-02-08 08:55:55,649 INFO [vn.vnpt.digo.iscs.keycloak.spi.authenticator.LoadDigoUserAuthenticatorFactory] (main) init
2023-02-08 08:55:55,751 INFO [vn.vnpt.digo.iscs.keycloak.spi.user.storage.DigoUserStorageProviderFactory] (main) init
2023-02-08 08:55:55,753 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: http://localhost:8080, Hostname: localhost, Strict HTTPS: false, Path: /, Strict BackChannel: false, Admin URL: http://localhost:8080, Admin: localhost, Port: 8080, Proxied: false
2023-02-08 08:55:58,207 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-02-08 08:55:59,484 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-02-08 08:55:59,546 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-02-08 08:55:59,608 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-02-08 08:56:00,138 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-02-08 08:56:00,135 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-02-08 08:56:00,195 WARN [org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory] (main) Unable to prepare operational info due database exception: Connection has been closed.
2023-02-08 08:56:00,428 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2023-02-08 08:56:00,429 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2023-02-08 08:56:00,585 WARN [io.agroal.pool] (main) Datasource '<default>': Connection has been closed.
2023-02-08 08:56:00,613 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,618 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,620 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,622 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:02,644 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) 87e253982a2b-10337: no members discovered after 2001 ms: creating cluster as coordinator
2023-02-08 08:56:02,662 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [87e253982a2b-10337|0] (1) [87e253982a2b-10337]
2023-02-08 08:56:02,668 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `87e253982a2b-10337`, physical addresses are `[192.168.0.2:33278]`
2023-02-08 08:56:03,341 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
2023-02-08 08:56:03,507 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (production) mode
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to validate database
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: org.postgresql.util.PSQLException: Connection has been closed.
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Connection has been closed.
2023-02-08 08:56:03,509 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
Postgresql Logs:
2023-02-08 14:59:19.553 +07 [2652184] web-oidc@web-oidc-v20 LOG: connection authorized: user=web-oidc database=web-oidc-v20 SSL enabled (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)
2023-02-08 14:59:20.758 +07 [2652184] web-oidc@web-oidc-v20 FATAL: relation "migration_model" does not exist at character 25
2023-02-08 14:59:20.758 +07 [2652184] web-oidc@web-oidc-v20 STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
2023-02-08 14:59:20.758 +07 [2652184] web-oidc@web-oidc-v20 LOG: disconnection: session time: 0:00:01.577 user=web-oidc database=web-oidc-v20 host=192.168.200.25 port=62448
Please let me know where I am going wrong. Sincere thanks, everyone! 🙏
Try replacing localhost in your docker command with host.docker.internal.
Docker cannot resolve localhost.
So, you need to run:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 -e KC_HOSTNAME_ADMIN_URL=http://host.docker.internal:8080 -e KC_HOSTNAME_URL=http://host.docker.internal:8080 web-oidc-v20
Also, not sure about this
-e KC_DB_URL_HOST=192.168.201.23
Also, I see you are trying to start it in production mode. Is that expected? If not, add start-dev at the end.
I found the problem I was having. It was because one value in the PostgreSQL config was blocking my connection!
exit_on_error
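For anyone hitting the same thing, here is a quick way to inspect that setting; a sketch that assumes the default Ubuntu package layout for PostgreSQL 12:
# show the live value of the parameter
psql -U web-oidc -d web-oidc-v20 -c "SHOW exit_on_error;"
# find where (and if) it is set in the config file
grep -n "exit_on_error" /etc/postgresql/12/main/postgresql.conf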
I am trying to configure Keycloak to use postgres using docker-compose.
Docker compose file for reference:
version: "3.9"
services:
keycloak-postgres:
image: postgres:latest
restart: unless-stopped
ports:
- 5432:5432
environment:
POSTGRES_DB: ${POSTGRESQL_DB}
POSTGRES_USER: ${POSTGRESQL_USER}
POSTGRES_PASSWORD: ${POSTGRESQL_PASS}
volumes:
- postgres_data:/var/lib/postgresql/data
keycloak:
depends_on:
- keycloak-postgres
image: quay.io/keycloak/keycloak
container_name: keycloak
ports:
- 8030:8080
environment:
KC_DB: postgres
KC_DB_URL_HOST: keycloak-postgres
KC_DB_URL_DATABASE: ${POSTGRESQL_DB}
KC_DB_USERNAME: ${POSTGRESQL_USER}
KC_DB_PASSWORD: ${POSTGRESQL_PASS}
KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
KC_HOSTNAME: ${KEYCLOAK_HOSTNAME}
KC_PROXY: edge
KC_HTTP_ENABLED: true
restart: unless-stopped
command:
- start --optimized
volumes:
postgres_data:
driver: local
I have found that if I run start without the --optimized flag, Keycloak starts without any issues, but also does not use the Postgres database: there are no tables or anything created by Keycloak when I connect to the DB.
When I run with the optimized flag, I get the following error:
URL format error; must be "jdbc:h2:{ {.|mem:}[name] | [file:]fileName | {tcp|ssl}:[//]server[:port][,server2[:port]]/name }[;key=value...]" but is "jdbc:postgresql://keycloak-postgres:5432/keycloak" [90046-214]
From what I can make out, the Postgres connection string that Keycloak has generated is correct. However, it is trying to connect to an H2 database, which is clearly incorrect.
I have looked through all the configuration options and just can't make out why:
a) Keycloak isn't storing any data in Postgres in start mode.
b) Keycloak is trying to access an H2 database in --optimized mode.
Update
Following advice from sonOfRa, and to try to simplify the problem, I have now tried the following:
Run Postgres as a separate Docker container.
Created the below Dockerfile as per the documentation (have also tried with sonOfRa's cut down Dockerfile):
FROM quay.io/keycloak/keycloak:latest as builder
# Enable health and metrics support
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
# Configure a database vendor
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENV KC_DB_URL_HOST=192.168.1.25
ENV KC_DB_USERNAME=keycloak
ENV KC_DB_PASSWORD=keycloak_db_password
ENV KC_HOSTNAME=localhost
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Run the following command to build the new Dockerfile:
docker build . -t mykeycloak
Run the following command to start Keycloak:
docker run --name mykeycloak \
-p 8030:8080 \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e KC_HOSTNAME=auth.url.com \
-e KC_PROXY=edge \
-e KC_HTTP_ENABLED=true \
mykeycloak start
Output from console:
2023-01-11 14:06:19,961 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: auth.url.com, Strict HTTPS: true, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true
2023-01-11 14:06:25,844 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-01-11 14:06:28,797 INFO [org.infinispan.server.core.transport.EPollAvailable] (keycloak-cache-init) ISPN005028: Native Epoll transport not available, using NIO instead: java.lang.UnsatisfiedLinkError: could not load a native library: netty_transport_native_epoll_aarch_64
2023-01-11 14:06:29,311 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-01-11 14:06:29,436 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-01-11 14:06:29,541 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-01-11 14:06:29,581 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-01-11 14:06:30,440 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-01-11 14:06:30,819 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2023-01-11 14:06:30,820 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2023-01-11 14:06:31,143 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:31,144 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:31,146 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:31,147 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:33,179 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) cb354516ab9d-30183: no members discovered after 2009 ms: creating cluster as coordinator
2023-01-11 14:06:33,213 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [cb354516ab9d-30183|0] (1) [cb354516ab9d-30183]
2023-01-11 14:06:33,228 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `cb354516ab9d-30183`, physical addresses are `[172.17.0.2:52593]`
2023-01-11 14:06:35,021 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: cb354516ab9d-30183, Site name: null
2023-01-11 14:06:41,372 INFO [org.keycloak.quarkus.runtime.storage.legacy.liquibase.QuarkusJpaUpdaterProvider] (main) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2023-01-11 14:06:53,286 INFO [org.keycloak.services] (main) KC-SERVICES0050: Initializing master realm
2023-01-11 14:07:00,559 INFO [io.quarkus] (main) Keycloak 20.0.2 on JVM (powered by Quarkus 2.13.3.Final) started in 45.755s. Listening on: http://0.0.0.0:8080
2023-01-11 14:07:00,561 INFO [io.quarkus] (main) Profile prod activated.
2023-01-11 14:07:00,562 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2023-01-11 14:07:02,212 INFO [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'
Unfortunately the result is the same.
I can access Keycloak from the set URL and log in using the admin user created on run. Everything seemingly works in the UI, except that it does not store any data in the Postgres database.
This is due to your use of the --optimized parameter. If you use it, it is assumed that you have already run "build", which you did not do. It is recommended to create your own Docker image which uses the upstream Docker image as a base. This is described in the documentation here.
Essentially, you need to run the build command with --db=postgres (or the KC_DB=postgres environment variable), in order to tell Quarkus to build an optimized image that will later use postgres. That image can then be started with --optimized and it will correctly use postgres instead of H2.
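Outside of Docker, the same two-phase flow looks roughly like this (a sketch; the runtime connection values are placeholders):
# phase 1: bake the database vendor into the optimized build
/opt/keycloak/bin/kc.sh build --db=postgres
# phase 2: start without re-augmenting; only runtime options are passed here
/opt/keycloak/bin/kc.sh start --optimized \
--db-url-host=keycloak-postgres \
--db-username=keycloak \
--db-password=keycloak_db_password \
--hostname=localhost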
Step 1 is to create a Dockerfile (not a docker-compose.yml!)
FROM quay.io/keycloak/keycloak
# Configure a database vendor
ENV KC_DB=postgres
WORKDIR /opt/keycloak
RUN /opt/keycloak/bin/kc.sh build
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
You can also include additional things at this point, like custom providers, but this is the minimal data that you need in order to make it work.
Now you have 2 options: You can build this image with docker build and push it to your own docker registry with docker push, or you can use it directly from your docker-compose.yaml. If you build and push, replace the image: quay.io/keycloak/keycloak line with image: your.registry/wherever/you/pushed. If you want to use it directly in your compose-file, you can remove the image: line completely, and replace it with
build: .
When doing this, you must ensure that the Dockerfile is in the same directory as the docker-compose.yaml
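For the build-and-push route, the commands would look roughly like this (a sketch; the registry path is the placeholder from above, and the database credentials are the same assumptions as in the compose file):
# build the pre-augmented image from the Dockerfile above and push it
docker build -t your.registry/wherever/you/pushed .
docker push your.registry/wherever/you/pushed
# a container from this image can now safely be started with --optimized
docker run -p 8030:8080 \
-e KC_DB_URL_HOST=keycloak-postgres \
-e KC_DB_USERNAME=keycloak \
-e KC_DB_PASSWORD=keycloak_db_password \
your.registry/wherever/you/pushed start --optimized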
I've implemented a test environment to verify a clustered Keycloak authentication server with two Java web applications that need single sign-on. There are two Keycloak nodes in the cluster, and there is an Apache2 mod_proxy load balancer in front of them. I've followed the guidelines in the Keycloak documentation and everything seems to work fine; Keycloak logs report that the caches are started properly and synchronized:
[Server:server-one] 11:28:04,298 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,306 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,318 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,319 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,321 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
The problem is that when authenticating from webapp, using the Keycloak Java adapter for Tomcat, I get a 403 Forbidden and looking at Keycloak log I see this error message:
[Server:server-one] 11:33:30,700 WARN [org.keycloak.events] (default task-3) type=CODE_TO_TOKEN_ERROR, realmId=test, clientId=customer-portal, userId=null, ipAddress=192.168.10.111, error=user_not_found, grant_type=authorization_code, code_id=889ab790-0c3a-44ea-a1df-247ba501260f, client_auth_method=client-secret
Seems that the problem is related to clustering mode, since everything works fine in standalone mode.
Is there anyone who is able to provide an example of a clustered Keycloak installation with an external load balancer like mod_proxy?
Setting session affinity with the nginx ingress fixes the issue, but I suspect that's just a band-aid on some broken functionality. I suspect the AuthenticationSessions replication is not working correctly, but I don't see any indication that it's failing, and I'm not sure where to look or how to confirm it's the root issue.
Here are the nginx affinity settings I used on the ingress:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
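For reference, the same settings can be applied imperatively; a sketch that assumes the ingress is named keycloak:
# annotate the existing ingress with cookie-based session affinity
kubectl annotate ingress keycloak \
nginx.ingress.kubernetes.io/affinity=cookie \
nginx.ingress.kubernetes.io/affinity-mode=persistent \
--overwrite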
I'm a bit new to JBoss; could you please help me out in finding the root cause of the issue described below? Thanks, much appreciated.
Issue: Session replication is not happening between the master (domain controller) and the slave (host). Following are the logs.
From master:
[Server:server-three] 07:53:38,246 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 65) ISPN000078: Starting JGroups Channel
[Server:server-three] 07:53:38,260 INFO [stdout] (ServerService Thread Pool -- 65)
[Server:server-three] 07:53:38,260 INFO [stdout] (ServerService Thread Pool -- 65) -------------------------------------------------------------------
[Server:server-three] 07:53:38,261 INFO [stdout] (ServerService Thread Pool -- 65) GMS: address=master:server-three/web, cluster=web, physical address=10.78.216.145:7850
[Server:server-three] 07:53:38,261 INFO [stdout] (ServerService Thread Pool -- 65) -------------------------------------------------------------------
[Host Controller] 07:53:40,511 INFO [org.jboss.as.domain] (Host Controller Service Threads - 25) JBAS010918: Registered remote slave host "slave", JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19)
[Server:server-three] 07:53:41,273 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 65) ISPN000094: Received new cluster view: [master:server-three/web|0] [master:server-three/web]
[Server:server-three] 07:53:41,275 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 65) ISPN000079: Cache local address is master:server-three/web, physical addresses are [10.78.216.145:7850]
[Server:server-three] 07:53:41,279 INFO [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread Pool -- 65) ISPN000128: Infinispan version: Infinispan 'Delirium' 5.2.10.Final
[Server:server-three] 07:53:41,288 INFO [org.jboss.as.clustering] (MSC service thread 1-6) JBAS010238: Number of cluster members: 1
From slave:
[Server:server-three-slave] 07:53:36,274 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 67) ISPN000078: Starting JGroups Channel
[Server:server-three-slave] 07:53:36,287 INFO [stdout] (ServerService Thread Pool -- 67)
[Server:server-three-slave] 07:53:36,287 INFO [stdout] (ServerService Thread Pool -- 67) -------------------------------------------------------------------
[Server:server-three-slave] 07:53:36,288 INFO [stdout] (ServerService Thread Pool -- 67) GMS: address=slave:server-three-slave/web, cluster=web, physical address=10.78.216.36:7850
[Server:server-three-slave] 07:53:36,288 INFO [stdout] (ServerService Thread Pool -- 67) -------------------------------------------------------------------
[Server:server-three-slave] 07:53:39,301 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 67) ISPN000094: Received new cluster view: [slave:server-three-slave/web|0] [slave:server-three-slave/web]
[Server:server-three-slave] 07:53:39,302 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 67) ISPN000079: Cache local address is slave:server-three-slave/web, physical addresses are [10.78.216.36:7850]
[Server:server-three-slave] 07:53:39,306 INFO [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread Pool -- 67) ISPN000128: Infinispan version: Infinispan 'Delirium' 5.2.10.Final
[Server:server-three-slave] 07:53:39,315 INFO [org.jboss.as.clustering] (MSC service thread 1-5) JBAS010238: Number of cluster members: 1
I'm advertising my cluster and I can see that both the master and slave nodes are present in the UI at
https://10.78.X.X:8445/mod_cluster_manager
I have disabled the firewall and SELinux to avoid network problems. I believe I have made all the required configurations in domain.xml and in the ssl.conf of httpd. I have also added the <distributable/> tag and the jboss-web.xml changes in my WAR. It would help a lot if any of you could point me in a different direction, as I'm totally lost.
I can even see the following in my master log:
[Host Controller] 07:18:58,862 INFO [org.jboss.as.domain] (Host Controller Service Threads - 28) JBAS010918: Registered remote slave host "slave", JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19)
[Server:server-three] 07:18:59,029 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 69) ISPN000094: Received new cluster view: [master:server-three/web|0] [master:server-three/web]
I have tried adding ProxyPass with stickySession, but didn't have much luck. Any suggestion or help will be much appreciated.
Thanks,
Joy.
The <distributable/> tag must be added to the web.xml file, not to jboss-web.xml.
Enable Session Replication in Your Application
Overview
To take advantage of the JBoss Enterprise Application Platform's High Availability (HA) features, configure your application to be distributable. This procedure shows how to do that, and then explains some of the advanced configuration options you can use.
Required: Indicate that your application is distributable.
If your application is not marked as distributable, its sessions will never be distributed. Add the <distributable /> element inside the <web-app> tag of your application's web.xml descriptor file. Here is an example.
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                             http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">
    <distributable/>
</web-app>
See more: Chapter 7. Clustering in Web Applications
I've been playing around with Infinispan lately, having had no previous experience with it.
I've come across an interesting problem and I wondered if anyone might be able to shed some light on it.
I have a standalone Java application (GridGrabber.jar) that bundles the Infinispan jars and has a class to add/remove and list items from the grid. Within the application I create the CacheManager as follows:
DefaultCacheManager m = new DefaultCacheManager("cluster.xml");
The application runs on the command line with java -jar GridGrabber.jar and works fine, and when I start two instances at the terminal, they both cluster together and I can see the contents of the grid from both nodes:
Sam@macbook $ java -jar GridGrabber.jar -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1
Apr 16, 2012 9:50:04 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport start
INFO: ISPN000078: Starting JGroups Channel
Apr 16, 2012 9:50:04 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport buildChannel
INFO: ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
Apr 16, 2012 9:50:05 PM org.jgroups.logging.JDKLogImpl warn
WARNING: receive buffer of socket java.net.DatagramSocket@bc794cc was set to 20MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
Apr 16, 2012 9:50:05 PM org.jgroups.logging.JDKLogImpl warn
WARNING: receive buffer of socket java.net.MulticastSocket@731de9b was set to 25MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
Apr 16, 2012 9:50:06 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000094: Received new cluster view: [Samuel-Weavers-MacBook-Pro-62704|1] [Samuel-Weavers-MacBook-Pro-62704, Samuel-Weavers-MacBook-Pro-3360]
Apr 16, 2012 9:50:06 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport startJGroupsChannelIfNeeded
INFO: ISPN000079: Cache local address is Samuel-Weavers-MacBook-Pro-3360, physical addresses are [fe80:0:0:0:e2f8:47ff:fe06:109a%5:51422]
Apr 16, 2012 9:50:06 PM org.jgroups.logging.JDKLogImpl warn
WARNING: Samuel-Weavers-MacBook-Pro-3360: not member of view [Samuel-Weavers-MacBook-Pro-62704|2]; discarding it
Apr 16, 2012 9:50:06 PM org.infinispan.factories.GlobalComponentRegistry start
INFO: ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.2.FINAL
Apr 16, 2012 9:50:06 PM org.infinispan.jmx.CacheJmxRegistration start
INFO: ISPN000031: MBeans were successfully registered to the platform mbean server.
Apr 16, 2012 9:50:12 PM org.infinispan.remoting.transport.jgroups.JGroupsTransport viewAccepted
INFO: ISPN000093: Received new, MERGED cluster view: MergeView::[Samuel-Weavers-MacBook-Pro-3360|3] [Samuel-Weavers-MacBook-Pro-3360, Samuel-Weavers-MacBook-Pro-62704], subgroups=[Samuel-Weavers-MacBook-Pro-62704|1] [Samuel-Weavers-MacBook-Pro-3360], [Samuel-Weavers-MacBook-Pro-62704|2] [Samuel-Weavers-MacBook-Pro-62704]
Now, here is where it gets interesting. I also have a web application (servlet + JSP) that is deployed on AS7, whose aim is to provide a real-time web page of data entering the grid. The servlet calls essentially the same code as GridGrabber and uses the same cluster.xml.
When it starts up everything deploys OK, but for some reason it doesn't pick up the other nodes in the grid, although they are both running on the same machine and see each other no problem. Deployment messages are:
17:06:16,889 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA
17:06:17,107 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA
17:06:17,155 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting
17:06:17,972 INFO [org.xnio] XNIO Version 3.0.3.GA
17:06:17,972 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http)
17:06:17,981 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA
17:06:17,990 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA
17:06:18,006 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers
17:06:18,013 INFO [org.jboss.as.configadmin] (ServerService Thread Pool -- 26) JBAS016200: Activating ConfigAdmin Subsystem
17:06:18,017 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 31) JBAS010280: Activating Infinispan subsystem.
17:06:18,038 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 38) JBAS011800: Activating Naming Subsystem
17:06:18,062 INFO [org.jboss.as.osgi] (ServerService Thread Pool -- 39) JBAS011940: Activating OSGi Subsystem
17:06:18,070 INFO [org.jboss.as.security] (ServerService Thread Pool -- 44) JBAS013101: Activating Security Subsystem
17:06:18,077 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
17:06:18,103 INFO [org.jboss.as.security] (MSC service thread 1-8) JBAS013100: Current PicketBox version=4.0.7.Final
17:06:18,121 INFO [org.jboss.as.connector] (MSC service thread 1-3) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final)
17:06:18,134 INFO [org.jboss.as.naming] (MSC service thread 1-4) JBAS011802: Starting Naming Service
17:06:18,173 INFO [org.jboss.as.mail.extension] (MSC service thread 1-4) JBAS015400: Bound mail session [java:jboss/mail/Default]
17:06:18,241 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-7) JBoss Web Services - Stack CXF Server 4.0.2.GA
17:06:18,290 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
17:06:18,407 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-7) Starting Coyote HTTP/1.1 on http-localhost-127.0.0.1-8080
17:06:18,802 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-7) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
17:06:18,885 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-7) JBAS015012: Started FileSystemDeploymentService for directory /Users/Sam/Downloads/jboss-as-7.1.1.Final/standalone/deployments
17:06:18,893 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found TwitterWebApp.war in deployment directory. To trigger deployment create a file called TwitterWebApp.war.dodeploy
17:06:18,894 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found SBTwitterStreamGui.war in deployment directory. To trigger deployment create a file called SBTwitterStreamGui.war.dodeploy
17:06:18,920 INFO [org.jboss.as.remoting] (MSC service thread 1-5) JBAS017100: Listening on localhost/127.0.0.1:4447
17:06:18,920 INFO [org.jboss.as.remoting] (MSC service thread 1-2) JBAS017100: Listening on /127.0.0.1:9999
17:06:19,018 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
17:06:19,019 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 2479ms - Started 133 of 208 services (74 services are passive or on-demand)
17:06:19,031 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015876: Starting deployment of "TwitterWebApp.war"
17:06:19,031 INFO [org.jboss.as.server.deployment] (MSC service thread 1-4) JBAS015876: Starting deployment of "SBTwitterStreamGui.war"
17:06:19,426 INFO [org.jboss.web] (MSC service thread 1-7) JBAS018210: Registering web context: /SBTwitterStreamGui
17:06:20,267 INFO [org.jboss.web] (MSC service thread 1-3) JBAS018210: Registering web context: /TwitterWebApp
17:06:20,313 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "TwitterWebApp.war"
17:06:20,313 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "SBTwitterStreamGui.war"
17:06:23,399 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000078: Starting JGroups Channel
17:06:23,400 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
17:06:23,805 WARN [org.jgroups.protocols.UDP] (http-localhost-127.0.0.1-8080-1) receive buffer of socket java.net.DatagramSocket@4ad6be89 was set to 20MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
17:06:23,807 WARN [org.jgroups.protocols.UDP] (http-localhost-127.0.0.1-8080-1) receive buffer of socket java.net.MulticastSocket@7bb28246 was set to 25MB, but the OS only allocated 65.51KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
17:06:26,824 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000094: Received new cluster view: [Samuel-Weavers-MacBook-Pro-102|0] [Samuel-Weavers-MacBook-Pro-102]
17:06:26,946 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (http-localhost-127.0.0.1-8080-1) ISPN000079: Cache local address is Samuel-Weavers-MacBook-Pro-102, physical addresses are [10.36.5.166:64952]
17:06:26,955 INFO [org.infinispan.factories.GlobalComponentRegistry] (http-localhost-127.0.0.1-8080-1) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.2.FINAL
17:06:26,956 WARN [org.infinispan.config.TimeoutConfigurationValidatingVisitor] (http-localhost-127.0.0.1-8080-1) ISPN000148: Invalid <transport>: distributedSyncTimout value of 240000. It can not be higher than <stateRetrieval>:timeout which is 120000
17:06:27,045 INFO [org.infinispan.jmx.CacheJmxRegistration] (http-localhost-127.0.0.1-8080-1) ISPN000031: MBeans were successfully registered to the platform mbean server.
17:06:27,061 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cache Size: 0
17:06:27,062 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cluster Name: demoCluster
17:06:27,063 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cluster Size: 1
17:06:27,063 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cluster Members: [Samuel-Weavers-MacBook-Pro-102]
17:06:27,064 INFO [stdout] (http-localhost-127.0.0.1-8080-1) Cache Manager: org.infinispan.manager.DefaultCacheManager@3c818c4@Address:Samuel-Weavers-MacBook-Pro-102
Does anyone know if there is a reason why the AS7 deployment wouldn't see the other nodes in the grid? It seems to me that it should work, as they are all looking for the same cluster name (demoCluster) and they all use the same cluster.xml. The cluster.xml is as follows:
<infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
      xmlns="urn:infinispan:config:5.1">
    <global>
        <transport clusterName="demoCluster"/>
        <globalJmxStatistics enabled="true"/>
    </global>
    <default>
        <jmxStatistics enabled="true"/>
        <clustering mode="distribution">
            <hash numOwners="2" rehashRpcTimeout="120000"/>
            <sync/>
        </clustering>
    </default>
</infinispan>
One thing I have noticed from the deployment messages is that the web app seems to use an IPv4 address while the jar seems to use an IPv6 address with JGroups, but I'm not sure whether this could be the root cause. Any help appreciated.
I seem to have resolved this issue by doing the following:
I was along the right lines about the JGroups IPv4/IPv6 address, so I forced the standalone application to use IPv4 by giving the following JVM argument: -Djava.net.preferIPv4Stack=true
One thing to note here: I was running the standalone jar from the terminal, and the web app was being deployed onto AS7 from within Eclipse. Passing the JVM arguments when running the jar from the terminal was unsuccessful. However, it worked when I also ran the standalone application from inside Eclipse and gave it the JVM argument via the Run Configuration: "Run" menu -> "Run Configurations" -> add a new entry under Java Application (or edit an existing one) -> Arguments -> VM arguments -> paste the preferIPv4Stack argument above into the box.
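As an aside, the terminal invocation shown earlier may have failed simply because of argument ordering: anything placed after -jar is handed to the application's main method, not to the JVM. The ordering the JVM actually honors looks like this:
# JVM flags must come before -jar; arguments after the jar name go to main(String[])
java -Djava.net.preferIPv4Stack=true -Djgroups.bind_addr=127.0.0.1 -jar GridGrabber.jar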
Now when running both standalone application and web application from within Eclipse, they should join the same cluster and start sending messages to each other.
On a side note - I had a JGroups error after this saying "Message Dropped - sender not in window". After some further investigation, it turns out that being connected to a (corporate) VPN was causing the messages to be dropped. As soon as I turned VPN off it started working perfectly.
Hopefully this might help someone with the same issue.
If you want to deploy Infinispan instances in AS7, I'd strongly recommend you follow the guidance here. That way, if you end up using other clustering services in AS7, you won't need to keep a separate communications layer with the burden that brings, etc.