Keycloak does not use the Postgres database and tries to connect to the H2 database - postgresql

I am trying to configure Keycloak to use postgres using docker-compose.
Docker compose file for reference:
version: "3.9"
services:
  keycloak-postgres:
    image: postgres:latest
    restart: unless-stopped
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: ${POSTGRESQL_DB}
      POSTGRES_USER: ${POSTGRESQL_USER}
      POSTGRES_PASSWORD: ${POSTGRESQL_PASS}
    volumes:
      - postgres_data:/var/lib/postgresql/data
  keycloak:
    depends_on:
      - keycloak-postgres
    image: quay.io/keycloak/keycloak
    container_name: keycloak
    ports:
      - 8030:8080
    environment:
      KC_DB: postgres
      KC_DB_URL_HOST: keycloak-postgres
      KC_DB_URL_DATABASE: ${POSTGRESQL_DB}
      KC_DB_USERNAME: ${POSTGRESQL_USER}
      KC_DB_PASSWORD: ${POSTGRESQL_PASS}
      KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
      KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
      KC_HOSTNAME: ${KEYCLOAK_HOSTNAME}
      KC_PROXY: edge
      KC_HTTP_ENABLED: true
    restart: unless-stopped
    command:
      - start --optimized
volumes:
  postgres_data:
    driver: local
I have found that if I run start without the --optimized flag, Keycloak starts without any issues, but it also does not use the Postgres database - there are no tables or anything else created by Keycloak when I connect to the DB.
When I run with the --optimized flag, I get the following error:
URL format error; must be "jdbc:h2:{ {.|mem:}[name] | [file:]fileName | {tcp|ssl}:[//]server[:port][,server2[:port]]/name }[;key=value...]" but is "jdbc:postgresql://keycloak-postgres:5432/keycloak" [90046-214]
From what I can make out, the Postgres connection string which Keycloak has generated is correct. However, it is trying to connect to an H2 database, which is clearly incorrect.
I have looked through all the configuration options and just can't make out why:
a) Keycloak isn't storing any data in Postgres in start mode.
b) Keycloak is trying to access an H2 database in --optimized mode.
Update
Following advice from sonOfRa, and to try to simplify the problem, I have now tried the following:
Run Postgres as a separate Docker container.
Created the Dockerfile below as per the documentation (I have also tried sonOfRa's cut-down Dockerfile):
FROM quay.io/keycloak/keycloak:latest as builder
# Enable health and metrics support
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
# Configure a database vendor
ENV KC_DB=postgres
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
ENV KC_DB_URL_HOST=192.168.1.25
ENV KC_DB_USERNAME=keycloak
ENV KC_DB_PASSWORD=keycloak_db_password
ENV KC_HOSTNAME=localhost
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Run the following command to build the new Dockerfile:
docker build . -t mykeycloak
Run the following command to start Keycloak:
docker run --name mykeycloak \
-p 8030:8080 \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e KC_HOSTNAME=auth.url.com \
-e KC_PROXY=edge \
-e KC_HTTP_ENABLED=true \
mykeycloak start
Output from console:
2023-01-11 14:06:19,961 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: <unset>, Hostname: auth.url.com, Strict HTTPS: true, Path: <request>, Strict BackChannel: false, Admin URL: <unset>, Admin: <request>, Port: -1, Proxied: true
2023-01-11 14:06:25,844 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-01-11 14:06:28,797 INFO [org.infinispan.server.core.transport.EPollAvailable] (keycloak-cache-init) ISPN005028: Native Epoll transport not available, using NIO instead: java.lang.UnsatisfiedLinkError: could not load a native library: netty_transport_native_epoll_aarch_64
2023-01-11 14:06:29,311 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-01-11 14:06:29,436 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-01-11 14:06:29,541 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-01-11 14:06:29,581 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-01-11 14:06:30,440 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-01-11 14:06:30,819 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2023-01-11 14:06:30,820 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2023-01-11 14:06:31,143 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:31,144 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:31,146 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:31,147 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2023-01-11 14:06:33,179 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) cb354516ab9d-30183: no members discovered after 2009 ms: creating cluster as coordinator
2023-01-11 14:06:33,213 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [cb354516ab9d-30183|0] (1) [cb354516ab9d-30183]
2023-01-11 14:06:33,228 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `cb354516ab9d-30183`, physical addresses are `[172.17.0.2:52593]`
2023-01-11 14:06:35,021 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: cb354516ab9d-30183, Site name: null
2023-01-11 14:06:41,372 INFO [org.keycloak.quarkus.runtime.storage.legacy.liquibase.QuarkusJpaUpdaterProvider] (main) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
2023-01-11 14:06:53,286 INFO [org.keycloak.services] (main) KC-SERVICES0050: Initializing master realm
2023-01-11 14:07:00,559 INFO [io.quarkus] (main) Keycloak 20.0.2 on JVM (powered by Quarkus 2.13.3.Final) started in 45.755s. Listening on: http://0.0.0.0:8080
2023-01-11 14:07:00,561 INFO [io.quarkus] (main) Profile prod activated.
2023-01-11 14:07:00,562 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2023-01-11 14:07:02,212 INFO [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'
Unfortunately, the result is the same.
I can access Keycloak at the configured URL and log in using the admin user created on startup. Everything seemingly works in the UI, except that it does not store any data in the Postgres database.

This is due to your use of the --optimized parameter. If you use it, it is assumed that you have already run "build", which you did not do. It is recommended to create your own Docker image which uses the upstream Docker image as a base. This is described in the documentation here.
Essentially, you need to run the build command with --db=postgres (or the KC_DB=postgres environment variable) in order to tell Quarkus to build an optimized image that will later use Postgres. That image can then be started with --optimized and it will correctly use Postgres instead of H2.
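For reference, the same two-phase flow expressed directly against kc.sh looks like this (a sketch, assuming the standard image layout under /opt/keycloak):
# Build phase: bake the database vendor into the optimized server image
/opt/keycloak/bin/kc.sh build --db=postgres
# Runtime phase: start without re-augmenting; only runtime options (DB URL, credentials, hostname) are read here
/opt/keycloak/bin/kc.sh start --optimized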
Step 1 is to create a Dockerfile (not a docker-compose.yml!)
FROM quay.io/keycloak/keycloak
# Configure a database vendor
ENV KC_DB=postgres
WORKDIR /opt/keycloak
RUN /opt/keycloak/bin/kc.sh build
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
You can also include additional things at this point, like custom providers, but this is the minimal setup you need to make it work.
Now you have two options: you can build this image with docker build and push it to your own Docker registry with docker push, or you can use it directly from your docker-compose.yaml. If you build and push, replace the image: quay.io/keycloak/keycloak line with image: your.registry/wherever/you/pushed. If you want to use it directly in your compose file, you can remove the image: line completely and replace it with
build: .
When doing this, you must ensure that the Dockerfile is in the same directory as the docker-compose.yaml.
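Put together, the keycloak service from the compose file in the question would then look roughly like this (a sketch; KC_DB is omitted because the Dockerfile above bakes it in at build time, and the remaining values are unchanged from the question):
keycloak:
  build: .
  depends_on:
    - keycloak-postgres
  container_name: keycloak
  ports:
    - 8030:8080
  environment:
    KC_DB_URL_HOST: keycloak-postgres
    KC_DB_URL_DATABASE: ${POSTGRESQL_DB}
    KC_DB_USERNAME: ${POSTGRESQL_USER}
    KC_DB_PASSWORD: ${POSTGRESQL_PASS}
    KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
    KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
    KC_HOSTNAME: ${KEYCLOAK_HOSTNAME}
    KC_PROXY: edge
    KC_HTTP_ENABLED: true
  restart: unless-stopped
  command:
    - start --optimized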

Related

Keycloak v20.0.3 - I can't connect to Postgresql 12 via IP:Port

As the title says, I am unable to connect to PostgreSQL 12 on a VM, but it connects normally to PostgreSQL deployed on K8s.
Here is my info:
Docker Desktop 4.16.3 (96739)
Keycloak v20.0.3 (v20.0.1 also got the same error)
VM: Postgresql 12 - Ubuntu 18.04
My env:
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=admin
KC_DB=postgres
KC_DB_URL_DATABASE=web-oidc-v20
KC_DB_URL_HOST=192.168.201.23
KC_DB_URL_PORT=5432
KC_DB_USERNAME=web-oidc
KC_DB_PASSWORD=Abcd#1234
KC_HOSTNAME_ADMIN_URL=http://localhost:8080
KC_HOSTNAME_URL=http://localhost:8080
(I also tried using KC_DB_URL=jdbc:postgresql://192.168.201.23:5432/web-oidc-v20 as a replacement for KC_DB_URL_DATABASE, KC_DB_URL_HOST and KC_DB_URL_PORT, but it didn't work either.)
Here is the docker run command:
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 -e KC_HOSTNAME_ADMIN_URL=http://localhost:8080 -e KC_HOSTNAME_URL=http://localhost:8080 web-oidc-v20
Container logs:
Changes detected in configuration. Updating the server image.
Updating the configuration and installing your custom providers, if any. Please wait.
2023-02-08 08:55:41,942 WARN [org.keycloak.services] (build-26) KC-SERVICES0047: load-digo-user (vn.vnpt.digo.iscs.keycloak.spi.authenticator.LoadDigoUserAuthenticatorFactory) is implementing the internal SPI authenticator. This SPI is internal and may change without notice
2023-02-08 08:55:46,242 WARN [io.quarkus.deployment.steps.ReflectiveHierarchyStep] (build-104) Unable to properly register the hierarchy of the following classes for reflection as they are not in the Jandex index:
- javax.servlet.http.Cookie (source: JacksonProcessor > org.springframework.security.web.savedrequest.DefaultSavedRequest$Builder)
Consider adding them to the index either by creating a Jandex index for your dependency via the Maven plugin, an empty META-INF/beans.xml or quarkus.index-dependency properties.
2023-02-08 08:55:53,256 INFO [io.quarkus.deployment.QuarkusAugmentor] (main) Quarkus augmentation completed in 14734ms
Server configuration updated and persisted. Run the following command to review the configuration:
kc.sh show-config
Next time you run the server, just run:
kc.sh start --optimized
2023-02-08 08:55:55,649 INFO [vn.vnpt.digo.iscs.keycloak.spi.authenticator.LoadDigoUserAuthenticatorFactory] (main) init
2023-02-08 08:55:55,751 INFO [vn.vnpt.digo.iscs.keycloak.spi.user.storage.DigoUserStorageProviderFactory] (main) init
2023-02-08 08:55:55,753 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: Base URL: http://localhost:8080, Hostname: localhost, Strict HTTPS: false, Path: /, Strict BackChannel: false, Admin URL: http://localhost:8080, Admin: localhost, Port: 8080, Proxied: false
2023-02-08 08:55:58,207 WARN [io.quarkus.agroal.runtime.DataSources] (main) Datasource <default> enables XA but transaction recovery is not enabled. Please enable transaction recovery by setting quarkus.transaction-manager.enable-recovery=true, otherwise data may be lost if the application is terminated abruptly
2023-02-08 08:55:59,484 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-02-08 08:55:59,546 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2023-02-08 08:55:59,608 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2023-02-08 08:56:00,138 INFO [org.keycloak.broker.provider.AbstractIdentityProviderMapper] (main) Registering class org.keycloak.broker.provider.mappersync.ConfigSyncEventListener
2023-02-08 08:56:00,135 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.10.Final
2023-02-08 08:56:00,195 WARN [org.keycloak.quarkus.runtime.storage.legacy.database.LegacyJpaConnectionProviderFactory] (main) Unable to prepare operational info due database exception: Connection has been closed.
2023-02-08 08:56:00,428 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2023-02-08 08:56:00,429 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2023-02-08 08:56:00,585 WARN [io.agroal.pool] (main) Datasource '<default>': Connection has been closed.
2023-02-08 08:56:00,613 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,618 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,620 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:00,622 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2023-02-08 08:56:02,644 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) 87e253982a2b-10337: no members discovered after 2001 ms: creating cluster as coordinator
2023-02-08 08:56:02,662 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [87e253982a2b-10337|0] (1) [87e253982a2b-10337]
2023-02-08 08:56:02,668 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `87e253982a2b-10337`, physical addresses are `[192.168.0.2:33278]`
2023-02-08 08:56:03,341 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
2023-02-08 08:56:03,507 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (production) mode
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to validate database
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: org.postgresql.util.PSQLException: Connection has been closed.
2023-02-08 08:56:03,508 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Connection has been closed.
2023-02-08 08:56:03,509 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
Postgresql Logs:
2023-02-08 14:59:19.553 +07 [2652184] web-oidc#web-oidc-v20 LOG: connection authorized: user=web-oidc database=web-oidc-v20 SSL enabled (protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)
2023-02-08 14:59:20.758 +07 [2652184] web-oidc#web-oidc-v20 FATAL: relation "migration_model" does not exist at character 25
2023-02-08 14:59:20.758 +07 [2652184] web-oidc#web-oidc-v20 STATEMENT: SELECT ID, VERSION FROM MIGRATION_MODEL ORDER BY UPDATE_TIME DESC
2023-02-08 14:59:20.758 +07 [2652184] web-oidc#web-oidc-v20 LOG: disconnection: session time: 0:00:01.577 user=web-oidc database=web-oidc-v20 host=192.168.200.25 port=62448
Please let me know where I am going wrong. Sincere thanks, everyone! 🙏
Try replacing localhost in your docker command with host.docker.internal.
From inside a container, localhost refers to the container itself, not to your host machine, so Docker cannot resolve it to the host.
So, you need to run
docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 -e KC_HOSTNAME_ADMIN_URL=http://host.docker.internal:8080 -e KC_HOSTNAME_URL=http://host.docker.internal:8080 web-oidc-v20
Also, I am not sure about this:
-e KC_DB_URL_HOST=192.168.201.23
Also, I see you are trying to start it in production mode. Is that intended? If not, add start-dev at the end.
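For illustration, with start-dev the end of the command would look like this (a sketch based on the command from the question, with the hostname variables left out; the argument goes after the image name, as the container command):
docker run -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB=postgres -e KC_DB_URL_DATABASE=web-oidc-v20 \
  -e KC_DB_URL_HOST=192.168.201.23 -e KC_DB_URL_PORT=5432 \
  -e KC_DB_USERNAME=web-oidc -e KC_DB_PASSWORD=Abcd#1234 \
  web-oidc-v20 start-dev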
I found the problem I was having. It was because one setting in the PostgreSQL config was blocking my connection:
exit_on_error
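For reference, a minimal way to check that setting against the Keycloak database (a sketch; host, port, database and user are taken from the question, and the fix itself lives in postgresql.conf):
# Show the current value of exit_on_error; "on" means any error terminates the session
psql -h 192.168.201.23 -p 5432 -U web-oidc -d web-oidc-v20 -c "SHOW exit_on_error;"
# If it is "on", set exit_on_error = off in postgresql.conf and reload PostgreSQL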

Import Keycloak existing realm without losing existing users

I configured a Kubernetes init container that imports an existing realm, overriding the one that is already in the environment.
I'm using this command:
/opt/keycloak/bin/kc.sh import --file=/opt/keycloak/data/import/tyk-realm-export.json
The problem I'm having is that when the existing realm is replaced, all users in it are deleted.
Is there any way to import a new configuration for the realm without losing the users?
In particular, my DB is expected to hold hundreds of thousands of users.
PS: using Keycloak >= 18.0.0
Here is a log:
Appending additional Java properties to JAVA_OPTS: -Dkeycloak.profile.feature.upload_scripts=enabled -Dkeycloak.migration.strategy=OVERWRITE_EXISTING
2022-06-17 10:17:30,048 INFO [org.keycloak.common.Profile] (main) Preview feature enabled: scripts
2022-06-17 10:17:30,198 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: <MyHostname>, Strict HTTPS: false, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: true
2022-06-17 10:17:32,225 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-06-17 10:17:32,505 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-06-17 10:17:32,559 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-06-17 10:17:33,004 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-06-17 10:17:33,311 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2022-06-17 10:17:33,312 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2022-06-17 10:17:33,599 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:33,600 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2022-06-17 10:17:35,614 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) sb-keycloak-bd4778849-n8jh5-3122: no members discovered after 2004 ms: creating cluster as coordinator
2022-06-17 10:17:35,636 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [sb-keycloak-bd4778849-n8jh5-3122|0] (1) [sb-keycloak-bd4778849-n8jh5-3122]
2022-06-17 10:17:35,647 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `sb-keycloak-bd4778849-n8jh5-3122`, physical addresses are `[10.2.0.74:41912]`
2022-06-17 10:17:36,678 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: sb-keycloak-bd4778849-n8jh5-3122, Site name: null
2022-06-17 10:17:37,972 INFO [org.keycloak.services] (main) KC-SERVICES0030: Full model import requested. Strategy: OVERWRITE_EXISTING
2022-06-17 10:17:37,983 INFO [org.keycloak.exportimport.singlefile.SingleFileImportProvider] (main) Full importing from file /opt/keycloak/data/import/tyk-realm-export.json
2022-06-17 10:17:38,388 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'tyk' already exists. Removing it before import
2022-06-17 10:17:49,348 INFO [org.keycloak.exportimport.util.ImportUtils] (main) Realm 'tyk' imported
2022-06-17 10:17:49,540 INFO [org.keycloak.services] (main) KC-SERVICES0032: Import finished successfully
2022-06-17 10:17:49,832 INFO [io.quarkus] (main) Keycloak 18.0.1 on JVM (powered by Quarkus 2.7.5.Final) started in 25.524s. Listening on: http://0.0.0.0:8080
2022-06-17 10:17:49,834 INFO [io.quarkus] (main) Profile import_export activated.
2022-06-17 10:17:49,834 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
2022-06-17 10:17:49,922 INFO [org.infinispan.CLUSTER] (main) ISPN000080: Disconnecting JGroups channel `ISPN`
2022-06-17 10:17:50,012 INFO [io.quarkus] (main) Keycloak stopped in 0.165s
Done
Maybe you could export both realms and stitch the dumps together.
I don't know your exact use-case.
But the question I would ask is: is it mandatory to import the realm again, or do you just need an update?
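If you do go the export-and-stitch route, realm dumps can be produced with the built-in export command (a sketch, assuming Keycloak >= 18 and the default install path; the realm name is taken from the question's log, and the --users option controls how user entries are written):
# Export one realm to a single JSON file
/opt/keycloak/bin/kc.sh export --realm tyk --file /opt/keycloak/data/export/tyk-realm-export.json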
The first time you import the realm, it's perfectly fine. When importing, you have to choose between two strategies: OVERWRITE_EXISTING and IGNORE_EXISTING.
However, neither fits the use case of updating particular items of your realm, like the SMTP server settings.
Let's say you have three environments: development, release, production.
Your configuration evolves and runs through each stage.
With IGNORE_EXISTING, no import will happen.
With OVERWRITE_EXISTING, it will remove all your users, since OVERWRITE_EXISTING works this way: delete the existing realm, then create a completely new one. Needless to say, this is not wanted in a production environment.
What you need in this case is just an update via the REST API. (Note that this link points to a specific version, and please note that the specified path in the documentation is wrong, which is why it differs in my curl command.)
E.g.:
Let's say you get the requirement that the emails sent by Keycloak should have a new "from" address. You develop it, it gets tested, and then it runs in production. In this case you can run curl scripts like this:
------------------------------
# First initialize your variables
export KEYCLOAK_HOST="http://localhost:8471"
export REALM_NAME="myrealm"
export CLIENT_SECRET="client-secret-from-your-admin-cli-user-in-the-myrealm"
export CLIENT_ID="admin-cli"
# get the token (mandatory for any action as an admin)
export TOKEN=$( \
  curl -s \
    -d "client_id=$CLIENT_ID" \
    -d "client_secret=$CLIENT_SECRET" \
    -d 'grant_type=client_credentials' \
    "$KEYCLOAK_HOST/auth/realms/$REALM_NAME/protocol/openid-connect/token" \
  | jq -j '.access_token')
#update your specific resource, in this case we're updating the attribute smtpServer with the according values
curl -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"smtpServer" : { "replyToDisplayName" : "my Example Display Name", "starttls" : "false", "auth" : "", "port" : "12345", "host" : "my-host.local", "replyTo" : "my-new-address-requested@supermail.com", "from" : "my-new-address-requested@supermail.com", "fromDisplayName" : "", "ssl" : ""} }' \
  "$KEYCLOAK_HOST/auth/admin/realms/$REALM_NAME"
With this approach you can update your realm and let it evolve according to its stage.
As I said, I don't know if it solves your problem, but if so I am happy I could help.

On-Premise Data-Prepper Connection Refused

Recently I've been experimenting with deploying stateful applications onto Kubernetes. For my dev environment everything is on-premise, either on my local machine or on remote VMs. I deployed OpenSearch through its Helm chart, got it and Dashboards up and running, and everything was going well. I am now trying to set up Data Prepper running as a Docker container on my local machine (the Kubernetes cluster is on remote VMs; not sure if this matters). I have the kube service that defines access to OpenSearch port-forwarded to my machine and am able to access it using "curl -u : https://localhost:9200 -k". Since my only interest is seeing it up and running, I don't care (yet) that it is insecure. When I set up my Data Prepper pipeline to hit OpenSearch in the exact same way, it refuses the connection and I'm at a loss as to why.
pipelines.yaml:
simple-sample-pipeline:
  workers: 2
  delay: "5000"
  source:
    random:
  sink:
    - opensearch:
        hosts: [ "https://localhost:9200" ]
        insecure: true
        username: <user>
        password: <admin>
        index: test
data-prepper-config.yaml
ssl: false
Docker command to run container:
docker run --name data-prepper \
-v C:/users/<profile>/documents/pipelines.yaml:/usr/share/data-prepper/pipelines.yaml \
-v C:/users/<profile>/documents/data-prepper.yaml:/usr/share/data-prepper/data-prepper-config.yaml \
opensearchproject/data-prepper:latest
Logs excerpt:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-06-07T19:39:50,959 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-06-07T19:39:50,960 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
2022-06-07T19:39:54,599 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building pipeline [simple-sample-pipeline] from provided configuration
2022-06-07T19:39:54,600 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [random] as source component for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,624 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building buffer for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,634 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building processors for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building sinks for the pipeline [simple-sample-pipeline]
2022-06-07T19:39:54,635 [main] INFO com.amazon.dataprepper.parser.PipelineParser - Building [opensearch] as sink component
2022-06-07T19:39:54,643 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.OpenSearchSink - Initializing OpenSearch sink
2022-06-07T19:39:54,649 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the username provided in the config.
2022-06-07T19:39:54,789 [main] INFO com.amazon.dataprepper.plugins.sink.opensearch.ConnectionConfiguration - Using the trust all strategy
2022-06-07T19:39:54,881 [main] ERROR com.amazon.dataprepper.plugin.PluginCreator - Encountered exception while instantiating the plugin OpenSearchSink
java.lang.reflect.InvocationTargetException: null
-----
Caused by: java.net.ConnectException: Connection refused

Kafka & Zookeeper in Gitlab CI

I'm trying to run a simple test to check whether my application starts up properly without any issues. My problem is that Faust needs a connection to Kafka on initialization, so I'm trying to run Kafka with ZooKeeper as services, but I'm not able to connect them properly.
Error:
2021-12-16T13:53:51.385341793Z [2021-12-16 13:53:51,385] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2021-12-16T13:53:51.391012666Z [2021-12-16 13:53:51,390] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
2021-12-16T13:53:51.395158219Z [2021-12-16 13:53:51,395] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
2021-12-16T13:53:51.399485772Z [2021-12-16 13:53:51,397] ERROR Unable to resolve address: zookeeper:2181 (org.apache.zookeeper.client.StaticHostProvider)
2021-12-16T13:53:51.399499707Z java.net.UnknownHostException: zookeeper: Name or service not known
2021-12-16T13:53:51.399503169Z at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
2021-12-16T13:53:51.399506400Z at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
2021-12-16T13:53:51.399509510Z at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)
2021-12-16T13:53:51.399512353Z at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
2021-12-16T13:53:51.399531020Z at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
2021-12-16T13:53:51.399534098Z at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
2021-12-16T13:53:51.399537044Z at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
2021-12-16T13:53:51.399540881Z at org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:88)
2021-12-16T13:53:51.399544771Z at org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:141)
2021-12-16T13:53:51.399548877Z at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:368)
2021-12-16T13:53:51.399553025Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1207)
2021-12-16T13:53:51.406655054Z [2021-12-16 13:53:51,406] WARN Session 0x0 for sever zookeeper:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
2021-12-16T13:53:51.406696302Z java.lang.IllegalArgumentException: Unable to canonicalize address zookeeper:2181 because it's not resolvable
2021-12-16T13:53:51.406703099Z at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:78)
2021-12-16T13:53:51.406707676Z at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:41)
2021-12-16T13:53:51.406711700Z at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
2021-12-16T13:53:51.406715631Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
2021-12-16T13:53:52.508636206Z [2021-12-16 13:53:52,508] ERROR Unable to resolve address: zookeeper:2181 (org.apache.zookeeper.client.StaticHostProvider)
2021-12-16T13:53:52.508665462Z java.net.UnknownHostException: zookeeper
.gitlab-ci.yml:
.zoo_service: &zoo_service
  name: zookeeper:latest
  alias: zookeeper

.kafka_service: &kafka_service
  name: bitnami/kafka:latest
  alias: kafka

faust:
  variables:
    ALLOW_ANONYMOUS_LOGIN: "yes"
    KAFKA_BROKER_ID: 1
    KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092"
    KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://127.0.0.1:9092"
    KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper:2181"
    ALLOW_PLAINTEXT_LISTENER: "yes"
  stage: test
  <<: *python_image
  services:
    - *zoo_service
    - *kafka_service
  before_script:
    - *setup_venv_script
  script:
    - faust -A runner worker -l info & sleep 15; kill -HUP $!
  <<: *load_env
  except:
    - schedules
I was hoping I was doing it the right way; sadly, there are not many resources I can find about this issue. I understand the issue is between Kafka and ZooKeeper, but I'm not sure how to fix it (I thought this was the correct way). Can two services even communicate with each other in CI?
Thanks!
Glancing over the GitLab CI docs about connecting to different services, it mentions a feature flag to allow cross-service communication, so try
faust:
  variables:
    FF_NETWORK_PER_BUILD: 1
    ...
  services:
    ...
Also, for Kafka communication, it needs to advertise its alias rather than localhost, so change:
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
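Putting both suggestions together, the relevant part of the job from the question would look roughly like this (a sketch based on the question's own variables and service anchors):
faust:
  variables:
    FF_NETWORK_PER_BUILD: 1
    ALLOW_ANONYMOUS_LOGIN: "yes"
    KAFKA_BROKER_ID: 1
    KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092"
    # Advertise the service alias so clients on the per-build network can reach the broker
    KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
    KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper:2181"
    ALLOW_PLAINTEXT_LISTENER: "yes"
  services:
    - *zoo_service
    - *kafka_service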

Keycloak MySQL setup failed with error "Timeout after [300] seconds waiting for service container stability. Operation will roll back."

I tried with the 8.x and 10.x versions of Keycloak, and also with the Keycloak Docker image, but I get the issue below while configuring Keycloak with MySQL:
12:27:16,047 DEBUG [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 71) Foreign key constraint added to USER_GROUP_MEMBERSHIP (USER_ID)
12:27:17,356 DEBUG [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 71) Primary key added to GROUP_ROLE_MAPPING (ROLE_ID, GROUP_ID)
12:27:18,637 DEBUG [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 71) Foreign key constraint added to GROUP_ROLE_MAPPING (GROUP_ID)
12:27:19,384 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting for service container stability. Operation will roll back. Step that first updated the service container was 'add' at address '[
("core-service" => "management"),
("management-interface" => "http-interface")
]'
12:27:20,326 DEBUG [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 71) Foreign key constraint added to GROUP_ROLE_MAPPING (ROLE_ID)
12:27:21,381 DEBUG [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 71) Unique constraint added to REALM_DEFAULT_GROUPS(GROUP_ID)
12:27:23,153 DEBUG [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 71) Foreign key constraint added to REALM_DEFAULT_GROUPS (REALM_ID)
12:27:24,389 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0190: Step handler org.jboss.as.server.DeployerChainAddHandler$FinalRuntimeStepHandler#2b5e08f5 for operation add-deployer-chains at address [] failed -- java.util.concurrent.TimeoutException: java.util.concurrent.TimeoutException
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.OperationContextImpl.waitForRemovals(OperationContextImpl.java:523)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext$Step.handleResult(AbstractOperationContext.java:1518)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext$Step.finalizeInternal(AbstractOperationContext.java:1472)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext$Step.finalizeStep(AbstractOperationContext.java:1445)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext$Step.access$400(AbstractOperationContext.java:1319)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext.executeResultHandlerPhase(AbstractOperationContext.java:876)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:726)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:467)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1413)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.ModelControllerImpl.boot(ModelControllerImpl.java:527)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:515)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:477)
at org.jboss.as.server#11.1.1.Final//org.jboss.as.server.ServerService.boot(ServerService.java:448)
at org.jboss.as.server#11.1.1.Final//org.jboss.as.server.ServerService.boot(ServerService.java:401)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:416)
at java.base/java.lang.Thread.run(Thread.java:834)
12:27:24,391 ERROR [org.jboss.as.controller.client] (Controller Boot Thread) WFLYCTL0190: Step handler org.jboss.as.server.DeployerChainAddHandler$FinalRuntimeStepHandler#2b5e08f5 for operation add-deployer-chains at address [] failed -- java.util.concurrent.TimeoutException
After increasing the timeout using the command
$ bin/standalone.sh -Djboss.as.management.blocking.timeout=3600
it failed with the error below:
17:26:32,383 INFO [org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider] (ServerService Thread Pool -- 68) Initializing database schema. Using changelog META-INF/jpa-changelog-master.xml
17:31:25,854 WARN [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff7f000101:84ae906:5ee761e5:f in state RUN
17:31:25,870 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] successfully canceled TX 0:ffff7f000101:84ae906:5ee761e5:f
17:31:27,355 WARN [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff7f000101:84ae906:5ee761e5:12 in state RUN
17:31:27,356 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] successfully canceled TX 0:ffff7f000101:84ae906:5ee761e5:12
17:31:31,222 WARN [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff7f000101:84ae906:5ee761e5:15 in state RUN
17:31:31,225 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012095: Abort of action id 0:ffff7f000101:84ae906:5ee761e5:15 invoked while multiple threads active within it.
17:31:31,250 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012381: Action id 0:ffff7f000101:84ae906:5ee761e5:15 completed with multiple threads - thread ServerService Thread Pool -- 68 was in progress with java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
java.net.SocketInputStream.read(SocketInputStream.java:171)
java.net.SocketInputStream.read(SocketInputStream.java:141)
com.mysql.cj.protocol.ReadAheadInputStream.fill(ReadAheadInputStream.java:107)
com.mysql.cj.protocol.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:150)
com.mysql.cj.protocol.ReadAheadInputStream.read(ReadAheadInputStream.java:180)
java.io.FilterInputStream.read(FilterInputStream.java:133)
com.mysql.cj.protocol.FullReadInputStream.readFully(FullReadInputStream.java:64)
com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:63)
com.mysql.cj.protocol.a.SimplePacketReader.readHeader(SimplePacketReader.java:45)
com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:52)
com.mysql.cj.protocol.a.TimeTrackingPacketReader.readHeader(TimeTrackingPacketReader.java:41)
com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:54)
com.mysql.cj.protocol.a.MultiPacketReader.readHeader(MultiPacketReader.java:44)
com.mysql.cj.protocol.a.NativeProtocol.readMessage(NativeProtocol.java:533)
com.mysql.cj.protocol.a.NativeProtocol.checkErrorMessage(NativeProtocol.java:703)
com.mysql.cj.protocol.a.NativeProtocol.sendCommand(NativeProtocol.java:642)
com.mysql.cj.protocol.a.NativeProtocol.sendQueryPacket(NativeProtocol.java:941)
com.mysql.cj.protocol.a.NativeProtocol.sendQueryString(NativeProtocol.java:887)
com.mysql.cj.NativeSession.execSQL(NativeSession.java:1073)
com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:724)
com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:648)
org.jboss.jca.adapters.jdbc.WrappedStatement.execute(WrappedStatement.java:198)
liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:307)
liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:113)
liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1277)
liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1259)
liquibase.changelog.ChangeSet.execute(ChangeSet.java:582)
liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:51)
liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:79)
liquibase.Liquibase.update(Liquibase.java:214)
liquibase.Liquibase.update(Liquibase.java:192)
liquibase.Liquibase.update(Liquibase.java:188)
org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.updateChangeSet(LiquibaseJpaUpdaterProvider.java:182)
org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:102)
org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:81)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory$2.run(DefaultJpaConnectionProviderFactory.java:341)
org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.update(DefaultJpaConnectionProviderFactory.java:334)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.migration(DefaultJpaConnectionProviderFactory.java:306)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lambda$lazyInit$0(DefaultJpaConnectionProviderFactory.java:182)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory$$Lambda$802/938288417.run(Unknown Source)
org.keycloak.models.utils.KeycloakModelUtils.suspendJtaTransaction(KeycloakModelUtils.java:682)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lazyInit(DefaultJpaConnectionProviderFactory.java:133)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:81)
org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:59)
org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:204)
org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:51)
org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:33)
org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:204)
org.keycloak.services.DefaultKeycloakSession.realmLocalStorage(DefaultKeycloakSession.java:157)
org.keycloak.models.cache.infinispan.RealmCacheSession.getRealmDelegate(RealmCacheSession.java:148)
org.keycloak.models.cache.infinispan.RealmCacheSession.getMigrationModel(RealmCacheSession.java:141)
org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:97)
org.keycloak.services.resources.KeycloakApplication.migrateModel(KeycloakApplication.java:244)
org.keycloak.services.resources.KeycloakApplication.migrateAndBootstrap(KeycloakApplication.java:185)
org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:147)
org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
org.keycloak.services.resources.KeycloakApplication.startup(KeycloakApplication.java:138)
org.keycloak.services.resources.KeycloakApplication$$Lambda$778/1366630785.run(Unknown Source)
org.keycloak.provider.wildfly.WildflyPlatform.onStartup(WildflyPlatform.java:29)
org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:125)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:152)
org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2805)
org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:369)
org.jboss.resteasy.spi.ResteasyDeployment.startInternal(ResteasyDeployment.java:281)
org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:92)
org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:119)
org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:305)
io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:145)
io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:585)
io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:556)
io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction$$Lambda$734/2095679667.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$735/1593765930.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$735/1593765930.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$735/1593765930.call(Unknown Source)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction$$Lambda$735/1593765930.call(Unknown Source)
io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:598)
org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:97)
org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:78)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
java.lang.Thread.run(Thread.java:748)
org.jboss.threads.JBossThread.run(JBossThread.java:485)
17:31:31,252 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012108: CheckedAction::check - atomic action 0:ffff7f000101:84ae906:5ee761e5:15 aborting with 1 threads active!
17:31:31,252 WARN [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012121: TransactionReaper::doCancellations worker Thread[Transaction Reaper Worker 0,5,main] successfully canceled TX 0:ffff7f000101:84ae906:5ee761e5:15
17:31:31,514 WARN [com.arjuna.ats.arjuna] (ServerService Thread Pool -- 68) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000101:84ae906:5ee761e5:15
17:31:31,516 WARN [com.arjuna.ats.arjuna] (ServerService Thread Pool -- 68) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000101:84ae906:5ee761e5:12
17:31:31,517 WARN [com.arjuna.ats.arjuna] (ServerService Thread Pool -- 68) ARJUNA012077: Abort called on already aborted atomic action 0:ffff7f000101:84ae906:5ee761e5:f
17:31:31,518 FATAL [org.keycloak.services] (ServerService Thread Pool -- 68) java.lang.RuntimeException: Failed to update database
17:31:31,520 INFO [org.jboss.as.server] (Thread-2) WFLYSRV0220: Server shutdown has been requested via an OS signal
17:31:32,024 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 75) WFLYCLINF0003: Stopped sessions cache from keycloak container
17:31:32,038 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0028: Stopped deployment keycloak-server.war (runtime-name: keycloak-server.war) in 490ms
17:31:32,068 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("subsystem" => "microprofile-metrics-smallrye")]): java.lang.NullPointerException
at org.wildfly.extension.microprofile.metrics.MicroProfileMetricsSubsystemAdd$2.execute(MicroProfileMetricsSubsystemAdd.java:86)
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:999)
at org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:743)
at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:467)
at org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1413)
at org.jboss.as.controller.ModelControllerImpl.boot(ModelControllerImpl.java:527)
at org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:515)
at org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:477)
at org.jboss.as.server.ServerService.boot(ServerService.java:448)
at org.jboss.as.server.ServerService.boot(ServerService.java:401)
at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:416)
at java.lang.Thread.run(Thread.java:748)
17:31:32,081 ERROR [org.jboss.as.server] (ServerService Thread Pool -- 55) WFLYSRV0022: Deploy of deployment "keycloak-server.war" was rolled back with no failure message
17:31:32,132 INFO [org.jboss.as] (MSC service thread 1-5) WFLYSRV0050: Keycloak 10.0.1 (WildFly Core 11.1.1.Final) stopped in 584ms
I am looking for a solution such as:
If I could get the initial DDL & DML of Keycloak
If Keycloak could continue from where it failed
If the timeout could be increased. In this case, I tried the command below, but it did not work:
docker run -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=password -e DB_VENDOR=mysql -e DB_DATABASE=keycloak -e DB_USER=keycloak -e DB_PASSWORD=keycloak -e DB_ADDR=mysql -e ROOT_LOGLEVEL=DEBUG -e JAVA_OPTIONS="-Djboss.as.management.blocking.timeout=900" -e KEYCLOAK_LOGLEVEL=DEBUG --link=mysql jboss/keycloak
I ran into this issue today, after my Keycloak instance suddenly stopped working.
I took a look at my underlying PostgreSQL database and retrieved the locks.
And what I saw was that a Liquibase lock had not been released.
So I used this command to search for open activities inside the PostgreSQL DB:
SELECT pid, state, usename, query, query_start
FROM pg_stat_activity
WHERE pid in (
  select pid from pg_locks l
  join pg_class t on l.relation = t.oid
  where t.relkind = 'r'
);
which returned the following:
pid | state | usename | query | query_start
-----+--------+----------+----------------------------------------------------------------------+-------------------------------
32 | active | keycloak | SELECT ID FROM public.databasechangeloglock WHERE ID=1000 FOR UPDATE | 2020-10-11 14:21:07.396058+00
354 | active | keycloak | SELECT ID FROM public.databasechangeloglock WHERE ID=1000 FOR UPDATE | 2020-10-11 19:42:06.636659+00
Those database locks come from Liquibase, which is used by Keycloak to upgrade your data structure/definitions. These locks were not released while upgrading the schema.
After dropping the table, the service now works again.
DROP TABLE databasechangeloglock;
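A possibly less drastic alternative (not from the answer above, just based on the standard Liquibase lock table layout) is to clear the lock row instead of dropping the table; the ID 1000 matches the locked query shown above, and the container name is a placeholder:
# Release the Liquibase lock held in the Keycloak database
docker exec -it <postgres-container> psql -U keycloak -d keycloak \
  -c "UPDATE databasechangeloglock SET locked = FALSE, lockgranted = NULL, lockedby = NULL WHERE id = 1000;"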
In my case neither increasing the blocking timeout nor adding a default timeout to the transaction manager worked.
<coordinator-environment default-timeout="600" .../> # did not work
...
JAVA_TOOLS_OPTS: "-Djboss.as.management.blocking.timeout=3600" # did not work
It seems that your Keycloak tried to update the schema during startup and it took too much time, so WildFly aborted the Keycloak deployment with a timeout. Try adding the following property when starting WildFly:
${KEYCLOAK_HOME}/bin/standalone.sh -Djboss.as.management.blocking.timeout=3600
If you are running Keycloak as a Docker container, then try using the following environment variable:
JAVA_TOOLS_OPTS: "-Djboss.as.management.blocking.timeout=3600"
Note: this will be displayed in the Keycloak container logs, which can be helpful to confirm whether Keycloak picks up the environment variable or not.
Let me know if this works and if you need anything.
jboss.as.management.blocking.timeout seems to be WildFly's deployment timeout.
In the above error case (ARJUNA012121: TransactionReaper::doCancellations worker) you can adjust the transaction timeout instead. It can be set in the standalone-ha.xml file. For example, to set it to 600 seconds:
<subsystem xmlns="urn:jboss:domain:transactions:5.0">
<coordinator-environment default-timeout="600"/>
...
</subsystem>
For docker deployments, the environment variable to set is JAVA_OPTS_APPEND
Example:
JAVA_OPTS_APPEND="-Djboss.as.management.blocking.timeout=7200"
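For instance, applied to the docker run command from the question, it would look roughly like this (a sketch for the legacy jboss/keycloak image; the other flags are unchanged from the question):
docker run -p 8080:8080 \
  -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=password \
  -e DB_VENDOR=mysql -e DB_DATABASE=keycloak -e DB_USER=keycloak \
  -e DB_PASSWORD=keycloak -e DB_ADDR=mysql \
  -e JAVA_OPTS_APPEND="-Djboss.as.management.blocking.timeout=7200" \
  --link=mysql jboss/keycloak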
I also had this problem.
All solutions regarding modifying timeouts or system resources are just workarounds and do not always work.
Initially, I suspected a network problem. It was not. It was the database, which had become "locked". I did not try the previous solution https://stackoverflow.com/a/64308446/7529228, but I think it can work.
I went for a more drastic solution:
I dumped the Postgres database (docker exec -it f84 pg_dump -c -U keycloak keycloak --no-owner > db-dump.sql where f84 is the ID of the container)
I removed the Keycloak instance (Volume, network, ... everything)
I deployed a new Keycloak instance.
I restored my database (cat db-dump.sql | docker exec -i keycloak_postgres.1.kyu24eqvnwaahg1obmy5omtfh psql -U keycloak keycloak - Adapt with your own container name)
I restarted the Keycloak instance, and the problem was solved.
Thanks to https://pvelati.dev/2021/03/dump-and-restore-postgres-db-with-docker/ for the database dump/restoration procedure.
Make sure that you choose the right MySQL version.
Use MySQL 5.7 instead of MySQL 8.0.32 for Keycloak versions between 4.0.0 and 16.1.1. Or migrate to a newer Keycloak version.
Example of docker-compose.yml with Keycloak 16.1.0 using MySQL 5.7 (you can change the version; I also tested it with Keycloak 10).
version: '3'
volumes:
  mysql_data:
    driver: local
services:
  mysql:
    image: mysql:5.7
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
  keycloak:
    image: quay.io/keycloak/keycloak:16.1.0
    environment:
      DB_VENDOR: MYSQL
      DB_ADDR: mysql
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: Pa55w0rd
      # Uncomment the line below if you want to specify JDBC parameters. The parameter below is just an example, and it shouldn't be used in production without knowledge. It is highly recommended that you read the MySQL JDBC driver documentation in order to use it.
      #JDBC_PARAMS: "connectTimeout=30000"
    ports:
      - 8080:8080
    depends_on:
      - mysql
Run the following docker-compose.yml example to reproduce the error. The error example is using Keycloak 16.1.0 with MySQL 8.0.32, instead of MySQL 5.7.
version: '3'
volumes:
  mysql_data:
    driver: local
services:
  mysql:
    image: mysql:8.0.32
    volumes:
      - mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
  keycloak:
    image: quay.io/keycloak/keycloak:16.1.0
    environment:
      DB_VENDOR: MYSQL
      DB_ADDR: mysql
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: Pa55w0rd
      # Uncomment the line below if you want to specify JDBC parameters. The parameter below is just an example, and it shouldn't be used in production without knowledge. It is highly recommended that you read the MySQL JDBC driver documentation in order to use it.
      #JDBC_PARAMS: "connectTimeout=30000"
    ports:
      - 8080:8080
    depends_on:
      - mysql
And the following error message will appear:
Timeout after [300] seconds waiting for service container stability. Operation will roll back.
My advice is to check which MySQL version is compatible with your Keycloak.
Source:
https://www.keycloak.org/docs/16.1/server_installation/index.html#mysql-database
https://github.com/keycloak/keycloak-containers/blob/ef2c7027be5fb9a65c34e3511dc45c950831634e/docker-compose-examples/keycloak-mysql.yml