Keycloak upgrade from 11.0.2 to 16.1.0 - docker-compose

What is the supported MariaDB version for Keycloak 16.1.0?
I have MariaDB 10.6.5 and Keycloak 11.0.2. I want to upgrade Keycloak from 11.0.2 to any higher version. Keycloak 16.1.0 works fine with MariaDB 10.6.5 on a fresh database, but when I restore my MariaDB backup, Keycloak fails.
keycloak | 06:56:13,814 DEBUG [org.hibernate.engine.jdbc.spi.SqlExceptionHelper] (default task-5) could not clear warnings: java.sql.SQLException: IJ031070: Transaction cannot proceed: STATUS_ROLLEDBACK
keycloak |     at org.jboss.ironjacamar.jdbcadapters#1.5.3.Final//org.jboss.jca.adapters.jdbc.WrapperDataSource.checkTransactionActive(WrapperDataSource.java:272)
keycloak |     at org.jboss.ironjacamar.jdbcadapters#1.5.3.Final//org.jboss.jca.adapters.jdbc.WrappedConnection.checkTransactionActive(WrappedConnection.java:2005)
keycloak |     at org.jboss.ironjacamar.jdbcadapters#1.5.3.Final//org.jboss.jca.adapters.jdbc.WrappedConnection.checkStatus(WrappedConnection.java:2020)
keycloak |     at org.jboss.ironjacamar.jdbcadapters#1.5.3.Final//org.jboss.jca.adapters.jdbc.WrappedConnection.checkTransaction(WrappedConnection.java:1994)
keycloak |     at org.jboss.ironjacamar.jdbcadapters#1.5.3.Final//org.jboss.jca.adapters.jdbc.WrappedConnection.clearWarnings(WrappedConnection.java:1153)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.engine.jdbc.spi.SqlExceptionHelper.handleAndClearWarnings(SqlExceptionHelper.java:299)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.engine.jdbc.spi.SqlExceptionHelper.logAndClearWarnings(SqlExceptionHelper.java:269)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.releaseConnection(LogicalConnectionManagedImpl.java:194)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.afterTransaction(LogicalConnectionManagedImpl.java:162)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.afterTransaction(JdbcCoordinatorImpl.java:274)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.afterTransactionCompletion(JdbcCoordinatorImpl.java:452)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.resource.transaction.backend.jta.internal.JtaTransactionCoordinatorImpl.afterCompletion(JtaTransactionCoordinatorImpl.java:381)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.resource.transaction.backend.jta.internal.synchronization.SynchronizationCallbackCoordinatorNonTrackingImpl.doAfterCompletion(SynchronizationCallbackCoordinatorNonTrackingImpl.java:60)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.resource.transaction.backend.jta.internal.synchronization.SynchronizationCallbackCoordinatorTrackingImpl.afterCompletion(SynchronizationCallbackCoordinatorTrackingImpl.java:72)
keycloak |     at org.hibernate#5.3.24.Final//org.hibernate.resource.transaction.backend.jta.internal.synchronization.RegisteredSynchronization.afterCompletion(RegisteredSynchronization.java:44)
keycloak |     at org.wildfly.transaction.client#2.0.0.Final//org.wildfly.transaction.client.AbstractTransaction.performConsumer(AbstractTransaction.java:223)
keycloak |     at org.wildfly.transaction.client#2.0.0.Final//org.wildfly.transaction.client.AbstractTransaction$AssociatingSynchronization.afterCompletion(AbstractTransaction.java:306)
keycloak |     at org.jboss.jts//com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.afterCompletion(SynchronizationImple.java:96)
keycloak |     at org.jboss.jts//com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.afterCompletion(TwoPhaseCoordinator.java:545)
keycloak |     at org.jboss.jts//com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.afterCompletion(TwoPhaseCoordinator.java:472)
keycloak |     at org.jboss.jts//com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.cancel(TwoPhaseCoordinator.java:127)
keycloak |     at org.jboss.jts//com.arjuna.ats.arjuna.AtomicAction.abort(AtomicAction.java:186)
keycloak |     at org.jboss.jts//com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.rollbackAndDisassociate(TransactionImple.java:1377)
keycloak |     at org.jboss.jts//com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.rollback(BaseTransaction.java:145)
keycloak |     at org.jboss.jts.integration//com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.rollback(BaseTransactionManagerDelegate.java:139)
keycloak |     at org.wildfly.transaction.client#2.0.0.Final//org.wildfly.transaction.client.LocalTransaction.rollbackAndDissociate(LocalTransaction.java:118)
keycloak |     at org.wildfly.transaction.client#2.0.0.Final//org.wildfly.transaction.client.ContextTransactionManager.rollback(ContextTransactionManager.java:83)
keycloak |     at org.keycloak.keycloak-services#16.1.0//org.keycloak.transaction.JtaTransactionWrapper.rollback(JtaTransactionWrapper.java:102)
keycloak |     at org.keycloak.keycloak-services#16.1.0//org.keycloak.services.DefaultKeycloakTransactionManager.rollback(DefaultKeycloakTransactionManager.java:182)
keycloak |     at org.keycloak.keycloak-services#16.1.0//org.keycloak.services.DefaultKeycloakTransactionManager.rollback(DefaultKeycloakTransactionManager.java:176)
keycloak |     at org.keycloak.keycloak-services#16.1.0//org.keycloak.services.filters.AbstractRequestFilter.close(AbstractRequestFilter.java:62)
keycloak |     at org.keycloak.keycloak-services#16.1.0//org.keycloak.services.filters.AbstractRequestFilter.filter(AbstractRequestFilter.java:49)
keycloak |     at org.keycloak.keycloak-wildfly-extensions#16.1.0//org.keycloak.provider.wildfly.WildFlyRequestFilter.doFilter(WildFlyRequestFilter.java:39)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletChain$1.handleRequest(ServletChain.java:68)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
keycloak |     at org.wildfly.security.elytron-web.undertow-server#1.10.1.Final//org.wildfly.elytron.web.undertow.server.ElytronRunAsHandler.lambda$handleRequest$1(ElytronRunAsHandler.java:68)
keycloak |     at org.wildfly.security.elytron-base#1.18.1.Final//org.wildfly.security.auth.server.FlexibleIdentityAssociation.runAsFunctionEx(FlexibleIdentityAssociation.java:103)
keycloak |     at org.wildfly.security.elytron-base#1.18.1.Final//org.wildfly.security.auth.server.Scoped.runAsFunctionEx(Scoped.java:161)
keycloak |     at org.wildfly.security.elytron-base#1.18.1.Final//org.wildfly.security.auth.server.Scoped.runAs(Scoped.java:73)
keycloak |     at org.wildfly.security.elytron-web.undertow-server#1.10.1.Final//org.wildfly.elytron.web.undertow.server.ElytronRunAsHandler.handleRequest(ElytronRunAsHandler.java:67)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.RedirectDirHandler.handleRequest(RedirectDirHandler.java:68)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:117)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43)
keycloak |     at org.wildfly.security.elytron-web.undertow-server-servlet#1.10.1.Final//org.wildfly.elytron.web.undertow.server.servlet.CleanUpHandler.handleRequest(CleanUpHandler.java:38)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
keycloak |     at org.wildfly.extension.undertow#26.0.0.Final//org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
keycloak |     at org.wildfly.extension.undertow#26.0.0.Final//org.wildfly.extension.undertow.deployment.GlobalRequestControllerHandler.handleRequest(GlobalRequestControllerHandler.java:68)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.SendErrorPageHandler.handleRequest(SendErrorPageHandler.java:52)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:275)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:79)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:134)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:131)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
keycloak |     at org.wildfly.extension.undertow#26.0.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1544)
keycloak |     at org.wildfly.extension.undertow#26.0.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1544)
keycloak |     at org.wildfly.extension.undertow#26.0.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1544)
keycloak |     at org.wildfly.extension.undertow#26.0.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1544)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:255)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:79)
keycloak |     at io.undertow.servlet#2.2.14.Final//io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:100)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.server.Connectors.executeRootHandler(Connectors.java:387)
keycloak |     at io.undertow.core#2.2.14.Final//io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:852)
keycloak |     at org.jboss.threads#2.4.0.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
keycloak |     at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
keycloak |     at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
keycloak |     at org.jboss.threads#2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
keycloak |     at org.jboss.xnio#3.8.5.Final//org.xnio.XnioWorker$WorkerThreadFactory$1$1.run(XnioWorker.java:1280)
keycloak |     at java.base/java.lang.Thread.run(Thread.java:829)
keycloak |
keycloak | 06:56:13,814 DEBUG [org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory] (default task-5) java.sql.Connection#endRequest has been invoked
keycloak | 06:56:13,814 DEBUG [org.jbo
I'm using MariaDB with Keycloak, and I want to restore my MariaDB backup.
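For reference, this is the kind of dump-and-restore flow I would expect to work (a sketch only; the service name mariadb, database name keycloak, and the MYSQL_ROOT_PASSWORD variable are assumptions about a typical docker-compose setup, adjust them to yours):

# Dump the Keycloak database from the old MariaDB container
# (service and database names below are assumptions)
docker-compose exec -T mariadb mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" keycloak > keycloak-backup.sql

# Restore the dump into the fresh MariaDB before Keycloak 16.1.0 boots for the first time
docker-compose exec -T mariadb mysql -u root -p"$MYSQL_ROOT_PASSWORD" keycloak < keycloak-backup.sql

# Then start Keycloak and let it run its automatic schema migration against the restored data
docker-compose up -d keycloak

The important detail is that the restore has to happen before Keycloak 16.1.0 first connects to the database, so its Liquibase migration can upgrade the 11.0.2 schema in place.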

Related

Port forward to postgres kubernetes pod fails with connection reset when executing certain commands via psql

I have a postgres deployment whose configuration looks like this:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.5-0
  postgresVersion: 13
  users:
    - name: hippo
      databases: ["hippo"]
      options: "CREATEDB"
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.36-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi
And I forward the local port 5432 to it, like so
DB_PORT=5432
PG_CLUSTER_PRIMARY_POD=$(microk8s kubectl get pod -o name \
-l postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master)
microk8s kubectl port-forward "${PG_CLUSTER_PRIMARY_POD}" ${DB_PORT}:${DB_PORT}
And I can then connect via psql. I can list the databases and connect to the hippo database.
rob@rob-Vostro-5402:~$ psql postgresql://hippo:Zw%5EAQuPf%3D%3Bi%3B%3F2%3E1RRbLTLrT@localhost:5432/hippo
psql (13.7 (Ubuntu 13.7-1.pgdg20.04+1), server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
hippo=> \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 hippo     | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 | =Tc/postgres          +
           |          |          |             |             | postgres=CTc/postgres+
           |          |          |             |             | hippo=CTc/postgres
 postgres  | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 |
 template0 | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres           +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres           +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)
hippo=> \c hippo
psql (13.7 (Ubuntu 13.7-1.pgdg20.04+1), server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
You are now connected to database "hippo" as user "hippo".
However, when I run \dt, I get disconnected.
hippo=> \dt
SSL SYSCALL error: EOF detected
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>
And the terminal in which I was running the port-forwarding now shows an error.
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
Handling connection for 5432
Handling connection for 5432
E0625 15:59:16.859963 72918 portforward.go:406] an error occurred forwarding 5432 -> 5432: error forwarding port 5432 to pod 8f58bd2f87d0ef63b969725920c98793f0dd1f41a25dc04bfe1b06a0ad7b58fc, uid : failed to execute portforward in network namespace "/var/run/netns/cni-0f76b252-b44c-017f-e337-b0285117cc4e": read tcp4 127.0.0.1:46248->127.0.0.1:5432: read: connection reset by peer
E0625 15:59:16.860854 72918 portforward.go:234] lost connection to pod
Any help would be much appreciated. Thanks
I have run into the same brittle behavior when port-forwarding to Postgres and resorted to a simple reconnect loop as a workable solution:
while true; do kubectl port-forward "$path" -n "$namespace" "$ports"; done
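For the cluster in the question, a concrete invocation might look like this (a sketch reusing the pod lookup and port from the question):

# Re-establish the forward whenever the connection drops
while true; do
  microk8s kubectl port-forward "${PG_CLUSTER_PRIMARY_POD}" 5432:5432
  sleep 1
done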

Docker-compose service with traefik and internal network

I want to use both an internal network and the external traefik network with my containers.
Problem: when I define an internal network, traefik loses communication with my service.
traefik_webgateway internal network
+----------+ +--------------------------+
| traefik | | +------+ +-----+ |
| +--------------+ | app | | api | |
| | | | | +------+ +-----+ |
| | | proxy | | |
| | | | | +-----+ +------+ |
| | | | | |auth | |worker| |
| +--------------+ +-----+ +------+ |
| | | |
+----------+ +--------------------------+
docker-compose.traefik.yml:
services:
  traefik:
    image: traefik:v2.4
    restart: unless-stopped
    ports:
      - 80:80
      - 8080:8080
      - 443:443
    networks:
      - webgateway

networks:
  webgateway:
    driver: bridge
docker-compose.yml:
services:
  proxy:
    networks:
      - internal # <=== this causes traefik to point the healthcheck to the 172 IP instead of the 192 IP (see Edit below)
      - traefik
    labels:
      - traefik.http....
  app:
    networks:
      - internal
  api:
    networks:
      - internal
  auth:
    networks:
      - internal
  worker:
    networks:
      - internal

networks:
  internal:
  traefik:
    external:
      name: traefik_webgateway
I don't want my services to use the external traefik network because I want my services to be namespaced for my green/blue deployment.
I was wondering why this is happening and if there's a solution.
Thanks in advance!
EDIT:
I obtained the proxy container's network:
docker inspect -f '{{range.NetworkSettings.Networks}} {{.IPAddress}}{{end}}' <container id>
And received two IPs:
172.18.A.A 192.168.B.B
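To see which network each of those IPs belongs to, the same inspect command can be extended to print the network names as well (a sketch using Docker's Go-template syntax):

docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}}: {{$net.IPAddress}} {{end}}' <container id>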
I have a healthcheck that pings /health. On the traefik dashboard:
http://172.18.A.A:8000
Traefik is unable to communicate with this IP. It sometimes works when Traefik happens to pick the other IP, 192.168.B.B.
From within the proxy container, I'm able to ping proxy (it uses the "B" IP)
PING proxy (192.168.B.B): 56 data bytes
64 bytes from 192.168.16.4: seq=0 ttl=64 time=0.090 ms
64 bytes from 192.168.16.4: seq=1 ttl=64 time=0.080 ms
64 bytes from 192.168.16.4: seq=2 ttl=64 time=0.065 ms
64 bytes from 192.168.16.4: seq=3 ttl=64 time=0.069 ms
I am able to ping: ping 192.168.B.B
I am unable to ping: ping 172.18.A.A
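A sketch of one avenue to try: when a container is attached to several networks, Traefik may pick an IP on a network it cannot reach. Traefik supports pinning the network it should use per container via the traefik.docker.network label (or globally via providers.docker.network in the static configuration), so it would use the IP on the shared network:

proxy:
  networks:
    - internal
    - traefik
  labels:
    - traefik.http....
    - traefik.docker.network=traefik_webgateway # make traefik use this network's IP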

Keycloak fails because it can't find my theme or the default theme, with a NullPointerException

I had a working local keycloak image when I had to nuke all my docker images for another issue. I then brought up my keycloak image again with:
version: '3.6'

volumes:
  keycloak_postgres_data: {}

services:
  postgres-keycloak:
    image: postgres:10-alpine
    container_name: postgres
    volumes:
      - keycloak_postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password

  keycloak:
    image: jboss/keycloak:4.1.0.Final
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: testing
    ports:
      - 8088:8080
    volumes:
      - ./themes/puretalent:/opt/jboss/keycloak/themes/puretalent
      - ./themes/fifteenrock:/opt/jboss/keycloak/themes/fifteenrock
    depends_on:
      - postgres-keycloak
The themes mentioned in the volumes are present in the same folder and are also in the container when I bring it up. In the Realm Settings, I have tried setting the theme to my specific theme or to the default theme, but I get the same error below; I've trimmed it to the relevant error messages. I have also disabled caching in standalone.xml and restarted the container.
However, I'm getting a NullPointerException.
keycloak_1 |
keycloak_1 | 01:18:58,781 WARN [org.keycloak.events] (default task-1) type=LOGIN_ERROR, realmId=master, clientId=odin, userId=null, ipAddress=172.20.0.1, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, response_type=code, redirect_uri=http://localhost:8082/odin/oidc_callback, code_id=9646b75e-273d-473e-a999-643d01d4cc36, response_mode=query
keycloak_1 | 01:18:58,793 ERROR [org.keycloak.services.error.KeycloakErrorHandler] (default task-1) Uncaught server error: java.lang.NullPointerException
keycloak_1 | at org.keycloak.theme.ExtendingThemeManager.loadTheme(ExtendingThemeManager.java:117)
keycloak_1 | at org.keycloak.theme.ExtendingThemeManager.getTheme(ExtendingThemeManager.java:108)
keycloak_1 | at org.keycloak.theme.DefaultThemeManager.getTheme(DefaultThemeManager.java:26)
keycloak_1 | at org.keycloak.theme.DefaultThemeManager.getTheme(DefaultThemeManager.java:21)
keycloak_1 | at org.keycloak.forms.login.freemarker.FreeMarkerLoginFormsProvider.getTheme(FreeMarkerLoginFormsProvider.java:262)
keycloak_1 | at org.keycloak.forms.login.freemarker.FreeMarkerLoginFormsProvider.createResponse(FreeMarkerLoginFormsProvider.java:158)
keycloak_1 | at org.keycloak.forms.login.freemarker.FreeMarkerLoginFormsProvider.createErrorPage(FreeMarkerLoginFormsProvider.java:498)
keycloak_1 | at org.keycloak.services.ErrorPage.error(ErrorPage.java:31)
keycloak_1 | at org.keycloak.authentication.AuthenticationProcessor.handleBrowserException(AuthenticationProcessor.java:728)
keycloak_1 | at org.keycloak.protocol.AuthorizationEndpointBase.handleBrowserAuthenticationRequest(AuthorizationEndpointBase.java:145)
keycloak_1 | at org.keycloak.protocol.oidc.endpoints.AuthorizationEndpoint.buildAuthorizationCodeAuthorizationResponse(AuthorizationEndpoint.java:409)
keycloak_1 | at org.keycloak.protocol.oidc.endpoints.AuthorizationEndpoint.process(AuthorizationEndpoint.java:152)
keycloak_1 | at org.keycloak.protocol.oidc.endpoints.AuthorizationEndpoint.buildGet(AuthorizationEndpoint.java:108)
…
In Clients -> My_Client -> Settings -> Login Theme, select your theme.
Please note that the client Login Theme overrides the realm Login Theme: you need to set the client Login Theme back to Choose... to get the realm Login Theme working.
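Independent of the client setting, it is also worth confirming that the mounted theme directories actually ended up where Keycloak looks for them (a sketch; the container name is an assumption, check docker ps for yours):

# List the theme directories Keycloak sees inside the container
docker exec -it <keycloak container> ls /opt/jboss/keycloak/themes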
I've created and added my custom theme kreata to Keycloak based on this tutorial: https://www.keycloak.org/docs/latest/server_development/#_themes

How can I check the postgres database in a docker volume?

I followed the tutorial from prisma.io to get started building a local server.
Here is the docker-compose.yml:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: postgres
            host: postgres
            port: 5432
            user: prisma
            password: prisma
  postgres:
    image: postgres:10.3
    restart: always
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres: ~
This builds two docker containers: one is the prisma server, the other is the postgres database.
As I understood it, after running prisma deploy the model User should create a users table in the database.
But when I try to check the schema in the database, I get this result:
docker exec -it myContainer psql -U prisma
postgres=# \l
                              List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 prisma    | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres           +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres           +
           |          |          |            |            | postgres=CTc/postgres
(4 rows)
prisma=# \z // or postgres=# \z
Access privileges
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+------+------+-------------------+-------------------+----------
(0 rows)
prisma=# \dt or postgres=# \dt
Did not find any relations.
Then I tried to check the volume folder in the VM:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/var/lib/docker/volumes/first_prisma_postgres/_data # ls
PG_VERSION pg_commit_ts pg_ident.conf pg_notify pg_snapshots pg_subtrans pg_wal postgresql.conf
base pg_dynshmem pg_logical pg_replslot pg_stat pg_tblspc pg_xact postmaster.opts
global pg_hba.conf pg_multixact pg_serial pg_stat_tmp pg_twophase postgresql.auto.conf postmaster.pid
The data clearly exists on the VM, but how can I inspect the data or make a dump backup from it?
Once you are connected to postgres in your container, you can execute normal queries.
Example:
\l to display all databases
\dt to display all tables in the current database
Maybe connecting to the database is the step you are missing.
Run \c database_name to connect to the db.
Once you are connected, you can execute any normal queries.
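If \dt still shows nothing, note that it only lists tables on the current search path; listing all schemas and all tables can reveal where prisma actually put them (a sketch; Prisma 1 typically creates its own schema per service/stage, but verify with \dn):

prisma=# \dn
prisma=# \dt *.*

The first command lists all schemas, the second lists tables in every schema.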
To check all volumes you can run:
docker volume ls
To check one of them:
docker volume inspect postgres_db
where postgres_db is the name of your volume.
As a result you may see something like this:
[
    {
        "CreatedAt": "2022-05-05T14:24:04Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "s-deployment",
            "com.docker.compose.version": "2.4.1",
            "com.docker.compose.volume": "postgres_db"
        },
        "Mountpoint": "/var/lib/docker/volumes/s-deployment_postgres_db/_data",
        "Name": "s-deployment_postgres_db",
        "Options": null,
        "Scope": "local"
    }
]
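To take an actual dump backup without touching the volume files directly, pg_dump can be run through the container (a sketch; the container name is an assumption, check docker ps for yours):

# Dump the prisma database to a file on the host
docker exec <postgres container> pg_dump -U prisma prisma > prisma-backup.sql

# Restore it later by feeding the file back through psql
docker exec -i <postgres container> psql -U prisma prisma < prisma-backup.sql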

HAProxy not binding to frontend on Ubuntu Server 18.04

How do I solve HAProxy not working on my Ubuntu server? Did I miss something? I need a guide here.
Below is the docker-compose output on my local MacBook, where I do not have a problem:
lstm-haproxy | listen stats
lstm-haproxy | bind :1936
lstm-haproxy | mode http
lstm-haproxy | stats enable
lstm-haproxy | timeout connect 10s
lstm-haproxy | timeout client 1m
lstm-haproxy | timeout server 1m
lstm-haproxy | stats hide-version
lstm-haproxy | stats realm Haproxy\ Statistics
lstm-haproxy | stats uri /
lstm-haproxy | stats auth stats:stats
lstm-haproxy | frontend default_port_80
lstm-haproxy | bind :80
lstm-haproxy | reqadd X-Forwarded-Proto:\ http
lstm-haproxy | maxconn 4096
lstm-haproxy | default_backend default_service
lstm-haproxy | backend default_service
lstm-haproxy | server lstm_lstm_1 lstm_lstm_1:8008 check inter 2000 rise 2 fall 3
lstm-haproxy | server lstm_lstm_2 lstm_lstm_2:8008 check inter 2000 rise 2 fall 3
lstm-haproxy | INFO:haproxy:Config check passed
lstm-haproxy | INFO:haproxy:Reloading HAProxy
lstm-haproxy | INFO:haproxy:Restarting HAProxy gracefully
lstm-haproxy | INFO:haproxy:HAProxy is reloading (new PID: 11)
lstm-haproxy | INFO:haproxy:===========END===========
But when I push to my staging server (Ubuntu Server 18.04), the frontend and backend sections never appear:
lstm-haproxy | listen stats
lstm-haproxy | bind :1936
lstm-haproxy | mode http
lstm-haproxy | stats enable
lstm-haproxy | timeout connect 10s
lstm-haproxy | timeout client 1m
lstm-haproxy | timeout server 1m
lstm-haproxy | stats hide-version
lstm-haproxy | stats realm Haproxy\ Statistics
lstm-haproxy | stats uri /
lstm-haproxy | stats auth stats:stats
lstm-haproxy | INFO:haproxy:Launching HAProxy
lstm-haproxy | INFO:haproxy:HAProxy has been launched(PID: 9)
My docker and docker-compose versions on the MacBook:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.2, build 1110ad01
My docker and docker-compose versions on the Ubuntu server:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.1, build b02f1306
This is my docker-compose.yml:
version: '3'
services:
  lstm:
    restart: always
    build:
      context: .
    environment:
      MAX_REQUEST: 100
      NUM_WORKER: 5
      BIND_ADDR: 0.0.0.0:8008
    command: bash monkey-sync.sh
  lstm-haproxy:
    image: dockercloud/haproxy
    links:
      - lstm
    ports:
      - '8008:80'
    container_name: lstm-haproxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
This is my Dockerfile:
FROM python:3.6.1 AS base
RUN pip3 install blablabla
WORKDIR /app
COPY . /app
RUN echo
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
EXPOSE 8008
Any guidance would really help me a lot, thanks!
Solved! I needed to set SERVICE_PORTS: 8008 in the lstm environment.
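For reference, a sketch of the fixed service (dockercloud/haproxy discovers backends through the mounted docker socket and reads SERVICE_PORTS from the linked service's environment to know which port to register):

lstm:
  restart: always
  build:
    context: .
  environment:
    MAX_REQUEST: 100
    NUM_WORKER: 5
    BIND_ADDR: 0.0.0.0:8008
    SERVICE_PORTS: 8008
  command: bash monkey-sync.sh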