Keycloak continuously redirects to login page

I have set up Keycloak, but it continuously redirects to the login page in a loop.
I get the error below in the logs:
2022-02-22 12:41:42,003 WARN [org.keycloak.events] (default task-2) type=REFRESH_TOKEN_ERROR, realmId=master, clientId=security-admin-console, userId=null, ipAddress=10.x.x.x, error=invalid_token, grant_type=refresh_token, client_auth_method=client-secret
Can anyone guide me?

If you are behind Apache/nginx, perhaps you need to set the httpOnly cookie flag. That can be the reason for this behavior in Keycloak 15.0.2.
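More generally, when the loop only appears behind a reverse proxy, it is worth checking that the proxy passes the original Host and X-Forwarded-Proto/X-Forwarded-For headers through unchanged and that proxy address forwarding is enabled on the Keycloak side. A minimal docker-compose sketch for the jboss/keycloak image (the service name, version tag and frontend URL are assumptions, not taken from the question):
# fragment of a docker-compose services section; names and URLs are placeholders
keycloak:
  image: jboss/keycloak:15.0.2
  environment:
    # trust the X-Forwarded-* headers set by the Apache/nginx proxy in front
    PROXY_ADDRESS_FORWARDING: "true"
    # public URL the browser uses to reach Keycloak through the proxy
    KEYCLOAK_FRONTEND_URL: https://sso.example.com/auth
With the WildFly-based distribution the same effect can be achieved by setting proxy-address-forwarding="true" on the HTTP listener in standalone.xml.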

Related

keycloak on kubernetes: x509 auth with ingress

Does anyone have an example config for x509 authentication w/ Keycloak on Kubernetes via an ingress endpoint? I have x509 working fine w/ a NodePort setup, but access via ingress fails and Keycloak cycles to the username/password form.
18:37:54,474 DEBUG [org.keycloak.authentication.AuthenticationProcessor] (default task-2) AUTHENTICATE
18:37:54,474 DEBUG [org.keycloak.authentication.AuthenticationProcessor] (default task-2) AUTHENTICATE ONLY
18:37:54,474 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) processFlow: x509-browser
18:37:54,475 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) check execution: 'auth-cookie', requirement: 'ALTERNATIVE'
18:37:54,475 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) authenticator: auth-cookie
18:37:54,475 DEBUG [org.keycloak.authentication.AuthenticationSelectionResolver] (default task-2) Going through the flow 'x509-browser' for adding executions
18:37:54,475 DEBUG [org.keycloak.authentication.AuthenticationSelectionResolver] (default task-2) Going through the flow 'x509-browser forms' for adding executions
18:37:54,475 DEBUG [org.keycloak.authentication.AuthenticationSelectionResolver] (default task-2) Selections when trying execution 'auth-cookie' : [ authSelection - auth-cookie, authSelection - auth-x509-client-username-form, authSelection - auth-username-password-form]
18:37:54,475 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) invoke authenticator.authenticate: auth-cookie
18:37:54,475 DEBUG [org.keycloak.services.util.CookieHelper] (default task-2) Could not find cookie KEYCLOAK_IDENTITY, trying KEYCLOAK_IDENTITY_LEGACY
18:37:54,475 DEBUG [org.keycloak.services.managers.AuthenticationManager] (default task-2) Could not find cookie: KEYCLOAK_IDENTITY
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) authenticator ATTEMPTED: auth-cookie
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) check execution: 'auth-x509-client-username-form', requirement: 'ALTERNATIVE'
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) authenticator: auth-x509-client-username-form
18:37:54,476 DEBUG [org.keycloak.authentication.AuthenticationSelectionResolver] (default task-2) Going through the flow 'x509-browser' for adding executions
18:37:54,476 DEBUG [org.keycloak.authentication.AuthenticationSelectionResolver] (default task-2) Going through the flow 'x509-browser forms' for adding executions
18:37:54,476 DEBUG [org.keycloak.authentication.AuthenticationSelectionResolver] (default task-2) Selections when trying execution 'auth-x509-client-username-form' : [ authSelection - auth-x509-client-username-form, authSelection - auth-username-password-form]
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) invoke authenticator.authenticate: auth-x509-client-username-form
18:37:54,476 DEBUG [org.keycloak.services] (default task-2) [X509ClientCertificateAuthenticator:authenticate] x509 client certificate is not available for mutual SSL.
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) authenticator ATTEMPTED: auth-x509-client-username-form
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) check execution: 'x509-browser forms flow', requirement: 'ALTERNATIVE'
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) processFlow: x509-browser forms
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) check execution: 'auth-username-password-form', requirement: 'REQUIRED'
18:37:54,476 DEBUG [org.keycloak.authentication.DefaultAuthenticationFlow] (default task-2) authenticator: auth-username-password-form
Ingress is just an API that is implemented by various providers, each of which supports additional configuration in a product-specific way.
In your example the provider is nginx.
Make sure that nginx is deployed with support for SNI-based TLS passthrough, so that Keycloak receives the original TLS connection and can evaluate the client certificate.
For nginx, the ingress configuration for that is an additional annotation:
annotations:
  nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Relevant documentation: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#ssl-passthrough
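For context, here is a minimal sketch of a complete Ingress resource carrying that annotation (resource names, host and port are placeholders, not taken from the question). Note that, per the linked documentation, the ingress-nginx controller itself must also be started with the --enable-ssl-passthrough flag for the annotation to have any effect:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: keycloak.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak
                # with passthrough the backend must be Keycloak's TLS port
                port:
                  number: 8443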

Can't logout from Keycloak: localhost:80 connection refused

I have:
Keycloak running as Docker container (Image: jboss/keycloak:16.1.1)
Traefik running (Image: traefik:v2.6.0)
a small realm called demo-realm with one client called demo-client, which is a JEE application deployed on jboss/wildfly:17.0.1.Final; this WildFly server has the Keycloak adapter subsystem configured as per the documentation.
Traefik rules for Keycloak
"traefik.docker.network": network-kf-LOCAL
"traefik.http.routers.keycloak.rule": Host(`keycloak.localhost`)
"traefik.http.routers.keycloak.service": "keycloak-application"
"traefik.http.services.keycloak-application.loadbalancer.server.port": "8080"
I set KEYCLOAK_FRONTEND_URL for my Keycloak service in order to make the redirect to the login page work, because the frontend request URL and the backend URL are not the same:
KEYCLOAK_FRONTEND_URL: http://keycloak.localhost/auth
Deployment Configuration in standalone.xml of my client
<secure-deployment name="my-app.war">
<realm>${env.KEYCLOAK_REALM}</realm>
<auth-server-url>${env.KEYCLOAK_BASEURL_INTERN}</auth-server-url>
<resource>${env.KEYCLOAK_CLIENT_ID}</resource>
<ssl-required>external</ssl-required>
<public-client>true</public-client>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>
Client Configuration inside Keycloak Admin Dashboard:
Note that my client application is also running behind Traefik using the Rule
"traefik.http.routers.traefik.rule": Host(`localhost`) && PathPrefix(`demo`)
so I don't specify a port in the client configuration inside Keycloak.
The redirect to the login screen and authentication already work, so I can enter my credentials and I'm logged in. I just can't log out or end the session.
If I try to destroy the session using either the Keycloak Administration Console or the URL http://keycloak.localhost/auth/realms/demo-realm/protocol/openid-connect/logout, the Keycloak service logs the following:
15:22:10,893 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
2022-02-14T15:23:12.847092400Z 15:23:12,846 WARN [org.keycloak.connections.httpclient.DefaultHttpClientFactory] (default task-1) TruststoreProvider is disabled
2022-02-14T15:23:12.963517200Z 15:23:12,960 WARN [org.keycloak.connections.httpclient.DefaultHttpClientFactory] (default task-1) Connect to localhost:80 [localhost/127.0.0.1] failed: Connection refused (Connection refused): org.apache.http.conn.HttpHostConnectException: Connect to localhost:80 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
......
2022-02-14T15:23:12.964548700Z Caused by: java.net.ConnectException: Connection refused (Connection refused)
......
2022-02-14T15:23:12.966559000Z 15:23:12,964 WARN [org.keycloak.services] (default task-1) KC-SERVICES0057: Logout for client 'demo-client' failed: org.apache.http.conn.HttpHostConnectException: Connect to localhost:80 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
Why does it try to reach localhost:80? Keycloak runs on 8080, and I cannot see port 80 anywhere in the Keycloak configuration.

Kafka with Kerberos

I'm encountering the following errors while configuring Kafka with Kerberos authentication.
Can somebody please let me know what could be going wrong here and how to get it fixed? I have tried various options, but nothing seems to be working for me.
I can see that ZooKeeper connects successfully, and then the next step fails:
[2019-10-09 05:06:07,942] INFO Initiating client connection, connectString=kafka-d1.example.com:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@6adbc9d (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:07,945] DEBUG zookeeper.disableAutoWatchReset is false (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:07,959] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:07,961] DEBUG JAAS loginContext is: Client (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,252] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,253] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,254] DEBUG Client principal is "kafka/kafka-d1.example.com@EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,261] DEBUG Server principal is "krbtgt/EXAMPLE.COM@EXAMPLE.COM". (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT valid starting at: Wed Oct 09 05:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT expires: Wed Oct 09 15:06:08 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,264] INFO TGT refresh sleeping until: Wed Oct 09 13:06:47 EDT 2019 (org.apache.zookeeper.Login)
[2019-10-09 05:06:08,265] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,265] DEBUG creating sasl client: Client=kafka/kafka-d1.example.com@EXAMPLE.COM;service=zookeeper;serviceHostname=kafka-d1.example.com (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,272] INFO Opening socket connection to server kafka-d1.example.com/10.14.61.17:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,277] INFO Socket connection established to kafka-d1.example.com/10.14.61.17:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,278] DEBUG Session establishment request sent on kafka-d1.example.com/10.14.61.17:2181 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,286] INFO Session establishment complete on server kafka-d1.example.com/10.14.61.17:2181, sessionid = 0x16dafa306f20009, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,288] DEBUG ClientCnxn:sendSaslPacket:length=0 (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] DEBUG saslClient.evaluateChallenge(len=0) (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,289] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,300] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-10-09 05:06:08,300] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
[2019-10-09 05:06:08,300] ERROR [ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,350] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:546)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1559)
at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1480)
at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
at kafka.zk.KafkaZkClient$$anonfun$createTopLevelPaths$1.apply(KafkaZkClient.scala:1472)
at scala.collection.immutable.List.foreach(List.scala:392)
at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1472)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:373)
at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-10-09 05:06:08,354] INFO shutting down (kafka.server.KafkaServer)
[2019-10-09 05:06:08,356] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,357] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2019-10-09 05:06:08,359] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-10-09 05:06:08,361] INFO shut down completed (kafka.server.KafkaServer)
[2019-10-09 05:06:08,361] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-10-09 05:06:08,364] INFO shutting down (kafka.server.KafkaServer)
ZooKeeper JAAS configuration:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab=/etc/keytabs/zookeeper.keytab
storeKey=true
useTicketCache=false
principal=zookeeper/kafka-d1.EXAMPLE.COM@EXAMPLE.COM;
};
cat /etc/kafka/jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/keytabs/kafka-d1.keytab"
principal="kafka/kafka-d1.EXAMPLE.COM#EXAMPLE.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/keytabs/kafka-d1.keytab"
principal="kafka/kafka-d1.EXAMPLE.COM#EXAMPLE.COM";
};
/etc/krb5.conf
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts
default_tkt_enctypes = aes256-cts
permitted_enctypes = aes256-cts
udp_preference_limit = 1
kdc_timeout = 3000
ignore_acceptor_hostname = true
[realms]
EXAMPLE.COM = {
kdc = srv-kerb.example.com
admin_server = srv-kerb.example.com
kdc = srv-kerb.example.com
}
[domain_realm]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and socketChannel.socket().getInetAddress().getHostName() must match the hostname in principal/hostname@realm Kafka Client will go to AUTHENTICATION_FAILED state.
I had the same problem. Changing the ZooKeeper host value from the IP address to the FQDN (hostname), and also adding the hostname to /etc/hosts, fixed the problem for me.
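For illustration only, using the hostname and IP address that appear in the question's logs purely as placeholders, those two changes look roughly like this:
# /etc/hosts on the broker host (address and name taken from the logs above as examples)
10.14.61.17   kafka-d1.example.com   kafka-d1

# server.properties: reference ZooKeeper by FQDN rather than by IP address
zookeeper.connect=kafka-d1.example.com:2181
The FQDN matters because the client builds the zookeeper/<hostname>@REALM service principal from the name it connects with; an IP (or a non-matching reverse lookup) leads to the "Server not found in Kerberos database" error shown above.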

Creating topics in SASL/GSSAPI (Kerberos) based Kafka Cluster

We have a SASL/GSSAPI (Kerberos) based authentication scheme in our Kafka cluster. The brokers are configured to authenticate with ZooKeeper and with each other. We added a principal to the "Super Users" list on all the brokers so that we can create topics using that principal; however, topic creation is failing, seemingly because of a lack of privileges:
[2019-09-11 02:16:30,905] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2019-09-11 02:16:30,912] INFO Waiting for keeper state SaslAuthenticated (org.I0Itec.zkclient.ZkClient)
[2019-09-11 02:16:31,157] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,161] INFO Client will use GSSAPI as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-09-11 02:16:31,164] INFO Opening socket connection to server broker101.prod/13.14.15.16:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,177] INFO Socket connection established to broker101.prod/13.14.15.16:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,179] INFO TGT refresh thread started. (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,193] INFO TGT valid starting at: Tue Aug 20 02:16:31 UTC 2019 (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,194] INFO TGT expires: Wed Aug 21 02:16:31 UTC 2019 (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,194] INFO TGT refresh sleeping until: Tue Aug 20 21:34:57 UTC 2019 (org.apache.zookeeper.Login)
[2019-09-11 02:16:31,203] INFO Session establishment complete on server broker101.prod/13.14.15.16:2181, sessionid = 0x16c60b863b00035, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,204] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2019-09-11 02:16:31,214] ERROR An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-09-11 02:16:31,214] ERROR SASL authentication with Zookeeper Quorum member failed: javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's received SASL token. Zookeeper Client will go to AUTH_FAILED state. (org.apache.zookeeper.ClientCnxn)
[2019-09-11 02:16:31,215] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2019-09-11 02:16:31,215] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
Exception in thread "main" org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:103)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:85)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:58)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Is it even possible to create topics with a principal other than the principal the brokers use to authenticate with ZooKeeper? If yes, then how?
We can successfully create topics using the principal that the brokers use to authenticate with ZooKeeper. We were under the impression that a super user can do anything on the cluster, including creating new topics. Is that perception incorrect?
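For reference, the "Super Users" list mentioned above is a broker-side setting in server.properties; a rough sketch, with an assumed authorizer class and example principal names rather than the question's actual values:
# server.properties on every broker (authorizer class and principal names are examples)
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka;User:admin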

Error when authenticating with a Keycloak cluster

I've implemented a test environment to verify a clustered Keycloak authentication server with two Java web applications that need single sign-on. There are two Keycloak nodes in the cluster, and there is an Apache2 mod_proxy load balancer in front of them. I've followed the guidelines in the Keycloak documentation and everything seems to work fine; the Keycloak logs report that the caches are started properly and synchronized:
[Server:server-one] 11:28:04,298 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,306 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,318 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,319 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
[Server:server-one] 11:28:04,321 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2) ISPN000094: Received new cluster view for channel ejb: [master:server-one|1] (2) [master:server-one, nucdev2:server-two]
The problem is that when authenticating from the webapp, using the Keycloak Java adapter for Tomcat, I get a 403 Forbidden, and looking at the Keycloak log I see this error message:
[Server:server-one] 11:33:30,700 WARN [org.keycloak.events] (default task-3) type=CODE_TO_TOKEN_ERROR, realmId=test, clientId=customer-portal, userId=null, ipAddress=192.168.10.111, error=user_not_found, grant_type=authorization_code, code_id=889ab790-0c3a-44ea-a1df-247ba501260f, client_auth_method=client-secret
It seems that the problem is related to clustered mode, since everything works fine in standalone mode.
Is there anyone who can provide an example of a clustered Keycloak installation with an external load balancer like mod_proxy?
Setting session affinity with the nginx ingress fixes the issue, but I suspect that's just a band-aid over some broken functionality. My guess is that AuthenticationSession replication is not working correctly, but I don't see any indication that it is failing, and I'm not sure where to look or how to confirm that it is the root issue.
Here are the nginx affinity settings I used on the ingress:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
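For completeness, this is roughly where those annotations sit on the Ingress object. The resource name is a placeholder, the optional session-cookie-name annotation is an addition of mine rather than part of the settings above, and the spec section is omitted for brevity:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    # optional: give the affinity cookie an explicit name
    nginx.ingress.kubernetes.io/session-cookie-name: KC_ROUTE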