Keycloak Infinispan cache replication is not working

Infinispan isn't replicating caches across the cluster; users are asked to log in again whenever any Keycloak node in the cluster becomes unhealthy. Can you please point out what we are doing wrong?
We followed the documentation for the Keycloak Infinispan setup (https://www.keycloak.org/docs/latest/server_installation/#sticky-sessions) and set the owners for all caches to 3. Below is an excerpt of our cache owner setup (domain/configuration/domain.xml):
<subsystem xmlns="urn:jboss:domain:infinispan:11.0">
<cache-container name="keycloak">
<distributed-cache name="sessions" owners="3"/>
...
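A fuller sketch of what that looks like across the standard session-related caches (cache names taken from a stock Keycloak domain.xml; abbreviated, verify against your own configuration):
<cache-container name="keycloak">
    <transport lock-timeout="60000"/>
    <distributed-cache name="sessions" owners="3"/>
    <distributed-cache name="authenticationSessions" owners="3"/>
    <distributed-cache name="offlineSessions" owners="3"/>
    <distributed-cache name="clientSessions" owners="3"/>
    <distributed-cache name="offlineClientSessions" owners="3"/>
    <distributed-cache name="loginFailures" owners="3"/>
    <distributed-cache name="actionTokens" owners="3"/>
    ...
</cache-container>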
Details about our Keycloak setup
Domain clustered mode
1 primary, 2 secondary
Hosted on AWS; the ALB has sticky sessions enabled.
Any help is appreciated.
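One quick sanity check (a sketch using the WildFly management CLI, assuming the default ee channel; the channel name and, in domain mode, the /host=.../server=... prefix depend on your profile) is to confirm the JGroups view actually contains all three nodes:
# jboss-cli.sh --connect, run against each server
/subsystem=jgroups/channel=ee:read-attribute(name=view)
# if the view lists only one member, the problem is JGroups discovery
# rather than the cache owners setting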

Related

SpringBootAdmin Kubernetes AutoDiscovery not bringing up client details with Servlet Context

SpringBootAdmin with Kubernetes AutoDiscovery, running on GCP in GKE.
The client has a servlet context path set up, and it is not shown as an active application in SpringBootAdmin.
What should we do to get it to work?
Thanks,

SpringBootAdmin Kubernetes AutoDiscovery show all Actuator endpoints of SpringBootClient with servlet.context-path setup

Properly manage user sessions in Keycloak and Kubernetes

I have Keycloak deployed to Kubernetes.
When the pod restarts for any reason (like a modification to the deployment), all user sessions are lost.
I see in the documentation that sessions can only be stored in-memory. While they are replicated, I found no documented way to ensure all sessions are replicated before the old pod goes down.
Strangely, my searches don't turn up anyone else having this issue. Am I missing something?
My ideal solution would be to store the session data in a Redis cluster.
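If the jboss/keycloak image is in use, one mitigation (a sketch; the CACHE_OWNERS_* and JGROUPS_DISCOVERY_* variables assume an image version that supports them, and the service name is a placeholder) is to run several replicas and raise the number of cache owners so a single pod restart does not wipe the sessions:
env:
  - name: CACHE_OWNERS_COUNT
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: "dns.DNS_PING"
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: "dns_query=keycloak-headless.default.svc.cluster.local"  # placeholder headless service
With owners greater than 1 and a rolling update that keeps an old pod alive until the new one has joined the cluster, live sessions survive on the remaining members; offline sessions are persisted in the database in any case.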

How to enable Infinispan smallrye metrics in Wildfly 20?

We want to expose the metrics of our Hibernate caches to Prometheus, and have for the time being built our own metrics for the caches, but since Infinispan 10 provides native metrics support, we'd rather use that.
So when I curl localhost:9990/metrics and look for Infinispan-related metrics, I find nothing. I do find JGroups metrics, and our own.
The configuration for the metrics in the standalone.xml is:
<subsystem xmlns="urn:wildfly:microprofile-metrics-smallrye:2.0"
security-enabled="false"
exposed-subsystems="*"
prefix="${wildfly.metrics.prefix:wildfly}"
/>
We've also added statistics-enabled="true" to the defined Infinispan cache-containers:
<cache-container name="hibernate"
default-cache="local-query"
module="org.infinispan.hibernate-cache"
statistics-enabled="true">
I've searched the web for Infinispan, WildFly and metrics, but I only find generic articles about creating your own metrics, or the announcements of added metrics support in Infinispan.
According to the subsystem configuration, all metrics should be exposed. Is there anything else we need to configure to enable Infinispan metrics inside WildFly?
I had the same issue and found out that there is a bug in WildFly 20 that prevents Infinispan statistics from being exported. See WFLY-14063 and the fixing pull request.
The fix version mentioned in the ticket is 22.0.0.Beta1.
Not sure if it is going to work, but there is a metrics tag in cache-container that needs to be configured/enabled:
<cache-container statistics="true">
    <metrics gauges="true" histograms="true"/>
</cache-container>
See the Infinispan configuration doc.
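Either way, once on a WildFly version that contains the fix, a quick check for anything Infinispan-related on the metrics endpoint, plus toggling statistics at runtime, looks roughly like this (a sketch; the cache-container name assumes the hibernate container from the question):
# check the management metrics endpoint
curl -s http://localhost:9990/metrics | grep -i infinispan
# enable statistics via jboss-cli.sh --connect
/subsystem=infinispan/cache-container=hibernate:write-attribute(name=statistics-enabled, value=true)
:reload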

Unable to connect to mysql in Istio environment

We have configured a Kubernetes cluster on a bare-metal server with v1.15.1 and Istio 1.4.0 (demo profile) with mTLS enabled.
Our MySQL server is outside the K8s cluster, on Azure VMs.
Now, when we inject the istio-proxy while deploying the application, we are unable to connect to the MySQL server via JDBC; we also tried the mysql client. But when we remove the istio-proxy by re-deploying, we are able to connect instantly without any issue.
We went through many blogs regarding Istio and MySQL and tried removing the default mesh policy, but that didn't work. The case covered in the Istio FAQ is when MySQL is inside the K8s cluster with Istio injected.
You can configure auto mTLS for Istio by setting values.global.mtls.auto=true (i.e. it uses mTLS when possible and falls back to plain text for other connections).
https://istio.io/docs/tasks/security/authentication/auto-mtls/
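A minimal sketch of setting that flag, assuming an istioctl-based install of Istio 1.4 (the exact command differs between Istio versions and between istioctl and Helm installs):
# istioctl-based install; with Helm the equivalent value is --set global.mtls.auto=true
istioctl manifest apply --set values.global.mtls.auto=true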
A ServiceEntry and DestinationRule did the trick in my case.
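For reference, a minimal sketch of that approach for an external MySQL host (the host name is a placeholder for the Azure VM; the DestinationRule disables mTLS towards the external endpoint since it is outside the mesh):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-mysql
spec:
  hosts:
    - mysql.example.com          # placeholder for the external MySQL host
  location: MESH_EXTERNAL
  ports:
    - number: 3306
      name: tcp-mysql
      protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-mysql
spec:
  host: mysql.example.com
  trafficPolicy:
    tls:
      mode: DISABLE              # the external database does not take part in Istio mTLS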

Keycloak cluster Production setup on Kubernetes - Google K8S Engine (GKE)

I am trying to deploy Keycloak onto Kubernetes Engine in HA (cluster) mode.
I am doing the deployment with an Ingress service with a TLS setting to allow external access.
The TLS setting was pretty straightforward, so that part is done.
I placed the manifest files here
https://github.com/vsomasvr/keycloak-gke/tree/master/keycloak
The issue is that Keycloak does not form a cluster, so Keycloak is not functioning and the authentication itself fails.
The manifest works well for a single replica (which is not a cluster, so that's not helpful; we are not interested in sticky-session-related config).
I think this is the crucial problem to solve for a production Keycloak installation.
Any help is greatly appreciated.
There is a blog post on this here.
The only things I needed to do were the following:
1) Create your own Docker image
FROM jboss/keycloak:latest
ADD cli/JDBC_PING.cli /opt/jboss/tools/cli/jgroups/discovery/
The JDBC_PING.cli can be found here
2) Update your deployment with an extra env var:
- name: JGROUPS_DISCOVERY_PROTOCOL
  value: "JDBC_PING"
This did the job for me with 2 replicas on GKE.
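For completeness, a sketch of the relevant part of the Deployment once the database and discovery settings are wired together (env var names are the ones supported by the jboss/keycloak image; all values are placeholders, and the JGROUPS_DISCOVERY_PROPERTIES entry is only needed if your JDBC_PING.cli does not already point at the Keycloak datasource):
containers:
  - name: keycloak
    image: <your-registry>/keycloak-jdbc-ping:latest   # image built from the Dockerfile above
    env:
      - name: DB_VENDOR
        value: "postgres"
      - name: DB_ADDR
        value: "keycloak-db"
      - name: DB_DATABASE
        value: "keycloak"
      - name: DB_USER
        value: "keycloak"
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: keycloak-db
            key: password
      - name: JGROUPS_DISCOVERY_PROTOCOL
        value: "JDBC_PING"
      - name: JGROUPS_DISCOVERY_PROPERTIES
        value: "datasource_jndi_name=java:jboss/datasources/KeycloakDS"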