How to persist user sessions in Keycloak running on Kubernetes?

After a pod restart, all user session data is missing, but all other data (e.g. realms, users, realm settings) is still present after the restart.
Keycloak is running with Postgres as persistent storage in a single pod.
Below is the deployment file config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "XXXXXXX"
            - name: KEYCLOAK_PASSWORD
              value: "XXXXXXX"
            - name: REALM
              value: "XXXXXXX"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: "POSTGRES"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Can you please let me know what configuration is required to persist user sessions?

Please take a look at the existing chart.
Its High Availability and Clustering section shows that you need to set CACHE_OWNERS_COUNT to 2 or higher and CACHE_OWNERS_AUTH_SESSIONS_COUNT to 2 or higher.
It is also recommended to use a separate Infinispan cluster if you have a large number of sessions (more than 1 million, in my experience).
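For the single-pod Deployment in the question, a minimal sketch of what those settings could look like as extra environment variables on the keycloak container is shown below. It assumes the Wildfly-based image's CACHE_OWNERS_* and JGROUPS_DISCOVERY_* variables; the headless Service name in dns_query is hypothetical, and the settings only help once you run 2 or more replicas that can actually form a cluster:
# Sketch: added to the keycloak container's env list.
# Sessions live in Infinispan caches, so they only survive a pod restart
# if at least one other replica owns a copy of each cache entry.
- name: CACHE_OWNERS_COUNT
  value: "2"
- name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
  value: "2"
# Assumed JGroups discovery settings so the replicas can find each other:
- name: JGROUPS_DISCOVERY_PROTOCOL
  value: "dns.DNS_PING"
- name: JGROUPS_DISCOVERY_PROPERTIES
  value: "dns_query=keycloak-headless.default.svc.cluster.local"  # hypothetical headless Service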

Related

How to run Keycloak as a second container after the first container, the Postgres database, starts up in a multi-container pod on Kubernetes?

In a multi-container pod:
step 1: deploy the first container, the Postgres database, and create a schema
step 2: wait until the Postgres container comes up
step 3: then start the second container, Keycloak
I have written the deployment file below to run this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
        - name: postgres
          image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
          envFrom:
            - configMapRef:
                name: postgres-config
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.1
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: REALM
              value: "ntc"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_ADDR
              value: "localhost"
            - name: DB_PORT
              value: "5432"
            - name: DB_DATABASE
              value: "postgresdb"
            - name: DB_USER
              value: "xxxxxxxxx"
            - name: DB_PASSWORD
              value: "xxxxxxxxx"
            - name: DB_SCHEMA
              value: "keycloak"
            - name: KEYCLOAK_IMPORT
              value: "/opt/jboss/keycloak/startup/elements/realm.json"
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
            - mountPath: /opt/jboss/keycloak/startup/elements
              name: elements
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
but Keycloak is starting with the embedded H2 database instead of Postgres. If I use an init container in the deployment file to nslookup Postgres, like below:
initContainers:
  - name: init-postgres
    image: busybox
    command: ['sh', '-c', 'until nslookup postgres; do echo waiting for postgres; sleep 2; done;']
the pod gets stuck at "PodInitializing".
You forgot to add
- name: DB_VENDOR
  value: POSTGRES
to the deployment YAML file; because of that, Keycloak uses the H2 database by default.
YAML reference file: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
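For context, a sketch of where that variable sits in the keycloak container of the Deployment above (other values elided, as in the question):
- name: keycloak
  image: quay.io/keycloak/keycloak:10.0.1
  env:
    - name: DB_VENDOR        # without this the entrypoint falls back to embedded H2
      value: "POSTGRES"
    - name: DB_ADDR
      value: "localhost"
    # ... remaining DB_* and KEYCLOAK_* variables as in the deployment above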

Deployment throwing an error for the init container only when I add a second regular container to my deployment

Hi there, I am currently trying to deploy SonarQube 7.8-community in GKE using a Cloud SQL DB instance.
This requires 2 containers (one for SonarQube and the other for the Cloud SQL proxy in order to connect to the DB).
The SonarQube container, however, also requires an init container to apply some special memory requirements.
When I create the deployment with just the SonarQube image and the init container it works fine, but this won't be of any use as I need the Cloud SQL proxy container to connect to my external DB. When I add this container, though, the deployment errors with the below:
deirdrerodgers@cloudshell:~ (meta-gear-306013)$ kubectl create -f initsonar.yaml
The Deployment "sonardeploy" is invalid: spec.template.spec.initContainers[0].volumeMounts[0].name: Not found: "init-sysctl"
This is my complete YAML file with the init container and the other two containers. I wonder if the issue is that it doesn't know which container to apply the init container to?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            {}
          command: ["sh",
            "-e",
            "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      volumes:
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
    spec:
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
            "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
            "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
Your YAML file is incorrect: you have two spec: blocks, and there should be only one, so you need to combine them. Under the spec block there should be the initContainers block, then containers, and finally the volumes block. Look at the correct YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sonardeploy
  name: sonardeploy
  namespace: sonar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonardeploy
  strategy: {}
  template:
    metadata:
      labels:
        app: sonardeploy
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox:1.32
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            {}
          command: ["sh",
            "-e",
            "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
      containers:
        - image: sonarqube:7.8-community
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sonarsecret
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://localhost:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy",
            "-instances=meta-gear-306013:us-central1:sonardb=tcp:5432",
            "-credential_file=/secrets/service_account.json"]
          securityContext:
            runAsNonRoot: true
          volumeMounts:
            - name: cloudsql-instance-credentials-volume
              mountPath: /secrets/
              readOnly: true
      volumes:
        - name: cloudsql-instance-credentials-volume
          secret:
            secretName: cloudsql-instance-credentials
        - name: init-sysctl
          configMap:
            name: sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh

Grafana is generating links with base URL http://localhost:3000 instead of using my URL

I deployed Grafana 7 on Kubernetes; here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: monitoring
  labels:
    app: grafana
    component: core
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      initContainers:
        - name: init-chown-data
          image: grafana/grafana:7.0.3
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          command: ["chown", "-R", "472:472", "/var/lib/grafana"]
          volumeMounts:
            - name: grafana-persistent-storage
              mountPath: /var/lib/grafana
      containers:
        - image: grafana/grafana:7.0.3
          name: grafana-core
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 472
          envFrom:
            - secretRef:
                name: grafana-env
          env:
            # The following env variables set up basic auth with the default admin user and admin password.
            - name: GF_INSTALL_PLUGINS
              value: grafana-clock-panel,grafana-simple-json-datasource,camptocamp-prometheus-alertmanager-datasource
            - name: GF_AUTH_BASIC_ENABLED
              value: "true"
            - name: GF_SECURITY_ADMIN_USER
              valueFrom:
                secretKeyRef:
                  name: grafana
                  key: admin-username
            - name: GF_SECURITY_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana
                  key: admin-password
            - name: GF_AUTH_ANONYMOUS_ENABLED
              value: "false"
          readinessProbe:
            httpGet:
              path: /login
              port: 3000
            initialDelaySeconds: 30
            timeoutSeconds: 1
          volumeMounts:
            - name: grafana-persistent-storage
              mountPath: /var/lib/grafana
            - name: grafana-datasources
              mountPath: /etc/grafana/provisioning/datasources
      volumes:
        - name: grafana-persistent-storage
          persistentVolumeClaim:
            claimName: grafana-storage
        - name: grafana-datasources
          configMap:
            name: grafana-datasources
      nodeSelector:
        kops.k8s.io/instancegroup: monitoring-nodes
It is working well, but each time it generates a URL it uses the base URL http://localhost:3000 instead of https://grafana.company.com.
Where can I configure that? I couldn't find an env variable that handles it.
Configure the root_url option in the [server] section of your Grafana config file, or the environment variable GF_SERVER_ROOT_URL, to https://grafana.company.com/.
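In the Deployment above, a minimal sketch of that would be one more entry in the grafana-core container's env list (the URL is the one from the question):
# Added to the grafana-core container's env list:
- name: GF_SERVER_ROOT_URL
  value: "https://grafana.company.com/"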
I have found it can be done using the env variable inside the Grafana pod. This setup is a tricky one: setting GF_SERVER_ROOT_URL to your.url with no quotes, to "your.url" without https:// or http://, or even to "http://your.url" with no / at the end may cause problems.
grafana:
  env:
    GF_SERVER_ROOT_URL: "http://your.url/"
  notifiers:
    notifiers.yaml:
      notifiers:
        - name: telegram
          type: telegram
          uid: telegram
          is_default: true
          settings:
            bottoken: "yourbottoken"
            chatid: "-yourchatid"
and then use uid: "telegram" in the provisioned dashboards

Kubernetes deployment file: inject environment variables via a pre-start script

I have an Elixir app connecting to Postgres using the Cloud SQL proxy.
Here is my deployment.yaml, which I deploy on Kubernetes and which works well. The Postgres connection password and user name are taken by the image from the environment variables in the YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
        - name: my-app
          image: my-image:1.0.1
          volumeMounts:
            - name: secrets-volume
              mountPath: /secrets
              readOnly: true
            - name: config-volume
              mountPath: /beamconfig
          ports:
            - containerPort: 80
          args:
            - foreground
          env:
            - name: POSTGRES_HOSTNAME
              value: localhost
            - name: POSTGRES_USERNAME
              value: postgres
            - name: POSTGRES_PASSWORD
              value: "123456"
        # proxy_container
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
            "-instances=my-project:region:my-postgres-instance=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: cloudsql
              mountPath: /cloudsql
      # volumes
      volumes:
        - name: secrets-volume
          secret:
            secretName: gcloud-json
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - name: cloudsql
          emptyDir: {}
Now, due to security requirements, I'd like to store the sensitive environment variables encrypted and have a script decrypt them.
My YAML file would look like this:
env:
  - name: POSTGRES_HOSTNAME
    value: localhost
  - name: ENCRYPTED_POSTGRES_USERNAME
    value: hgkdhrkhgrk
  - name: ENCRYPTED_POSTGRES_PASSWORD
    value: fkjeshfke
Then I have a script that runs over all environment variables with the prefix ENCRYPTED_, decrypts them, and inserts each decrypted value under the environment variable name without the ENCRYPTED_ prefix.
Is there a way to do that?
The environment variables should be injected before the image starts running.
Another requirement is that the pod running the image must decrypt the variables, since it is the only one that has permission to do so (working with Workload Identity).
Something like:
- command:
    - sh
    - /decrypt_and_inject_environments.sh
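A sketch of one way to get that effect, assuming a shell is available in the image: override the container's command with a small wrapper that decrypts the ENCRYPTED_* variables in-process and then execs the original entrypoint, so the plaintext values only exist inside the pod that holds the Workload Identity permissions. Here decrypt-tool and /app/bin/my_app are hypothetical placeholders for the real KMS call and the image's entrypoint:
containers:
  - name: my-app
    image: my-image:1.0.1
    command:
      - /bin/sh
      - -c
      - |
        # Decrypt every ENCRYPTED_* variable and re-export it without the prefix.
        for var in $(env | grep '^ENCRYPTED_' | cut -d= -f1); do
          plain_name="${var#ENCRYPTED_}"
          # 'decrypt-tool' stands in for whatever CLI/KMS call this pod's
          # Workload Identity is allowed to make.
          export "$plain_name"="$(decrypt-tool "$(printenv "$var")")"
          unset "$var"
        done
        # Hand over to the app's original entrypoint (assumed path).
        exec /app/bin/my_app foreground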

Kubernetes: creating a StatefulSet fails

I am trying to create a StatefulSet with the definition below, but I get this error:
error: unable to recognize "wordpress-database.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
What's wrong?
The YAML file is:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: database
          image: mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: rootPassword
            - name: MYSQL_DATABASE
              value: database
            - name: MYSQL_USER
              value: user
            - name: MYSQL_PASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
        - name: blog
          image: wordpress:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: database
            - name: WORDPRESS_DB_USER
              value: user
            - name: WORDPRESS_DB_PASSWORD
              value: password
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        resources:
          requests:
            storage: 1Gi
The API version of StatefulSet should be:
apiVersion: apps/v1
as stated in the official documentation.
Good luck.
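You can check which versions your cluster actually serves with kubectl api-versions. A sketch of the corrected header, with the rest of the manifest unchanged from the question:
# Only the apiVersion changes; everything below metadata stays as in the question.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress-database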