How to use an external database (AWS RDS) with Bitnami Keycloak

I'm trying to use the Bitnami Keycloak Helm chart, which has an internal dependency on Bitnami PostgreSQL that I cannot use. I have to use our existing RDS instance as an external DB, which seems possible, but the instructions on this page are essentially empty. Unfortunately, the Bitnami chart is the only one I can use for Keycloak. Can anyone point me in the right direction, or show what to change in the stock chart (and where) to make this happen? I'm not having much luck with Google at the moment.
Thanks in advance!

You need to use a sidecar container which handles authentication and proxies the DB calls from Keycloak to your managed database:
[keycloak] --localhost:XXXX-> [sidecar container] -> [AWS RDS]
You'll find documentation for this in the Bitnami chart's GitHub repo: https://github.com/bitnami/charts/tree/master/bitnami/keycloak#use-sidecars-and-init-containers
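A minimal sketch of how that could look in the chart's values.yaml, assuming the sidecars and externalDatabase parameters from the README linked above; the proxy image and its port are placeholders for whatever proxy you actually run:

sidecars:
  - name: db-proxy
    # Hypothetical image: substitute the proxy you use (e.g. an IAM-auth proxy for RDS)
    image: your-registry/db-proxy:latest
    ports:
      - name: postgres
        containerPort: 5432
externalDatabase:
  # Keycloak talks to the sidecar over localhost; the sidecar forwards to RDS
  host: 127.0.0.1
  port: 5432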

On the stock chart, you can set these properties:
postgresql:
  enabled: false
externalDatabase:
  host: ${DB_URL}
  port: ${DB_PORT}
  user: ${DB_USERNAME}
  database: ${DB_NAME}
  password: ${DB_PASSWORD}
If you need high availability, i.e. you will be running Keycloak with multiple replicas, add the below as well:
cache:
  enabled: true
extraEnvVars:
  - name: KC_CACHE
    value: ispn
  - name: KC_CACHE_STACK
    value: kubernetes
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: "JDBC_PING"
Sources:
https://github.com/bitnami/charts/tree/master/bitnami/keycloak/#keycloak-cache-parameters
https://github.com/bitnami/charts/tree/master/bitnami/keycloak/#database-parameters
https://www.keycloak.org/server/caching#_relevant_options

Related

application authentication using wso2 in kubernetes ingress

I am trying to use wso2 as an authorization server with OAuth2. I referred to the below links
Link
As mentioned in the link, Google authenticator is used, but can I use wso2 instead of Google?
I have created a service provider in wso2 -> then selected the OAuth/OpenID Connect configuration -> used the client ID and secret to create the oauth2 image. But I am not sure what provider name I have to give.
spec:
  containers:
  - args:
    - --provider=wso2
    - --email-domain=*
    - --upstream=file:///dev/null
    - --http-address=0.0.0.0:4180
    env:
    - name: OAUTH2_PROXY_CLIENT_ID
      value: 0UnfZFZDb
    - name: OAUTH2_PROXY_CLIENT_SECRET
      value: rZroDX6uOsySSt4eN
    # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
    - name: OAUTH2_PROXY_COOKIE_SECRET
      value: b'cFF0enRMdEJrUGlaU3NSTlkyVkxuQT09'
    image: quay.io/pusher/oauth2_proxy:v4.1.0-amd64
and in the ingress, I have added the following annotations
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://identity.wso2.com:443/commonauth?rd=/"
but I am getting an authentication error.
Can I use wso2 as an authorization server, similar to GitHub or Google?
For wso2, do I need to create an oauth2 image?
Are my k8s ingress annotations correct? (I tried multiple values like start?rd=$escaped_request_uri etc.)
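(The thread does not resolve this, but note that oauth2_proxy has no dedicated wso2 provider. A hypothetical sketch using its generic oidc provider instead; the issuer URL is a placeholder for your WSO2 Identity Server's OIDC issuer:)

- args:
  - --provider=oidc
  # Placeholder: point this at your WSO2 Identity Server's OIDC issuer URL
  - --oidc-issuer-url=https://identity.wso2.com/oauth2/token
  - --email-domain=*
  - --upstream=file:///dev/null
  - --http-address=0.0.0.0:4180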

How to create keycloak with operator and external database

I followed this but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and keycloak with external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the DB. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using my value from the secret?
UPDATE
When I check the Keycloak pod which was created by the operator, I can see:
env:
  - name: DB_VENDOR
    value: POSTGRES
  - name: DB_SCHEMA
    value: public
  - name: DB_ADDR
    value: keycloak-postgresql.keycloak
  - name: DB_PORT
    value: '5432'
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
So now I know why I cannot connect to the DB: it uses a different DB_ADDR. How can I use the address my-app.postgres (a DB in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; it will go like:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use. In my case, however, I have set up both Keycloak and Postgres in the same namespace, so it works like a charm.
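A general pattern worth noting (not covered in the answers above): you can also alias a DB service from another namespace under a local name with an ExternalName Service, so consumers in the keycloak namespace can keep using a short local name. A sketch based on the asker's my-app.postgres example:

apiVersion: v1
kind: Service
metadata:
  name: postgres            # hypothetical local alias in the keycloak namespace
  namespace: keycloak
spec:
  type: ExternalName
  # DNS alias for the service "my-app" in the "postgres" namespace
  externalName: my-app.postgres.svc.cluster.local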
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to an internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint.
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME                  ENDPOINTS                        AGE
keycloak-postgresql   {postgresql's service ip}:5432   4m31s
However, the reason it fails is the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB pod has different labels, the selector will not work.
I reported this issue to the community. If they reply, I will try to fix this bug by submitting a patch.
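If you control the DB deployment yourself, a possible workaround (a sketch, not something the operator documents) is to give the DB pods exactly the labels this selector expects:

# Pod template of your PostgreSQL StatefulSet/Deployment
template:
  metadata:
    labels:
      app: keycloak
      component: database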
I was having this same issue, and after looking at @JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading it and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (like @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default, delete it). Apparently, it is not only being ignored, but also somehow breaking the Keycloak pod initialization process.
After I applied these changes, the issue was solved for me.
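A sketch of the resulting secret under those two changes (all values are placeholders; note that POSTGRES_HOST is gone and the address is the full in-cluster DNS name):

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
stringData:            # stringData avoids base64-encoding the values by hand
  POSTGRES_DATABASE: keycloak
  POSTGRES_EXTERNAL_ADDRESS: postgres.test.svc.cluster.local
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_USERNAME: keycloak
  POSTGRES_PASSWORD: <password>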
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external DB.
Since Keycloak somehow internally translates the real external DB address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the DB certificate, it worked as advertised.

Keycloak on Kubernetes high availability cluster (with ldap as user federation) using codecentrics Helm Charts

We wanted to set up a highly available Keycloak cluster on Kubernetes (with LDAP as a user federation). We decided to use codecentric's Helm charts, since we had been trying them out for a single-instance Keycloak setup and that worked well. For the cluster we ran into a few issues while trying to set everything up correctly, and didn't find the best sources on the wide internet. Therefore I decided to write a short summary of what our main issues were and how we got through them.
Solutions to our problems were described on this website (amongst others), but things are described very briefly there and felt partly incomplete.
Issues we faced were:
Choosing the correct jgroups.discoveryProtocol
Adding the correct discoveryProperties
Parts that need to be overridden in your own values.yaml
Bonus issues (we had already faced these with the single-instance setup):
Setting up a truststore to connect LDAP as a user federation via ldaps
Adding a custom theme for Keycloak
I will try to update this if things change due to codecentric updating their Helm charts.
Thanks to codecentric for providing the Helm charts, by the way!
Disclaimer:
This is the way we set it up. I hope this is helpful, but I do not take responsibility for configuration errors and resulting security flaws. We also went through many different sources on the internet; I am sorry that I can't give credit to all of them, but it has been a few days since then and I can't gather them all anymore...
CODECENTRIC CHART VERSION < 9.0.0
The main issues:
1. Choosing the correct jgroups.discoveryProtocol:
I will not explain things here, but for us the correct protocol to use was org.jgroups.protocols.JDBC_PING. Find out more about the protocols (and general cluster setup) here.
discoveryProtocol: org.jgroups.protocols.JDBC_PING
With JDBC_PING, jgroups manages instance discovery. For this, and for caching user sessions, the database provided for Keycloak is extended with extra tables, e.g. JGROUPSPING.
2. Setting up the discoveryProperties:
This needs to be set to
discoveryProperties: >
  "datasource_jndi_name=java:jboss/datasources/KeycloakDS"
to avoid an error like:
java.lang.IllegalStateException: java.lang.IllegalArgumentException:
Either the 4 configuration properties starting with 'connection_' or
the datasource_jndi_name must be set
3. Other parts that need to be set (as mostly described in the readme of codecentric's GitHub repo and in the comments of the values.yaml there):
setting the clusterDomain according to your cluster
setting the number of replicas greater than 1 to enable clustering
setting the service.type: we went with ClusterIP, but it can also work with other setups like LoadBalancer, depending on your environment
optional but recommended: setting either maxUnavailable or minAvailable to always have sufficient pods available according to your needs (a consolidated sketch of these values follows the Ingress example below)
setting up our Ingress (which looks pretty much standard):
ingress:
  enabled: true
  path: /
  annotations: {
    kubernetes.io/ingress.class: nginx
  }
  hosts:
    - your.host.org
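A consolidated sketch of the other values from the list above (parameter names as in the codecentric chart; the concrete numbers are placeholders for your setup):

clusterDomain: cluster.local   # your cluster's DNS domain
replicas: 2                    # greater than 1 enables clustering
service:
  type: ClusterIP
podDisruptionBudget:
  minAvailable: 1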
Bonus issues:
1. The truststore:
To have Keycloak communicate with LDAP via ldaps, we had to set up a truststore containing the certificate of our LDAP server:
Retrieve the certificate from the LDAP server and save it somewhere (636 is the standard ldaps port):
openssl s_client -connect your.ldap.domain.org:636 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /path/to/ldap.cert
Create a new keystore:
keytool -genkey -alias replserver \
-keyalg RSA -keystore /path/to/keystore.jks \
-dname "CN=AddCommonName, OU=AddOrganizationalUnit, O=AddOrganisation, L=AddLocality, S=AddStateOrProvinceName, C=AddCountryName" \
-storepass use_the_same_password \
-keypass use_the_same_password \
-deststoretype pkcs12
Add the downloaded certificate to the keystore:
keytool -import -alias ldaps -file /path/to/ldap.cert -storetype JKS -keystore path/to/keystore.jks
Type in the required password: use_the_same_password.
Trust the certificate by typing 'yes'.
Provide the keystore in a configmap:
kubectl create configmap cert-keystore --from-file=path/to/keystore.jks
Enhance your values.yaml for the truststore:
Add and mount the config map:
extraVolumes: |
  - name: cert-keystore
    configMap:
      name: cert-keystore
extraVolumeMounts: |
  - name: cert-keystore
    mountPath: "/keystore/"
    readOnly: true
Tell Java to use it:
javaToolOptions: >-
  -[maybe some other settings of yours]
  -Djavax.net.ssl.trustStore=/keystore/keystore.jks
  -Djavax.net.ssl.trustStorePassword=<<keystore_password>>
Since we didn't want to upload the keystore password to Git, we added a step to our pipeline that seds it into the values.yaml, replacing <<keystore_password>>.
2. Adding a custom theme:
Mainly, we provide a Docker container with our custom theme in it:
extraInitContainers: |
  - name: theme-provider
    image: docker_repo_url/themeContainer:version
    imagePullPolicy: IfNotPresent
    command:
      - sh
    args:
      - -c
      - |
        echo "Copying theme..."
        cp -R /custom-theme/* /theme
    volumeMounts:
      - name: theme
        mountPath: /theme
Add and mount the theme:
extraVolumes: |
  - name: theme
    emptyDir: {}
extraVolumeMounts: |
  - name: theme
    mountPath: /opt/jboss/keycloak/themes/custom-theme
You should now be able to choose the custom theme in the Keycloak admin UI via Realm Settings -> Themes.
CODECENTRIC CHART VERSION 9.0.0 to 9.3.2 (and maybe higher)
1. Clustering
We are still going with JDBC_PING, since we had problems with DNS_PING, as described in the codecentric repo readme:
extraEnv: |
  ## KEYCLOAK CONFIG
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  ### CLUSTERING
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: org.jgroups.protocols.JDBC_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: 'datasource_jndi_name=java:jboss/datasources/KeycloakDS'
  - name: CACHE_OWNERS_COUNT
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"
With the service set up as ClusterIP:
service:
  annotations: {}
  labels: {}
  type: ClusterIP
  loadBalancerIP: ""
  httpPort: 80
  httpNodePort: null
  httpsPort: 8443
  httpsNodePort: null
  httpManagementPort: 9990
  httpManagementNodePort: null
  extraPorts: []
2. 502 Error Ingress Problem
We encountered a 502 error with codecentric's chart 9.x.x, and fixing it took a while to figure out. A solution is also described here, which is where we took our inspiration, but for us the following ingress setup was enough:
ingress:
  enabled: true
  servicePort: http
  # Ingress annotations
  annotations: {
    kubernetes.io/ingress.class: nginx,
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k,
  }
CODECENTRIC CHART VERSION 9.5.0 (and maybe higher)
Updating to 9.5.0 still needs to be tested, especially if you want to go with KUBE_PING and maybe even autoscaling.
I will update this after testing if something changed significantly.

DataDog how to disable Redis integration

I've installed the DataDog agent on my Kubernetes cluster using the Helm chart (https://github.com/helm/charts/tree/master/stable/datadog).
This works very well except for one thing: I have a number of Redis containers that have passwords set. This seems to be causing issues for the DataDog agent, because it can't connect to Redis without a password.
I would like to either disable monitoring Redis completely or somehow bypass the Redis authentication. If I leave it as is, I get a lot of error messages in the DataDog container logs, and the redisdb integration shows up in yellow on the DataDog dashboard.
What are my options here?
I am not a fan of Helm, but you can accomplish this in two ways:
via env vars: make use of the DD_AC_EXCLUDE variable to exclude the Redis containers, e.g. DD_AC_EXCLUDE=name:prefix-redis
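With the Helm chart, that variable can be passed via the agent's environment list. A minimal sketch, assuming the stable/datadog chart's datadog.env value and reusing the name:prefix-redis pattern from above:

datadog:
  env:
    # Exclude containers whose name matches "prefix-redis" from autodiscovery
    - name: DD_AC_EXCLUDE
      value: "name:prefix-redis"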
via a config map: mount an empty config map in /etc/datadog-agent/conf.d/redisdb.d/; below is an example where I renamed auto_conf.yaml to auto_conf.yaml.example.
apiVersion: v1
data:
  auto_conf.yaml.example: |
    ad_identifiers:
      - redis
    init_config:
    instances:
      ## @param host - string - required
      ## Enter the host to connect to.
      #
      - host: "%%host%%"
        ## @param port - integer - required
        ## Enter the port of the host to connect to.
        #
        port: "6379"
  conf.yaml.example: |
    init_config:
    instances:
      ## @param host - string - required
      ## Enter the host to connect to.
      # [removed content]
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: redisdb-d
Then alter the DaemonSet/Deployment object to mount it:
[....]
volumeMounts:
  - name: redisdb-d
    mountPath: /etc/datadog-agent/conf.d/redisdb.d
[...]
volumes:
  - name: redisdb-d
    configMap:
      name: redisdb-d
[...]

How to build a Connection URL for a Google Cloud SQL PostgreSQL Instance

I want to connect my app to a managed PostgreSQL instance on Google Cloud SQL. The app would be deployed via GKE. Normally, I'd connect via a connection string:
E.g.: postgres://<user>:<password>@<my-postgres-host>:5432
But the documentation states that:
1. Create a Secret to provide the PostgreSQL username and password to the database.
2. Update your pod configuration file with the following items:
   - Provide the Cloud SQL instance's private IP address as the host address your application will use to access your database.
   - Provide the Secret you previously created to enable the application to log into the database.
3. Bring up your Deployment using the Kubernetes manifest file.
I can do steps 1 and 3 but cannot follow step 2. Should the connection URL just be postgres://<PRIVATE_IP>:5432, with the POSTGRES_USER and POSTGRES_PASSWORD env variables added through a Secret?
Are there any examples I can look up?
Outcome: I'd like to derive the connection URL for PostgreSQL hosted on Google Cloud SQL.
Thank you in advance!
You can find an example postgres_deployment.yaml file to deploy with Kubernetes. The example uses the proxy, but the database configuration section does not change for private IP; for private IP, just do not use the [proxy_container] section.
This is the section from the documentation that you are searching for: the database environment variables and secrets section.
# The following environment variables will contain the database host,
# user and password to connect to the PostgreSQL instance.
env:
  - name: POSTGRES_DB_HOST
    value: 127.0.0.1:5432
  # [START cloudsql_secrets]
  - name: POSTGRES_DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: POSTGRES_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  # [END cloudsql_secrets]
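For completeness, a sketch of the cloudsql-db-credentials Secret referenced above (not part of the documentation excerpt; username and password are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: cloudsql-db-credentials
type: Opaque
stringData:            # stringData avoids base64-encoding the values by hand
  username: <user>
  password: <password>

With the proxy, the app then connects as postgres://<user>:<password>@127.0.0.1:5432/<db>; with private IP, replace 127.0.0.1 with the instance's private IP address.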