"Failed to connect to database" : Keycloak Operator external database Config not working - kubernetes

I checked this question: How to use Keycloak operator custom resource using external database connection. I am using Cloud SQL from the Google Cloud Platform as the external database source.
My configurations are below.
keycloak-idm:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: kiwigrid-keycloak-idm
spec:
  instances: 3
  externalAccess:
    enabled: false
  externalDatabase:
    enabled: true
External DB storage secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: kiwios-application
type: Opaque
stringData:
  POSTGRES_DATABASE: keycloak-storage
  POSTGRES_EXTERNAL_ADDRESS: pgsqlproxy.infra
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_HOST: keycloak-postgresql
  POSTGRES_USERNAME: keycloak-user
  POSTGRES_PASSWORD: S1ly3AValJYBNR-fsptLYdT74
  POSTGRES_SUPERUSER: "true"
Storage database:
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLDatabase
metadata:
  name: keycloak-storage
  namespace: kiwios-application
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
spec:
  charset: UTF8
  collation: en_US.UTF8
  instanceRef:
    name: keycloak-storage-instance-pg
    namespace: infra
Storage users:
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: keycloak-user
  namespace: kiwios-application
  annotations:
    cnrm.cloud.google.com/deletion-policy: "abandon"
spec:
  instanceRef:
    name: keycloak-storage-instance-pg
    namespace: infra
  password:
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
The error shown in the Kubernetes console is the "Failed to connect to database" message from the title.
It is not working. Can anyone help me figure out what I am doing wrong?
Update: I dug deeper with the k9s console. As part of its functionality, the keycloak-operator creates an ExternalName Service for the database connection, named keycloak-postgresql (the POSTGRES_HOST value above).
There is no error in the keycloak-operator logs. Only the keycloak-idm pods fail to make a connection through this ExternalName; they show the same "Failed to connect to database" error.
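For reference, the ExternalName Service the operator generates should look roughly like the sketch below. I reconstructed it from the values in keycloak-db-secret (I can only inspect the live object through k9s), so the exact fields may differ:
apiVersion: v1
kind: Service
metadata:
  name: keycloak-postgresql
  namespace: kiwios-application
spec:
  type: ExternalName
  externalName: pgsqlproxy.infra # from POSTGRES_EXTERNAL_ADDRESS
  ports:
    - port: 5432 # from POSTGRES_EXTERNAL_PORT
If pgsqlproxy.infra is not resolvable from inside the kiwios-application namespace, the pods would fail with exactly this connection error.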

This is what I am using for my Keycloak setup. Also, if you have read the question, the asker mentions the secret issue in the update section.
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: POSTGRES
            - name: DB_ADDR
              value: postgres
            - name: DB_DATABASE
              value: keycloak
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: password
            - name: KEYCLOAK_HTTP_PORT
              value: "80"
            - name: KEYCLOAK_HTTPS_PORT
              value: "443"
            - name: KEYCLOAK_HOSTNAME
              value: keycloak.harshmanvar.tk # replace with ingress URL
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
You can try moving the plain-text env variables into the secret you are using, as sketched below.
Example files : https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
Environment variables that Keycloak support : https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables
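For example, a minimal sketch of moving the hard-coded DB credentials into a Secret (the secret name keycloak-env is illustrative; the keys match the env names the container already expects):
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-env
type: Opaque
stringData:
  DB_USER: root
  DB_PASSWORD: password
Then, in the Deployment's container spec, replace the plain-text env entries with:
envFrom:
  - secretRef:
      name: keycloak-env
Every key in the secret becomes an environment variable in the container, so the credentials no longer live in the Deployment manifest.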

Have you tried it this way?
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: kiwios-application
type: Opaque
stringData:
  POSTGRES_DATABASE: "keycloak-storage"
  POSTGRES_EXTERNAL_ADDRESS: "pgsqlproxy.infra"
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_HOST: "keycloak-postgresql"
  POSTGRES_USERNAME: "keycloak-user"
  POSTGRES_PASSWORD: "S1ly3AValJYBNR-fsptLYdT74"
  POSTGRES_SUPERUSER: "true"
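After applying it, you can double-check the values the operator will actually read, for example by decoding one key:
kubectl get secret keycloak-db-secret -n kiwios-application \
  -o jsonpath='{.data.POSTGRES_EXTERNAL_ADDRESS}' | base64 -d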

Related

Create Kubernetes deployment and service from image using configmap and secret

I am trying to create a deployment from an image that I build with a Dockerfile. My goal is to pass environment variables into it so I can access them inside the Dockerfile, similar to the postgres image available on Docker Hub. Additionally, I need a Service so I can access the database from a Spring Boot application. I want to work this way because I need additional content inside the Dockerfile. I am not sure whether I can do it this way, but if yes, what am I doing wrong, and how can I access it from Spring Boot?
What I have tried so far:
apiVersion: v1
kind: Namespace
metadata:
  name: testns
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
  namespace: testns
data:
  dbname: test_database
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: testns
type: Opaque
data:
  username: cG9zdGdyZXN1c2Vy
  password: cGFzc3dvcmQxMjM0NTY=
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-depl
  namespace: testns
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresdb
          image: testpostgres
          ports:
            - containerPort: 5432
          env:
            - name: DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-configmap
                  key: dbname
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: testns
spec:
  selector:
    app: postgresql
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 7000
      targetPort: 5432
      nodePort: 30000
And the related Dockerfile that I use to create the testpostgres image:
FROM postgres
ARG DB
ARG USERNAME
ARG PASSWORD
ENV POSTGRES_DB=$DB
ENV POSTGRES_USER=$USERNAME
ENV POSTGRES_PASSWORD=$PASSWORD
EXPOSE 5432
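One thing worth noting about this Dockerfile (a property of Docker itself, not of the manifests above): ARG values exist only at build time, while the env entries in the Deployment are injected at run time. The three ENV lines therefore stay empty unless matching --build-arg flags are passed when the image is built, for example (values illustrative):
docker build \
  --build-arg DB=test_database \
  --build-arg USERNAME=postgresuser \
  --build-arg PASSWORD=password123456 \
  -t testpostgres .
Alternatively, the stock postgres image already reads POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD at run time, so the Deployment could set those names directly and skip the ARG plumbing.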

Minikube Kubernetes, Postgres, Spring Boot Cluster - Postgres connection refused

So I have a basic minikube configuration for a K8s cluster with only 2 pods, for the Postgres DB and my Spring app. However, I can't get my app to connect to my DB. I know that in Docker such an issue could be solved with networking, but after a lot of research I can't seem to find the problem or the solution.
Currently, given my configuration, I get a Connection refused error from Postgres whenever my Spring app tries to start:
Caused by: org.postgresql.util.PSQLException: Connection to postgres-service:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
My spring-app is a basic REST API with some open endpoints where I query for some data. The app itself works completely fine; here is my application.properties:
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
I create my Postgres component from a ConfigMap, a Secret, and finally a Deployment with its Service. They look like this:
postgres-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  postgres-url: postgres-service
  postgres-port: "5432"
  postgres-db: "test"
postgres-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
data:
  postgres_user: cm9vdA== # already encoded in base64
  postgres_password: cm9vdA== # already encoded in base64
postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgresdb
          image: postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app.kubernetes.io/name: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
and finally, here's my Deployment with its Service for my Spring app:
spring-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-app-deployment
  labels:
    app: spring-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-app
  template:
    metadata:
      labels:
        app: spring-app
    spec:
      containers:
        - name: spring-app
          image: app # image is pulled from my Docker Hub
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres_password
            - name: POSTGRES_HOST
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-url
            - name: POSTGRES_PORT
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-port
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
  name: spring-app-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: spring-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30001
A connection refused error means that the host you are connecting to does not have the port you mentioned open.
This leads me to think that the Postgres pod isn't running correctly, or that the Service is not pointing at those pods.
Checking the YAMLs, I can see that the Service's pod selector isn't configured correctly:
The Service selects pods with the label app.kubernetes.io/name: postgres.
The Deployment's pods carry the label app: postgres.
The correct Service manifest should look like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
You can double-check this by describing the Service with kubectl describe service postgres-service.
The output should list the Postgres pods' IPs under Endpoints.
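You can also list the endpoints directly:
kubectl get endpoints postgres-service
If the ENDPOINTS column is empty, the selector still does not match the pod labels. Note that spring-app-service has the same mismatch (selector app.kubernetes.io/name: spring-app vs. pod label app: spring-app), so the same fix applies to it.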

Cannot Connect Kubernetes Secrets to Kubernetes Deployment (Values Are Empty)

I have a Golang microservice application with the following Kubernetes manifest configuration:
apiVersion: v1 # Service for accessing the store application (this) from the Ingress...
kind: Service
metadata:
  name: store-internal-service
  namespace: store-namespace
spec:
  type: ClusterIP
  selector:
    app: store-internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-application-service
  namespace: store-namespace
  labels:
    app: store-application-service
spec:
  selector:
    matchLabels:
      app: store-internal-service
  template:
    metadata:
      labels:
        app: store-internal-service
    spec:
      containers:
        - name: store-application
          image: <image>
          envFrom:
            - secretRef:
                name: project-secret-store
          ports:
            - containerPort: 8000
              protocol: TCP
          imagePullPolicy: Always
          env:
            - name: APPLICATION_PORT
              value: "8000"
            - name: APPLICATION_HOST
              value: "localhost"
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
  name: project-secret-store
  namespace: store-namespace
type: Opaque
stringData:
  # Prometheus server credentials...
  PROMETHEUS_HOST: "prometheus-internal-service"
  PROMETHEUS_PORT: "9090"
  # PostgreSQL configuration.
  DATABASE_HOST: "postgres-internal-service"
  DATABASE_PORT: "5432"
  DATABASE_USER: "postgres_user"
  DATABASE_PASSWORD: "postgres_password"
  DATABASE_NAME: "store_db"
Also, for test purposes, I've declared the following variables to receive the values from the secret in my application:
var (
    POSTGRES_USER     = os.Getenv("DATABASE_USER")
    POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
    POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
    POSTGRES_HOST     = os.Getenv("DATABASE_HOST")
    POSTGRES_PORT     = os.Getenv("DATABASE_PORT")
)
The problem: when I run my application and later check its logs with kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret.
There are probably other issues that could cause this, but if there are errors in the configuration to point out, please share your thoughts about it :)
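One way to narrow this down without touching the Go code is to print the environment the container actually received:
kubectl exec <my-application-pod-name> --namespace=store-namespace -- env | grep DATABASE
If the variables show up there with values, the problem is on the application side; if they are missing or empty, the envFrom/secretRef is not being applied as expected.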

SonarQube + Postgresql Connection refused error in Kubernetes Cluster

sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - image: 10.150.0.131/devops/sonarqube:1.0
          args:
            - -Dsonar.web.context=/sonar
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://sonar-postgres:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonar-postgres
  template:
    metadata:
      labels:
        app: sonar-postgres
    spec:
      containers:
        - image: 10.150.0.131/devops/postgres:12.1
          name: sonar-postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: POSTGRES_USER
              value: sonar
          ports:
            - containerPort: 5432
              name: postgresport
          volumeMounts:
            # This name must match the volumes.name below.
            - name: data-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data-disk
          persistentVolumeClaim:
            claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod. I use the Flannel network plugin. Can you help with the error? The PostgreSQL pod produces no log output.
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    app: sonar-postgres
because it looks like your selector is wrong. The same issue exists in sonar-service.yaml: change name to app there too and it should work.
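Applied to sonar-service.yaml, the fix would look like this (same manifest, only the selector key changed from name to app):
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    app: sonarqube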
If you installed PostgreSQL on a cloud SQL service, you need to allow the client IP through the firewall. To validate whether this is the issue, try adding 0.0.0.0/0, which allows everything; allowing only the correct SonarQube IP is the better long-term solution.

How to import existing keycloak realm from my local directory when deploying keycloak into kinD kubernetes

I am new to Kubernetes and kinD. I am using kinD to deploy Keycloak locally using YAML; below is the file I am using.
How can I import an existing realm JSON when deploying with the YAML below?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: jboss/keycloak:9.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
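A commonly used approach with the jboss/keycloak image is to mount the realm export into the container and point the KEYCLOAK_IMPORT environment variable at it. A sketch, assuming the export is saved locally as realm.json (the ConfigMap name keycloak-realm and the /realm mount path are illustrative choices):
kubectl create configmap keycloak-realm --from-file=realm.json
Then, in the Deployment's pod spec:
spec:
  containers:
    - name: keycloak
      image: jboss/keycloak:9.0.0
      env:
        # ...existing KEYCLOAK_USER / KEYCLOAK_PASSWORD / PROXY_ADDRESS_FORWARDING entries...
        - name: KEYCLOAK_IMPORT # tells Keycloak to import this realm on first startup
          value: /realm/realm.json
      volumeMounts:
        - name: realm-volume
          mountPath: /realm
  volumes:
    - name: realm-volume
      configMap:
        name: keycloak-realm
Keycloak then imports the realm the first time the pod starts.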