Keycloak with Postgres on minikube: cannot connect to database

I have one pod with Postgres and one pod with Keycloak on minikube.
The pod with Keycloak, deployed via a Helm chart from codecentric (chart version 17.0.1, application version 16.1.1), is failing to initialize.
I have inspected the logs, and it is failing to connect to the database:
FATAL [org.keycloak.services] (ServerService Thread Pool -- 64) Error during startup: java.lang.RuntimeException: Failed to connect to database
...
Caused by: org.postgresql.util.PSQLException: Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The values.yml used to deploy the Helm chart:
postgresql:
  enabled: false
extraEnv: |
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: "127.0.0.1"
  - name: DB_PORT
    value: "5432"
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    value: keycloak
  - name: DB_PASSWORD
    value: keycloak
  - name: KEYCLOAK_USER
    value: "admin"
  - name: KEYCLOAK_PASSWORD
    value: "admin"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: JDBC_PARAMS
    value: "connectTimeout=30000"
Files used to deploy the PostgreSQL pod:
postgres-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config-old
  labels:
    app: postgres-old
data:
  POSTGRES_DB: keycloak
  POSTGRES_USER: keycloak
  POSTGRES_PASSWORD: keycloak
postgres-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-old
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-old
  template:
    metadata:
      labels:
        app: postgres-old
    spec:
      containers:
        - name: postgres-old
          image: docker.io/library/postgres:14.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config-old
          volumeMounts:
            - mountPath: /var/lib/postgresql/14/data
              name: postgredb-old
      volumes:
        - name: postgredb-old
          persistentVolumeClaim:
            claimName: postgres-pv-claim-old
postgres-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-old
  labels:
    app: postgres-old
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres-old
postgres-storage.yml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume-old
  labels:
    type: local
    app: postgres-old
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim-old
  labels:
    app: postgres-old
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I've also tried to expose an external port with:
minikube service postgres-old --url
and use that port in values.yml instead of 5432, but with no luck.
I am running minikube on WSL2.
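Note that 127.0.0.1 inside the Keycloak pod refers to the Keycloak container itself, not to the Postgres pod. A minimal sketch of pointing DB_ADDR at the Postgres Service's cluster-DNS name instead (assuming both workloads run in the default namespace):
extraEnv: |
  - name: DB_ADDR
    # the Service name resolves via cluster DNS;
    # "postgres-old.default.svc.cluster.local" is the fully qualified form
    value: "postgres-old"
  - name: DB_PORT
    value: "5432"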

Related

How to deploy phpMyAdmin in Azure Kubernetes?

I have deployed MySQL using this YAML file.
apiVersion: v1
kind: Service
metadata:
  name: mysqlsb
  labels:
    app: dataenv
spec:
  ports:
    - port: 3306
  selector:
    app: dataenv
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: dataenv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataenv-mysql
  labels:
    app: dataenv
spec:
  selector:
    matchLabels:
      app: dataenv
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dataenv
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The instance is running and I can create tables via command line.
How do I deploy phpMyAdmin to manage this pod?
You can use port forwarding:
kubectl port-forward service/<svcname> 3306:3306
Based on your service name:
kubectl port-forward service/mysqlsb 3306:3306
Then you can access it from your desktop (via phpMyAdmin or any other GUI) using localhost as the server name and 3306 as the port.
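Alternatively, to run phpMyAdmin inside the cluster itself, a minimal sketch could look like the following (the PMA_HOST/PMA_PORT variables are the phpmyadmin image's documented way to point it at a server; the names used here are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            # point phpMyAdmin at the MySQL service by its DNS name
            - name: PMA_HOST
              value: mysqlsb
            - name: PMA_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: phpmyadmin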

Kubernetes: how do I expose pods to things outside of the cluster machine?

I read the Kubernetes docs, which resulted in the following YAMLs to run PostgreSQL & pgAdmin in a cluster:
--- pgadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
--- pgadmin-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - port: 30000
      targetPort: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
--- postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - port: 30001
      targetPort: 5432
  selector:
    app: postgres-pod
--- postgres-storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I then run kubectl create -f ./ which results in the following:
(screenshot: kubectl output showing the resulting pods and services)
Then I try to access pgAdmin on 10.43.225.170:30000 from outside of the cluster, but I get "10.43.225.170 took too long to respond" no matter what I try.
So how do I expose pgAdmin & Postgres to the outside world? And is there a way to give them static IPs so I don't have to update the IPs in connection strings each time I redeploy on Kubernetes, or do I have to use a StatefulSet for this?
Problems here:
- You are trying to reach the node's internal IP 10.43.225.170 instead of the external one.
- The NodePort service is configured incorrectly; in addition, you are calling the wrong port.
You haven't specified which platform you use. I'm using GKE, so in my case it's easier because external IPs are automatically assigned during cluster node creation, but I had to manually create an ingress firewall rule to allow access from outside to the nodes and the required ports (30000, 30001).
In any case, to be able to use a NodePort, you should have an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port.
Next: you are trying to call <NodeIP>:spec.ports[*].port. As per the Type NodePort documentation:
Service is visible as <NodeIP>:spec.ports[*].nodePort
So you need to explicitly specify a nodePort.
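For reference, a minimal sketch of how the three port fields of a NodePort Service relate (the values here are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  ports:
    - port: 80        # the Service's own, cluster-internal port
      targetPort: 80  # the containerPort traffic is forwarded to
      nodePort: 30000 # the port opened on every node (default range 30000-32767)
  selector:
    app: example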
I have changed your deployment a bit; after deploying it and opening the corresponding ports in the firewall, I can access pgAdmin.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - nodePort: 30000
      targetPort: 80
      port: 80
  selector:
    app: pgadmin-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      targetPort: 5432
      port: 5432
  selector:
    app: postgres-pod
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Check:
kubectl apply -f pg_my.yaml
deployment.apps/pgadmin-deployment created
service/pgadmin-service created
service/postgres-service created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
# In my case I take the node external IP from any node in the `kubectl get nodes -o wide` output:
NAME                                       STATUS   ROLES    AGE   VERSION            INTERNAL-IP   EXTERNAL-IP
gke-cluster-1-default-pool-*******-*****   Ready    <none>   20d   v1.18.16-gke.502   10.186.0.7    *.*.*.*
curl *.*.*.*:30000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: /login?next=%2F.

How to mount a secret to a Kubernetes StatefulSet

So, looking at the Kubernetes API documentation (https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#statefulsetspec-v1-apps), it appears that I can indeed have a volume, because a StatefulSet uses a PodSpec and the PodSpec does have a volumes field. So I should be able to list the secret and then mount it like in a Deployment or any other pod.
The problem is that Kubernetes seems to think that volumes are not actually in the PodSpec for a StatefulSet. Is this right? If so, how do I mount a secret into my StatefulSet?
error: error validating "mysql-stateful-set.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
              name: database
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: mysql
              mountPath: /run/secrets/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD_FILE
              value: /run/secrets/mysql/root-pass
          volumes: # nested inside the container, which is what the validation error complains about
            - name: mysql
              secret:
                secretName: mysql
                items:
                  - key: root-pass
                    path: root-pass
                    mode: 511
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: do-block-storage
        resources:
          requests:
            storage: 10Gi
The volumes field should come inside the template spec, not inside the container (as done in your template). Refer to https://godoc.org/k8s.io/api/apps/v1#StatefulSetSpec for the exact structure: go to PodTemplateSpec and you will find the volumes field.
The template below should work for you:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
      name: database
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: "mysql"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
              name: database
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
            - name: mysql
              mountPath: /run/secrets/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD_FILE
              value: /run/secrets/mysql/root-pass
      volumes:
        - name: mysql
          secret:
            secretName: mysql
            items:
              - key: root-pass
                path: root-pass
                mode: 511
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: do-block-storage
        resources:
          requests:
            storage: 10Gi
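As an aside, a client-side dry run would have surfaced this validation error without touching the cluster (kubectl 1.18+; the filename is the one from the question):
kubectl apply --dry-run=client -f mysql-stateful-set.yaml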

Error in setting up PostgreSQL in Kubernetes

I'm trying to set up Postgres in my Kubernetes-enabled Docker Desktop. I pulled the images 'postgres:latest' and 'dpage/pgadmin4:latest' from Docker Hub. Below are my YAML files. Everything starts perfectly, and I can open the pgAdmin page from Chrome, but the creds I have given in the YAML file are not logging me in.
What is causing this issue?
Postgres DB:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: app
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: app
      volumes:
        - name: app
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
pgAdmin:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgadmin-config
  labels:
    app: pgadmin4
data:
  PGADMIN_DEFAULT_EMAIL: ap@a.com
  PGADMIN_DEFAULT_PASSWORD: posdt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin4
  labels:
    app: pgadmin4
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin4
  template:
    metadata:
      labels:
        app: pgadmin4
    spec:
      containers:
        - name: pgadmin4
          image: dpage/pgadmin4:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: pgadmin-config
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin4
  labels:
    app: pgadmin4
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: pgadmin4
I just deployed it and was able to connect to the database; see below:
postgres-7cb54957dd-qkfd4   1/1   Running   0   13s
# kubectl exec -it postgres-7cb54957dd-qkfd4 sh
# psql -h 127.0.0.1 -U postgres -q -d app -c 'SELECT 1'
 ?column?
----------
        1
(1 row)
I used the YAML below with slight modifications:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: app
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: app
      volumes:
        - name: app
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
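A likely reason the original credentials did not work: the postgres image only applies POSTGRES_USER/POSTGRES_PASSWORD when it initializes an empty data directory, and a hostPath PV keeps old data across redeploys, so stale credentials survive. The modification above sidesteps that with emptyDir. To keep the original PVC instead, the stale data would have to be wiped first, e.g. (a sketch; the hostPath contents live on the node and survive PV deletion):
kubectl delete deployment postgres
kubectl delete pvc postgres-pv-claim
kubectl delete pv postgres-pv-volume
# then remove the old data directory (/mnt/data) on the node itself before recreating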

Unable to deploy Debezium on minikube

I am new to Kubernetes. I am trying to integrate Kafka with Debezium and MySQL.
I successfully deployed Kafka and MySQL on minikube, but once I deploy the Debezium YAML, minikube hangs and doesn't respond at all. I then restarted minikube; after all the pods were running, minikube hung again.
Below is my code:
ZooKeeper service:
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-1
ZooKeeper deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
        - name: zoo1
          image: debezium/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1
Kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "1"
  type: NodePort
Kafka deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-broker1
spec:
  template:
    metadata:
      labels:
        selector: kafka
        app: kafka
        id: "1"
    spec:
      containers:
        - name: kafka
          image: debezium/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: 192.168.39.47
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CREATE_TOPICS
              value: hello-topic:3:3
MySQL persistent volume:
# application/mysql/mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
MySQL deployment:
# application/mysql/mysql-deployment.yaml
# this command is for the mysql client: kubectl run -it --rm --image=debezium/example-mysql --restart=Never mysql-client -- mysql -h mysql -pdebezium
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: extensions/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: debezium/example-mysql
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: debezium
            - name: MYSQL_USER
              value: mysqluser
            - name: MYSQL_PASSWORD
              value: mysqlpw
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Debezium deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: debezium-connect-source
spec:
  selector:
    matchLabels:
      app: debezium-connect-source
  replicas: 1
  template:
    metadata:
      labels:
        app: debezium-connect-source
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: debezium-connect-source
          image: debezium/connect
          env:
            - name: BOOTSTRAP_SERVERS
              value: kafka-service:9092
            - name: GROUP_ID
              value: "1"
            - name: CONFIG_STORAGE_TOPIC
              value: debezium-connect-source_config
            - name: OFFSET_STORAGE_TOPIC
              value: debezium-connect-source_offset
          ports:
            - containerPort: 8083
              name: dm-c-source
When I deploy Debezium, the problem starts and minikube responds like:
$ kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
OS: CentOS
minikube version: v0.30.0
I believe this is happening because of a resource crunch on the VM started by minikube.
By default, minikube start allocates only 2 CPUs and 2 GB RAM from your system, and looking at your deployments (Kafka + MySQL + Debezium), that might not be enough.
You can increase the CPU and memory allocated to the VM by using minikube start with the --cpus and --memory parameters (the memory value should be in MB).
For more info, run minikube start -h.
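For example, a sketch of recreating the VM with more resources (4 CPUs / 8 GB here are illustrative values; an existing VM must be deleted before new sizes take effect):
minikube delete
minikube start --cpus 4 --memory 8192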
If you want to run heavier deployments, I strongly suggest using a machine with more resources.
Hope this helps.
Also, you should set memory limits for the Java-based pods. Older versions of Java see the whole guest memory as their own and will happily consume it completely, and there are at least three JVMs started here.
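A minimal sketch of such a limit on the Debezium Connect container (the figures are illustrative assumptions, not tuned values); note that older JVMs do not read cgroup limits, so explicitly capping the heap, e.g. via -Xmx, may be needed as well:
containers:
  - name: debezium-connect-source
    image: debezium/connect
    resources:
      requests:
        memory: "256Mi" # the scheduler reserves at least this much
      limits:
        memory: "512Mi" # the container is OOM-killed if it exceeds this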