Error in setting up PostgreSQL in Kubernetes - postgresql

I'm trying to set up Postgres in my Kubernetes-enabled Docker Desktop. I pulled the images 'postgres:latest' and 'dpage/pgadmin4:latest' from Docker Hub. Below are my YAML files. Everything starts up fine and I can open the pgAdmin page from Chrome, but the credentials I set in the YAML file do not log me in.
What is causing this issue?
Postgres DB:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: app
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: app
      volumes:
        - name: app
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
pgAdmin:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgadmin-config
  labels:
    app: pgadmin4
data:
  PGADMIN_DEFAULT_EMAIL: ap#a.com
  PGADMIN_DEFAULT_PASSWORD: posdt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin4
  labels:
    app: pgadmin4
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin4
  template:
    metadata:
      labels:
        app: pgadmin4
    spec:
      containers:
        - name: pgadmin4
          image: dpage/pgadmin4:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: pgadmin-config
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin4
  labels:
    app: pgadmin4
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: pgadmin4

I just deployed it and was able to connect to the database, see below:
postgres-7cb54957dd-qkfd4 1/1 Running 0 13s
# kubectl exec -it postgres-7cb54957dd-qkfd4 sh
# psql -h 127.0.0.1 -U postgres -q -d app -c 'SELECT 1'
?column?
----------
1
(1 row)
I used the YAML below with slight modifications:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: app
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: app
      volumes:
        - name: app
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
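To reach the database afterwards, a quick note (the nodePort value 31234 below is only an example; check what your cluster actually assigned): inside the cluster, pgAdmin can use the Service name postgres on port 5432, and on Docker Desktop the NodePort is reachable on localhost:
# see which nodePort was assigned to the postgres Service
kubectl get svc postgres
# e.g. if it shows 5432:31234/TCP, connect from the host machine with:
psql -h localhost -p 31234 -U postgres -d app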

Related

Keycloak with postgres on minikube: cannot connect to database

I have one pod with postgres and one pod with keycloak on minikube.
The Keycloak pod, deployed via a Helm chart from codecentric (chart version 17.0.1, application version 16.1.1), is failing to initialize.
I have inspected the logs and it is failing to connect to the database:
FATAL [org.keycloak.services] (ServerService Thread Pool -- 64) Error during startup: java.lang.RuntimeException: Failed to connect to database
...
Caused by: org.postgresql.util.PSQLException: Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
values.yml used to deploy the Helm chart:
postgresql:
  enabled: false
extraEnv: |
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: "127.0.0.1"
  - name: DB_PORT
    value: "5432"
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    value: keycloak
  - name: DB_PASSWORD
    value: keycloak
  - name: KEYCLOAK_USER
    value: "admin"
  - name: KEYCLOAK_PASSWORD
    value: "admin"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: JDBC_PARAMS
    value: "connectTimeout=30000"
Files used to deploy the postgresql pod:
postgres-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config-old
  labels:
    app: postgres-old
data:
  POSTGRES_DB: keycloak
  POSTGRES_USER: keycloak
  POSTGRES_PASSWORD: keycloak
postgres-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-old
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-old
  template:
    metadata:
      labels:
        app: postgres-old
    spec:
      containers:
        - name: postgres-old
          image: docker.io/library/postgres:14.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config-old
          volumeMounts:
            - mountPath: /var/lib/postgresql/14/data
              name: postgredb-old
      volumes:
        - name: postgredb-old
          persistentVolumeClaim:
            claimName: postgres-pv-claim-old
postgres-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-old
  labels:
    app: postgres-old
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres-old
postgres-storage.yml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume-old
  labels:
    type: local
    app: postgres-old
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim-old
  labels:
    app: postgres-old
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I've also tried to expose an external port with:
minikube service postgres-old --url
and to use that port in values.yml instead of 5432, but with no luck.
I am running minikube on WSL2.

Kubernetes Prisma server not reachable

I'm setting up a Kubernetes cluster locally which is composed of prismagraphql and Postgres.
The Postgres container is set up correctly, and by using a NodePort I can access it from pgAdmin locally.
The Prisma container, however, is not reachable. I followed these steps (Guide) but I'm not able to reach it at localhost:4466 even after port-forwarding.
This is the postgres.yaml:
# postgres config
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: database
  POSTGRES_USER: username
  POSTGRES_PASSWORD: password
---
# postgres deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-deployment
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          envFrom:
            - configMapRef:
                name: postgres-configuration
          ports:
            - containerPort: 5432
              name: postgresdb
---
# postgres service
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    app: postgres
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      name: postgres
This is the prisma.yaml:
# prisma config
apiVersion: v1
kind: ConfigMap
metadata:
  name: prisma-config
  labels:
    app: prisma
data:
  PRISMA_CONFIG: |
    port: 4466
    databases:
      default:
        connector: postgres
        host: database
        port: 5432
        user: username
        password: password
        migrations: true
---
# prisma deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prisma-deployment
  labels:
    app: prisma
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: prisma
  template:
    metadata:
      labels:
        app: prisma
    spec:
      containers:
        - name: prisma-4466
          image: prismagraphql/prisma:1.34
          ports:
            - containerPort: 4466
          env:
            - name: PRISMA_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: prisma-config
                  key: PRISMA_CONFIG
---
# prisma service
apiVersion: v1
kind: Service
metadata:
  name: prisma-service
spec:
  type: NodePort
  selector:
    app: prisma
  ports:
    - protocol: TCP
      port: 4466
      targetPort: 4466
      nodePort: 30100
This is what I obtained with kubectl describe svc prisma-service:

How to deploy phpMyAdmin in Azure Kubernetes?

I have deployed MySQL using this YAML file.
apiVersion: v1
kind: Service
metadata:
  name: mysqlsb
  labels:
    app: dataenv
spec:
  ports:
    - port: 3306
  selector:
    app: dataenv
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: dataenv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataenv-mysql
  labels:
    app: dataenv
spec:
  selector:
    matchLabels:
      app: dataenv
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dataenv
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The instance is running and I can create tables via command line.
How do I deploy phpMyAdmin to manage this pod?
You can use port forwarding:
kubectl port-forward service/<svcname> 3306:3306
Based on your service name:
kubectl port-forward service/mysqlsb 3306:3306
Then you can access it from your desktop (via phpMyAdmin or any other GUI) using localhost as the server name and 3306 as the port.
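If you would rather run phpMyAdmin inside the AKS cluster instead of on your desktop, here is a rough sketch (the name phpmyadmin and the NodePort type are my own choices; PMA_HOST/PMA_PORT are the standard environment variables of the phpmyadmin image and point it at your existing mysqlsb Service):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpmyadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      labels:
        app: phpmyadmin
    spec:
      containers:
        - name: phpmyadmin
          image: phpmyadmin/phpmyadmin
          ports:
            - containerPort: 80
          env:
            - name: PMA_HOST      # MySQL host phpMyAdmin should connect to
              value: mysqlsb      # your existing MySQL Service
            - name: PMA_PORT
              value: "3306"
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin
spec:
  type: NodePort        # use LoadBalancer on AKS if you want a public IP
  selector:
    app: phpmyadmin
  ports:
    - port: 80
      targetPort: 80
On AKS, changing the Service type to LoadBalancer gives the UI a public IP; log in with root and the password stored in the mysql-pass Secret.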

Kubernetes: how do I expose pods to things outside of the cluster machine?

I read the Kubernetes docs, which resulted in the following YAMLs to run PostgreSQL & pgAdmin in a cluster:
--- pgadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email#example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
--- pgadmin-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - port: 30000
      targetPort: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
--- postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - port: 30001
      targetPort: 5432
  selector:
    app: postgres-pod
--- postgres-storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I then run the command kubectl create -f ./, which results in the following:
(screenshot: kubernetes pods / svc's)
Then I try to access pgAdmin on 10.43.225.170:30000 from outside of the cluster, but I get "10.43.225.170 took too long to respond" no matter what I try.
So how do I expose pgAdmin & Postgres to the outside world? And is there a way to give them static IPs, so I don't have to update the IPs in connection strings each time I re-deploy on Kubernetes, or do I have to use a StatefulSet for this?
Problems here:
- You are trying to reach the node internal IP 10.43.225.170 instead of the external one.
- The NodePort Service is configured incorrectly. In addition, you are trying to call the wrong port.
You haven't specified what platform you use. I'm using GKE, so in my case it's easier because external IPs are automatically assigned during cluster node creation. But I had to manually create an ingress firewall rule to allow access from outside to the nodes on the required ports (30000, 30001).
In any case, to be able to use a NodePort you need an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port.
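For reference, the firewall rule on GKE can be created roughly like this (the rule name allow-nodeports and the wide-open source range are placeholders; restrict it to your own IP in practice):
gcloud compute firewall-rules create allow-nodeports \
    --allow tcp:30000-30001 \
    --source-ranges 0.0.0.0/0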
Next: you are trying to call <NodeIP>:spec.ports[*].port.
As per the Type NodePort documentation, the Service is visible as <NodeIP>:spec.ports[*].nodePort,
so you need to explicitly specify nodePort.
I have changed your deployment a bit and can access pgAdmin after deploying it and opening the corresponding ports in the firewall.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email#example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - nodePort: 30000
      targetPort: 80
      port: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      targetPort: 5432
      port: 5432
  selector:
    app: postgres-pod
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Check:
kubectl apply -f pg_my.yaml
deployment.apps/pgadmin-deployment created
service/pgadmin-service created
service/postgres-service created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
# In my case I take the node external IP from any node in the `kubectl get nodes -o wide` output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
gke-cluster-1-default-pool-*******-***** Ready <none> 20d v1.18.16-gke.502 10.186.0.7 *.*.*.*
curl *.*.*.*:30000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: /login?next=%2F.
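Postgres is reachable the same way through its nodePort (30001), for example with a local psql client against the same node external IP:
psql -h *.*.*.* -p 30001 -U username -d database -c 'SELECT 1'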

How to connect to a Samba server from a container running in Kubernetes?

I created a Kubernetes cluster on Amazon. Then I ran my pod (container) and volume in this cluster. Now I want to run a Samba server on the volume and connect my pod to that Samba server. Is there any tutorial on how I can solve this problem? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      project: k8s
  template:
    metadata:
      labels:
        project: k8s
    spec:
      containers:
        - name: k8s-web
          image: mine/flask:latest
          volumeMounts:
            - mountPath: /test-ebs
              name: my-volume
          ports:
            - containerPort: 8080
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: [my-Id-volume]
You can check out the Samba container Docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
spec:
  type: LoadBalancer
  selector:
    app: smb-server
  ports:
    - port: 445
      name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      containers:
        - name: smb-server
          image: dperson/samba
          env:
            - name: PERMISSIONS
              value: "0777"
          args: ["-u", "username;test", "-s", "share;/smbshare/;yes;no;no;all;none", "-p"]
          volumeMounts:
            - mountPath: /smbshare
              name: data-volume
          ports:
            - containerPort: 445
      volumes:
        - name: data-volume
          hostPath:
            path: /smbshare
            type: DirectoryOrCreate
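To reach the share from another pod in the cluster you can point at the Service name; a small sketch assuming a pod that has smbclient installed and the credentials set in the args above (username/test):
# list the shares exposed by the smb-server Service
smbclient -L //smb-server -U username%test
# open the "share" share interactively
smbclient //smb-server/share -U username%test
From your Windows 10 machine you would instead use the external address of the LoadBalancer Service (kubectl get svc smb-server), keeping in mind that many networks block port 445.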