Kubernetes deployment works but can't connect to postgresql from pgAdmin4

I have a deployment and a service YAML file. I use minikube to run Kubernetes in my WSL.
postgres-deployment.yaml:
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  replicas: 1
  serviceName: postgresql-db-service
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
        - name: postgresql-db
          image: postgres:latest
          volumeMounts:
            - name: postgresql-db-disk
              mountPath: /data
          env:
            - name: POSTGRES_PASSWORD
              value: testpassword
            - name: PGDATA
              value: /data/pgdata
  # Volume Claim
  volumeClaimTemplates:
    - metadata:
        name: postgresql-db-disk
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 25Gi
postgres-service.yaml:
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-lb
spec:
  selector:
    app: postgresql-db
  type: LoadBalancer
  ports:
    - port: 5432
      targetPort: 5432
I run them with:
# kubectl apply -f postgres-deployment.yaml
# kubectl apply -f postgres-service.yaml
The deployment works, and I can get the Cluster IP of the service with kubectl get all.
I run pgAdmin with the command:
docker run -p 80:80 \
    -e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
    -e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
    -d dpage/pgadmin4
I try to connect to Postgres from pgAdmin, but I am unable to connect.
EDIT:
I changed the user for the connection to postgres; it still doesn't work.
I tried changing the LoadBalancer to ClusterIP and NodePort; neither works.
I tried switching my OS to Ubuntu, in case of some weird WSL issue; that doesn't work either.

To access Postgres locally, I have to use NodePort.
We need to find the node IP and the NodePort.
To find the node's internal IP, do:
$ kubectl get nodes -o wide
For the port, we can do kubectl describe svc postgres-db-lb or kubectl get svc.
In pgAdmin the host should be <node-ip> and the port <node-port>.
We can also do minikube service postgres-db-lb to find the URL, as shown below.
EDIT
Or, more simply, minikube service <NAME_OF_SERVICE>.
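For example (a sketch using the names from the manifests above; the node IP and assigned port will differ on your machine):
$ kubectl get nodes -o wide               # INTERNAL-IP column gives <node-ip>
$ kubectl get svc postgres-db-lb          # PORT(S) shows 5432:<node-port>/TCP
$ minikube service postgres-db-lb --url   # prints the same address directly
Then enter <node-ip> as the host and <node-port> as the port in pgAdmin's connection dialog.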

Related

How to access mysql pod in another pod(busybox)?

I am given the task of connecting a MySQL pod with any other working pod (preferably busybox), but was not able to do it. Is there a way to do this? I referred to many places, but the explanations were a bit complicated as I am new to Kubernetes.
MySQL YAML config for Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the BusyBox container:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command starts a BusyBox container.
Run kubectl get pods to check the status of both pods.
From the BusyBox container you will be able to run this command to connect to MySQL:
mysql -h <MySQL service name> -u <Username> -p<password>
Ref docs: MySQL: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
BusyBox / kubectl: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
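Note: the stock busybox image does not ship a mysql client binary, so the mysql command above only works if your test image includes one. A common alternative (a sketch, reusing the mysql service name and root password from the YAML above) is to run a throwaway client pod from the mysql image instead:
kubectl run -it --rm mysql-client --image=mysql:5.6 --restart=Never -- \
  mysql -h mysql -uroot -ppassword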

Kubernetes Service unreachable

I have created a Kubernetes cluster on 2 Raspberry Pis (a Model 3 and a 3B+) to use as a Kubernetes playground.
I have deployed PostgreSQL and a Spring Boot app (called meal-planer) to play around with.
The meal-planer should read and write data from and to the PostgreSQL database.
However, the app can't reach the database.
Here is the deployment descriptor of the PostgreSQL:
kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: dev-db-secret
                  key: password
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
Here is the deployment descriptor of the meal-planer:
kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: ClusterIP
  selector:
    app: meal-planner
  ports:
    - port: 8080
      name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
        - name: meal-planner
          image: 08021986/meal-planner:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
The meal-planer image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planer uses the connection string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into them. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know why the network is unreachable, and I don't know what I can do about it.
First of all, don't use Deployment workloads for applications that need to persist state. This could get you into trouble and even cause data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful
applications.
Manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a hostPath storageClass with ReadWriteOnce.
Now, regarding your issue: my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is refused by the definitions in pg_hba.conf.
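For reference, a pg_hba.conf entry that accepts password-authenticated connections from any address looks like this (a sketch; note that the official postgres image already appends a similarly permissive "host all all all" rule by default, so this usually only matters with custom images or mounted configs):
# TYPE  DATABASE  USER  ADDRESS    METHOD
host    all       all   0.0.0.0/0  md5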
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-so-test
  namespace: test
  labels:
    app: postgres-so-test
spec:
  selector:
    app: postgres-so-test
  ports:
    - port: 5432
      targetPort: 5432
      name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: test
  name: postgres-so-test
spec:
  replicas: 1
  serviceName: postgres-so-test
  selector:
    matchLabels:
      app: postgres-so-test
  template:
    metadata:
      labels:
        app: postgres-so-test
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          env:
            - name: POSTGRES_USER
              value: johndoe
            - name: POSTGRES_PASSWORD
              value: thisisntthepasswordyourelokingfor
            - name: POSTGRES_DB
              value: home
          ports:
            - containerPort: 5432
Now let's test this. NOTE: I'll also create a deployment from the Postgres image, just to have a pod in this namespace that has the pg_isready binary, in order to test the connection to the created DB.
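The test_container.yml referenced below isn't shown in the answer; a minimal sketch that would produce the test-container deployment seen in the output (reusing postgres:13.2 purely for its pg_isready binary, with the container just idling) could be:
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: test
  name: test-container
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-container
  template:
    metadata:
      labels:
        app: test-container
    spec:
      containers:
        - name: test-container
          image: postgres:13.2
          # Idle instead of starting a second database
          command: ["sleep", "infinity"]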
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME                             READY   STATUS    RESTARTS   AGE
postgres-so-test-0               1/1     Running   0          19s
test-container-d77d75d78-cgjhc   1/1     Running   0          12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME                                 READY   STATUS    RESTARTS   AGE
pod/postgres-so-test-0               1/1     Running   0          26s
pod/test-container-d77d75d78-cgjhc   1/1     Running   0          19s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/postgres-so-test   ClusterIP   10.43.242.51   <none>        5432/TCP   30s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-container   1/1     1            1           19s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/test-container-d77d75d78   1         1         1      19s

NAME                                READY   AGE
statefulset.apps/postgres-so-test   1/1     27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>>, ideally after you've tried to connect to it
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also, research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with master + replicas + LB).

How can you connect to a database inside a k8 cluster that's behind a headless service?

Given a database that is part of a StatefulSet and behind a headless service, how can I use a local client (outside of the cluster) to access the database? Is it possible to create a separate Service that targets a specific pod by its stable ID?
There are multiple ways you can connect to this database service.
You can use:
Port-forward: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Service as LoadBalancer: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Service as NodePort: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Example MySQL database running on K8s: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The easiest way is to try port-forwarding:
kubectl port-forward -n <NAMESPACE Name> <POD name> 3306:3306
Using the above command you can create a proxy from your local machine to the K8s cluster and test against localhost:3306.
This is not a method for production use cases; it can be used for debugging.
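The same works against the service instead of a single pod (kubectl resolves the service to one backing pod behind the scenes):
kubectl port-forward -n <NAMESPACE Name> svc/mysql 3306:3306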
NodePort: exposes the port on the worker nodes' IPs, so if a worker node gets replaced (e.g. during autoscaling), the IP may change over time.
I would recommend creating a new service with the respective labels and type LoadBalancer, as sketched below.
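A sketch of such a service for the mysql example above (the name mysql-external is illustrative; on bare metal you would also need something like MetalLB for an external IP to actually be provisioned):
apiVersion: v1
kind: Service
metadata:
  name: mysql-external
spec:
  type: LoadBalancer
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306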

Trouble connecting to postgres from outside Kubernetes cluster

I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.
Update
It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and VM and starting from scratch. Now I've got it working. I changed the deployment to a statefulset, though I think it could work either way.
Setup and test:
kubectl --context=minikube create -f postgres-statefulset.yaml
kubectl --context=minikube create -f postgres-service.yaml
url=$(minikube service postgres --url --format={{.IP}}:{{.Port}})
psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \
--command='SELECT refobjid FROM pg_depend LIMIT 1'
Password for user postgres:
refobjid
----------
1247
postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
      protocol: TCP
postgres-statefulset.yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      role: service
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
        role: service
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
            - name: POSTGRES_DB
              value: postgres
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
Original question
I created a deployment running one container (postgres-container) and a NodePort (postgres-service). I can connect to postgresql from within the pod itself:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- psql --port=5432 --username=postgres --dbname=postgres
But I can't connect through the service.
$ minikube service --url postgres-service
http://192.168.99.100:32254
$ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres
psql: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting
TCP/IP connections on port 32254?
I think postgres is correctly configured to accept remote TCP connections:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- tail /var/lib/postgresql/data/pg_hba.conf
host all all 127.0.0.1/32 trust
...
host all all all md5
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- grep listen_addresses /var/lib/postgresql/data/postgresql.conf
listen_addresses = '*'
My service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
  type: NodePort
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
And the deployment is:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
            - name: POSTGRES_DB
              value: postgres
          ports:
            - containerPort: 5432
The resulting service configuration:
$ kubectl --context=minikube get service postgres-service -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-07T05:29:22Z
  name: postgres-service
  namespace: default
  resourceVersion: "194827"
  selfLink: /api/v1/namespaces/default/services/postgres-service
  uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02
spec:
  clusterIP: 10.109.120.251
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32254
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: postgres-container
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?
I just deployed postgres and exposed its service through NodePort; the following are my pod and service.
[root@master postgres]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
postgres-7ff9df5765-2mpsl   1/1     Running   0          1m
[root@master postgres]# kubectl get svc postgres
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
postgres   NodePort   10.100.199.212   <none>        5432:31768/TCP   20s
And this is how I connected to postgres through the NodePort:
[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.
postgresdb=#
In the above, 10.6.35.83 is my node/host IP (not the pod IP or cluster IP) and the port is the NodePort defined in the service. The issue is that you're not using the right IP to connect to PostgreSQL.
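On minikube, for example, the equivalent connection would look something like this (a sketch; substitute the NodePort your service was assigned):
psql -h $(minikube ip) -p <node-port> -U postgres postgres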
I had this challenge when working with a PostgreSQL database server in Kubernetes using Minikube.
Below is my statefulset yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  serviceName: postgresql-db-service
  replicas: 2
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
        - name: postgresql-db
          image: postgres:latest
          ports:
            - containerPort: 5432
              name: postgresql-db
          volumeMounts:
            - name: postgresql-db-data
              mountPath: /data
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-db-secret
                  key: DATABASE_PASSWORD
            - name: PGDATA
              valueFrom:
                configMapKeyRef:
                  name: postgresql-db-configmap
                  key: PGDATA
  volumeClaimTemplates:
    - metadata:
        name: postgresql-db-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 25Gi
To access your PostgreSQL database server from outside your cluster, simply run the command below in a separate terminal:
minikube service --url your-postgresql-db-service
In my case my PostgreSQL db service was postgresql-db-service:
minikube service --url postgresql-db-service
After you run the command you will get an IP address and a port to access your database. In my case it was:
http://127.0.0.1:61427
So you can access the database at that IP address and port with your defined database username and password.
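For example, with psql (a sketch; the superuser defaults to postgres unless POSTGRES_USER was set in your configuration):
psql -h 127.0.0.1 -p 61427 -U postgres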

Cannot connect to a mongodb service in a Kubernetes cluster

I have a Kubernetes cluster on Google Cloud with a database service running in front of a mongodb deployment. I also have a series of microservices attempting to connect to that datastore.
However, they can't seem to find the host.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
Here's my mongo deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo:latest
          name: mongo
          ports:
            - name: mongo
              containerPort: 27017
              hostPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk
            fsType: ext4
And an example of one of my services...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bandzest-artists
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bandzest-artists
    spec:
      containers:
        - name: artists-container
          image: gcr.io/<omitted>/artists:41040e8
          ports:
            - containerPort: 7000
          imagePullPolicy: Always
          env:
            - name: DB_HOST
              value: mongo
            - name: AWS_BUCKET_NAME
              value: <omitted>
            - name: AWS_ACCESS_KEY_ID
              value: <omitted>
            - name: AWS_SECRET_KEY
              value: <omitted>
First, check that the service is created:
kubectl describe svc mongo
You should see that it is both created and routing to your pod's IP. If you're wondering what your pod's IP is, you can check it via:
kubectl get po | grep mongo
which should return something like mongo-deployment-<guid>-<guid>; then do:
kubectl describe po mongo-deployment-<guid>-<guid>
You should make sure the pod started correctly and says Running, not something like ImagePullBackOff. It looks like you're mounting a volume from a gcePersistentDisk. If you see your pod just hanging in the ContainerCreating state, it's very likely you're not mounting the disk correctly. Make sure you create the disk before you try to mount it as a volume.
If it looks like your service is routing correctly, then you can check the logs of your pod to make sure it started mongo correctly:
kubectl logs mongo-deployment-<guid>-<guid>
If the pod and logs look correct, you can exec into the pod and make sure mongo is actually starting and working:
kubectl exec -it mongo-deployment-<guid>-<guid> -- sh
This should get you into the container (Pod), and then you can try something like the following to see if your DB is running.
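For example (a sketch; older mongo images ship the mongo shell, newer ones use mongosh):
# From inside the container: ping the server over the default port
mongo --eval 'db.runCommand({ ping: 1 })'
A response containing "ok" : 1 means mongod is up and accepting connections on 27017.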