I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.
Update
It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and VM and starting from scratch. Now I've got it working. I changed the deployment to a StatefulSet, though I think it could work either way.
Setup and test:
kubectl --context=minikube create -f postgres-statefulset.yaml
kubectl --context=minikube create -f postgres-service.yaml
url=$(minikube service postgres --url --format={{.IP}}:{{.Port}})
psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \
--command='SELECT refobjid FROM pg_depend LIMIT 1'
Password for user postgres:
refobjid
----------
1247
postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  selector:
    app: postgres
  type: NodePort
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
    protocol: TCP
postgres-statefulset.yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
  labels:
    app: postgres
    role: service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      role: service
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
        role: service
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: postgres
        ports:
        - containerPort: 5432
          name: postgres
          protocol: TCP
Original question
I created a deployment running one container (postgres-container) and a NodePort (postgres-service). I can connect to postgresql from within the pod itself:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- psql --port=5432 --username=postgres --dbname=postgres
But I can't connect through the service.
$ minikube service --url postgres-service
http://192.168.99.100:32254
$ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres
psql: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting
TCP/IP connections on port 32254?
I think postgres is correctly configured to accept remote TCP connections:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- tail /var/lib/postgresql/data/pg_hba.conf
host all all 127.0.0.1/32 trust
...
host all all all md5
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- grep listen_addresses /var/lib/postgresql/data/postgresql.conf
listen_addresses = '*'
My service definition looks like:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres-container
  type: NodePort
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
And the deployment is:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-container
  template:
    metadata:
      labels:
        app: postgres-container
    spec:
      containers:
      - name: postgres-container
        image: postgres:9.6
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: postgres
        ports:
        - containerPort: 5432
The resulting service configuration:
$ kubectl --context=minikube get service postgres-service -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-07T05:29:22Z
  name: postgres-service
  namespace: default
  resourceVersion: "194827"
  selfLink: /api/v1/namespaces/default/services/postgres-service
  uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02
spec:
  clusterIP: 10.109.120.251
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32254
    port: 5432
    protocol: TCP
    targetPort: 5432
  selector:
    app: postgres-container
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?
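For reference, the port-forward that does work looks roughly like this (a sketch, using the pod name from the exec commands above):

# in one terminal: forward local port 5432 to the pod
kubectl --context=minikube port-forward postgres-deployment-7fbf655986-r49s2 5432:5432
# in another terminal: connect via localhost
psql --host=localhost --port=5432 --username=postgres --dbname=postgres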
I just deployed postgres and exposed its service through NodePort, and the following is my pod and service.
[root@master postgres]# kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-7ff9df5765-2mpsl 1/1 Running 0 1m
[root@master postgres]# kubectl get svc postgres
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgres NodePort 10.100.199.212 <none> 5432:31768/TCP 20s
And this is how I connected to postgres through the NodePort:
[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.
postgresdb=#
In the above, 10.6.35.83 is my node/host IP (not the pod IP or clusterIP) and the port is the NodePort defined in the service. The issue is that you're not using the right IP to connect to PostgreSQL.
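For the minikube setup in the question, the node IP is what minikube ip prints, so a connection attempt would look roughly like this (a sketch reusing the NodePort 32254 from the question):

# node IP from minikube, NodePort taken from the service
psql --host=$(minikube ip) --port=32254 --username=postgres --dbname=postgres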
I had this challenge when working with a PostgreSQL database server in Kubernetes using Minikube.
Below is my StatefulSet YAML file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  serviceName: postgresql-db-service
  replicas: 2
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
      - name: postgresql-db
        image: postgres:latest
        ports:
        - containerPort: 5432
          name: postgresql-db
        volumeMounts:
        - name: postgresql-db-data
          mountPath: /data
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgresql-db-secret
              key: DATABASE_PASSWORD
        - name: PGDATA
          valueFrom:
            configMapKeyRef:
              name: postgresql-db-configmap
              key: PGDATA
  volumeClaimTemplates:
  - metadata:
      name: postgresql-db-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 25Gi
To access your PostgreSQL database server from outside your cluster, simply run the command below in a separate terminal:
minikube service --url your-postgresql-db-service
In my case my PostgreSQL db service was postgresql-db-service:
minikube service --url postgresql-db-service
After you run the command you will get an IP address and a port to access your database. In my case it was:
http://127.0.0.1:61427
So you can access the database on the IP address and port with your defined database username and password.
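For example, with the URL above a psql connection would look something like this (a sketch; postgres is assumed as the username and database here, since the StatefulSet above only sets the password):

# host and port taken from the minikube service --url output
psql --host=127.0.0.1 --port=61427 --username=postgres --dbname=postgres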
Related
I have a deployment and a service YAML file. I use minikube to run Kubernetes in my WSL.
postgres-deployment.yaml:
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-db
spec:
  replicas: 1
  serviceName: postgresql-db-service
  selector:
    matchLabels:
      app: postgresql-db
  template:
    metadata:
      labels:
        app: postgresql-db
    spec:
      containers:
      - name: postgresql-db
        image: postgres:latest
        volumeMounts:
        - name: postgresql-db-disk
          mountPath: /data
        env:
        - name: POSTGRES_PASSWORD
          value: testpassword
        - name: PGDATA
          value: /data/pgdata
  # Volume Claim
  volumeClaimTemplates:
  - metadata:
      name: postgresql-db-disk
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 25Gi
postgres-service.yaml:
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-lb
spec:
  selector:
    app: postgresql-db
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
I run them with:
# kubectl apply -f postgres-deployment.yaml
# kubectl apply -f postgres-service.yaml
The deployment works, I get the Cluster IP of the service with kubectl get all.
I run the pgAdmin with the command:
docker run -p 80:80 \
    -e 'PGADMIN_DEFAULT_EMAIL=user@domain.com' \
    -e 'PGADMIN_DEFAULT_PASSWORD=SuperSecret' \
    -d dpage/pgadmin4
I try to connect to the postgres but I am unable to connect.
EDIT:
I changed the user for the connection to postgres; it still doesn't work.
I tried changing the LoadBalancer to ClusterIP and NodePort; that doesn't work either.
I tried changing my OS to Ubuntu, in case of some weird WSL issue; that doesn't work either.
To access the Postgres locally, I have to use NodePort.
We need to find the node's internal IP and the service's NodePort.
To find the node internal IP, do:
$ kubectl get nodes -o wide
For the port we can do kubectl describe svc postgres-db-lb or kubectl get svc.
In pgAdmin the hostname should be <node-ip>:<node-port>.
We can also do minikube service postgres-db-lb to find the url.
EDIT
Or more simply minikube service <NAME_OF_SERVICE>.
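Putting it together, the lookup sequence is roughly this (a sketch using the service name from the question; the placeholders match the INTERNAL-IP column and the NodePort shown by these commands):

kubectl get nodes -o wide            # INTERNAL-IP column -> <node-ip>
kubectl get svc postgres-db-lb       # 5432:<node-port>/TCP -> <node-port>
psql -h <node-ip> -p <node-port> -U postgres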
I was given the task of connecting a MySQL pod with any other working pod (preferably busybox), but I was not able to do that. Is there a way to do this? I have looked in many places, but the explanations were a bit complicated, as I am new to Kubernetes.
MySQL YAML config for Kubernetes
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the busybox container:
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command will start a busybox container.
Run kubectl get pods to check the status of both pods.
In the busybox container you will be able to run the command to connect to MySQL:
mysql -h <MySQL service name> -u <Username> -p<password>
Ref docs:
MySQL: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
BusyBox: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
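Note that the stock busybox image does not include a mysql client binary, so one workaround (a sketch, not from the original answer) is to run a throwaway pod from the mysql image itself as the client:

# uses the service name (mysql) and the root password from the manifest above
kubectl run -it --rm mysql-client --image=mysql:5.6 --restart=Never -- \
  mysql -h mysql -uroot -ppassword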
I have created a Kubernetes cluster on 2 Raspberry Pis (Model 3 and 3B+) to use as a Kubernetes playground.
I have deployed PostgreSQL and a Spring Boot app (called meal-planer) to play around with.
The meal-planer should read data from and write data to the PostgreSQL database.
However, the app can't reach the database.
Here is the deployment-descriptor of the postgresql:
kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: home
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
    name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: postgres
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13.2
        imagePullPolicy: IfNotPresent
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: dev-db-secret
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: dev-db-secret
              key: password
        - name: POSTGRES_DB
          value: home
        ports:
        - containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
      volumes:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: postgres-pv-claim
---
Here is the deployment-descriptor of the meal-planer:
kind: Service
apiVersion: v1
metadata:
  name: meal-planner
  namespace: home
  labels:
    app: meal-planner
spec:
  type: ClusterIP
  selector:
    app: meal-planner
  ports:
  - port: 8080
    name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: meal-planner
  namespace: home
spec:
  replicas: 1
  selector:
    matchLabels:
      app: meal-planner
  template:
    metadata:
      labels:
        app: meal-planner
    spec:
      containers:
      - name: meal-planner
        image: 08021986/meal-planner:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
The meal-planer image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planer uses the connection-string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into it. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know, why the network is unreachable and I don't know what I can do about it.
First of all, don't use Deployment workloads for applications that require saving the state. This could get you into some trouble and even data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful
applications.
Manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a hostpath StorageClass with ReadWriteOnce.
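As a sketch, that means a volumeClaimTemplate pointing at a hostpath-style StorageClass; the fragment below belongs inside a StatefulSet spec, and the class name local-path is only an assumption that depends on what the cluster provides:

  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      storageClassName: local-path  # assumed hostpath-backed class; adjust to your cluster
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi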
Now regarding your issue, my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is refused by the definitions in pg_hba.conf.
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-so-test
  namespace: test
  labels:
    app: postgres-so-test
spec:
  selector:
    app: postgres-so-test
  ports:
  - port: 5432
    targetPort: 5432
    name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  namespace: test
  name: postgres-so-test
spec:
  replicas: 1
  serviceName: postgres-so-test
  selector:
    matchLabels:
      app: postgres-so-test
  template:
    metadata:
      labels:
        app: postgres-so-test
    spec:
      containers:
      - name: postgres
        image: postgres:13.2
        imagePullPolicy: IfNotPresent
        env:
        - name: POSTGRES_USER
          value: johndoe
        - name: POSTGRES_PASSWORD
          value: thisisntthepasswordyourelokingfor
        - name: POSTGRES_DB
          value: home
        ports:
        - containerPort: 5432
Now let's test this. NOTE: I'll also create a deployment from the Postgres image, just to have a pod in this namespace that has the pg_isready binary, in order to test the connection to the created DB.
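The test_container.yml referenced below isn't shown; something along these lines would do (a sketch, not the exact file: just a pod built from the Postgres image, kept alive so it can be exec'd into):

kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: test
  name: test-container
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-container
  template:
    metadata:
      labels:
        app: test-container
    spec:
      containers:
      - name: test-container
        image: postgres:13.2
        imagePullPolicy: IfNotPresent
        # override the entrypoint so the pod just idles and provides pg_isready/psql
        command: ["sleep", "infinity"]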
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
postgres-so-test-0 1/1 Running 0 19s
test-container-d77d75d78-cgjhc 1/1 Running 0 12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME READY STATUS RESTARTS AGE
pod/postgres-so-test-0 1/1 Running 0 26s
pod/test-container-d77d75d78-cgjhc 1/1 Running 0 19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-so-test ClusterIP 10.43.242.51 <none> 5432/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/test-container 1/1 1 1 19s
NAME DESIRED CURRENT READY AGE
replicaset.apps/test-container-d77d75d78 1 1 1 19s
NAME READY AGE
statefulset.apps/postgres-so-test 1/1 27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>> (ideally after you've tried to connect to it)
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also, research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with master + replicas + LB).
I am playing with Kubernetes. I have created a StatefulSet running postgresql. I have created a service with ClusterIP: None. I have launched a pod with pgadmin4. I can get to pgadmin from my browser. When I try to get to my pgsql server from pgadmin, it tells me that either the IP or the port is not accessible. The error message displays the IP address, so I know that it is resolving the right pod name.
This is MicroK8s on Ubuntu.
Here are my configs.
--- pomodoro-pgsql StatefulSet ---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pomodoro-pgsql
  namespace: pomodoro-services
spec:
  selector:
    matchLabels:
      app: pomodoro-pgsql
      env: development
  serviceName: pomodoro-pgsql
  replicas: 1
  template:
    metadata:
      labels:
        app: pomodoro-pgsql
        env: development
    spec:
      containers:
      - name: pomodoro-pgsql
        image: localhost:32000/pomodoro-pgsql
        env:
        - name: POSTGRES_PASSWORD
          value: blahblah
        - name: POSTGRES_USER
          value: blahblah
        - name: POSTGRES_DB
          value: blahblah
        ports:
        - name: pgsql
          containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      resources:
        requests:
          storage: 1Gi
      accessModes:
      - ReadWriteOnce
--- pomodoro-pgsql Headless Service ---
apiVersion: v1
kind: Service
metadata:
  name: pomodoro-pgsql
  namespace: pomodoro-services
spec:
  clusterIP: None
  selector:
    app: pomodoro-pgsql
    env: development
  ports:
  - name: pgsql
    port: 5432
--- pgadmin4 Pod ---
apiVersion: v1
kind: Pod
metadata:
  name: pomodoro-pgadmin
  namespace: pomodoro-services
  labels:
    env: development
spec:
  containers:
  - name: pomodoro-pgadmin
    image: localhost:32000/pomodoro-pgadmin
    env:
    - name: PGADMIN_DEFAULT_EMAIL
      value: blahblah
    - name: PGADMIN_DEFAULT_PASSWORD
      value: blahblah
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
--- pgadmin4 Service ---
apiVersion: v1
kind: Service
metadata:
  name: pomodoro-pgadmin
  namespace: pomodoro-services
spec:
  type: NodePort
  ports:
  - port: 5002
    targetPort: 80
  selector:
    app: pomodoro-pgadmin
    env: development
I am able to see the IP addresses through dig:
microk8s.kubectl run `
--namespace pomodoro-services `
-it srvlookup `
--image=tutum/dnsutils --rm `
--restart=Never `
-- dig SRV pomodoro-pgsql.pomodoro-services.svc.cluster.local
Here is the error from pgadmin. Note that the IP is correct for the pod.
Unable to connect to server:
could not connect to server: Operation timed out
Is the server running on host "pomodoro-pgsql-0.pomodoro-pgsql.pomodoro-
services.svc.cluster.local" (10.10.10.219) and accepting
TCP/IP connections on port 5432?
Here are the logs from the pgsql pod
2019-01-02 04:23:05.576 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2019-01-02 04:23:05.576 UTC [1] LOG: listening on IPv6 address "::", port 5432
2019-01-02 04:23:05.905 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-01-02 04:23:06.430 UTC [21] LOG: database system was shut down at 2019-01-01 20:01:36 UTC
2019-01-02 04:23:06.630 UTC [1] LOG: database system is ready to accept connections
As requested, here are the results from kubectl get services (IPs have been changed.)
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pomodoro-pgadmin NodePort 10.1.18.45 <none> 5002:30437/TCP 12h
pomodoro-pgsql ClusterIP None <none> 5432/TCP 46h
pomodoro-ping-rapi ClusterIP 10.1.18.36 <none> 8888/TCP 47h
[update 1/2/2019] I connected to another container in the cluster and tried to telnet, and then to psql into postgres. I could not connect with either program. I could run psql on the container running the postgresql server. My current theory is that the server has exposed 5432 locally, but it is filtered from other pods.
I have confirmed that /var/lib/postgresql/data/postgresql.conf contains the following:
listen_addresses = '*'
Using microk8s.kubectl port-forward pomodoro-pgsql-0 5432:5432, I was able to connect to 5432 through telnet.
_> telnet localhost 5432
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
[update 1/2/2019]
Results of kubectl exec pomodoro-pgsql-0 -- nslookup pomodoro-pgsql:
nslookup: can't resolve '(null)': Name does not resolve
nslookup: can't resolve 'pomodoro-pgsql': Try again
command terminated with exit code 1
Results of kubectl exec pomodoro-pgsql-0 -- nslookup pomodoro-pgsql-0:
Name: pomodoro-pgsql-0
Address 1: 10.1.1.19 pomodoro-pgsql-0.pomodoro-pgsql.pomodoro-services.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Note: IPs change when computer is restarted.
The problem was a firewall rule for the computer running microk8s. I found out that this is documented on their web page where the docs tell us to do this:
sudo iptables -P FORWARD ACCEPT
sudo apt-get install iptables-persistent
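A quick way to confirm the policy change took effect (just a sanity check, not part of the microk8s docs) is to look at the FORWARD chain header:

sudo iptables -L FORWARD -n | head -1
# expected: Chain FORWARD (policy ACCEPT)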
I have a postgres pod running locally on a CoreOS VM.
I am able to access postgres using the IP of the minion it is on, but I'm attempting to set it up in such a manner that I don't have to know exactly which minion the pod is on and can still use postgres.
Here is my pod
apiVersion: v1
kind: Pod
metadata:
  name: postgresql
  labels:
    role: postgres-client
spec:
  containers:
  - image: postgres:latest
    name: postgres
    ports:
    - containerPort: 5432
      hostPort: 5432
      name: pg-port
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: nfs.server
      path: /
and here is a service I tried to set up, but it doesn't seem correct:
apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    app: postgres-client
I'm guessing that the selector for your service is not finding any matching backends.
Try changing
app: postgres-client
to
role: postgres-client
in the service definition (or vice versa in the pod definition above).
The label selector has to match both the key and value (i.e. role and postgres-client). See the Labels doc for more details.
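Concretely, a corrected service keeping the pod's role label as the selector key would look like this:

apiVersion: v1
kind: Service
metadata:
  name: postgres-client
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    role: postgres-client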