Can somebody please help me? This is my first post here, and I am really excited to start posting and helping people, but first I need some help myself.
I am deploying my own Postgres database on Minikube. For the database name, username, and password I am using a Secret.
The data is base64-encoded:
POSTGRES_USER = website_user
POSTGRES_DB = website
POSTGRES_PASSWORD = pass
I also exec into container to see if I could see these envs and they were there.
The problem appears when I try to connect to Postgres with psql. I checked the Minikube IP and typed the correct password (pass) after this command:
psql -h 192.168.99.100 -U website_user -p 31315 website
Error
Password for user website_user:
psql: FATAL: password authentication failed for user "website_user"
Also if I exec into my pod:
kubectl exec -it postgres-deployment-744fcdd5f5-7f7vx bash
and try to connect to Postgres, I get:
psql -h $(hostname -i) -U website_user -p 5432 website
Error:
Password for user website_user:
psql: FATAL: password authentication failed for user "website_user"
I must be missing something here. I also tried ps aux in the container, and everything seems to be fine; the postgres processes are running.
kubectl get all
Output:
NAME READY STATUS RESTARTS AGE
pod/postgres-deployment-744fcdd5f5-7f7vx 1/1 Running 0 18m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19m
service/postgres-service NodePort 10.109.235.114 <none> 5432:31315/TCP 18m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/postgres-deployment 1/1 1 1 18m
NAME DESIRED CURRENT READY AGE
replicaset.apps/postgres-deployment-744fcdd5f5 1 1 1 18m
# Secret store
apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
POSTGRES_USER: d2Vic2l0ZV91c2VyCg==
POSTGRES_PASSWORD: cGFzcwo=
POSTGRES_DB: d2Vic2l0ZQo=
---
# Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /data/postgres-pv
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
labels:
type: local
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
volumeName: postgres-pv
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
selector:
matchLabels:
app: postgres-container
template:
metadata:
labels:
app: postgres-container
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: POSTGRES_USER
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: postgres-credentials
key: POSTGRES_DB
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: POSTGRES_PASSWORD
ports:
- containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-volume-mount
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
ports:
- port: 5432
protocol: TCP
targetPort: 5432
type: NodePort
You created all your values with:
$ echo "value" | base64
whereas you should have used:
$ echo -n "value" | base64
From the official man page of echo:
Description
Echo the STRING(s) to standard output.
-n = do not output the trailing newline
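As an alternative to remembering -n, printf never appends a newline, so (a small sketch) the same encoding can be produced with:
$ printf '%s' "pass" | base64
cGFzcw==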
TL;DR: You need to edit your Secret definition with new values:
$ echo -n "website_user" | base64
$ echo -n "website" | base64
$ echo -n "pass" | base64
You created your Secret with a trailing newline. Please take a look at the example below:
POSTGRES_USER:
$ echo "website_user" | base64
output: d2Vic2l0ZV91c2VyCg== which is the same as yours
$ echo -n "website_user" | base64
output: d2Vic2l0ZV91c2Vy which is the correct value
POSTGRES_PASSWORD:
$ echo "pass" | base64
output: cGFzcwo= which is the same as yours
$ echo -n "pass" | base64
output: cGFzcw== which is the correct value
POSTGRES_DB:
$ echo "website" | base64
output: d2Vic2l0ZQo= which is the same as yours
$ echo -n "website" | base64
output: d2Vic2l0ZQ== which is the correct value
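If you want to double-check what is currently stored in the cluster, you can decode the Secret and dump it byte by byte; the stray trailing newline will show up in the dump (a sketch, using the Secret and key names from your manifest):
$ kubectl get secret postgres-credentials -o jsonpath='{.data.POSTGRES_PASSWORD}' | base64 --decode | od -c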
Your Secret should look like this:
apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
POSTGRES_USER: d2Vic2l0ZV91c2Vy
POSTGRES_PASSWORD: cGFzcw==
POSTGRES_DB: d2Vic2l0ZQ==
If you recreate the Secret with these values, you should be able to connect to the database:
root@postgres-deployment-64d697868c-njl7q:/# psql -h $(hostname -i) -U website_user -p 5432 website
Password for user website_user:
psql (9.6.6)
Type "help" for help.
website=#
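As a side note, you can avoid hand-encoding (and the trailing-newline pitfall) entirely by letting kubectl build the Secret for you; a sketch, after deleting the old Secret:
$ kubectl delete secret postgres-credentials
$ kubectl create secret generic postgres-credentials \
    --from-literal=POSTGRES_USER=website_user \
    --from-literal=POSTGRES_DB=website \
    --from-literal=POSTGRES_PASSWORD=pass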
Please take a look at these additional links:
Github.com: Kubernetes: issues: Config map vs secret to store credentials for Postgres deployment
Kubernetes.io: Secrets
Related
I'm trying to verify that my postgres pod is accessible via the service that I've just set up. As of now, I cannot verify this. What I'm able to do is to log into the container running postgres itself, and attempt to talk to the postgres server via the IP of the service. This does not succeed. However, I'm unsure if this is a valid test of whether other pods in the cluster could talk to postgres via the service or if there is a problem with how I'm doing the test, or if there is a fundamental problem in my service or pod configurations.
I'm doing this all on a minikube cluster.
Setup the pod and service:
$> kubectl create -f postgres-pod.yml
$> kubectl create -f postgres-service.yml
postgres-pod.yml
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
env: prod
creation_method: manual
domain: infrastructure
spec:
containers:
- image: postgres:13-alpine
name: kubia-postgres
ports:
- containerPort: 5432
protocol: TCP
env:
- name: POSTGRES_PASSWORD
value: dave
- name: POSTGRES_USER
value: dave
- name: POSTGRES_DB
value: tmp
# TODO:
# volumes:
# - name: postgres-db-volume
postgres-service.yml
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
ports:
- port: 5432
targetPort: 5432
selector:
name: postgres
Check that the service is up with kubectl get services:
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d
postgres-service ClusterIP 10.110.159.21 <none> 5432/TCP 71m
Then, log in to the postgres container:
$> kubectl exec --stdin --tty postgres -- /bin/bash
From there, attempt to hit the service's IP:
bash-5.1# psql -U dave -h 10.110.159.21 -p 5432 tmp
psql: error: could not connect to server: Connection refused
Is the server running on host "10.110.159.21" and accepting
TCP/IP connections on port 5432?
So using this approach I am not able to connect to the postgres server using the IP of the service.
I'm unsure of several steps in this process:
Is the selector block (selecting by name) in the service configuration YAML correct?
Can you access the IP of a service from pods that are "behind" the service?
Is this, in fact, a valid way to verify that the DB server is accessible via the service, or is there some other way?
Hello, hope you are enjoying your Kubernetes journey!
I wanted to try this on my kind (Kubernetes in Docker) cluster locally. So this is what I've done:
First, I set up a kind cluster locally with this configuration (info here: https://kind.sigs.k8s.io/docs/user/quick-start/):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: so-cluster-1
nodes:
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: control-plane
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
- role: worker
image: kindest/node:v1.23.5
After this, I created my cluster with this command:
kind create cluster --config=config.yaml
Next, I created a test namespace (manifest obtained with: kubectl create ns so-tests -o yaml --dry-run):
apiVersion: v1
kind: Namespace
metadata:
name: so-tests
From there, with my environment set up, I had to deploy a Postgres on it, but here is what I've changed:
1- Instead of creating a singleton pod, I created a StatefulSet (whose purpose is to run stateful workloads such as databases).
2- I decided to keep using your Docker image "postgres:13-alpine" and added a security context to run as the native postgres user (not dave, nor root). To find out the id of the postgres user, I first deployed the StatefulSet without the security context and executed these commands:
❯ k exec -it postgres-0 -- bash
bash-5.1# whoami
root
bash-5.1# id
uid=0(root) gid=0(root) groups=1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
bash-5.1# id postgres
uid=70(postgres) gid=70(postgres) groups=70(postgres),70(postgres)
bash-5.1# exit
Once I knew that the id of the postgres user was 70, I just added this to the StatefulSet manifest:
securityContext:
runAsUser: 70
fsGroup: 70
3- Instead of adding the configuration and secrets as environment variables directly in the pod spec of the StatefulSet, I decided to create a Secret and a ConfigMap:
First, let's create a Kubernetes Secret with your password in it. Here is the manifest (obtained from this command: "k create secret generic --from-literal password=dave postgres-secret -o yaml --dry-run=client"):
apiVersion: v1
data:
password: ZGF2ZQ==
kind: Secret
metadata:
name: postgres-secret
After this I created a ConfigMap to store our Postgres config. Here is the manifest (obtained by running: kubectl create configmap postgres-config --from-literal user=dave --from-literal db=tmp --dry-run=client -o yaml):
apiVersion: v1
data:
db: tmp
user: dave
kind: ConfigMap
metadata:
name: postgres-config
Since this is just for testing purposes, I didn't set up dynamic volume provisioning for the StatefulSet, nor a pre-provisioned volume. Instead I configured a simple emptyDir to store the Postgres data (/var/lib/postgresql/data).
N.B.: By default, emptyDir volumes are stored on whatever medium is backing the node - that might be disk or SSD or network storage, depending on your environment. However, you can set the emptyDir.medium field to "Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem) for you instead. (this came from here Create a new volume when pod restart in a statefulset)
Since it is a statefulset, it has to be exposed by a headless kubernetes service (https://kubernetes.io/fr/docs/concepts/services-networking/service/#headless-services)
Here are the manifests:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 2
selector:
matchLabels:
env: prod
domain: infrastructure
template:
metadata:
labels:
env: prod
domain: infrastructure
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
labels:
env: prod
domain: infrastructure
spec:
ports:
- port: 5432
protocol: TCP
targetPort: 5432
name: pgsql
clusterIP: None
selector:
env: prod
domain: infrastructure
---
apiVersion: v1
data:
password: ZGF2ZQ==
kind: Secret
metadata:
name: postgres-secret
---
apiVersion: v1
data:
db: tmp
user: dave
kind: ConfigMap
metadata:
name: postgres-config
---
I deployed this using:
kubectl apply -f postgres.yaml
I exec'd into the postgres-0 pod and connected to my db with the $POSTGRES_USER and $POSTGRES_PASSWORD credentials:
❯ k exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
I listed the databases:
tmp=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-------+----------+------------+------------+-------------------
postgres | dave | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
template1 | dave | UTF8 | en_US.utf8 | en_US.utf8 | =c/dave +
| | | | | dave=CTc/dave
tmp | dave | UTF8 | en_US.utf8 | en_US.utf8 |
(4 rows)
and I connected to the "tmp" db:
tmp=# \c tmp
Password:
You are now connected to database "tmp" as user "dave".
Successful.
I also tried to connect to the database using the IP, as you did:
bash-5.1$ ip a | grep /24
inet 10.244.4.8/24 brd 10.244.4.255 scope global eth0
bash-5.1$ psql --username=$POSTGRES_USER -W --host=10.244.4.8 --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Successful.
I then downloaded DBeaver (from https://dbeaver.io/download/ ) to test access from outside of my cluster:
with a kubectl port-forward:
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
I created the connection in DBeaver and could easily access the db "tmp" from localhost:5432 with dave:dave credentials.
kubectl port-forward statefulset/postgres 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
perfect.
Same as before (with DBeaver), I tried to connect to the db using a port-forward, this time of the service rather than the pod:
❯ kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
It worked as well!
I also created a standalone pod, based on our config, to access the db that is in another pod (via the service name as hostname). Here is the manifest of the pod:
apiVersion: v1
kind: Pod
metadata:
name: postgres
labels:
app: test
spec:
terminationGracePeriodSeconds: 20
securityContext:
runAsUser: 70
fsGroup: 70
containers:
- name: kubia-postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
- name: POSTGRES_USER
valueFrom:
configMapKeyRef:
name: postgres-config
key: user
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: db
ports:
- containerPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-test-volume
mountPath: /var/lib/postgresql
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
volumes:
- name: postgres-test-volume
emptyDir: {}
Here is the result of the connection from inside the test pod:
bash-5.1$ psql --username=$POSTGRES_USER -W --host=postgres-service --port=5432 --dbname=tmp
Password:
psql (13.6)
Type "help" for help.
tmp=#
Here is how you can access it from outside the pod/namespace (make sure that there are no network rules that block the connection):
StatefulSetName-Ordinal.Service.Namespace.svc.cluster.local
i.e: postgres-0.postgres-service.so-tests.svc.cluster.local
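For example, from another pod in the cluster you could connect with something like this (a sketch; it assumes the headless Service named in the StatefulSet's serviceName field is the one shown above, and that cluster DNS is working):
$ psql --username=$POSTGRES_USER -W --host=postgres-0.postgres-service.so-tests.svc.cluster.local --port=5432 --dbname=tmp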
To access the statefulsets workloads from outside the cluster here is a good start: How to expose a headless service for a StatefulSet externally in Kubernetes
Hope this helps you. Thank you for your question.
Bguess
You cannot, at least with minikube, access the IP of a service from the pod "behind" that service if there is only one (1) replica.
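If you want a quick way to test the Service from a pod that is not behind it, one option (a sketch; the client pod name is arbitrary, and it assumes the Service selector actually matches your Postgres pod's labels) is a throwaway client pod:
$ kubectl run pg-client -it --rm --restart=Never --image=postgres:13-alpine -- psql -h postgres-service -U dave tmp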
I have created a Kubernetes cluster on 2 Raspberry Pis (Model 3 and 3B+) to use as a Kubernetes playground.
I have deployed a PostgreSQL instance and a Spring Boot app (called meal-planner) to play around with.
The meal-planner should read and write data from and to the PostgreSQL database.
However, the app can't reach the database.
Here is the deployment-descriptor of the postgresql:
kind: Service
apiVersion: v1
metadata:
name: postgres
namespace: home
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: postgres
namespace: home
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: dev-db-secret
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dev-db-secret
key: password
- name: POSTGRES_DB
value: home
ports:
- containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pv-claim
---
Here is the deployment descriptor of the meal-planner:
kind: Service
apiVersion: v1
metadata:
name: meal-planner
namespace: home
labels:
app: meal-planner
spec:
type: ClusterIP
selector:
app: meal-planner
ports:
- port: 8080
name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: meal-planner
namespace: home
spec:
replicas: 1
selector:
matchLabels:
app: meal-planner
template:
metadata:
labels:
app: meal-planner
spec:
containers:
- name: meal-planner
image: 08021986/meal-planner:v1
imagePullPolicy: Always
ports:
- containerPort: 8080
---
The meal-planner image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planner uses the connection string jdbc:postgresql://postgres:5432/home to connect to the DB.
I am absolutely sure that the DB credentials are correct, since I can access the DB when I port-forward the service.
When deploying both applications, I can kubectl exec -it <<podname>> -n home -- bin/sh into it. If I call wget -O- postgres or wget -O- postgres.home from there, I always get Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable.
I don't know why the network is unreachable, and I don't know what I can do about it.
First of all, don't use Deployment workloads for applications that require persistent state. This could get you into some trouble and even data loss.
For that purpose, you should use a StatefulSet:
StatefulSet is the workload API object used to manage stateful
applications.
Manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.
Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a hostPath storageClass with ReadWriteOnce.
Now regarding your issue, my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is refused by the definitions in pg_hba.conf.
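Also note that wget -O- postgres talks to port 80 by default (you can see |:80 in the error you pasted), so it doesn't really test Postgres; a more telling check aims at 5432, for example (a sketch; it borrows pg_isready from the same Postgres image and uses the home namespace from your manifests):
$ kubectl run pg-check -n home -it --rm --restart=Never --image=postgres:13.2 -- pg_isready -h postgres -p 5432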
Here is a minimal working example that'll help you get started:
kind: Namespace
apiVersion: v1
metadata:
name: test
labels:
name: test
---
kind: Service
apiVersion: v1
metadata:
name: postgres-so-test
namespace: test
labels:
app: postgres-so-test
spec:
selector:
app: postgres-so-test
ports:
- port: 5432
targetPort: 5432
name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
namespace: test
name: postgres-so-test
spec:
replicas: 1
serviceName: postgres-so-test
selector:
matchLabels:
app: postgres-so-test
template:
metadata:
labels:
app: postgres-so-test
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
value: johndoe
- name: POSTGRES_PASSWORD
value: thisisntthepasswordyourelokingfor
- name: POSTGRES_DB
value: home
ports:
- containerPort: 5432
Now let's test this. NOTE: I'll also create a deployment from the Postgres image just to have a pod in this namespace that has the pg_isready binary, in order to test the connection to the created db.
pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
postgres-so-test-0 1/1 Running 0 19s
test-container-d77d75d78-cgjhc 1/1 Running 0 12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME READY STATUS RESTARTS AGE
pod/postgres-so-test-0 1/1 Running 0 26s
pod/test-container-d77d75d78-cgjhc 1/1 Running 0 19s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-so-test ClusterIP 10.43.242.51 <none> 5432/TCP 30s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/test-container 1/1 1 1 19s
NAME DESIRED CURRENT READY AGE
replicaset.apps/test-container-d77d75d78 1 1 1 19s
NAME READY AGE
statefulset.apps/postgres-so-test 1/1 27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
If you still have trouble connecting to the DB, please attach the following:
kubectl describe pod <<postgres_pod_name>>
kubectl logs <<postgres_pod_name>>, ideally after you've tried to connect to it
kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf
Also research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with master + replicas + LB).
I used this tutorial: https://severalnines.com/blog/using-kubernetes-deploy-postgresql
With my configuration on Kubernetes, which is based on the official Docker image, I keep getting:
psql -h <publicworkernodeip> -U postgres -p <mynodeport> postgres
Password for user postgres: example
psql: FATAL: password authentication failed for user "postgres"
DETAIL: Role "postgres" does not exist.
Connection matched pg_hba.conf line 95: "host all all all md5"
yamls:
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: example
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:11
imagePullPolicy: Always
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
volumes:
- name: postgres
persistentVolumeClaim:
claimName: postgres-pv-claim
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
type: NodePort
ports:
- port: 5432
selector:
app: postgres
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 12Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
labels:
app: postgres
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 12Gi
Try to log in using the command below:
psql -h $(hostname -i) -U postgres
kubectl exec -it postgres-566fbfb87c-rcbvd sh
# env
POSTGRES_PASSWORD=example
POSTGRES_USER=postgres
POSTGRES_DB=postgres
# psql -h $(hostname -i) -U postgres
Password for user postgres:
psql (11.2 (Debian 11.2-1.pgdg90+1))
Type "help" for help.
postgres=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=#
I've launched a postgresql server in minikube, and I'm having difficulty connecting to it from outside the cluster.
Update
It turned out my cluster was suffering from unrelated problems, causing all sorts of broken behavior. I ended up nuking the whole cluster and VM and starting from scratch. Now I've got it working. I changed the deployment to a statefulset, though I think it could work either way.
Setup and test:
kubectl --context=minikube create -f postgres-statefulset.yaml
kubectl --context=minikube create -f postgres-service.yaml
url=$(minikube service postgres --url --format={{.IP}}:{{.Port}})
psql --host=${url%:*} --port=${url#*:} --username=postgres --dbname=postgres \
--command='SELECT refobjid FROM pg_depend LIMIT 1'
Password for user postgres:
refobjid
----------
1247
postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
role: service
spec:
selector:
app: postgres
type: NodePort
ports:
- name: postgres
port: 5432
targetPort: 5432
protocol: TCP
postgres-statefulset.yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: postgres
labels:
app: postgres
role: service
spec:
replicas: 1
selector:
matchLabels:
app: postgres
role: service
serviceName: postgres
template:
metadata:
labels:
app: postgres
role: service
spec:
containers:
- name: postgres
image: postgres:9.6
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: postgres
ports:
- containerPort: 5432
name: postgres
protocol: TCP
Original question
I created a deployment running one container (postgres-container) and a NodePort (postgres-service). I can connect to postgresql from within the pod itself:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- psql --port=5432 --username=postgres --dbname=postgres
But I can't connect through the service.
$ minikube service --url postgres-service
http://192.168.99.100:32254
$ psql --host=192.168.99.100 --port=32254 --username=postgres --dbname=postgres
psql: could not connect to server: Connection refused
Is the server running on host "192.168.99.100" and accepting
TCP/IP connections on port 32254?
I think postgres is correctly configured to accept remote TCP connections:
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- tail /var/lib/postgresql/data/pg_hba.conf
host all all 127.0.0.1/32 trust
...
host all all all md5
$ kubectl --context=minikube exec -it postgres-deployment-7fbf655986-r49s2 \
-- grep listen_addresses /var/lib/postgresql/data/postgresql.conf
listen_addresses = '*'
My service definition looks like:
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
type: NodePort
ports:
- port: 5432
targetPort: 5432
protocol: TCP
And the deployment is:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
app: postgres-container
template:
metadata:
labels:
app: postgres-container
spec:
containers:
- name: postgres-container
image: postgres:9.6
env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: postgres
ports:
- containerPort: 5432
The resulting service configuration:
$ kubectl --context=minikube get service postgres-service -o yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2018-12-07T05:29:22Z
name: postgres-service
namespace: default
resourceVersion: "194827"
selfLink: /api/v1/namespaces/default/services/postgres-service
uid: 0da6bc36-f9e1-11e8-84ea-080027a52f02
spec:
clusterIP: 10.109.120.251
externalTrafficPolicy: Cluster
ports:
- nodePort: 32254
port: 5432
protocol: TCP
targetPort: 5432
selector:
app: postgres-container
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
I can connect if I use port-forward, but I'd like to use the nodePort instead. What am I missing?
I just deployed Postgres and exposed its service through a NodePort; the following are my pod and service.
[root@master postgres]# kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-7ff9df5765-2mpsl 1/1 Running 0 1m
[root@master postgres]# kubectl get svc postgres
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgres NodePort 10.100.199.212 <none> 5432:31768/TCP 20s
And this is how I connected to Postgres through the NodePort:
[root@master postgres]# kubectl exec -it postgres-7ff9df5765-2mpsl -- psql -h 10.6.35.83 -U postgresadmin --password -p 31768 postgresdb
Password for user postgresadmin:
psql (10.4 (Debian 10.4-2.pgdg90+1))
Type "help" for help.
postgresdb=#
In the above, 10.6.35.83 is my node/host IP (not the pod IP or cluster IP) and the port is the NodePort defined in the service. The issue is that you're not using the right IP to connect to PostgreSQL.
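In your setup (a sketch, using your service name postgres-service; on Minikube the node IP is what minikube ip prints), you can look both values up instead of guessing:
$ NODE_IP=$(minikube ip)
$ NODE_PORT=$(kubectl get svc postgres-service -o jsonpath='{.spec.ports[0].nodePort}')
$ psql -h "$NODE_IP" -p "$NODE_PORT" -U postgres postgres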
I had this challenge when working with a PostgreSQL database server in Kubernetes using Minikube.
Below is my statefulset yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql-db
spec:
serviceName: postgresql-db-service
replicas: 2
selector:
matchLabels:
app: postgresql-db
template:
metadata:
labels:
app: postgresql-db
spec:
containers:
- name: postgresql-db
image: postgres:latest
ports:
- containerPort: 5432
name: postgresql-db
volumeMounts:
- name: postgresql-db-data
mountPath: /data
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgresql-db-secret
key: DATABASE_PASSWORD
- name: PGDATA
valueFrom:
configMapKeyRef:
name: postgresql-db-configmap
key: PGDATA
volumeClaimTemplates:
- metadata:
name: postgresql-db-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 25Gi
To access your PostgreSQL database server from outside your cluster, simply run the command below in a separate terminal:
minikube service --url your-postgresql-db-service
In my case my PostgreSQL db service was postgresql-db-service:
minikube service --url postgresql-db-service
After you run the command you will get an IP address and a port to access your database. In my case it was:
http://127.0.0.1:61427
So you can access the database on that IP address and port with your defined database username and password.
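For example (a sketch; the address is whatever minikube service printed, and the username and database are placeholders for your own values):
$ psql -h 127.0.0.1 -p 61427 -U <your-username> -d <your-database>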
I tried to configure mongo with authentication on a kubernetes cluster. I deployed the following yaml:
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongodb
image: mongo:4.0.0
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
# Get password from secret
value: "abc123changeme"
command:
- mongod
- --auth
- --replSet
- rs0
- --bind_ip
- 0.0.0.0
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: mongo-ps
mountPath: /data/db
volumes:
- name: mongo-ps
persistentVolumeClaim:
claimName: mongodb-pvc
When I tried to authenticate with username "admin" and password "abc123changeme" I received "Authentication failed.".
How can I configure mongo admin username and password (I want to get password from secret)?
Thanks
The reason the environment variables don't work is that the MONGO_INITDB environment variables are used by the docker-entrypoint.sh script within the image ( https://github.com/docker-library/mongo/tree/master/4.0 ); however, when you define a 'command:' in your Kubernetes file you override that entrypoint (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ ).
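A minimal sketch of that fix (names and values taken from the question; this is not a full replica-set setup, which also needs the keyfile handling shown in the YML below): keep the entrypoint by putting the extra flags in args instead of command, so docker-entrypoint.sh still creates the root user from MONGO_INITDB_ROOT_USERNAME/PASSWORD and turns authentication on for you.
$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:4.0.0
        # args (not command) keeps docker-entrypoint.sh as the entrypoint,
        # so the MONGO_INITDB_ROOT_* variables are honoured
        args: ["--bind_ip", "0.0.0.0"]
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: "admin"
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: "abc123changeme"   # better: read this from a Secret via secretKeyRef
        ports:
        - containerPort: 27017
EOF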
See the YML below, which is adapted from a few of the examples I found online. Note the learning points for me:
cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the POD labels REGARDLESS of namespace, so it'll try to hook up with any old instance in the cluster. This caused me a few hours of head-scratching, as I'd removed the environment= labels from the example because we use namespaces to segregate our environments... silly and obvious in retrospect, extremely confusing in the beginning (the mongo logs were throwing all sorts of authentication errors and service-down type errors because of the cross-talk).
I was new to ClusterRoleBindings and it took me a while to realise they are cluster level, which I know seems obvious (despite needing to supply a namespace to get kubectl to accept it), but it was causing mine to get overwritten between each namespace. So make sure you create unique names per environment to avoid a deployment in one namespace messing up another, as the ClusterRoleBinding gets overwritten if they're not unique within the cluster.
MONGODB_DATABASE needs to be set to 'admin' for authentication to work.
I was following this example to configure authentication, which depended on a sleep 5 in the hope the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough, so I upped it initially, as failure to create the adminUser obviously led to connection-refused issues. I later changed the sleep to test the daemon with a while loop and a ping of mongo, which is more foolproof.
If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
You need at least 3 nodes in a Mongo cluster!
The YML below should spin up and configure a mongo replicaset in kubernetes with persistent storage and authentication enabled.
If you connect into the pod...
kubectl exec -ti mongo-db-0 --namespace somenamespace /bin/bash
The mongo shell is installed in the image, so you should be able to connect to the replica set with...
mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
And see that you get either rs0:PRIMARY> or rs0:SECONDARY>, indicating the pods are in a mongo replica set. Use rs.conf() to verify that from the PRIMARY.
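For example, a quick way to print each member's state (a sketch, reusing the credentials encoded in the Secret below):
$ kubectl exec -ti mongo-db-0 --namespace somenamespace -- mongo "mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0" --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr); })'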
#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
#echo -n "mongoadmin" | base64
init.userid: bW9uZ29hZG1pbg==
#echo -n "adminpassword" | base64
init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
name: mongo-init-credentials
namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here
apiVersion: v1
data:
#echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
name: mongo-key
namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it Pod List role
# note this is a ClusterROleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongo-serviceaccount
namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: mongo-somenamespace-serviceaccount-view
namespace: somenamespace
subjects:
- kind: ServiceAccount
name: mongo-serviceaccount
namespace: somenamespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-viewer
namespace: somenamespace
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
namespace: somenamespace
name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
namespace: somenamespace
name: mongo-db
labels:
name: mongo-db
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: somenamespace
name: mongo-db
spec:
serviceName: mongo-db
replicas: 3
template:
metadata:
labels:
# Labels MUST match MONGO_SIDECAR_POD_LABELS
# and MUST differentiate between other mongo
# instances in the CLUSTER not just the namespace
# as the sidecar will search the entire cluster
# for something to configure
app: mongo
environment: somenamespace
spec:
#Run the Pod using the service account
serviceAccountName: mongo-serviceaccount
terminationGracePeriodSeconds: 10
#Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongo
topologyKey: "kubernetes.io/hostname"
containers:
- name: mongo
image: mongo:4.0.12
command:
#Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
#in order to pass the new admin user id and password in
- /bin/sh
- -c
- >
if [ -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with runtime settings (clusterAuthMode)"
#ensure wiredTigerCacheSize is set within the size of the containers memory limit
mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
else
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with setup setting (authMode)"
mongod --auth;
fi;
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
if [ ! -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- no Admin-user.lock file found yet"
#replaced simple sleep, with ping and test.
while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
touch /data/db/admin-user.lock
if [ "$HOSTNAME" = "mongo-db-0" ]; then
echo "KUBERNETES LOG $HOSTNAME- creating admin user ${MONGODB_USERNAME}"
mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
fi;
echo "KUBERNETES LOG $HOSTNAME-shutting mongod down for final restart"
mongod --shutdown;
fi;
env:
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
ports:
- containerPort: 27017
livenessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
readinessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
resources:
requests:
memory: "350Mi"
cpu: 0.05
limits:
memory: "1Gi"
cpu: 0.1
volumeMounts:
- name: mongo-key
mountPath: "/etc/secrets-volume"
readOnly: true
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
# Sidecar searches for any POD in the CLUSTER with these labels
# not just the namespace..so we need to ensure the POD is labelled
# to differentiate it from other PODS in different namespaces
- name: MONGO_SIDECAR_POD_LABELS
value: "app=mongo,environment=somenamespace"
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
#don't be fooled by this..it's not your DB that
#needs specifying, it's the admin DB as that
#is what you authenticate against with mongo.
- name: MONGODB_DATABASE
value: admin
volumes:
- name: mongo-key
secret:
defaultMode: 0400
secretName: mongo-key
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
Supposing you created a secret:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
Here is a snippet to get a value from a Secret in a Kubernetes YAML file:
env:
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
I found that this issue is related to a bug in docker-entrypoint.sh that occurs when numactl is detected on the node.
Try this simplified code (which moves numactl out of the way):
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:4.0.0
command:
- /bin/bash
- -c
# mv is not needed for later versions e.g. 3.4.19 and 4.1.7
- mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "xxxxx"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "xxxxx"
ports:
- containerPort: 27017
I raised an issue at:
https://github.com/docker-library/mongo/issues/330
Hopefully it will be fixed at some point so no need for the hack :o)
Adding this resolved the issue for me:
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
value: "true"
Seems like the default is set to "false".
If you are using Kubernetes, you can check the reason for the failure by using the command:
kubectl logs <pod name>
This is what worked for me:
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: my-mongodb-pod
image: mongo:4.4.3
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "someMongoUser"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "somePassword"
- name: MONGO_REPLICA_SET
value: "myReplicaSet"
- name: MONGO_PORT
value: "27017"
# Note, to disable non-auth in mongodb is kind of complicated[4]
# Note, the `_getEnv` function is internal and undocumented[3].
#
# 1. https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
# 2. https://stackoverflow.com/a/54726708/2768067
# 3. https://stackoverflow.com/a/67037065/2768067
# 4. https://www.mongodb.com/features/mongodb-authentication
command:
- /bin/sh
- -c
- >
set -x # print command been ran
set -e # fail if any command fails
env;
ps auxwww;
printf "\n\t mongod:: start in the background \n\n";
mongod \
--port="${MONGO_PORT}" \
--bind_ip_all \
--replSet="${MONGO_REPLICA_SET}" \
--quiet > /tmp/mongo.log.json 2>&1 &
sleep 9;
ps auxwww;
printf "\n\t mongod: set master \n\n";
mongo --port "${MONGO_PORT}" --eval '
rs.initiate({});
sleep(3000);';
printf "\n\t mongod: add user \n\n";
mongo --port "${MONGO_PORT}" --eval '
db.getSiblingDB("admin").createUser({
user: _getEnv("MONGO_INITDB_ROOT_USERNAME"),
pwd: _getEnv("MONGO_INITDB_ROOT_PASSWORD"),
roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
});';
printf "\n\t mongod: shutdown \n\n";
mongod --shutdown;
sleep 3;
ps auxwww;
printf "\n\t mongod: restart with authentication \n\n";
mongod \
--auth \
--port="${MONGO_PORT}" \
--bind_ip_all \
--replSet="${MONGO_REPLICA_SET}" \
--verbose=v