Postgres on Kubernetes password authentication failed - postgresql

I used this tutorial: https://severalnines.com/blog/using-kubernetes-deploy-postgresql
With my configuration on Kubernetes, which is based on the official Docker image, I keep getting:
psql -h <publicworkernodeip> -U postgres -p <mynodeport> postgres
Password for user postgres: example
psql: FATAL: password authentication failed for user "postgres"
DETAIL: Role "postgres" does not exist.
Connection matched pg_hba.conf line 95: "host all all all md5"
The YAML manifests:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: example
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres
      volumes:
        - name: postgres
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 12Gi

Try to log in using the command below (from inside the pod):
psql -h $(hostname -i) -U postgres
kubectl exec -it postgres-566fbfb87c-rcbvd sh
# env
POSTGRES_PASSWORD=example
POSTGRES_USER=postgres
POSTGRES_DB=postgres
# psql -h $(hostname -i) -U postgres
Password for user postgres:
psql (11.2 (Debian 11.2-1.pgdg90+1))
Type "help" for help.
postgres=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=#
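If the login works from inside the pod but the external psql still fails, it may be worth double-checking the Service wiring before retrying the external command. A minimal check, leaving the placeholders from the question as-is:

kubectl get svc postgres            # note the NodePort assigned to port 5432
kubectl get endpoints postgres      # confirm the pod is actually behind the Service
psql -h <publicworkernodeip> -p <mynodeport> -U postgres postgres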


psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "<USERNAME>" does not exist K8S

I am facing this error after using taints and tolerations. I don't understand why. Can someone explain it to me?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-prod-deployment
  namespace: prod
  labels:
    tier: prod
    app: postgresql
spec:
  selector:
    matchLabels:
      tier: prod
      app: postgresql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        tier: prod
        app: postgresql
    spec:
      containers:
        - name: postgres-prod
          image: postgres
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          env:
            # - name: POSTGRES_PASSWORD
            #   valueFrom:
            #     secretKeyRef:
            #       name: postgresql-prod-secret
            #       key: password
            # - name: POSTGRES_USER
            #   valueFrom:
            #     secretKeyRef:
            #       name: postgresql-prod-secret
            #       key: user
            - name: POSTGRES_PASSWORD
              value: aa0074
            - name: POSTGRES_DB
              value: todo_list_penn
            - name: POSTGRES_USER
              value: httpdwgp
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-prod-pv
              mountPath: /var/lib/postgresql/data
      nodeSelector:
        tier: prod
      tolerations:
        - key: "tier"
          operator: "Equal"
          value: "prod"
          effect: "NoSchedule"
      volumes:
        - name: postgres-prod-pv
          persistentVolumeClaim:
            claimName: postgres-prod-claim
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-prod-claim
  namespace: prod
spec:
  resources:
    requests:
      storage: 2Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
Taint
node3=$(kubectl get no -o jsonpath="{.items[3].metadata.name}")
kubectl taint node $node3 tier=prod:NoSchedule
kubectl label node $node3 tier=prod
The error I get with the taint and toleration in place:
root@postgres-prod-deployment-69979b64d6-tcp58:/# psql -U $POSTGRES_USER $POSTGRES_DB
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "httpdwgp" does not exist
When I comment out the nodeSelector and tolerations, everything works fine:
root@postgres-prod-deployment-bc8bcd786-zrdwm:/# psql -U $POSTGRES_USER $POSTGRES_DB
psql (15.0 (Debian 15.0-1.pgdg110+1))
Type "help" for help.
todo_list_penn=#
I've tried everything I can think of. At first I thought the problem was related to the PVC, and I spent a lot of time on it. Later I realized that the problem stems from the taint/toleration setup.

Accessing Postgresql data of Kubernetes cluster

I have a Kubernetes cluster with two replicas of a PostgreSQL database in it, and I want to see the values stored in the database.
When I exec into one of the two postgres pods (kubectl exec --stdin --tty [postgres_pod] -- /bin/bash) and check the database from within, I see only part of the DB. The rest of the data is in the other postgres pod, and I don't see any directory created by the persistent volumes containing the whole database.
In short, I create 4 tables; one postgres pod has all 4 tables but 2 of them are empty, while the other pod has 3 tables, and the tables that were empty in the first pod are filled with data there.
Why don't the pods have the same data in them?
How can I access and download the entire database?
P.S. I deploy the cluster using Helm on minikube.
Here are the YAML files:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: database-pg
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGDATA: /data/pgdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      nodePort: 30432
  type: NodePort
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres
  replicas: 2
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          volumeMounts:
            - name: postgres-disk
              mountPath: /data
          # Config from ConfigMap
          envFrom:
            - configMapRef:
                name: postgres-config
  volumeClaimTemplates:
    - metadata:
        name: postgres-disk
      spec:
        accessModes: ["ReadWriteOnce"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 2
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
I found a solution to my problem of downloading the volume directory; however, when I run multiple replicas of postgres, the tables of the DB are still scattered between the pods.
Here's what I did to download the postgres volume:
First of all, minikube persists only some specific directories for volumes:
minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
So I've changed the mount path to be under the /data directory. This made the database volume visible.
After this I ssh'ed into minikube and copied the database volume to a new directory (I used /home/docker, since the minikube user is docker).
sudo cp -R /data/pgdata /home/docker
The pgdata volume was still owned by root (access denied error), so I changed it to be owned by docker. For this I also set a new password that I knew:
sudo passwd docker # change password for docker user
sudo chown -R docker: /home/docker/pgdata # change owner from root to docker
Then you can exit and copy the directory onto your local machine:
exit
scp -r -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/pgdata [your_local_path]
NOTE
Mario's advice, which is to use pg_dump, is probably a better way to copy a database. I still wanted to download the volume directory to see whether it holds the full database when the pods each have only some of the tables. In the end it turned out it doesn't.
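For completeness, a minimal sketch of the pg_dump route, assuming the database name and user from the ConfigMap above; the pod name is a placeholder:

kubectl exec <postgres-pod> -- pg_dump -U postgres database-pg > database-pg.sql

This gives a consistent logical dump of one pod's instance, which is usually more useful than copying the raw data directory.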

SQL script isn't running in Kubernetes, but runs fine using just Docker

I have a pretty simple test.sql:
SELECT 'CREATE DATABASE test_dev'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'test_dev')\gexec
\c test_dev
CREATE TABLE IF NOT EXISTS test_table (
    username varchar(255)
);
INSERT INTO test_table(username)
VALUES ('test name');
Doing the following does what I expected it to do:
Dockerfile.dev
FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
docker build -t testproj/postgres -f db/Dockerfile.dev .
docker run -p 5432:5432 testproj/postgres
This creates the database, switches to it, creates a table, and inserts the values.
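To double-check the Docker result, one option is to query the running container (the container ID is whatever docker ps shows; inside the container, local connections are trusted by default, so no password is needed):

docker ps                                          # find the container ID
docker exec -it <container_id> psql -U postgres -d test_dev -c '\dt'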
Now I'm trying to do the same in Kubernetes with Skaffold, but nothing really seems to happen: there are no error messages, but nothing changes in Postgres.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: init-script
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-storage
        - name: init-script
          persistentVolumeClaim:
            claimName: init-script
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
            - name: init-script
              mountPath: /docker-entrypoint-initdb.d
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
What am I doing wrong here?
Basically I tried to follow the answers here, but it isn't panning out. It sounded like I needed to move the .sql to a persistent volume.
https://stackoverflow.com/a/53069399/3123109
You don't want to mount a volume over the entrypoint folder; you are basically masking the script in your image with an empty folder. Also, you aren't using your modified image, so it wouldn't have your script in the first place.
I'm not 100% sure your image will work on Kubernetes.
I would recommend using something tested like the Bitnami PostgreSQL chart; it might also be helpful to read Using Kubernetes to Deploy PostgreSQL.
If you want to use your own image inside Kubernetes, you need to push it to your private Docker registry or repository. This is explained in Pull an Image from a Private Registry.
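A minimal sketch of that push, assuming a hypothetical registry at registry.example.com (the build follows the question's Dockerfile.dev):

docker build -t registry.example.com/testproj/postgres -f db/Dockerfile.dev .
docker push registry.example.com/testproj/postgres

The Deployment would then reference registry.example.com/testproj/postgres in its image: field.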
As for test.sql, you can store it in a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-sql
data:
  test.sql: |
    SELECT 'CREATE DATABASE test_dev'
    WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'test_dev')\gexec
    \c test_dev
    CREATE TABLE IF NOT EXISTS test_table (
        username varchar(255)
    );
    INSERT INTO test_table(username)
    VALUES ('test name');
You can later mount it as an init script or execute it after the pod is created.
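A minimal sketch of mounting that ConfigMap into the image's init directory; this is a fragment of the Deployment's pod spec, the volume name is illustrative, and init scripts only run when the data directory is empty:

      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - name: init-sql
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: init-sql
          configMap:
            name: test-sql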
coderanger pointed out a glaring mistake that got me going in the right direction: I wasn't referring to the modified image.
I updated it accordingly:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
        - name: postgres
          image: testproject/postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
Then it started loading the data accordingly.

Reconfigure postgres with kubectl apply command

I'm trying to change the settings of my postgres database inside my local minikube cluster. I mistakenly deployed a database without specifying the postgres user, password and database.
The problem: when I add the new env variables and use kubectl apply -f postgres-deployment.yml, postgres does not create the user, password, or database specified by the environment variables.
This is the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: PGUSER
              value: admin
            - name: PGPASSWORD
              value: password
            - name: PGDATABSE
              value: testdb
How can I change the settings of postgres when I apply the deployment file?
Can you share the pod's logs?
kubectl logs <pod_name>
The Postgres image's init script uses these variable names:
POSTGRES_USER
POSTGRES_PASSWORD
POSTGRES_DB
Try this one out:
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          env:
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_DB
              value: testdb
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: pg-data
      volumes:
        - name: pg-data
          emptyDir: {}
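A quick way to confirm the variables took effect after applying this (the file and pod names are placeholders; this only works against a fresh data directory, since the init script runs once):

kubectl apply -f postgres.yml
kubectl get pods
kubectl exec -it <postgres-pod> -- psql -U admin -d testdb -c '\du'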

Postgres DB Password not being set in Kubernetes script

For some reason, the postgres instance isn't being locked down with a password when using the following Kubernetes manifest.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - resources:
          image: postgres:9.4
          name: postgres
          env:
            - name: DB_PASS
              value: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-persistent-storage
      volumes:
        - name: postgres-persistent-storage
          gcePersistentDisk:
            pdName: postgres-disk
            fsType: ext4
Any ideas?
According to the Docker Hub documentation for the postgres image, you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASS.
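A corrected env block would look something like this (same value as in the question; for a real cluster a Secret would be preferable):

          env:
            - name: POSTGRES_PASSWORD
              value: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata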