psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "<USERNAME>" does not exist K8S - postgresql

I am facing this error after adding a taint and a toleration. I don't understand why; can someone explain it to me?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-prod-deployment
  namespace: prod
  labels:
    tier: prod
    app: postgresql
spec:
  selector:
    matchLabels:
      tier: prod
      app: postgresql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        tier: prod
        app: postgresql
    spec:
      containers:
        - name: postgres-prod
          image: postgres
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          env:
            # - name: POSTGRES_PASSWORD
            #   valueFrom:
            #     secretKeyRef:
            #       name: postgresql-prod-secret
            #       key: password
            # - name: POSTGRES_USER
            #   valueFrom:
            #     secretKeyRef:
            #       name: postgresql-prod-secret
            #       key: user
            - name: POSTGRES_PASSWORD
              value: aa0074
            - name: POSTGRES_DB
              value: todo_list_penn
            - name: POSTGRES_USER
              value: httpdwgp
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-prod-pv
              mountPath: /var/lib/postgresql/data
      nodeSelector:
        tier: prod
      tolerations:
        - key: "tier"
          operator: "Equal"
          value: "prod"
          effect: "NoSchedule"
      volumes:
        - name: postgres-prod-pv
          persistentVolumeClaim:
            claimName: postgres-prod-claim
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-prod-claim
  namespace: prod
spec:
  resources:
    requests:
      storage: 2Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
Taint
node3=$(kubectl get no -o jsonpath="{.items[3].metadata.name}")
kubectl taint node $node3 tier=prod:NoSchedule
kubectl label node $node3 tier=prod
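To rule out the scheduling itself, it may help to confirm that the taint and the label really landed on the node; for example (these are standard kubectl checks, not part of the original setup):
# Show the taints and labels on the tainted node
kubectl describe node $node3 | grep -i taint
kubectl get node $node3 --show-labels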
The error I get with the taint and toleration in place:
root@postgres-prod-deployment-69979b64d6-tcp58:/# psql -U $POSTGRES_USER $POSTGRES_DB
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "httpdwgp" does not exist
When I comment out the nodeSelector, tolerations, and related parts, everything is fine:
root@postgres-prod-deployment-bc8bcd786-zrdwm:/# psql -U $POSTGRES_USER $POSTGRES_DB
psql (15.0 (Debian 15.0-1.pgdg110+1))
Type "help" for help.
todo_list_penn=#
I've tried everything I can think of. At first I thought the problem was related to the PVC and spent a lot of time on it; later I realized the problem stems from the taint/toleration change.
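One thing that may be worth checking (an assumption about the likely cause, not something confirmed above): the postgres image only creates the POSTGRES_USER role when it initializes an empty data directory, so if the volume bound to postgres-prod-claim already holds a cluster that was initialized with different settings, the httpdwgp role will never be created. A couple of quick checks:
# Inside the pod: list the roles that actually exist in the data directory
# (the bootstrap superuser is "postgres" only if the directory was
# initialized with the image defaults)
psql -U postgres -c '\du'

# Outside the pod: see which node each variant of the pod was scheduled on,
# in case the working and failing deployments ended up on different storage
kubectl get pods -n prod -o wide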

Related

Connecting to GKE POD running Postgres with client Postico 2

I want to connect to a Postgres instance that is running in a pod in GKE.
I think one way to achieve this is with kubectl port forwarding.
Locally I have Docker for Desktop, and when I apply the YAML files I am able to connect to the database. The YAMLs I am using in GKE are almost identical.
secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: staging
  name: postgres-secrets
type: Opaque
data:
  MYAPPAPI_DATABASE_NAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_USERNAME: XXXENCODEDXXX
  MYAPPAPI_DATABASE_PASSWORD: XXXENCODEDXXX
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: staging
  name: db-data-pv
  labels:
    type: local
spec:
  storageClassName: generic
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql/data"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: staging
  name: db-data-pvc
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
deployment.yaml
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgres-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
        - name: postgres-db
          image: postgres:12.4
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-db
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_USERNAME
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_NAME
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: MYAPPAPI_DATABASE_PASSWORD
      volumes:
        - name: postgres-db
          persistentVolumeClaim:
            claimName: db-data-pvc
svc.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: staging
  labels:
    app: postgres-db
  name: postgresdb-service
spec:
  type: ClusterIP
  selector:
    app: postgres-db
  ports:
    - port: 5432
and it seems that everything is working.
Then I execute kubectl port-forward postgres-db-podname 5433:5432 -n staging, and when I try to connect it throws:
FATAL: role "myappuserdb" does not exist
UPDATE 1
This is from the GKE YAML:
spec:
  containers:
    - env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_NAME
              name: postgres-secrets
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_USERNAME
              name: postgres-secrets
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MYAPPAPI_DATABASE_PASSWORD
              name: postgres-secrets
UPDATE 2
I will explain what happened and how I solved it.
The first time I applied the files, kubectl apply -f k8s/, the deployment's POSTGRES_USER environment variable referenced the wrong secret key, MYAPPAPI_DATABASE_NAME, when it should have referenced MYAPPAPI_DATABASE_USERNAME.
After that first time, every time I ran kubectl delete -f k8s/ the resources were deleted. However, when I created the resources again, the data created in the previous step had not been cleaned up.
I deleted the cluster, created a new one, and everything worked. I need to check if there is a way to clean the data in a Kubernetes volume.
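As a sketch of one way to reset the data without recreating the whole cluster (assuming losing the database contents is acceptable), the claim and the volume can be removed along with the other resources so that Postgres re-initializes from the secrets on the next apply:
# Delete the resources, including the PVC and PV objects
kubectl delete -f k8s/

# A hostPath-backed PV does not remove the data on the node when the PV
# object is deleted, so the directory itself also has to be cleared by hand
# on the node that hosted it, e.g. the path from pv.yaml:
#   rm -rf /var/lib/postgresql/data/*

# Re-create everything; Postgres will now initialize an empty data directory
kubectl apply -f k8s/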
In your deployment's env spec you have assigned the wrong value for POSTGRES_USER: you assigned it the secret key MYAPPAPI_DATABASE_NAME, but it should be MYAPPAPI_DATABASE_USERNAME.
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_NAME # <<< this is the value that needs to change >>>
Please try this one:
env:
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: MYAPPAPI_DATABASE_USERNAME

chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted

Hi, I have set up a small NFS server at home using my Raspberry Pi, and I want to use it as the default storage for all of my Kubernetes containers.
However, I keep getting this error: chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
Here is my config.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgredb
      volumes:
        - name: pv-data
          nfs:
            path: /mnt/infra-data/pg
            server: 192.168.1.150
            readOnly: false
I'm wondering what the cause of this could be, and how I can solve it.
Thanks.
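Not stated in the question, but a common cause of this exact error with NFS-backed volumes is the export squashing root: the postgres entrypoint tries to chmod/chown its data directory as root, and with root_squash that request reaches the server as an unprivileged user. A hypothetical export line on the Raspberry Pi that avoids this (the client network range is a placeholder):
# /etc/exports on the NFS server
/mnt/infra-data  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
After editing the exports file, running sudo exportfs -ra on the server reloads it.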

Error: PostgreSQL Kubernetes data directory "/var/lib/postgresql/data" has wrong ownership windows

I am new to Kubernetes. I was going through the tutorials and encountered an error while using the Postgres database with a persistent volume claim. I am pretty sure that all the permissions are being given to the user, but the error still suggests that the folder has the wrong ownership.
Here are my configuration files
This is the persistent volume claim file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Here is my postgres deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: POSTGRES_HOST_AUTH_METHOD
              value: "trust"
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
Here is the error message
2020-04-12 01:57:11.986 UTC [82] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2020-04-12 01:57:11.986 UTC [82] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
Any help is appreciated.
This is working for me; check the subPath in the volume mount:
apiVersion: apps/v1
kind: StatefulSet
metadata:
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: postgres
    spec:
      containers:
        - env:
            - name: POSTGRES_USER
              value: root
            - name: POSTGRES_PASSWORD
              value: <Password>
            - name: POSTGRES_DB
              value: <DB name>
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          image: postgres:9.5
          imagePullPolicy: IfNotPresent
          name: postgres
          ports:
            - containerPort: 5432
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
              subPath: pgdata
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: postgres-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
        volumeMode: Filesystem
      status:
        phase: Pending
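The parts of that manifest that appear to do the actual work are pointing PGDATA at a subdirectory of the mount and mounting the claim with a matching subPath, so that the Postgres data directory is not the volume's mount point itself (which is typically root-owned or contains a lost+found folder). Extracted from the StatefulSet above:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgres-data
    subPath: pgdata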

How to solve permission trouble when running Postgresql from minikube?

I am trying to run a PostgreSQL database using minikube with a persistent volume claim. These are the YAML specifications:
minikube-persistent-volume.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/Users/jonathan/data"
postgres-persistent-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-postgres
spec:
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 2Gi
postgres-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.5
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-disk
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_USER
              value: keycloak
            - name: POSTGRES_DATABASE
              value: keycloak
            - name: POSTGRES_PASSWORD
              value: key
            - name: POSTGRES_ROOT_PASSWORD
              value: masterkey
      volumes:
        - name: postgres-disk
          persistentVolumeClaim:
            claimName: pv-postgres
when I start this I get the following in the logs from the deployment:
[...]
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
initdb: could not create directory "/var/lib/postgresql/data/pgdata/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
Why do I get this Permission denied error and what can I do about it?
Maybe you're having a write permission issue with VirtualBox mounting those host folders.
Instead, use /data/postgres as the path and things will work.
Minikube automatically persists the following directories so they will be preserved even if your VM is rebooted/recreated:
/data
/var/lib/localkube
/var/lib/docker
Read these sections for more details:
https://github.com/kubernetes/minikube#persistent-volumes
https://github.com/kubernetes/minikube#mounted-host-folders
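Following that suggestion, the PersistentVolume would look roughly like this; only the hostPath changes from the original manifest:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/postgres"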

Postgres DB Password not being set in Kubernetes script

For some reason, the Postgres instance isn't being locked down with a password using the following Kubernetes manifest.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgres
  labels:
    name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - resources:
          image: postgres:9.4
          name: postgres
          env:
            - name: DB_PASS
              value: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-persistent-storage
      volumes:
        - name: postgres-persistent-storage
          gcePersistentDisk:
            pdName: postgres-disk
            fsType: ext4
Any ideas?
According to the Docker Hub documentation for the postgres image, you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASS.
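A minimal sketch of the corrected env section, with everything else in the manifest left as it is above:
env:
  - name: POSTGRES_PASSWORD # the variable the postgres image actually reads
    value: password
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata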