Error: PostgreSQL Kubernetes data directory "/var/lib/postgresql/data" has wrong ownership

I am new to Kubernetes. While going through the tutorials, I ran into an error using the Postgres database with a persistent volume claim. I am pretty sure that all the permissions are being given to the user, but the error still says the folder has the wrong ownership.
Here are my configuration files.
This is the persistent volume claim file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Here is my postgres deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
          env:
            - name: POSTGRES_HOST_AUTH_METHOD
              value: "trust"
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgpassword
                  key: PGPASSWORD
Here is the error message
2020-04-12 01:57:11.986 UTC [82] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2020-04-12 01:57:11.986 UTC [82] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
Any help is appreciated.

This is working for me; check the subPath in the volume mount.
apiVersion: apps/v1
kind: StatefulSet
metadata:
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: postgres
    spec:
      containers:
        - env:
            - name: POSTGRES_USER
              value: root
            - name: POSTGRES_PASSWORD
              value: <Password>
            - name: POSTGRES_DB
              value: <DB name>
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          image: postgres:9.5
          imagePullPolicy: IfNotPresent
          name: postgres
          ports:
            - containerPort: 5432
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-data
              subPath: pgdata
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        creationTimestamp: null
        name: postgres-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
        volumeMode: Filesystem
      status:
        phase: Pending
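For reference, the part that matters when applying the same fix to the Deployment from the question is keeping the subPath on the volume mount and pointing PGDATA at a subdirectory below the mount, so that initdb creates the cluster in a directory it owns instead of in the mount point itself. A minimal sketch showing only the relevant container fields (names taken from the question):

      containers:
        - name: postgres
          image: postgres
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata   # cluster lives below the mounted path
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres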

Related

psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "<USERNAME>" does not exist K8S

I am facing this error after adding a taint and toleration. I don't understand why. Can someone explain it to me?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-prod-deployment
  namespace: prod
  labels:
    tier: prod
    app: postgresql
spec:
  selector:
    matchLabels:
      tier: prod
      app: postgresql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        tier: prod
        app: postgresql
    spec:
      containers:
        - name: postgres-prod
          image: postgres
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          env:
            # - name: POSTGRES_PASSWORD
            #   valueFrom:
            #     secretKeyRef:
            #       name: postgresql-prod-secret
            #       key: password
            # - name: POSTGRES_USER
            #   valueFrom:
            #     secretKeyRef:
            #       name: postgresql-prod-secret
            #       key: user
            - name: POSTGRES_PASSWORD
              value: aa0074
            - name: POSTGRES_DB
              value: todo_list_penn
            - name: POSTGRES_USER
              value: httpdwgp
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-prod-pv
              mountPath: /var/lib/postgresql/data
      nodeSelector:
        tier: prod
      tolerations:
        - key: "tier"
          operator: "Equal"
          value: "prod"
          effect: "NoSchedule"
      volumes:
        - name: postgres-prod-pv
          persistentVolumeClaim:
            claimName: postgres-prod-claim
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-prod-claim
  namespace: prod
spec:
  resources:
    requests:
      storage: 2Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
Taint
node3=$(kubectl get no -o jsonpath="{.items[3].metadata.name}")
kubectl taint node $node3 tier=prod:NoSchedule
kubectl label node $node3 tier=prod
The error I get with the taint and toleration:
root@postgres-prod-deployment-69979b64d6-tcp58:/# psql -U $POSTGRES_USER $POSTGRES_DB
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "httpdwgp" does not exist
When I comment out the nodeSelector and tolerations sections, everything is fine.
root@postgres-prod-deployment-bc8bcd786-zrdwm:/# psql -U $POSTGRES_USER $POSTGRES_DB
psql (15.0 (Debian 15.0-1.pgdg110+1))
Type "help" for help.
todo_list_penn=#
I've tried everything I can think of. At first I thought the problem was related to the PVC and spent a lot of time on it. Later I realized that the problem stems from here.

How to configure pv and pvc for single pod with multiple containers in kubernetes

I need to create a single pod with multiple containers for MySQL, MongoDB, and MSSQL. My question is: do I need to create a PersistentVolume and PersistentVolumeClaim for each container and specify each volume in the pod configuration, or is a single PV & PVC enough for all the containers in a single pod, like the configs below?
Could you verify whether the configuration below is enough?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: mypod-pvc
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: task-pv-storage
        - name: mongodb
          image: openshift/mongodb-24-centos7
          ports:
            - containerPort: 27017
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mongodb"
              name: task-pv-storage
        - name: mssql
          image: mcr.microsoft.com/mssql/server
          ports:
            - containerPort: 1433
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/opt/mssql"
              name: task-pv-storage
      imagePullSecrets:
        - name: devplat
You should not be running multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You will need to adopt a similar approach for MongoDB and the other databases as well.
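As a rough illustration of that layout, here is a minimal sketch of one StatefulSet per database, shown for MySQL. The names and storage size are placeholders, and a real setup would also need a matching headless Service and a Secret for credentials:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql                # placeholder; one StatefulSet per database
spec:
  serviceName: mysql         # assumes a headless Service named mysql exists
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-data        # each database gets its own claim
              mountPath: /var/lib/mysql
  volumeClaimTemplates:               # PVC is created per StatefulSet, not shared across databases
    - metadata:
        name: mysql-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 3Gi              # placeholder size

MongoDB and MSSQL would each get their own StatefulSet and volumeClaimTemplates in the same way, rather than sharing one PV/PVC.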

chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted

Hi, I have set up a small NFS server at home using my Raspberry Pi, and I want to use it as the default storage for all of my Kubernetes containers.
However, I keep getting chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted.
Here is my config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg-ss
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: postgres
          ports:
            - containerPort: 5432
              name: postgredb
      volumes:
        - name: pv-data
          nfs:
            path: /mnt/infra-data/pg
            server: 192.168.1.150
            readOnly: false
I'm wondering what the cause of this could be, and how I can solve it.
Thanks,

Kubernetes postgres PVC data

I have a Postgres deployment YAML with a PV/PVC created. The DB holds a lot of data. When I delete the postgres StatefulSet and create a new one, the data is lost. What can I do so that the data is restored after creating a new StatefulSet?
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      hostname: postgres
      containers:
        - name: postgres
          image: postgres
          imagePullPolicy: Always
          restartPolicy: Always
          env:
            - name: "PGDATA"
              value: "/var/lib/postgresql/data"
            - name: "POSTGRES_INITDB_WALDIR"
              value: "/var/lib/postgresql/dblogs/logs"
          ...
          ports:
            - name: port5432
              containerPort: 5432
          volumeMounts:
            - name: test
              mountPath: /var/lib/postgresql
            - name: postgressecret
              mountPath: /etc/postgressecret
              readonly: true
      volumes:
        - name: postgressecret
          secret:
            secretName: postgressecret
        - name: test
          persistentVolumeClaim:
            claimName: test
  volumeClaimTemplates:
    - metadata:
        name: test
        annotations:
          volume.beta.kubernetes.io/storage-class: thin-disk
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 20Gi
Kubernetes does not remove PVCs after the deletion of their corresponding StatefulSets! From this ticket it can be deduced that auto-removal functionality is planned for version 1.23.
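This is easy to check directly (a sketch; it assumes the manifest above is saved as postgres-statefulset.yaml and that the claim created from volumeClaimTemplates is named test-postgres-0, following the <template>-<statefulset>-<ordinal> pattern):

kubectl delete statefulset postgres          # removes the pod, keeps the PVC
kubectl get pvc                              # test-postgres-0 should still be listed and Bound
kubectl apply -f postgres-statefulset.yaml   # the recreated StatefulSet re-binds to test-postgres-0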

Kubernetes Permission denied for mounted nfs volume

The following is the k8s definition used:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: k8s.gcr.io/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: nfs-server
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
---
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
        - image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
          imagePullPolicy: IfNotPresent
          name: busybox
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
Now the /mnt directory in nfs-busybox should have 2000 as its gid (as per the docs). But it still has root as both user and group. Since the application runs as 1000/2000, it is not able to create any logs or data in the /mnt directory.
chmod might solve the issue, but it looks like a workaround. Is there any permanent solution for this?
Observation: if I replace the nfs volume with some other PVC, it works fine as described in the docs.
Have you tried the initContainers method? It fixes the permissions on the exported directory:
initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "chmod -R 777 /exports"]
    volumeMounts:
      - name: nfs
        mountPath: /exports
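For context, this block is meant to sit alongside containers in the pod template of the nfs-busybox controller from the question, roughly like this (a sketch trimmed to the relevant part):

  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      initContainers:                 # runs to completion before busybox starts and loosens permissions on the volume
        - name: volume-mount-hack
          image: busybox
          command: ["sh", "-c", "chmod -R 777 /exports"]
          volumeMounts:
            - name: nfs
              mountPath: /exports
      containers:
        - name: busybox
          image: busybox
          volumeMounts:
            - name: nfs
              mountPath: /mnt
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs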
If you use a standalone NFS server on a Linux box, I suggest using the no_root_squash option:
/exports *(rw,no_root_squash,no_subtree_check)
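After editing /etc/exports on the NFS host, the export table can be reloaded with the standard exportfs tool, for example:

sudo exportfs -ra    # re-export everything listed in /etc/exports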
To manage the directory permissions on the nfs-server, you need to change the security context and raise it to privileged mode:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
    - name: nfs-server
      image: nfs-server
      ports:
        - name: nfs
          containerPort: 2049
      securityContext:
        privileged: true