Permission issue with persistent volume for Postgres in Kubernetes

I know this question has been asked repeatedly but never fully answered. I have Postgres running as the root user in a container that uses a persistent volume, but there seems to be a permission issue when mounting the volume in the container.
Container logs:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /data ... ok
initdb: could not create directory "/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/data"
Persistent Volume and Persistent Volume Claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  labels:
    app: pgmaster
  namespace: pustakalaya
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/library/pgmaster-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  labels:
    app: postgres
  namespace: pustakalaya
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
and the Pod file:
spec:
  selector:
    matchLabels:
      app: pgmaster
  replicas: 1
  template:
    metadata:
      labels:
        app: pgmaster
    spec:
#      initContainers:
#      - name: chmod-er
#        image: busybox:latest
#        command: ['sh', '-c' ,'/bin/chmod -R 777 /data && /bin/chown -R 999:999 /data']
      containers:
        - name: pgmaster
          image: becram/olen-elib-db-master:v5.3.0
          env:
            - name: POSTGRES_DB
              value: pustakalaya
            - name: POSTGRES_USER
              value: pustakalaya_user
            - name: POSTGRES_PASSWORD
              value: pustakalaya123
            - name: PGDATA
              value: /data
            - name: POSTGRES_BACKUP_DIR
              value: /backup
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /data:rw
              name: pgmaster-volume
#      restartPolicy: Always
      volumes:
        - name: pgmaster-volume
          persistentVolumeClaim:
            claimName: store-persistent-volume-claim

I was having the same issue with Minikube. I solved it with a manual approach. Since the folder is created on the host machine running the node, I SSH-ed into the cluster. On Minikube you can do this with:
minikube ssh
Next, find the folder on the cluster host machine and manually change its permissions:
chmod -R 777 /myfolder
chown -R 999:999 /myfolder
After this, I executed the manifest files again and everything ran without a problem.
So to fix this, you need to change the permissions from your cluster machine, not from your container.
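If you'd rather not do this by hand every time the volume is recreated, the same fix can be automated with an init container like the one commented out in the pod spec above. Note that the commented-out version mounts no volume, so its chown had nothing to act on. A minimal sketch, assuming the busybox image and the official postgres image's uid/gid of 999:999:

initContainers:
  - name: chmod-er
    image: busybox:latest
    # runs as root before the main container and fixes ownership of the data dir
    command: ['sh', '-c', 'chown -R 999:999 /data']
    volumeMounts:
      # the volume must be mounted here too, or the chown touches nothing
      - mountPath: /data
        name: pgmaster-volume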

Related

Accessing Postgresql data of Kubernetes cluster

I have a Kubernetes cluster with two replicas of a PostgreSQL database in it, and I wanted to see the values stored in the database.
When I exec into one of the two Postgres pods (kubectl exec --stdin --tty [postgres_pod] -- /bin/bash) and check the database from within, I see only part of the DB. The rest of the data is on the other Postgres pod, and I don't see any directory created by the persistent volumes that holds the whole database.
In short: I created 4 tables; in one Postgres pod I have 4 tables but 2 of them are empty, while the other pod has 3 tables, and the tables that were empty in the first pod are filled with data there.
Why don't the pods have the same data in them?
How can I access and download the entire database?
P.S. I deployed the cluster using Helm on minikube.
Here are the YAML files:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: database-pg
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGDATA: /data/pgdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
    - name: postgres
      port: 5432
      nodePort: 30432
  type: NodePort
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-service
  selector:
    matchLabels:
      app: postgres
  replicas: 2
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          volumeMounts:
            - name: postgres-disk
              mountPath: /data
          # Config from ConfigMap
          envFrom:
            - configMapRef:
                name: postgres-config
  volumeClaimTemplates:
    - metadata:
        name: postgres-disk
      spec:
        accessModes: ["ReadWriteOnce"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 2
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
I found a solution to my problem of downloading the volume directory; however, when I run multiple replicas of Postgres, the tables of the DB are still scattered between the pods.
Here's what I did to download the Postgres volume:
First of all, minikube only persists volumes that live under a few specific directories. From the docs:
minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
So I changed the mount path to be under the /data directory. This made the database volume visible.
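Presumably only the PV's hostPath needs to change for this; a minimal sketch, assuming the manifests above otherwise stay the same:

hostPath:
  path: "/data"   # was "/mnt/data"; /data is one of the persisted directories

With the container mounting the volume at /data and PGDATA set to /data/pgdata, the cluster files then appear at /data/pgdata inside the minikube VM, which matches the path copied below.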
After this I SSH-ed into minikube and copied the database volume to a new directory (I used /home/docker, since the minikube user is docker):
sudo cp -R /data/pgdata /home/docker
The pgdata volume was still owned by root (access denied error), so I changed it to be owned by docker. For this I also set a new password that I knew:
sudo passwd docker # change password for docker user
sudo chown -R docker: /home/docker/pgdata # change owner from root to docker
Then you can exit and copy the directory onto your local machine:
exit
scp -r -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/pgdata [your_local_path]
NOTE
Mario's advice to use pg_dump is probably a better way to copy a database. I still wanted to download the volume directory to see whether it held the full database when the pods each have only part of the tables. In the end it turned out it doesn't.
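For reference, a pg_dump-based copy could look roughly like this (the pod name postgres-0 is a guess based on the StatefulSet above; the user and database names come from the ConfigMap):

# dump the database from one pod into a local SQL file
kubectl exec postgres-0 -- pg_dump -U postgres database-pg > database-pg.sql
# restore it into another Postgres instance
psql -U postgres -d database-pg -f database-pg.sql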

Postgres on Azure kubernetes volume permission error

I'm trying to deploy PostgreSQL on Azure Kubernetes with data persistency, so I'm using a PVC.
I searched lots of posts on here; most of them offered YAML files like the ones below, but I'm getting the error below:
chmod: changing permissions of '/var/lib/postgresql/data/pgdata': Operation not permitted
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: error: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ...
The deployment YAML file is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:13.2
          securityContext:
            runAsUser: 999
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgresql-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb-kap
      volumes:
        - name: postgredb-kap
          persistentVolumeClaim:
            claimName: postgresql-pvc
The Secret YAML is below:
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-secret
type: Opaque
data:
  POSTGRES_DB: a2V5sd4=
  POSTGRES_USER: cG9zdGdyZXNhZG1pbg==
  POSTGRES_PASSWORD: c234Rw==
  PGDATA: L3Za234dGF0YQ==
The PVC and StorageClass YAML files are below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-pvc
  labels:
    app: postgresql
spec:
  storageClassName: postgresql-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgresql-sc
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
So when I use the mount path "- mountPath: /var/lib/postgresql/", it works: I can reach the DB and it's good. But when I delete the pod and recreate it, there is no DB! So no data persistency.
Can you please help? What am I missing here?
Thanks!
One thing you could try is to change uid=1000,gid=1000 in the mount options to 999, since this is the uid of the postgres user in the postgres container (I didn't test this).
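Applied to the StorageClass from the question, that untested suggestion would look like:

mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=999   # uid of the postgres user in the official image
  - gid=999   # gid of the postgres group in the official image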
Another solution that will certainly solve this issue involves init containers.
The Postgres container requires starting as root so that it can chown the pgdata dir, since the volume is mounted as a root-owned directory. After it does this, it drops root permissions and runs as the postgres user.
But you can use an init container (running as root) to chown the volume dir, so that the main container can run as non-root.
Here is an example:
initContainers:
  - name: init
    image: alpine
    command: ["sh", "-c", "chown 999:999 /var/lib/postgresql/data"]
    volumeMounts:
      - mountPath: /var/lib/postgresql/data
        name: postgredb-kap
Based on the helpful answer from Matt: for Bitnami PostgreSQL the initContainer also works, but with a slightly different configuration:
initContainers:
  - name: init
    image: alpine
    command: ["sh", "-c", "chown 1001:1001 /bitnami/postgresql"]
    volumeMounts:
      - mountPath: /bitnami/postgresql
        name: postgres-volume

What is the correct way to manage Kubernetes NFS PV permissions for Postgres?

I am standing up Harbor using Helm on an on-prem Kubernetes cluster. I would like to persist data to an NFS share exposed on an EMC storage appliance. I have created a PV and PVC pointing at the NFS share and tested them with busybox to confirm the basic setup works. In the values for the Helm chart, I specify the existingClaim and subPath. However, the Postgres database pod fails. From kubectl logs and describe, I determined the issue is permissions, but I'm unclear how to proceed. Specifically:
the init container change-permission-of-directory changes the uid:gid of the /harbor/database folder to postgres:postgres.
the remove-lost-found init container fails with "Permission denied".
I manually chmod -R 777 /harbor/database; remove-lost-found completes.
the database container creates some folders (global and pg_xlog) but then fails with permission denied.
I manually chmod -R 777 /harbor/database again.
the database container fails with: initdb: directory "/var/lib/postgresql/data" exists but is not empty
If I delete the folders, it recreates them and fails with permission denied.
I assume chmod 777 is not the right thing to do, but I'm confused about how this should be configured. What am I doing wrong here?
Kubernetes file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-database-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfsserver/somepath/database
    server: platform.mycompany.com
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: harbor-database-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: "1Gi"
  volumeName: "harbor-database-pv"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-postgres
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: harbor-postgres
  template:
    metadata:
      labels:
        app: harbor-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          imagePullPolicy: "Always"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: pgbench
            - name: PGUSER
              value: pgbench
            - name: POSTGRES_PASSWORD
              value: postgres#123
            - name: PGBENCH_PASSWORD
              value: superpostgres
            - name: PGDATA
              value: /var/lib/postgresql/data/pg
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: harbor-database-pvc
First init logs from kubectl logs:
fixing permissions on existing directory /var/lib/postgresql/data ... ok
initdb: error: could not create directory "/var/lib/postgresql/data/pg_wal/archive_status": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data"
initdb: warning: could not open directory "/var/lib/postgresql/data/global": Permission denied
initdb: warning: could not open directory "/var/lib/postgresql/data/pg_wal": Permission denied
initdb: error: failed to remove contents of data directory
creating subdirectories ...
Subsequent init logs:
initdb: error: directory "/var/lib/postgresql/data/pg" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/data/pg" or run initdb
with an argument other than "/var/lib/postgresql/data/pg".
Overall it's a bad idea to persist a database on NFS/EFS. A plain PV will do the job, but don't try replication.
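That said, if NFS is unavoidable: the chown/chmod failures described in the question are often caused by root squashing on the export, which remaps root to an unprivileged user. Whether and how to disable it depends on the storage appliance; on a plain Linux NFS server the export would look something like this (hypothetical example, not from the original thread):

# /etc/exports on the NFS server
/nfsserver/somepath/database  *(rw,sync,no_root_squash)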

How to mount PostgreSQL data directory in Kubernetes?

I'm using minikube to run Kubernetes locally. My local k8s has two pods: one of them is PostgreSQL and the other one is my own app. I've mounted a PersistentVolume and PersistentVolumeClaim in order to make a stateful pod for PostgreSQL:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/psql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Here is the PostgreSQL deployment YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - name: postgres
          imagePullPolicy: Never
          image: postgres:9.6
          ports:
            - name: postgres
              containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres-persistent-storage
      volumes:
        - name: postgres-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-pv-claim
The problem is that the PostgreSQL service doesn't start, and this error occurs when I run its pod:
Error: /var/lib/postgresql/9.6/main is not accessible; please fix the directory permissions (/var/lib/postgresql/9.6/ should be world readable)
No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
I've checked inside the PostgreSQL pod and found that /var/lib/postgresql is empty, just like /data/psql on the minikube host.
Can anyone help?
Change:
volumeMounts:
  - mountPath: /var/lib/postgresql
to:
volumeMounts:
  - mountPath: /var/lib/postgresql/data
With the wrong mountPath, the files the image ships under /var/lib/postgresql were overridden by the empty volume.
I attach an image with the data I see from inside the pod (on the left) and from inside the minikube space (on the right, the little shell from VirtualBox).
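One further refinement worth knowing, from the official postgres image documentation: if the mounted volume is a filesystem mountpoint (so it may contain a lost+found directory), initdb will complain that the directory is not empty, and the usual fix is to point PGDATA at a subdirectory of the mount. A sketch against the deployment above:

env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata   # subdirectory keeps initdb away from lost+found
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgres-persistent-storage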

How to solve permission trouble when running Postgresql from minikube?

I am trying to run a PostgreSQL database using minikube with a persistent volume claim. These are the YAML specifications:
minikube-persistent-volume.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/Users/jonathan/data"
postgres-persistent-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-postgres
spec:
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 2Gi
postgres-deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.5
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-disk
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_USER
              value: keycloak
            - name: POSTGRES_DATABASE
              value: keycloak
            - name: POSTGRES_PASSWORD
              value: key
            - name: POSTGRES_ROOT_PASSWORD
              value: masterkey
      volumes:
        - name: postgres-disk
          persistentVolumeClaim:
            claimName: pv-postgres
When I start this, I get the following in the logs from the deployment:
[...]
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
initdb: could not create directory "/var/lib/postgresql/data/pgdata/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
Why do I get this Permission denied error and what can I do about it?
Maybe you're having a write permission issue with VirtualBox mounting those host folders.
Instead, use /data/postgres as a path and things will work (see the PV sketch after the links below).
Minikube automatically persists the following directories, so they will be preserved even if your VM is rebooted or recreated:
/data
/var/lib/localkube
/var/lib/docker
Read these sections for more details:
https://github.com/kubernetes/minikube#persistent-volumes
https://github.com/kubernetes/minikube#mounted-host-folders
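Concretely, only the hostPath in the PV from the question needs to change (the postgres subfolder name here is just an example):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/data/postgres"   # under /data, which minikube persists across reboots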