Postgres on Azure Kubernetes volume permission error

I'm trying to deploy Postgresql on Azure Kubernetes with data persistency. So I'm using PVC.
I searched many posts on here; most of them offered YAML files like the one below, but it's giving the error below:
chmod: changing permissions of '/var/lib/postgresql/data/pgdata': Operation not permitted
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: error: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ...
The Deployment YAML file is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:13.2
          securityContext:
            runAsUser: 999
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgresql-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb-kap
      volumes:
        - name: postgredb-kap
          persistentVolumeClaim:
            claimName: postgresql-pvc
The Secret YAML is below:
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-secret
type: Opaque
data:
  POSTGRES_DB: a2V5sd4=
  POSTGRES_USER: cG9zdGdyZXNhZG1pbg==
  POSTGRES_PASSWORD: c234Rw==
  PGDATA: L3Za234dGF0YQ==
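As a reminder, the values under data: in a Secret must be base64-encoded strings. Encoding and decoding from a shell looks like this (using the POSTGRES_USER value above):

```shell
# Base64-encode a plaintext value for a Secret's data: field.
# printf (not echo) avoids encoding a trailing newline.
printf '%s' 'postgresadmin' | base64
# cG9zdGdyZXNhZG1pbg==

# Decode it back to verify:
printf '%s' 'cG9zdGdyZXNhZG1pbg==' | base64 --decode
# postgresadmin
```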
The PVC and StorageClass YAML files are below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-pvc
  labels:
    app: postgresql
spec:
  storageClassName: postgresql-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgresql-sc
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1000
  - gid=1000
parameters:
  skuName: Standard_LRS
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
When I use a mount path like "- mountPath: /var/lib/postgresql/" instead, it works: I can reach the DB and it's good. But when I delete the pod and recreate it, there is no DB! So no data persistency.
Can you please help, what am I missing here?
Thanks!

One thing you could try is to change uid=1000,gid=1000 in the mount options to 999, since that is the uid of the postgres user in the postgres container (I didn't test this).
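For example, the StorageClass mount options might become (a sketch, untested; it assumes uid/gid 999 matches the postgres user inside the image):

```yaml
mountOptions:
  - dir_mode=0700   # postgres rejects a data directory with group/world access
  - file_mode=0700
  - uid=999         # uid/gid of the postgres user in the official image
  - gid=999
```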
Another solution that will certainly solve this issue involves init containers.
The postgres container needs to start as root to be able to chown the pgdata dir, since the volume is mounted owned by root. After it does this, it drops root privileges and runs as the postgres user.
But you can use an init container (running as root) to chown the volume dir, so that you can run the main container as non-root.
Here is an example:
initContainers:
  - name: init
    image: alpine
    command: ["sh", "-c", "chown 999:999 /var/lib/postgresql/data"]
    volumeMounts:
      - mountPath: /var/lib/postgresql/data
        name: postgredb-kap
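As an aside, for volume types whose ownership Kubernetes can change on mount (this does not apply to azure-file shares, where permissions come from the mount options), setting a pod-level fsGroup is a common alternative to the chown init container; a sketch:

```yaml
spec:
  securityContext:
    fsGroup: 999          # kubelet chowns supported volumes to this group on mount
  containers:
    - name: postgresql
      image: postgres:13.2
      securityContext:
        runAsUser: 999
```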

Building on the helpful answer from Matt: for Bitnami PostgreSQL the initContainer also works, but with a slightly different configuration:
initContainers:
  - name: init
    image: alpine
    command: ["sh", "-c", "chown 1001:1001 /bitnami/postgresql"]
    volumeMounts:
      - mountPath: /bitnami/postgresql
        name: postgres-volume

Related

unable to understand mounting postgres data path onto minikube kubernetes deployment with permission errors

I'm getting started with Kubernetes, and I want to create a simple app with a single webserver and Postgres database. The problem I'm running into is that the Postgres deployment is giving me permission errors. The following are discussions around this:
https://github.com/docker-library/postgres/issues/116
https://github.com/docker-library/postgres/issues/103
https://github.com/docker-library/postgres/issues/696
Can't get either Postgres permissions or PVC working in AKS
Kubernetes - Pod which encapsulates DB is crashing
Mount local directory into pod in minikube
https://serverfault.com/questions/981459/minikube-using-a-storageclass-to-provision-data-outside-of-tmp
EDIT
spec:
OSX - 10.15.4
minikube - v1.9.2
kubernetes - v1.18.2
minikube setup
minikube start --driver=virtualbox --cpus=2 --memory=5120 --kubernetes-version=v1.18.2 --container-runtime=docker --mount=true --mount-string=/Users/holmes/kubernetes/pgdata:/data/pgdata
The permission error: chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
I am trying to mount a local OS directory into minikube to be used with the postgres deployment/pod/container volume mount.
After I run the above setup I ssh into minikube (minikube ssh) and check the permissions
# minikube: /
drwxr-xr-x 3 root root 4096 May 13 19:31 data
# minikube: /data
drwx------ 1 docker docker 96 May 13 19:27 pgdata
By running the script below the chmod permission error surfaces. If I change the --mount-string=/Users/holmes/kubernetes/pgdata:/data (leave out /pgdata) and then minikube ssh to create the pgdata directory:
mkdir -p /data/pgdata
chmod 777 /data/pgdata
I get a different set of permissions before deployment
# minikube: /
drwx------ 1 docker docker 96 May 13 20:10 data
# minikube: /data
drwxrwxrwx 1 docker docker 64 May 13 20:25 pgdata
and after deployment
# minikube: /
drwx------ 1 docker docker 128 May 13 20:25 data
# minikube: /data
drwx------ 1 docker docker 64 May 13 20:25 pgdata
Not sure why this changes, and the chmod permission error persists. The reference links above bounce around different methods on different machines and VMs, which I don't fully understand and can't get to work. Can someone walk me through getting this to work? I'm quite confused after going through all the above discussions.
postgres.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: data-block
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: data-block
  labels:
    type: starter
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: docker
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  namespace: data-block
  labels:
    app: postgres
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pgdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: data-block
  labels:
    app: postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: data-block
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12.2
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - name: postgres-vol
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-vol
          persistentVolumeClaim:
            claimName: postgres-pv-claim
UPDATE
I went ahead and updated the deployment script to a simple Pod. The goal is to map the Postgres /var/lib/postgresql/data directory to my local directory /Users/<my-path>/database/data to persist the data.
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  namespace: data-block
  labels:
    name: postgres-pod
spec:
  containers:
    - name: postgres
      image: postgres:12.3
      imagePullPolicy: IfNotPresent
      ports:
        - name: postgres-port
          containerPort: 5432
      envFrom:
        - configMapRef:
            name: postgres-env-config
        - secretRef:
            name: postgres-secret
      volumeMounts:
        - name: postgres-vol
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-vol
      hostPath:
        path: /Users/<my-path>/database/data
  restartPolicy: Never
The error: initdb: error: could not access directory "/var/lib/postgresql/data": Permission denied
How to go about mounting the local file directory?
You are declaring the PGDATA field, which may be the cause of the issue. I faced the same error; it occurs because there is already a lost+found folder in that directory, while the container wants it to be an empty dir. Setting the subPath field solves this issue. Please try it; it should solve the problem, and you don't need the PGDATA field at all. Omit it from your ConfigMap and set subPath to some folder. Please go through the following manifests:
https://github.com/mendix/kubernetes-howto/blob/master/postgres-deployment.yaml
https://www.bmc.com/blogs/kubernetes-postgresql/
Also, it's a StatefulSet that you should usually go with, not a Deployment, when it comes to database deployments.
- name: postgredb
  mountPath: /var/lib/postgresql/data
  # Setting subPath will fix your issue. It can be pgdata, postgres,
  # or any other folder name of your choice.
  subPath: postgres

Kubernetes PostgreSQL: How do I store config files elsewhere than the data directory?

Currently I'm trying to create a PostgreSQL Deployment for replication, using the postgres:latest image.
The config files are located by default within the data directory /var/lib/postgresql/data. For replication to work I need the data directory to be empty, but that means I have to keep the config files elsewhere.
Referring to the PostgreSQL Documentation:
If you wish to keep the configuration files elsewhere than the data directory, the postgres -D command-line option or PGDATA environment variable must point to the directory containing the configuration files, and the data_directory parameter must be set in postgresql.conf (or on the command line) to show where the data directory is actually located. Notice that data_directory overrides -D and PGDATA for the location of the data directory, but not for the location of the configuration files.
In a physical machine setup, we can manually move the files and set the location of data-directory in the postgresql.conf file. However in Kubernetes it is not so straight-forward.
I tried to use volumeMount with subPath to mount the config files in another location, then use command to point Postgres at the new location of postgresql.conf.
Sample .yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pg-replica
  labels:
    app: postgres
    name: pg-replica
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: mypassword
  pg_hba.conf: |
    # Contents
  postgresql.conf: |
    data_directory = /var/lib/postgresql/data/data-directory
  recovery.conf: |
    # Contents
---
apiVersion: v1
kind: Service
metadata:
  name: pg-replica
  labels:
    app: postgres
    name: pg-replica
spec:
  type: NodePort
  ports:
    - nodePort: 31000
      port: 5432
  selector:
    app: postgres
    name: pg-replica
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg-replica
spec:
  selector:
    matchLabels:
      app: postgres
      name: pg-replica
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
        name: pg-replica
    spec:
      containers:
        - name: pg-replica
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: pg-replica
          volumeMounts:
            - name: pg-replica
              mountPath: /var/lib/postgresql/data
            - name: replica-config
              mountPath: /var/lib/postgresql/postgresql.conf
              subPath: postgresql.conf
            - name: replica-config
              mountPath: /var/lib/postgresql/pg_hba.conf
              subPath: pg_hba.conf
            - name: replica-config
              mountPath: /var/lib/postgresql/recovery.conf
              subPath: recovery.conf
          command:
            - "/bin/bash"
            - "postgres -c config_file=/var/lib/postgresql/postgresql.conf"
      volumes:
        - name: pg-replica
          persistentVolumeClaim:
            claimName: pv-replica-claim
        - name: replica-config
          configMap:
            name: pg-replica
The returned message was as follows:
/bin/bash: postgres -c config_file=/var/lib/postgresql/postgresql.conf: No such file or directory
What is wrong with this configuration? And what steps am I missing to make it work?
Edit:
When using the volumeMount field, the directory is overwritten (all other files were removed) even though I specified the exact file to mount with subPath. What could be the cause of this?
I realized there were a few mistakes here after posting this question...
I used PostgreSQL 11 for replication before, so I assumed they worked the same way (which of course is wrong; there are some changes). recovery.conf is gone from PostgreSQL 12, and it gave this error message FATAL: XX000: using recovery command file "recovery.conf" is not supported when I had it, so I had to remove it from my ConfigMap.
I was confused about how Docker's ENTRYPOINT and CMD map to Kubernetes' command and args. After my senior corrected me that the Kubernetes command overrides the Docker ENTRYPOINT, I'll use only args from now on.
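In short: the Kubernetes command replaces the image's ENTRYPOINT, while args replaces its CMD, so supplying only args keeps the postgres image's docker-entrypoint.sh intact. A sketch:

```yaml
containers:
  - name: pg-replica
    image: postgres:latest
    # command: would replace the image's ENTRYPOINT (docker-entrypoint.sh)
    # and skip its initialization logic, so it is deliberately left unset.
    args:   # replaces the image's CMD; passed as arguments to the entrypoint
      - "-c"
      - "config_file=/var/lib/postgresql/postgresql.conf"
```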
The following are the changes I made to my ConfigMap and Deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: pg-replica
  labels:
    app: postgres
    name: pg-replica
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: mypassword
  pg_hba.conf: |
    # Contents
  postgresql.conf: |
    data_directory = '/var/lib/postgresql/data'
    # the contents from recovery.conf are integrated into postgresql.conf
    primary_conninfo = # host address and authentication credentials
    promote_trigger_file = # trigger file path
  extra.sh: |
    #!/bin/sh
    postgres -D /var/lib/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: pg-replica
  labels:
    app: postgres
    name: pg-replica
spec:
  type: NodePort
  ports:
    - nodePort: 31000
      port: 5432
  selector:
    app: postgres
    name: pg-replica
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pg-replica
spec:
  selector:
    matchLabels:
      app: postgres
      name: pg-replica
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
        name: pg-replica
    spec:
      containers:
        - name: pg-replica
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: pg-replica
          volumeMounts:
            - name: pg-replica
              mountPath: /var/lib/postgresql/data
            - name: replica-config
              mountPath: /var/lib/postgresql/postgresql.conf
              subPath: postgresql.conf
            - name: replica-config
              mountPath: /var/lib/postgresql/pg_hba.conf
              subPath: pg_hba.conf
            - name: replica-config
              mountPath: /docker-entrypoint-initdb.d/extra.sh
              subPath: extra.sh
          args:
            - "-c"
            - "config_file=/var/lib/postgresql/postgresql.conf"
            - "-c"
            - "hba_file=/var/lib/postgresql/pg_hba.conf"
      volumes:
        - name: pg-replica
          persistentVolumeClaim:
            claimName: pv-replica-claim
        - name: replica-config
          configMap:
            name: pg-replica
The arguments in Args will set the location of the .conf files to where I specified.
For further steps in Replication:
After the pod was up, I opened the pod's shell with kubectl exec.
I removed all the files from the data directory for step 3 (to copy the files from the master pod):
rm -rf /var/lib/postgresql/data/*
Then I used pg_basebackup to back up the data from the master node:
pg_basebackup -h <host IP> --port=<port number used> -D /var/lib/postgresql/data -P -U replica -R -X stream
And that's it. Now I managed to have my pg-replica pod replicating my master pod.
As mentioned in the comments, I really encourage you to use the Postgres Helm chart to set up your environment.
The way you solved the issue could work, but if the pod dies for some reason, all the work you have done will be lost and you'll need to reconfigure everything again.
Here you can find all the information about how to create a Postgres deployment with high availability and replication.
To install Helm you can follow this guide.

Permission issue with Persistent volume in postgres in Kubernetes

I know this question has been asked repeatedly but never fully answered. I have Postgres running as the root user in a container that uses a persistent volume, but there seems to be a permission issue with the mount in the container.
Container logs
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /data ... ok
initdb: could not create directory "/data/pg_xlog": Permission denied
initdb: removing contents of data directory "/data"
Persistent Volume and Persistent Volume Claim:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: store-persistent-volume
  labels:
    app: pgmaster
  namespace: pustakalaya
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/library/pgmaster-data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: store-persistent-volume-claim
  labels:
    app: postgres
  namespace: pustakalaya
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
and Pod file:
spec:
  selector:
    matchLabels:
      app: pgmaster
  replicas: 1
  template:
    metadata:
      labels:
        app: pgmaster
    spec:
#      initContainers:
#        - name: chmod-er
#          image: busybox:latest
#          command: ['sh', '-c', '/bin/chmod -R 777 /data && /bin/chown -R 999:999 /data']
      containers:
        - name: pgmaster
          image: becram/olen-elib-db-master:v5.3.0
          env:
            - name: POSTGRES_DB
              value: pustakalaya
            - name: POSTGRES_USER
              value: pustakalaya_user
            - name: POSTGRES_PASSWORD
              value: pustakalaya123
            - name: PGDATA
              value: /data
            - name: POSTGRES_BACKUP_DIR
              value: /backup
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /data:rw
              name: pgmaster-volume
#      restartPolicy: Always
      volumes:
        - name: pgmaster-volume
          persistentVolumeClaim:
            claimName: store-persistent-volume-claim
I was having the same issue with Minikube and solved it with a manual approach. Since the folder is created on the host machine running the node, I ssh-ed into the cluster. On Minikube you can do this with:
minikube ssh
Next, find the folder on the cluster's host machine and manually change its permissions:
chmod -R 777 /myfolder
chown -R 999:999 /myfolder
After this, I applied the manifest files again, and everything ran without a problem.
So to fix this, you need to change the permissions from your cluster machine, not from your container.

Mount host dir for Postgres on Minikube - permissions issue

I'm trying to setup PostgreSQL on Minikube with data path being my host folder mounted on Minikube (I'd like to keep my data on host).
With the Kubernetes objects created (below) I get a permission error, the same one as in How to solve permission trouble when running Postgresql from minikube?, although that question doesn't answer the issue: it advises mounting a directory from minikube's VM instead.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: storage
          env:
            - name: POSTGRES_PASSWORD
              value: user
            - name: POSTGRES_USER
              value: pass
            - name: POSTGRES_DB
              value: k8s
      volumes:
        - name: storage
          hostPath:
            path: /data/postgres
Is there any other way to do this, other than building my own image on top of postgres and playing with the permissions somehow? I'm on macOS with Minikube 0.30.0, and I see this with both the Virtualbox and hyperkit drivers for Minikube.
Look at these lines from the hostPath documentation:
the files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a hostPath volume
So either you have to run as root, or you have to change the file permissions of the /data/postgres directory.
However, you can run your Postgres container as root without rebuilding the Docker image.
You have to add the following to your container:
securityContext:
  runAsUser: 0
Your yaml should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: storage
          env:
            - name: POSTGRES_PASSWORD
              value: user
            - name: POSTGRES_USER
              value: pass
            - name: POSTGRES_DB
              value: k8s
          securityContext:
            runAsUser: 0
      volumes:
        - name: storage
          hostPath:
            path: /data/postgres

How to solve permission trouble when running Postgresql from minikube?

I am trying to run a Postgresql database using minikube with a persistent volume claim. These are the yaml specifications:
minikube-persistent-volume.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/Users/jonathan/data"
postgres-persistent-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-postgres
spec:
  accessModes: [ "ReadWriteMany" ]
  resources:
    requests:
      storage: 2Gi
postgres-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.5
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-disk
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_USER
              value: keycloak
            - name: POSTGRES_DATABASE
              value: keycloak
            - name: POSTGRES_PASSWORD
              value: key
            - name: POSTGRES_ROOT_PASSWORD
              value: masterkey
      volumes:
        - name: postgres-disk
          persistentVolumeClaim:
            claimName: pv-postgres
When I start this, I get the following in the logs from the deployment:
[...]
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
initdb: could not create directory "/var/lib/postgresql/data/pgdata/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
Why do I get this Permission denied error and what can I do about it?
Maybe you're having a write permission issue with Virtualbox mounting those host folders.
Instead, use /data/postgres as a path and things will work.
Minikube automatically persists the following directories so they will be preserved even if your VM is rebooted/recreated:
/data
/var/lib/localkube
/var/lib/docker
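So a hostPath PersistentVolume under one of those persisted directories could look like this (a sketch; the name and size are illustrative):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    # /data lives inside the Minikube VM and survives restarts
    path: /data/postgres
```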
Read these sections for more details:
https://github.com/kubernetes/minikube#persistent-volumes
https://github.com/kubernetes/minikube#mounted-host-folders