Minikube volume write permissions?

The big picture is: I'm trying to install WordPress with plugins in Kubernetes, for development in Minikube.
I want to use the official wp-cli Docker image to install the plugins, and I am trying to use a write-enabled persistent volume. In Minikube, I enable the mount into the Minikube cluster with the command:
minikube mount ./src/plugins:/data/plugins
Now, the PV definition looks like this:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-install-plugins-pv
  labels:
    app: wordpress
    env: dev
spec:
  capacity:
    storage: 5Gi
  storageClassName: ""
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/plugins
The PVC looks like this:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-install-plugins-pvc
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""
  volumeName: wordpress-install-plugins-pv
Both the creation and the binding are successful. The Job definition for plugin installation looks like this:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: install-plugins
  labels:
    env: dev
    app: wordpress
spec:
  template:
    spec:
      securityContext:
        fsGroup: 82 # www-data
      volumes:
        - name: plugins-volume
          persistentVolumeClaim:
            claimName: wordpress-install-plugins-pvc
        - name: config-volume
          configMap:
            name: wordpress-plugins
      containers:
        - name: wpcli
          image: wordpress:cli
          volumeMounts:
            - mountPath: "/configmap"
              name: config-volume
            - mountPath: "/var/www/html/wp-content/plugins"
              name: plugins-volume
          command: ["sh", "-c", "id; \
            touch /var/www/html/wp-content/plugins/test; \
            ls -al /var/www/html/wp-content; \
            wp core download --skip-content --force && \
            wp config create --dbhost=mysql \
              --dbname=$MYSQL_DATABASE \
              --dbuser=$MYSQL_USER \
              --dbpass=$MYSQL_PASSWORD && \
            cat /configmap/wp-plugins.txt | xargs -I % wp plugin install % --activate" ]
          env:
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: password
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: dbname
      restartPolicy: Never
  backoffLimit: 3
Again, the creation looks fine and all the steps look fine. The problem I have is that the permissions on the mounted volume apparently do not allow the current user to write to the folder. Here are the log contents:
uid=82(www-data) gid=82(www-data) groups=82(www-data)
touch: /var/www/html/wp-content/plugins/test: Permission denied
total 9
drwxr-xr-x 3 root root 4096 Mar 1 20:15 .
drwxrwxrwx 3 www-data www-data 4096 Mar 1 20:15 ..
drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins
Downloading WordPress 5.3.2 (en_US)...
md5 hash verified: 380d41ad22c97bd4fc08b19a4eb97403
Success: WordPress downloaded.
Success: Generated 'wp-config.php' file.
Installing WooCommerce (3.9.2)
Downloading installation package from https://downloads.wordpress.org/plugin/woocommerce.3.9.2.zip...
Unpacking the package...
Warning: Could not create directory.
Warning: The 'woocommerce' plugin could not be found.
Error: No plugins installed.
Am I doing something wrong? I tried different minikube mount options, but nothing really helped! Did anyone run into this issue with minikube?

This is a long-standing issue that prevents a non-root user from writing to a hostPath PersistentVolume mounted into a container in Minikube.
There are two common workarounds:
1. Simply use the root user.
2. Configure a Security Context for the Pod or Container using runAsUser, runAsGroup and fsGroup. You can find detailed information with an example in the link provided; a minimal sketch follows below.
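For example, a minimal sketch of workaround 2 applied to the Job's pod template from the question (the ID 82 is an assumption taken from the www-data user shown in the question's log; adjust it to your image's user):
spec:
  template:
    spec:
      securityContext:
        runAsUser: 82    # www-data in the wordpress:cli image
        runAsGroup: 82
        fsGroup: 82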
Please let me know if that helped.

I looked deeper into the way the volume mount works in minikube, and I think I came up with a solution.
TL;DR
minikube mount ./src/plugins:/data/mnt/plugins --uid 82 --gid 82
Explanation
There are two mounting moments:
minikube mounting the directory with minikube mount
the volume being mounted in Kubernetes
minikube mount sets up the directory in the VM with the UID and GID provided as parameters, with the default being docker user and group.
When the volume is being mounted in the Pod as a directory, it gets mounted with the exact same UID and GID as the host one! You can see this in my question:
drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins
UID=1000 and GID=1000 refer to the docker UID and GID in the Minikube host, which gave me the idea to try mounting with the UID and GID of the user in the Pod.
82 is the id of both the user and the group www-data in the wordpress:cli Docker image, and it works!
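To double-check that the mount now carries the expected ownership, you can look at the directory from inside the Minikube VM (a quick sanity check; the path is the mount target from the TL;DR command above):
minikube ssh -- ls -ln /data/mnt/plugins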
One last thing worth mentioning: the volume is mounted as a subdirectory in the Pod (wp-content in my case). It turned out that wp-cli actually needs access to that directory as well to create a temporary folder. What I ended up doing was adding an emptyDir volume, like this (the combined sketch follows below):
volumes:
  - name: content
    emptyDir: {}
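Putting it together, the relevant part of the Job's pod spec ends up looking roughly like this (a sketch reconstructed from the manifests in the question; treat the exact names and paths as illustrative):
volumes:
  - name: content
    emptyDir: {}
  - name: plugins-volume
    persistentVolumeClaim:
      claimName: wordpress-install-plugins-pvc
containers:
  - name: wpcli
    image: wordpress:cli
    volumeMounts:
      - mountPath: "/var/www/html/wp-content"
        name: content
      - mountPath: "/var/www/html/wp-content/plugins"
        name: plugins-volume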
I hope it helps somebody! For what it's worth, my version of minikube is 1.7.3, running on OS X with the VirtualBox driver.

Unfortunately, for Minikube today, option 2 (configuring a Security Context for a Pod or Container using runAsUser, runAsGroup and fsGroup) doesn't seem to be viable, because the hostPath provisioner used under the hood doesn't honor the Security Context. There seems to be a newer hostPath provisioner that preemptively sets new mounts to 777, but the one that came with my 1.25 Minikube is still returning these paths as 755.
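You can check what your provisioner produced by inspecting the directory from inside the Minikube node; the /tmp/hostpath-provisioner location is an assumption based on my setup and may differ in yours:
minikube ssh -- ls -ld /tmp/hostpath-provisioner/default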

Related

Kubernetes Pod permission denied on local volume

I have created a pod on Kubernetes and mounted a local volume, but when I try to execute the ls command on the locally mounted volume, I get a permission denied error. If I disable SELinux then everything works fine. I am unable to work out how to make it work with SELinux enabled.
Following is the output of permission denied:
kubectl apply -f testpod.yaml
[root@olcne-operator-ol8 opc]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/testpod 1/1 Running 0 5s
# kubectl exec -i -t testpod /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@testpod /]# cd /u01
[root@testpod u01]# ls
ls: cannot open directory '.': Permission denied
[root@testpod u01]#
Following is the testpod.yaml
cat testpod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: testpod
  labels:
    name: testpod
spec:
  hostname: testpod
  restartPolicy: Never
  volumes:
    - name: swvol
      hostPath:
        path: /u01
  containers:
    - name: testpod
      image: oraclelinux:8
      imagePullPolicy: Always
      securityContext:
        privileged: false
      command: [/usr/sbin/init]
      volumeMounts:
        - mountPath: "/u01"
          name: swvol
SELinux configuration on the worker node:
# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
---
# semanage fcontext -l | grep kub | grep container_file
/var/lib/kubelet/pods(/.*)? all files system_u:object_r:container_file_t:s0
/var/lib/kubernetes/pods(/.*)? all files system_u:object_r:container_file_t:s0
Machine OS Details
rpm -qa | grep kube
kubectl-1.20.6-2.el8.x86_64
kubernetes-cni-0.8.1-1.el8.x86_64
kubeadm-1.20.6-2.el8.x86_64
kubelet-1.20.6-2.el8.x86_64
kubernetes-cni-plugins-0.9.1-1.el8.x86_64
----
cat /etc/oracle-release
Oracle Linux Server release 8.4
---
uname -r
5.4.17-2102.203.6.el8uek.x86_64
This is a community wiki answer posted for better visibility. Feel free to expand it.
SELinux labels can be assigned with seLinuxOptions:
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  labels:
    name: testpod
spec:
  hostname: testpod
  restartPolicy: Never
  volumes:
    - name: swvol
      hostPath:
        path: /u01
  containers:
    - name: testpod
      image: oraclelinux:8
      imagePullPolicy: Always
      command: [/usr/sbin/init]
      volumeMounts:
        - mountPath: "/u01"
          name: swvol
      securityContext:
        seLinuxOptions:
          level: "s0:c123,c456"
From the official documentation:
seLinuxOptions: Volumes that support SELinux labeling are relabeled
to be accessible by the label specified under seLinuxOptions.
Usually you only need to set the level section. This sets the
Multi-Category Security (MCS) label given to all Containers in the Pod
as well as the Volumes.
Based on the information from the original post on stackoverflow:
You can only specify the level portion of an SELinux label when relabeling a path destination pointed to by a hostPath volume. This
is automatically done so by the seLinuxOptions.level attribute
specified in your securityContext.
However, attributes such as seLinuxOptions.type currently have no
effect on volume relabeling. As of this writing, this is still an
open issue within Kubernetes.
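As a host-side alternative (not part of the answer above, just a common SELinux approach worth considering), the directory backing the hostPath can be relabeled with the container_file_t type that the container-selinux policy uses, and then verified:
# run on the worker node; /u01 is the path from the question
chcon -R -t container_file_t /u01
ls -Zd /u01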

What is the difference between subPath and mountPath in Kubernetes

I am trying to add files in volumeMounts to .dockerignore and trying to understand the difference between subPath and mountPath. Reading the official documentation didn't make it clear to me.
I should add that, from what I read, mountPath is the directory in the pod where the volume will be mounted.
From the official documentation: "subPath: The volumeMounts.subPath property specifies a sub-path inside the referenced volume instead of its root." https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath (this part isn't clear to me)
- mountPath: /root/test.pem
  name: test-private-key
  subPath: test.testing.com.key
In this example, should I include both test.pem and test.testing.com.key in .dockerignore?
mountPath shows where the referenced volume should be mounted in the container. For instance, if you mount a volume to mountPath: /a/b/c, the volume will be available to the container under the directory /a/b/c.
Mounting a volume will make all of the volume available under mountPath. If you need to mount only part of the volume, such as a single file in a volume, you use subPath to specify the part that must be mounted. For instance, mountPath: /a/b/c with subPath: d will make whatever d is in the mounted volume available under the directory /a/b/c.
The difference between mountPath & subPath is that subPath is an addition to mountPath and it exists to solve a problem.
Look at my comments inside the example Pod manifest; I explain the problem and how subPath solves it.
To further understand the difference look at the "under the hood" section to see how kubernetes treats these two properties.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
    - name: vol1
      emptyDir: {}
  containers:
    - image: nginx
      name: mypod-container
      volumeMounts:
        # In our example let's say we want to mount "vol1" on path "/a/b/c",
        # so we declare this:
        - name: vol1
          mountPath: /a/b/c
        # But what if we also want to use a child folder "d"?
        # If we try to use "/a/b/c/d" then we won't have access to /a/b/c,
        # because the mountPath /a/b/c is overwritten by mountPath /a/b/c/d.
        # So if we try the following mountPath we lose our above declaration:
        # - name: vol1
        #   mountPath: /a/b/c/d   # This overwrites the above mount to /a/b/c
        # The solution:
        # Using subPath we enjoy both worlds.
        # (1) We don't overwrite the info in our volume at path /a/b/c.
        # (2) We have a separate path /a/b/c/d that we can write to
        #     without affecting the content at path /a/b/c.
        # Here is the correct declaration:
        - name: vol1
          mountPath: /a/b/c/d
          subPath: d
Under the hood of mountPath & subPath
Let's look under the hood of Kubernetes to see how it manages the mountPath and subPath properties differently:
1. How kubernetes manages mountPath:
When a mountPath is declared, kubernetes creates a directory with the name of the volume in the following path:
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~empty-dir/<volume name>
So in our above manifest example this is what was created ("vol1" is the volume name):
/var/lib/kubelet/pods/301d...a71c/volumes/kubernetes.io~empty-dir/vol1
Now you can see that if we defined the "/a/b/c/d" mountPath we would have triggered the creation of another "vol1" entry in the same directory, thus overwriting the original.
2. How kubernetes manages subPath:
When a subPath is declared, kubernetes creates an entry with the SAME volume name, but in a different path:
/var/lib/kubelet/pods/<pod-id>/volume-subpaths/<volume name>
So in our above manifest example this is what was created ("vol1" is the volume name):
/var/lib/kubelet/pods/3eaa...6d1/volume-subpaths/vol1
Conclusion:
Now you see that subPath enables us to define an additional volume path without colliding with the root volume path.
It does this by creating entries with the same volume name but in different directories under the kubelet's directory.
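If you want to verify this yourself, you can list those kubelet directories on the node that runs the pod (the pod UID is a placeholder to fill in; the paths are the ones quoted above):
ls /var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~empty-dir/
ls /var/lib/kubelet/pods/<pod-id>/volume-subpaths/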
subPath is used to select a specific path within the volume from which the container's volume should be mounted. It defaults to "" (the volume's root).
This is described in the Kubernetes API reference for volumeMounts. So what that means is that you can still mount a volume on the path mentioned in your mountPath, but instead of mounting it from the volume root, you can specify a separate subpath within the volume to be mounted under the volumeMount directory in your container.
To clarify what this means, I created a simple volume on my minikube node.
docker@minikube:/mnt/data$ ls -lrth
total 8.0K
drwxr-xr-x 2 root root 4.0K Dec 30 16:23 path1
drwxr-xr-x 2 root root 4.0K Dec 30 16:23 path2
docker@minikube:/mnt/data$
docker@minikube:/mnt/data$ pwd
/mnt/data
As you can see, in this case I have a directory and I have created two subdirectories inside this volume. Under each of the path1 and path2 folders I have placed a different index file.
docker@minikube:/mnt/data$ pwd
/mnt/data
docker@minikube:/mnt/data$ cat path1/index.html
This is index file from path1
docker@minikube:/mnt/data$
docker@minikube:/mnt/data$ cat path2/index.html
This is index file from path2
docker@minikube:/mnt/data$
Now I created a sample PV using this volume on my minikube node, using the manifest below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
After this, I created a sample PVC using the manifest below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Mi
Now if I create an nginx pod and use this PVC for my volume, then depending on the subPath config that I use in my pod spec, I will have the volume mounted from that specific subfolder.
I.e., if I use the manifest below for my nginx pod
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        # a mount for site-data
        - name: config
          mountPath: /usr/share/nginx/html
          subPath: path1
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: task-pv-claim
and do a curl on the pod IP, I get the index.html served from path1.
Gauravs-MBP:K8s alieninvader$ kubectl exec -it mycurlpod -- /bin/sh
/ $ curl 172.17.0.3
This is index file from path1
And if I change my pod manifest to use path2 as the subPath, so that the new manifest becomes this
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        # a mount for site-data
        - name: config
          mountPath: /usr/share/nginx/html
          subPath: path2
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: task-pv-claim
Then, as expected, the curl to the nginx pod produces the output below, serving the file from path2.
Gauravs-MBP:K8s alieninvader$ kubectl exec -it mycurlpod -- /bin/sh
/ $ curl 172.17.0.3
This is index file from path2
/ $

How to pass user credentials to (user-restricted) mounted volume inside Kubernetes Pod?

I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod.
The NFS folder /mount/protected has user access restrictions, i.e. only certain users can access this folder.
This is my Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
      secret:
        secretName: my-secret
  containers:
    - name: my-container
      image: <...>
      command: ["/bin/sh"]
      args: ["-c", "python /my-volume/test.py"]
      volumeMounts:
        - name: my-volume
          mountPath: /my-volume
When applying it, I get the following error:
The Pod "my-pod" is invalid:
* spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type
* spec.containers[0].volumeMounts[0].name: Not found: "my-volume"
I created my-secret according to the following guide:
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret
So basically:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: bXktYXBw
  password: PHJlZGFjdGVkPg==
But when I mount the folder /mount/protected with:
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
I get a permission denied error python: can't open file '/my-volume/test.py': [Errno 13] Permission denied when running a Pod that mounts this volume path.
My question is how can I tell my Pod that it should use specific user credentials to gain access to this mounted folder?
You're trying to tell Kubernetes that my-volume should get its content from both a host path and a Secret, and it can only have one of those.
You don't need to manually specify a host path. Kubernetes will figure out someplace appropriate to put the Secret content and it will still be visible on the mountPath you specify within the container. (Specifying hostPath: at all is usually wrong, unless you can guarantee that the path will exist with the content you expect on every node in the cluster.)
So change:
volumes:
  - name: my-volume
    secret:
      secretName: my-secret
    # but no hostPath
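A minimal sketch of the corrected Pod, reusing the names from the question (the image and script path are placeholders carried over from the original post):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-volume
      secret:
        secretName: my-secret   # no hostPath here
  containers:
    - name: my-container
      image: <...>
      command: ["/bin/sh"]
      args: ["-c", "python /my-volume/test.py"]
      volumeMounts:
        - name: my-volume
          mountPath: /my-volume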
I eventually figured out how to pass user credentials to a mounted directory within a Pod by using CIFS Flexvolume Plugin for Kubernetes (https://github.com/fstab/cifs).
With this plugin, every user can pass their own credentials to the Pod.
The user only needs to create a Kubernetes secret (cifs-secret) storing the username/password, and use this secret for the mount within the Pod.
The volume is then mounted as follows:
(...)
volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//server/share"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
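For completeness, the referenced cifs-secret holds the base64-encoded credentials; the exact key names and the secret type below follow the plugin's README as far as I recall, so treat them as an assumption to verify against that repository:
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
type: fstab/cifs              # type as documented by the plugin (assumed)
data:
  username: <base64-encoded username>
  password: <base64-encoded password>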

Write permissions on volume mount with OpenShift

Using OpenShift 3.11, I've mounted an nfs persistent volume, but the application cannot copy into the new volume, saying:
oc logs my-project-77858bc694-6kbm6
cp: cannot create regular file '/config/dbdata/resdb.lock.db': Permission denied
...
I've tried to change the ownership of the folder by doing a chown in an initContainer, but it tells me the operation is not permitted.
initContainers:
  - name: chowner
    image: alpine:latest
    command: ["/bin/sh", "-c"]
    args:
      - ls -alt /config/dbdata; chown 1001:1001 /config/dbdata;
    volumeMounts:
      - name: my-volume
        mountPath: /config/dbdata/
oc logs my-project-77858bc694-6kbm6 -c chowner
total 12
drwxr-xr-x 3 root root 4096 Nov 7 03:06 ..
drwxr-xr-x 2 99 99 4096 Nov 7 02:26 .
chown: /config/dbdata: Operation not permitted
I expect to be able to write to the mounted volume.
You can give your Pods permission to write into a volume by using fsGroup: GROUP_ID in a Security Context. fsGroup makes your volumes writable by GROUP_ID and makes all processes inside your container part of that group.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: POD_NAME
spec:
  securityContext:
    fsGroup: GROUP_ID
  ...
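Applied to the question, the init container's chown 1001:1001 suggests the application expects group 1001, so a reasonable (assumed) value would be:
spec:
  securityContext:
    fsGroup: 1001   # assumed from the chown 1001:1001 attempt in the question
  containers:
    - name: my-project
      image: <your application image>
      volumeMounts:
        - name: my-volume
          mountPath: /config/dbdata/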

The server must be started by the user that owns the data directory

I am trying to get some persistant storage for a docker instance of PostgreSQL running on Kubernetes. However, the pod fails with
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
This is the NFS configuration:
% exportfs -v
/srv/nfs/postgresql/postgres-registry
kubehost*.example.com(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)
$ ls -ldn /srv/nfs/postgresql/postgres-registry
drwxrwxrwx. 3 999 999 4096 Jul 24 15:02 /srv/nfs/postgresql/postgres-registry
$ ls -ln /srv/nfs/postgresql/postgres-registry
total 4
drwx------. 2 999 999 4096 Jul 25 08:36 pgdata
The full log from the pod:
2019-07-25T07:32:50.617532000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T07:32:50.618113000Z This user must also own the server process.
2019-07-25T07:32:50.619048000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T07:32:50.619496000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T07:32:50.619943000Z The default text search configuration will be set to "english".
2019-07-25T07:32:50.620826000Z Data page checksums are disabled.
2019-07-25T07:32:50.621697000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T07:32:50.647445000Z creating subdirectories ... ok
2019-07-25T07:32:50.765065000Z selecting default max_connections ... 20
2019-07-25T07:32:51.035710000Z selecting default shared_buffers ... 400kB
2019-07-25T07:32:51.062039000Z selecting default timezone ... Etc/UTC
2019-07-25T07:32:51.062828000Z selecting dynamic shared memory implementation ... posix
2019-07-25T07:32:51.218995000Z creating configuration files ... ok
2019-07-25T07:32:51.252788000Z 2019-07-25 07:32:51.251 UTC [79] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T07:32:51.253339000Z 2019-07-25 07:32:51.251 UTC [79] HINT: The server must be started by the user that owns the data directory.
2019-07-25T07:32:51.262238000Z child process exited with exit code 1
2019-07-25T07:32:51.263194000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T07:32:51.380205000Z running bootstrap script ...
The deployment has the following in:
securityContext:
  runAsUser: 999
  supplementalGroups: [999,1000]
  fsGroup: 999
What am I doing wrong?
Edit: Added storage.yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.3.7
    path: /srv/nfs/postgresql/postgres-registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Edit: And the full deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              subPath: "pgdata"
              name: postgredb-registry-persistent-storage
      volumes:
        - name: postgredb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
Even more debugging, adding:
command: ["/bin/bash", "-c"]
args: ["id -u; ls -ldn /var/lib/postgresql/data"]
Which returned:
999
drwx------. 2 99 99 4096 Jul 25 09:11 /var/lib/postgresql/data
Clearly, the UID/GID are wrong. Why?
Even with the workaround suggested by Jakub Bujny, I get this:
2019-07-25T09:32:08.734807000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T09:32:08.735335000Z This user must also own the server process.
2019-07-25T09:32:08.736976000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T09:32:08.737416000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T09:32:08.737882000Z The default text search configuration will be set to "english".
2019-07-25T09:32:08.738754000Z Data page checksums are disabled.
2019-07-25T09:32:08.739648000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T09:32:08.766606000Z creating subdirectories ... ok
2019-07-25T09:32:08.852381000Z selecting default max_connections ... 20
2019-07-25T09:32:09.119031000Z selecting default shared_buffers ... 400kB
2019-07-25T09:32:09.145069000Z selecting default timezone ... Etc/UTC
2019-07-25T09:32:09.145730000Z selecting dynamic shared memory implementation ... posix
2019-07-25T09:32:09.168161000Z creating configuration files ... ok
2019-07-25T09:32:09.200134000Z 2019-07-25 09:32:09.199 UTC [70] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T09:32:09.200715000Z 2019-07-25 09:32:09.199 UTC [70] HINT: The server must be started by the user that owns the data directory.
2019-07-25T09:32:09.208849000Z child process exited with exit code 1
2019-07-25T09:32:09.209316000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T09:32:09.274741000Z running bootstrap script ... 999
2019-07-25T09:32:09.278124000Z drwx------. 2 99 99 4096 Jul 25 09:32 /var/lib/postgresql/data
Using your setup and ensuring the NFS mount is owned by 999:999, it worked just fine.
You're also missing an 's' in your volume name: postgredb-registry-persistent-storage
And with your subPath: "pgdata", do you need to change $PGDATA? I didn't include the subPath for this; a sketch of that option follows.
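If the intent of subPath: "pgdata" is just to keep the data in a subdirectory of the export, a common alternative with the official postgres image is to drop the subPath and point PGDATA at a subdirectory of the mount instead. This is a sketch of that option under that assumption, not part of my test below:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata   # postgres initializes into this subdirectory
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: postgresdb-registry-persistent-storage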
$ sudo mount 172.29.0.218:/test/nfs ./nfs
$ sudo su -c "ls -al ./nfs" postgres
total 8
drwx------ 2 postgres postgres 4096 Jul 25 14:44 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
$ kubectl apply -f nfspv.yaml
persistentvolume/postgres-registry-pv-volume created
persistentvolumeclaim/postgres-registry-pv-claim created
$ kubectl apply -f postgres.yaml
deployment.extensions/postgres-registry created
$ sudo su -c "ls -al ./nfs" postgres
total 124
drwx------ 19 postgres postgres 4096 Jul 25 14:46 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
drwx------ 3 postgres postgres 4096 Jul 25 14:46 base
drwx------ 2 postgres postgres 4096 Jul 25 14:46 global
drwx------ 2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts
. . .
I noticed using nfs: directly in the persistent volume took significantly longer to initialize the database, whereas using hostPath: to the mounted nfs volume behaved normally.
So after a few minutes:
$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3
2019-07-25 21:50:57.181 UTC [30] LOG: database system is ready to accept connections
done
server started
$ kubectl exec -it postgres-registry-675869694-9fp52 psql
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.
postgres=#
Checking the uid/gid
$ kubectl exec -it postgres-registry-675869694-9fp52 bash
postgres@postgres-registry-675869694-9fp52:/$ whoami && id -u && id -g
postgres
999
999
nfspv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.29.0.218
    path: /test/nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
postgres.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb-registry-persistent-storage
      volumes:
        - name: postgresdb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
I cannot explain why those two IDs are different, but as a workaround I would try to override the postgres entrypoint with:
command: ["/bin/bash", "-c"]
args: ["chown -R 999:999 /var/lib/postgresql/data && ./docker-entrypoint.sh postgres"]
This type of error is quite common when you link an NTFS directory into your Docker container. NTFS directories don't support ext3 file and directory access control. The only way to make it work is to link a directory from an ext3 drive into your container.
I got a bit desperate when I played around with Apache/PHP containers, linking the www folder. After I linked files residing on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on YouTube; it may help in understanding this problem: https://www.youtube.com/watch?v=eS9O05TTFjM