How to unmount partition in kubernetes

I have one pod and one partition in it:
kubectl exec pod-t -- lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 298.1G 0 disk
`-sda10 8:10 28G 0 part /etc/hosts
sr0 11:0 1 1024M 0 rom
rbd5 252:80 0 15G 0 disk /usr/share/nginx/html
When I try to unmount it, I see must be superuser to unmount:
# kubectl exec pod-t -- umount /dev/rbd5
umount: /usr/share/nginx/html: must be superuser to unmount
command terminated with exit code 32
The pod was created by this template:
apiVersion: v1
kind: Pod
metadata:
  name: pod-t
  namespace: default
  labels:
spec:
  containers:
  - name: nginxqw
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: content-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: content-data
    persistentVolumeClaim:
      claimName: pvc-t
I think the pod does not have root privileges.
How can I solve it?

There is a privileged flag on the SecurityContext of the container spec.
I use this template:
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world-container
    # The container definition
    # ...
    securityContext:
      privileged: true
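Applied to the Pod from the question, that would look roughly like the sketch below (the container name, image, and volume are copied from the template above; it assumes your cluster's security policy allows privileged containers):
apiVersion: v1
kind: Pod
metadata:
  name: pod-t
  namespace: default
spec:
  containers:
  - name: nginxqw
    image: nginx:latest
    securityContext:
      privileged: true   # gives the container the capabilities needed to run mount/umount
    ports:
    - containerPort: 80
    volumeMounts:
    - name: content-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: content-data
    persistentVolumeClaim:
      claimName: pvc-t
Keep in mind that a privileged container gets broad access to the host, so this is usually only appropriate for debugging.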

Related

Postgresql data on k8s cannot be made persistent

I am building PostgreSQL on Kubernetes, but I cannot persist the Postgres data.
The host is a GCP instance running Ubuntu 20.04. The disks are GCP volumes, which are mounted after being attached to the instance.
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 9982728 4252592 5713752 43% /
devtmpfs 4038244 0 4038244 0% /dev
tmpfs 4059524 0 4059524 0% /dev/shm
tmpfs 811908 1568 810340 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 4059524 0 4059524 0% /sys/fs/cgroup
/dev/loop0 50304 50304 0 100% /snap/core18/2671
/dev/loop2 217856 217856 0 100% /snap/google-cloud-cli/98
/dev/loop3 94208 94208 0 100% /snap/lxd/24065
/dev/loop1 60544 60544 0 100% /snap/core20/1782
/dev/loop4 44032 44032 0 100% /snap/snapd/17885
/dev/nvme0n1p15 99801 6004 93797 7% /boot/efi
tmpfs 811904 0 811904 0% /run/user/1001
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/49ebcf7b449f4b13d52aab6f52c28f139c551f83070f6f21207dbf52315dc264/shm
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/e46f0c6b19e5ccff9bb51fa3f7669a9a6a2e7cfccf54681e316a9cd58183dce4/shm
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/495c80e87521bfdda55827df64cdb84cddad149fb502ac7ee12f3607badd4649/shm
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/443e7b254d02c88873c59edc6d5b0a71e80da382ea81105e8b312ad4122d694a/shm
/dev/nvme0n3 10218772 12 9678088 1% /var/lib/postgresql ※ disk for postgres
/dev/nvme0n2 3021608 24 2847916 1% /var/lib/pgadmin ※ disk for pgadmin
shm 65536 1052 64484 2% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/bd83982e91b6a3bce7853416d72878d5473174e884c15578c47a8d8952f4e718/shm
Also, the pod volume is allocated using persistent volume and persistent volume claim.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume   # Sets PV's name
  labels:
    app: postgres
spec:
  # storageClassName: local-storage
  capacity:
    storage: 10Gi   # Sets PV Volume
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/var/lib/postgresql"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pgadmin-pv-volume   # Sets PV's name
  labels:
    app: pgadmin
spec:
  # storageClassName: local-storage
  capacity:
    storage: 3Gi   # Sets PV Volume
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/var/lib/pgadmin"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - image: docker.io/yamamuratkr/postgres
        name: postgresql
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres
              key: database_user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres
              key: database_password
        - name: PGDATA
          value: "/var/lib/postgresql/data/pgdata"
        ports:
        - containerPort: 5432
          name: postgresql
        volumeMounts:
        - name: postgredb
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim
However, when the pod is deleted, the postgres data also disappears with it, and I can confirm that it is not persistent.
If you know the cause of this problem, please let me know.
Thank you in advance.
None of the following worked:
Using hostPath for pod volumes
Using the default PGDATA
The source of your problem is here:
volumeMounts:
- name: postgredb
  mountPath: /var/lib/postgresql
The postgres image itself mounts a volume on /var/lib/postgresql/data. We can see that if we inspect the image:
$ docker image inspect docker.io/postgres:14 | jq '.[0].ContainerConfig.Volumes'
{
"/var/lib/postgresql/data": {}
}
Your mount on /var/lib/postgresql is effectively a no-op. An ephemeral volume is created on /var/lib/postgresql/data each time the container starts, and since that's the default PGDATA location, your data is effectively discarded when the container exits.
I've put together an example in my local environment to demonstrate this behavior. I've made a few minor changes from your example that shouldn't operationally impact anything.
I've created the following Secret with the postgres credentials; by naming the keys like this we can use a single envFrom block instead of multiple env entries (see the Deployment for details):
apiVersion: v1
kind: Secret
metadata:
  name: postgres-env
type: Opaque
stringData:
  POSTGRES_USER: example
  POSTGRES_PASSWORD: secret
And in line with my comment I'm injecting a file into /docker-entrypoint-initdb.d from this ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  initdb.sql: |
    CREATE TABLE people (
        id SERIAL,
        name VARCHAR(40),
        favorite_color VARCHAR(10)
    );
    INSERT INTO people (name, favorite_color) VALUES ('bob', 'red');
    INSERT INTO people (name, favorite_color) VALUES ('alice', 'blue');
    INSERT INTO people (name, favorite_color) VALUES ('mallory', 'purple');
I'm using this PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And finally I'm tying it all together in this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgresql
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: postgres-env
        image: docker.io/postgres:14
        name: postgresql
        ports:
        - containerPort: 5432
          name: postgresql
        volumeMounts:
        - mountPath: /docker-entrypoint-initdb.d
          name: postgres-config
        - mountPath: /var/lib/postgresql
          name: postgres-data
      volumes:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: postgres-pv-claim
      - configMap:
          name: postgres-config
        name: postgres-config
The significant changes here are:
I'm using a single envFrom block to set environment variables from the keys in the postgres-env secret.
I'm using the upstream docker.io/postgres:14 image rather than building my own custom image.
I'm injecting the contents of /docker-entrypoint-initdb.d from the postgres-config ConfigMap.
Note that this deployment is using the same mountPath as in your example.
If I bring up this environment, I can see that the database initialization script was executed correctly. The people table exists and has the expected data:
$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
(3 rows)
Let's make a change to the database and see what happens when we restart the pod. First, we add a new row to the table and view the updated table:
$ kubectl exec -it deploy/postgresql -- psql -U example -c "insert into people (name, favorite_color) values ('carol', 'green')"
INSERT 0 1
$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
4 | carol | green
(4 rows)
Now we restart the database pod:
$ kubectl rollout restart deployment/postgresql
And finally check if our changes survived the restart:
$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
(3 rows)
As expected, they did not! Let's change the mountPath in the Deployment so that it looks like this instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgresql
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: postgres-env
        image: docker.io/postgres:14
        name: postgresql
        ports:
        - containerPort: 5432
          name: postgresql
        volumeMounts:
        - mountPath: /docker-entrypoint-initdb.d
          name: postgres-config
        - mountPath: /var/lib/postgresql/data
          name: postgres-data
      volumes:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: postgres-pv-claim
      - configMap:
          name: postgres-config
        name: postgres-config
Using this Deployment, with no other changes, we can re-run the previous test and see that our data persists as desired:
$ kubectl exec -it deploy/postgresql -- psql -U example -c "insert into people (name, favorite_color) values ('carol', 'green')"
INSERT 0 1
$ kubectl rollout restart deployment/postgresql
deployment.apps/postgresql restarted
$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
4 | carol | green
(4 rows)
An alternative solution would be to mount your volume in a completely different location and then set PGDATA appropriately. E.g.,
...
env:
- name: PGDATA
  value: /data
...
volumeMounts:
- name: postgres-data
  mountPath: /data
...

unable to understand mounting postgres data path onto minikube kubernetes deployment with permission errors

I'm getting started with Kubernetes, and I want to create a simple app with a single webserver and Postgres database. The problem I'm running into is that the Postgres deployment is giving me permission errors. The following are discussions around this:
https://github.com/docker-library/postgres/issues/116
https://github.com/docker-library/postgres/issues/103
https://github.com/docker-library/postgres/issues/696
Can't get either Postgres permissions or PVC working in AKS
Kubernetes - Pod which encapsulates DB is crashing
Mount local directory into pod in minikube
https://serverfault.com/questions/981459/minikube-using-a-storageclass-to-provision-data-outside-of-tmp
EDIT
spec:
OSX - 10.15.4
minikube - v1.9.2
kubernetes - v1.18.2
minikube setup
minikube start --driver=virtualbox --cpus=2 --memory=5120 --kubernetes-version=v1.18.2 --container-runtime=docker --mount=true --mount-string=/Users/holmes/kubernetes/pgdata:/data/pgdata
The permission error: chmod: changing permissions of '/var/lib/postgresql/data': Operation not permitted
I am trying to mount a local OS directory into minikube to be used with the postgres deployment/pod/container volume mount.
After I run the above setup I ssh into minikube (minikube ssh) and check the permissions
# minikube: /
drwxr-xr-x 3 root root 4096 May 13 19:31 data
# minikube: /data
drwx------ 1 docker docker 96 May 13 19:27 pgdata
When I run the script below, the chmod permission error surfaces. If I change the mount string to --mount-string=/Users/holmes/kubernetes/pgdata:/data (leaving out /pgdata) and then minikube ssh in to create the pgdata directory:
mkdir -p /data/pgdata
chmod 777 /data/pgdata
I get a different set of permissions before deployment
# minikube: /
drwx------ 1 docker docker 96 May 13 20:10 data
# minikube: /data
drwxrwxrwx 1 docker docker 64 May 13 20:25 pgdata
and after deployment
# minikube: /
drwx------ 1 docker docker 128 May 13 20:25 data
# minikube: /data
drwx------ 1 docker docker 64 May 13 20:25 pgdata
Not sure why this changes, and the chmod permission error persists. The reference links above bounce around different methods on different machines and different VMs, which I don't understand, nor can I get any of them to work. Can someone walk me through getting this to work? I'm super confused after going through all the above discussions.
postgres.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: data-block
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: data-block
  labels:
    type: starter
data:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: docker
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  namespace: data-block
  labels:
    app: postgres
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pgdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: data-block
  labels:
    app: postgres
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: data-block
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:12.2
        ports:
        - containerPort: 5432
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - name: postgres-vol
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-vol
        persistentVolumeClaim:
          claimName: postgres-pv-claim
UPDATE
I went ahead and updated the deployment script to a simple pod. The goal is to map the Postgres /var/lib/postgresql/data to my local directory /Users/<my-path>/database/data to persist the data.
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  namespace: data-block
  labels:
    name: postgres-pod
spec:
  containers:
  - name: postgres
    image: postgres:12.3
    imagePullPolicy: IfNotPresent
    ports:
    - name: postgres-port
      containerPort: 5432
    envFrom:
    - configMapRef:
        name: postgres-env-config
    - secretRef:
        name: postgres-secret
    volumeMounts:
    - name: postgres-vol
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: postgres-vol
    hostPath:
      path: /Users/<my-path>/database/data
  restartPolicy: Never
The error: initdb: error: could not access directory "/var/lib/postgresql/data": Permission denied
How to go about mounting the local file directory?
You are declaring the PGDATA field, which may be the cause of the issue. I faced the same error; it happens because there is already a lost+found folder in that directory, but the container wants it to be an empty dir. Setting the subPath field solves this issue. Try it; you should not need the PGDATA field at all. Omit it from your ConfigMap and add a subPath pointing to some folder. Please go through the following manifests:
https://github.com/mendix/kubernetes-howto/blob/master/postgres-deployment.yaml
https://www.bmc.com/blogs/kubernetes-postgresql/
It's usually a StatefulSet, not a Deployment, that you should go with when it comes to deploying databases (a minimal sketch follows the snippet below).
- name: postgredb
  mountPath: /var/lib/postgresql/data
  # Setting subPath will fix your issue. It can be pgdata, postgres,
  # or any other folder name of your choice.
  subPath: postgres
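Here is a minimal sketch of that StatefulSet approach, reusing the postgres-config ConfigMap from the question; the serviceName, image tag, and 1Gi storage request are illustrative assumptions, not taken from the question:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres           # headless Service name (assumed to exist)
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:12.2
        envFrom:
        - configMapRef:
            name: postgres-config
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgredb
          mountPath: /var/lib/postgresql/data
          subPath: postgres       # avoids the lost+found / non-empty directory problem
  volumeClaimTemplates:           # one PVC is created per replica
  - metadata:
      name: postgredb
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi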

Pod access PVC subdirectory that already existed

I have a pod created by a Deployment that uses the git-sync image and mounts the volume from a PVC.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: config
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      demo: config
  template:
    metadata:
      labels:
        demo: config
    spec:
      containers:
      - args:
        - '-ssh'
        - '-repo=git@domain.com:org/repo.git'
        - '-dest=conf'
        - '-branch=master'
        - '-depth=1'
        image: 'k8s.gcr.io/git-sync:v3.1.1'
        name: git-sync
        securityContext:
          runAsUser: 65533
        volumeMounts:
        - mountPath: /etc/git-secret
          name: git-secret
          readOnly: true
        - mountPath: /config
          name: cus-config
      securityContext:
        fsGroup: 65533
      volumes:
      - name: git-secret
        secret:
          defaultMode: 256
          secretName: git-creds
      - name: cus-config
        persistentVolumeClaim:
          claimName: cus-config
After the deployment, I checked the pod and got a file path like this.
/tmp/git/conf/subdirA/some.Files
Then I created a second pod from another Deployment, and I want to mount /tmp/git/conf/subdirA in the second pod. This is my second deployment script:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-mount-config
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: 'nginx:1.7.9'
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /root/conf
          name: config
          subPath: tmp/git/conf/subdirA
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: cus-config
This is my PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: conf
  name: config
  namespace: test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: conf
  namespace: test
provisioner: spdbyz
reclaimPolicy: Retain
I have already read about subPath on a PVC, but every time I check the folder /root/conf on the second pod, there is nothing inside it.
Any idea how to mount a specific PVC subdirectory in another pod?
Very basic example of how to share file content between PODs using a PV/PVC.
First, create a PersistentVolume; refer to the YAML example below with a hostPath configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-1
  labels:
    pv: my-pv-1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /var/log/mypath
$ kubectl create -f pv.yaml
persistentvolume/my-pv-1 created
Second, create a PersistentVolumeClaim using the YAML example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-claim-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv: my-pv-1
$ kubectl create -f pvc.yaml
persistentvolumeclaim/my-pvc-claim-1 created
Verify that the PV and PVC STATUS is set to Bound:
$ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv-1 1Gi RWX Retain Bound default/my-pvc-claim-1 62s
$ kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc-claim-1 Bound my-pv-1 1Gi RWX 58
Third, consume the PVC in the required PODs. Refer to the example YAML below, where the volume is mounted in two pods, nginx-1 and nginx-2.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-1
spec:
  containers:
  - image: nginx
    name: nginx-1
    volumeMounts:
    - mountPath: /var/log/mypath
      name: test-vol
      subPath: TestSubPath
  volumes:
  - name: test-vol
    persistentVolumeClaim:
      claimName: my-pvc-claim-1
$ kubectl create -f nginx-1.yaml
pod/nginx-1 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-1 1/1 Running 0 35s 10.244.3.53 k8s-node-3 <none> <none>
Create the second POD and consume the same PVC:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-2
spec:
  containers:
  - image: nginx
    name: nginx-2
    volumeMounts:
    - mountPath: /var/log/mypath
      name: test-vol
      subPath: TestSubPath
  volumes:
  - name: test-vol
    persistentVolumeClaim:
      claimName: my-pvc-claim-1
$ kubectl create -f nginx-2.yaml
pod/nginx-2 created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-1 1/1 Running 0 55s 10.244.3.53 k8s-node-3 <none> <none>
nginx-2 1/1 Running 0 35s 10.244.3.54 k8s-node-3 <none> <none>
Test by connecting to container 1 and writing to a file on the mount path.
root@nginx-1:/# df -kh
Filesystem Size Used Avail Use% Mounted on
overlay 12G 7.3G 4.4G 63% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda1 12G 7.3G 4.4G 63% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 3.9G 12K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
root@nginx-1:/# cd /var/log/mypath/
root@nginx-1:/var/log/mypath# date >> date.txt
root@nginx-1:/var/log/mypath# date >> date.txt
root@nginx-1:/var/log/mypath# cat date.txt
Thu Jan 30 10:44:42 UTC 2020
Thu Jan 30 10:44:43 UTC 2020
Now connect to the second POD/container, and it should see the file from the first, as below:
$ kubectl exec -it nginx-2 -- /bin/bash
root@nginx-2:/# cat /var/log/mypath/date.txt
Thu Jan 30 10:44:42 UTC 2020
Thu Jan 30 10:44:43 UTC 2020

k8s initContainer mountPath does not exist after kubectl pod deployment

Below is the deployment YAML. After deployment, I can access the pod
and I can see the mountPath "/usr/share/nginx/html", but I cannot find
"/work-dir", which should have been created by the initContainer.
Could someone explain the reason to me?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume at "/work-dir" is mounted by the init container, and the "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is "gone". The application (nginx) container mounts the same volume too (albeit at a different location), providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have the /work-dir folder. Given this, you may think that you could just mount the path /, which would allow you to share all folders underneath. However, a mountPath does not work for /.
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (pvc and deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
      - name: nginx-c
        image: nginx:latest
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /var/www/static/
      - name: alpine-c
        image: alpine:latest
        command: ["/bin/sleep", "10000s"]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/mkdir", "-p", "/work-dir"]
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /work-dir/
      volumes:
      - name: shared-fs-volume
        persistentVolumeClaim:
          claimName: shared-fs-pvc
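To verify the sharing with the manifests above, you could write a file through one container's mount and read it back through the other (the file name here is just an example):
$ kubectl exec deploy/shared-fs -c alpine-c -- sh -c 'echo hello > /work-dir/test.txt'
$ kubectl exec deploy/shared-fs -c nginx-c -- cat /var/www/static/test.txt
hello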

Unable to write file. Volume mounted as root

I am spinning up a Pod (it comes up with a non-root user) that needs to write data to a volume. The volume comes from a PVC.
The pod definition is simple
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
  containers:
  - name: task-pv-container
    image: jnlp/jenkins-slave:latest
    command: ["/bin/bash"]
    args: ["-c", "sleep 500"]
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
When I exec into the Pod and try to write into /usr/share/nginx/html
I get
jenkins#task-pv-pod:/usr/share/nginx/html$ touch test
touch: cannot touch ‘test’: Permission denied
Looking at the permissions of the directory
jenkins#task-pv-pod:~$ ls -ld /usr/share/nginx/html
drwxr-xr-x 3 root root 4096 Mar 29 15:52 /usr/share/nginx/html
It's clear that only the root user can write to /usr/share/nginx/html, but that's not what I want.
Is there a way to change the permissions for mounted volumes?
You can consider using an initContainer to mount your volume and change permissions. The initContainer will be run before the main container(s) start up. The usual pattern for this usage is to have a busybox image (~22 MB) to mount the volume and run a chown or chmod on the directory. When your pod's primary container runs, the volume(s) will have the correct ownership/access privileges.
Alternatively, you can consider using the initContainer to inject the proper files as shown in this example.
Hope this helps!
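A minimal sketch of that initContainer approach, applied to the Pod spec from the question (assuming the jenkins user in jnlp/jenkins-slave is UID/GID 1000; check with id inside the container):
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
  initContainers:
  - name: fix-permissions
    image: busybox
    # chown the mounted volume to the non-root user before the main container starts;
    # 1000:1000 is assumed to be the jenkins user/group in jnlp/jenkins-slave
    command: ["sh", "-c", "chown -R 1000:1000 /usr/share/nginx/html"]
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: task-pv-storage
  containers:
  - name: task-pv-container
    image: jnlp/jenkins-slave:latest
    command: ["/bin/bash"]
    args: ["-c", "sleep 500"]
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: task-pv-storage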
A security context defines privilege and access control settings for a Pod or Container. Just try securityContext:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  securityContext:
    fsGroup: $jenkins_uid
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
...
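For example, assuming the jenkins group in jnlp/jenkins-slave has GID 1000 (check with id inside the container), the filled-in version would look like this; for volume types that support it, Kubernetes changes the group ownership of the volume to fsGroup so the non-root user can write to it:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  securityContext:
    fsGroup: 1000   # assumed GID of the jenkins user; the volume is made group-owned by 1000
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: test-pvc
  containers:
  - name: task-pv-container
    image: jnlp/jenkins-slave:latest
    command: ["/bin/bash"]
    args: ["-c", "sleep 500"]
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: task-pv-storage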