Kubernetes fsGroup not changing file ownership on PersistentVolume

On the host, everything in the mounted directory (/opt/testpod) is owned by uid=0 gid=0. I need those files to be owned by the gid the container runs with, i.e. a different gid, so that the container can write there. Resources I'm testing with:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
  labels:
    name: pv
spec:
  storageClassName: manual
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/testpod"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: manual
  selector:
    matchLabels:
      name: pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  nodeSelector:
    foo: bar
  securityContext:
    runAsUser: 500
    runAsGroup: 500
    fsGroup: 500
  volumes:
    - name: vol
      persistentVolumeClaim:
        claimName: pvc
  containers:
    - name: testpod
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: vol
          mountPath: /data
After the pod is running, I kubectl exec into it and ls -la /data shows everything still owned by gid=0. According to the Kubernetes docs, fsGroup is supposed to chown everything in the volume at pod start, but that doesn't happen. What am I doing wrong, please?

The hostPath type of PV doesn't support the security context; you have to be root for the volume to be writable. This is described well in this GitHub issue and in the docs about hostPath:
The directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a hostPath volume.
You may also want to check this GitHub request describing why changing the permissions of a host directory is dangerous.
The workaround people report as working is to grant your user sudo privileges, but that defeats the purpose of running the container as a non-root user.
The security context appears to work well with an emptyDir volume (described well in the k8s docs here).
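For comparison, here is a minimal sketch (the pod name is illustrative) of the same security context used with an emptyDir volume; inside such a pod, /data should end up group-owned by gid 500 and be writable by the non-root user:
apiVersion: v1
kind: Pod
metadata:
  name: testpod-emptydir   # illustrative name
spec:
  securityContext:
    runAsUser: 500
    runAsGroup: 500
    fsGroup: 500           # applied to supported volume types such as emptyDir
  volumes:
    - name: vol
      emptyDir: {}
  containers:
    - name: testpod
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: vol
          mountPath: /data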

Related

Permission denied when accessing persistent volume

I have a kube cluster running using kind. Kind runs in a docker container. It has access to a volume by way of the following:
extraMounts:
  - hostPath: /mnt/disk-1/shared
    containerPath: /shared-drive
... the persistent-volume and pvc configuration:
---
# Volumes - PVC write
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-write
  namespace: ingress-nginx
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
---
# PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-drive
  namespace: ingress-nginx
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /shared-drive
... the request for the volume in the deployment template spec:
spec:
  volumes:
    - name: shared-drive
      persistentVolumeClaim:
        claimName: pvc-write
        readOnly: false
  ...
      volumeMounts:
        - name: shared-drive
          mountPath: "/shared"
Other observations
From within the container where I need access to the shared volume:
(accessed by docker exec -ti cluster-control-plane bash -> crictl exec -ti the-container sh)
> ls -l /
...
drw-rw-rw- 2 appuser appuser 26 Oct 9 19:36 shared
I can view a list of the files in the shared directory
I cannot read nor write to the directory
I can read and write in other directories belonging to appuser
The volume being shared by the host (the host running kind) has rw permissions for "other" users
I've played a bit with setting the securityContext for the container without success. This attempt was not thorough as I'm at a loss for how to interpret what I'm "solving for". So for instance, the following did not solve the problem:
# included in the deployment template spec
securityContext:
  runAsUser: 999
  runAsGroup: 999
  fsGroupChangePolicy: "OnRootMismatch"
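One note in case it helps: fsGroupChangePolicy only has an effect when fsGroup itself is set. A sketch of the pod-level securityContext with fsGroup added (the gid 999 is taken from the snippet above) would look like the following; as the answer to the first question above explains, though, hostPath-backed volumes may still ignore fsGroup entirely.
# sketch only: pod-level securityContext in the deployment's pod template
securityContext:
  runAsUser: 999
  runAsGroup: 999
  fsGroup: 999                          # without this, fsGroupChangePolicy does nothing
  fsGroupChangePolicy: "OnRootMismatch"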

Kubernetes PersistentVolume on local machine, share data

I would like to spin up a Pod on my local machine. Inside the pod is a single container with a .jar file in it. That jar file can take in files, process them, and then output them. I would like to create a PersistentVolume and attach it to the Pod, so the container can access the files.
My Dockerfile:
FROM openjdk:11
WORKDIR /usr/local/dat
COPY . .
ENTRYPOINT ["java", "-jar", "./tool/DAT.jar"]
(Please note that the folder used inside the container is /usr/local/dat)
My PersistentVolume.yml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dat-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 150Mi
  storageClassName: hostpath
  hostPath:
    path: /home/zoltanvilaghy/WORK/ctp/shared
My PersistentVolumeClaim.yml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dat-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: hostpath
  volumeName: dat-volume
My Pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: dat-tool-pod
  labels:
    name: dat-tool-pod
spec:
  containers:
    - name: dat-tool
      image: dat_docker
      imagePullPolicy: Never
      args: ["-in", "/usr/local/dat/shared/input/Archive", "-out", "/usr/local/dat/shared/output/Archive2", "-da"]
      volumeMounts:
        - mountPath: /usr/local/dat/shared
          name: dat-volume
  restartPolicy: Never
  volumes:
    - name: dat-volume
      persistentVolumeClaim:
        claimName: dat-pvc
If all worked well, after attaching the PersistentVolume (and putting the Archive folder inside the shared/input folder), the jar file, given those arguments, would be able to process the files and output them to the shared/output folder.
Instead, I get an error saying that the folder cannot be found. Unfortunately, after the error the container exits, so I can't look around inside it to check the file structure. Can somebody help me identify the problem?
Edit: Output of kubectl get sc,pvc,pv:
NAME                                             PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/hostpath (default)   docker.io/hostpath   Delete          Immediate           false                  20d

NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/dat-pvc   Bound    dat-volume   150Mi      RWO            hostpath       4m52s

NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/dat-volume   150Mi      RWO            Retain           Bound    default/dat-pvc   hostpath                4m55s
Assuming your sc/pvc/pv are all correct, here's how you can test:
apiVersion: v1
kind: Pod
metadata:
  name: dat-tool-pod
  labels:
    name: dat-tool-pod
spec:
  containers:
    - name: dat-tool
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c","sleep 7200"]
      volumeMounts:
        - mountPath: /usr/local/dat/shared
          name: dat-volume
  restartPolicy: Never
  volumes:
    - name: dat-volume
      persistentVolumeClaim:
        claimName: dat-pvc
After the pod is created you can kubectl exec -it dat-tool-pod -- ash and cd /usr/local/dat/shared. There you can check the directory and files (including permissions) to understand why your program complained about missing directories/files.
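For example, a quick check from inside the debug pod might look like this (a sketch; the paths are the ones used above):
kubectl exec -it dat-tool-pod -- ash
# inside the pod:
ls -la /usr/local/dat/shared                # is the host directory actually mounted and populated?
ls -la /usr/local/dat/shared/input/Archive  # does the path the jar expects exist?
id                                          # which uid/gid is the process running as?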
For anyone else experiencing this problem, here is what helped me find a solution:
https://github.com/docker/for-win/issues/7023
(And actually the link inside the first comment in this issue.)
So my setup was a Windows 10 machine, using WSL2 to run Docker containers and a Kubernetes cluster on my machine. No matter where I put the folder I wanted to share with my Pod, it didn't appear inside the pod. So based on the link above, I created my folder in /mnt/wsl, called /mnt/wsl/shared.
Because, supposedly, this /mnt/wsl folder is where Docker Desktop will start to look for the folder that you want to share. I changed my PersistentVolume.yml to the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dat-volume
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 150Mi
  storageClassName: hostpath
  hostPath:
    path: /run/desktop/mnt/host/wsl/shared
My understanding is that /run/desktop/mnt/host/wsl is the same as /mnt/wsl, and so I could finally pass files between my Pod and my machine.
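A quick way to verify the mapping (a sketch; the file name is made up, the pod is the dat-tool-pod test pod from the answer above):
# from the WSL2 shell on the Windows host
mkdir -p /mnt/wsl/shared/input/Archive
touch /mnt/wsl/shared/input/Archive/hello.txt   # hello.txt is just an example

# the same file should then be visible inside the pod
kubectl exec -it dat-tool-pod -- ls /usr/local/dat/shared/input/Archive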

permission denied when mount in kubernetes pod with root user

When I use this command in a Kubernetes v1.18 Jenkins master pod to mount an NFS file system:
root@jenkins-67fff76bb6-q77xf:/# mount -t nfs -o v4 192.168.31.2:/infrastructure/jenkins-share-workspaces /opt/jenkins
mount: permission denied
root@jenkins-67fff76bb6-q77xf:/#
why does it show permission denied although I am using the root user? When I run this command on another machine (not in Docker), it works fine, which shows that the server side is fine. This is my Kubernetes Jenkins master pod securityContext config in YAML:
securityContext:
  runAsUser: 0
  fsGroup: 0
Today I tried another Kubernetes pod and mounting the NFS file system threw the same error. It seems that mounting NFS from the host works fine, while mounting from inside a Kubernetes pod hits a permission problem. Why would this happen? The NFS share works fine when bound via PVC and PV in this Kubernetes pod, so why does mounting it from inside the container fail? I am confused.
There are two ways to mount an NFS volume to a pod.
First (directly in the pod spec):
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: 192.168.31.2
        path: /infrastructure/jenkins-share-workspaces
  containers:
    - name: app
      image: example
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs
Second (creating a persistent NFS volume and volume claim):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.31.2
    path: "/infrastructure/jenkins-share-workspaces"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: example
      volumeMounts:
        - name: nfs
          mountPath: /opt/jenkins
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: nfs
EDIT:
The solution above is the preferred one, but if you really need to run mount inside the container, you need to add capabilities to the pod:
spec:
  containers:
    - securityContext:
        capabilities:
          add: ["SYS_ADMIN"]
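In context, that fragment sits under a named container entry, for example (a sketch using the image and export from the question; the NFS client utilities also need to be present in the image for mount to work):
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: example
      securityContext:
        capabilities:
          add: ["SYS_ADMIN"]   # permits running mount from inside this container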
Try using:
securityContext:
  privileged: true
This is needed if you are using dind (Docker in Docker) for Jenkins.

PV file not saved on host

Hi all, quick question on host paths for persistent volumes.
I created a PV and PVC here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
and I ran a sample pod
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
I exec'd into the pod and created a file:
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# ls
tst.txt
However, when I go back to my host and try to ls the file, it's not appearing. Any idea why? My PV and PVC are correct, as I can see that they have been bound.
ubuntu@ip-172-31-24-21:/home$ cd /mnt/data
ubuntu@ip-172-31-24-21:/mnt/data$ ls -lrt
total 0
A persistent volume (PV) is a Kubernetes resource with its own lifecycle, independent of the pod (see the PV documentation). Consuming a PV through a PVC means the data lives in whatever backs the PV, for example Azure Files, EBS, a server with NFS, etc. My point here is that there is no reason why the data should show up on the node.
If you want your persistent data to be saved on the node, use the hostPath option for PVs (check this link), though this is not a good production practice.
First of all, you don't need to create a PV if you are creating a PVC: PVCs create PVs, if you have the right storageClass.
Second, hostPath is a peculiar volume in the Kubernetes world: it's the only one that doesn't need a PV in order to be mounted in a Pod. So you could have created neither the PV nor the PVC, and a hostPath volume would work just fine.
To make a test, delete your PV and PVC, and create your Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-volume
  labels:
    app: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      securityContext:
        privileged: true
      ports:
        - containerPort: 80
          name: nginx-http
      volumeMounts:
        - name: nginx
          mountPath: /root/nginx-volume # path in the pod
  volumes:
    - name: nginx
      hostPath:
        path: /var/test # path in the host machine
I know this is a confusing concept, but that's how it is.
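A quick way to sanity-check that setup (a sketch; the manifest file name is hypothetical) is to write a file from inside the pod and then look in /var/test on the node the pod was actually scheduled on, since hostPath data is always local to that node:
kubectl apply -f nginx-volume.yaml      # hypothetical file containing the pod above
kubectl get pod nginx-volume -o wide    # note the NODE column
kubectl exec -it nginx-volume -- sh -c 'echo hello > /root/nginx-volume/tst.txt'
# then, on that same node:
ls -l /var/test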

How to exempt a directory when using readOnlyRootFilesystem in kubernetes?

I need to block my k8s pods from writing to root folders, with an exemption for the /tmp dir. There are 2 reasons I need to write to this dir:
Flask needs to write somewhere. It's trying to write to /tmp, /etc/..., and /opt/..., but all of them are blocked because they're under the root folder
I'm going to need to write to a file for a liveness probe, but if the entire file system is blocked, then I can't do it
I'm running kubernetes 1.13.6-gke.13 in GKE
The relevant part from the yaml file:
securityContext:
  runAsUser: 1000
  readOnlyRootFilesystem: true
  runAsNonRoot: true
I expect the pod to be able to write to a predefined folder, maybe a mounted one.
Create a volume mount for the /tmp directory.
volumeMounts:
  - mountPath: /tmp
    name: tmp
And in volumes:
volumes:
  - emptyDir: {}
    name: tmp
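Putting the fragments together, a minimal sketch of a pod that keeps readOnlyRootFilesystem: true but still allows writes under /tmp could look like this (the pod name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: flask-readonly            # illustrative name
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
    - name: app
      image: my-flask-app:latest  # illustrative image
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - mountPath: /tmp         # only /tmp is writable
          name: tmp
  volumes:
    - name: tmp
      emptyDir: {}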
As I understand it, you would like to create a Pod with access to a local directory. You need to create a PV, a PVC and a Pod.
PV definition:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-flaskapp
  labels:
    type: local
spec:
  storageClassName: <your-storageclass-name>
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/test_flask/app"
PVC definition:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-flaskapp
spec:
  storageClassName: <your-storageclass-name>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
POD definition:
apiVersion: v1
kind: Pod
metadata:
  name: flaskapp
spec:
  containers:
    - image: flask:latest
      name: flaskapp
      ports:
        - containerPort: 8080
          name: flaskapp
      volumeMounts:
        - mountPath: /usr/local/flask/webapps
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: pvc-flaskapp
Now you can check if everything works fine:
$ kubectl exec -it flaskapp bash
root@flaskapp:/usr/local/flask# mkdir /usr/local/flask/webapps/sample
root@flaskapp:/usr/local/flask# touch /usr/local/flask/webapps/sample/testfile
root@flaskapp:/usr/local/flask# ls /usr/local/flask/webapps/sample/
testfile
Now when you look at the host, you will see the newly created file:
[root@master user]# ls /opt/test_flask/app/sample/
testfile
I hope it helps you.