Permission denied when mounting in Kubernetes pod with root user

When I use this command in a Kubernetes v1.18 Jenkins master pod to mount an NFS file system:
root@jenkins-67fff76bb6-q77xf:/# mount -t nfs -o v4 192.168.31.2:/infrastructure/jenkins-share-workspaces /opt/jenkins
mount: permission denied
root@jenkins-67fff76bb6-q77xf:/#
Why does it show permission denied although I am using the root user? When I use this command on another machine (not in Docker), it works fine, which shows the server side is working. This is the securityContext config of my Kubernetes Jenkins master pod in YAML:
securityContext:
  runAsUser: 0
  fsGroup: 0
Today I tried mounting the NFS file system from another Kubernetes pod and it threw the same error. It seems mounting NFS from the host works fine, while mounting from a Kubernetes pod has a permission problem. Why would this happen? The NFS share works fine when consumed through a PVC bound to a PV in this same pod, so why does mounting it from inside the container fail? I am confused.

There are two ways to mount an NFS volume to a pod.
First (directly in pod spec):
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
  - name: nfs-volume
    nfs:
      server: 192.168.31.2
      path: /infrastructure/jenkins-share-workspaces
  containers:
  - name: app
    image: example
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
Second (creating a persistent NFS volume and volume claim):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.31.2
    path: "/infrastructure/jenkins-share-workspaces"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
  - name: app
    image: example
    volumeMounts:
    - name: nfs
      mountPath: /opt/jenkins
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs
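Once the pod is up, a quick way to verify the export is actually mounted (a verification sketch, not part of the original answer; the pod name and mount path come from the example above):
# Check that the NFS export shows up at the mount path and is writable
kubectl exec pod-using-nfs -- sh -c "mount | grep /opt/jenkins"
kubectl exec pod-using-nfs -- touch /opt/jenkins/write-test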
EDIT:
The solution above is the preferred one, but if you really need to use mount inside the container, you need to add capabilities to the pod:
spec:
  containers:
  - securityContext:
      capabilities:
        add: ["SYS_ADMIN"]

Try using
securityContext:
  privileged: true
This is needed if you are using DinD (Docker-in-Docker) for Jenkins.
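In context, that would look roughly like the following for a Jenkins agent running Docker-in-Docker (a sketch; the container name and image are assumptions, not from the answer):
spec:
  containers:
  - name: dind
    image: docker:dind          # assumed DinD image
    securityContext:
      privileged: true          # DinD needs a privileged container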

Related

Permission denied when accessing persistent volume

I have a kube cluster running using kind. Kind runs in a docker container. It has access to a volume by way of the following:
extraMounts:
- hostPath: /mnt/disk-1/shared
  containerPath: /shared-drive
... the persistent-volume and pvc configuration:
---
# Volumes - PVC write
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-write
  namespace: ingress-nginx
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
---
# PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-drive
  namespace: ingress-nginx
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /shared-drive
... the request for the volume in the deployment template spec:
spec:
  volumes:
  - name: shared-drive
    persistentVolumeClaim:
      claimName: pvc-write
      readOnly: false
  ...
  volumeMounts:
  - name: shared-drive
    mountPath: "/shared"
Other observations
From within the container where I need access to the shared volume:
(accessed by docker exec -ti cluster-control-plane bash -> crictl exec -ti the-container sh)
> ls -l /
...
drw-rw-rw- 2 appuser appuser 26 Oct 9 19:36 shared
- I can view a list of the files in the shared directory
- I cannot read nor write to the directory
- I can read and write in other directories belonging to appuser
- The volume being shared by the host (the host running kind) has rw permissions for "other" users
I've played a bit with setting the securityContext for the container without success. This attempt was not thorough as I'm at a loss for how to interpret what I'm "solving for". So for instance, the following did not solve the problem:
# included in the deployment template spec
securityContext:
  runAsUser: 999
  runAsGroup: 999
  fsGroupChangePolicy: "OnRootMismatch"
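One thing I would check here (my own observation, not from the original post): in the ls -l output above the directory mode is drw-rw-rw-, i.e. the execute bit is missing, and without x on a directory you can list its entries but not open or create files inside it, which matches the symptoms. A minimal thing to try on the machine backing the kind extraMounts (a sketch, assuming appuser maps to uid/gid 999 as in the securityContext attempt above):
# Allow directory traversal; without the execute bit, open()/creat() inside the directory fail
chmod a+rx /mnt/disk-1/shared
# Or, if only appuser should have access, hand the directory to uid/gid 999
chown -R 999:999 /mnt/disk-1/shared
chmod u+rwx /mnt/disk-1/shared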

Kubernetes mount volume keeps timing out even though the volume can be mounted with sudo mount

I have a read-only persistent volume that I'm trying to mount in the StatefulSet, but after making some changes to the program and re-creating the pods, the pod can no longer mount the volume.
PV yaml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadOnlyMany
  nfs:
    server: <ip>
    path: "/var/foo"
  claimRef:
    name: foo-pvc
    namespace: foo
PVC yaml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  accessModes:
  - ReadOnlyMany
  storageClassName: ""
  volumeName: foo-pv
  resources:
    requests:
      storage: 2Gi
Statefulset yaml:
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
  selector:
    app: foo-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo-statefulset
  namespace: foo
spec:
  selector:
    matchLabels:
      app: foo-app
  serviceName: foo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: foo-app
    spec:
      serviceAccountName: foo-service-account
      containers:
      - name: fooContainer
        image: <image>
        imagePullPolicy: Always
        volumeMounts:
        - name: writer-data
          mountPath: <path>
        - name: nfs-objectd
          mountPath: <path>
      volumes:
      - name: nfs-foo
        persistentVolumeClaim:
          claimName: foo-pvc
  volumeClaimTemplates:
  - metadata:
      name: writer-data
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: "foo-sc"
      resources:
        requests:
          storage: 2Gi
k describe pod reports "Unable to attach or mount volumes: unmounted volumes=[nfs-foo]: timed out waiting for the condition". There is a firewall between the machine running Kubernetes and the NFS server, but the port has been unblocked and the folder has been exported for mounting on the NFS side. Running sudo mount -t nfs :/var/foo /var/foo successfully mounts the NFS share, so I don't understand why Kubernetes isn't able to mount it anymore. It has been stuck failing to mount for several days now. Is there any other way to debug this?
Thanks!
Based on the error "unable to attach or mount volumes ... timed out waiting for condition", there were some similar issues reported to the Product Team and it is a known issue. But this error is observed more often on preemptible/spot nodes when the node is preempted. In similar occurrences of this issue for other users, upgrading the control plane version resolved the issue temporarily on preemptible/spot nodes.
Also, if you are not using any preemptible/spot nodes in your cluster, this issue might have happened when the old node was replaced by a new node. If you are still facing this issue, try upgrading the control plane to the same version, i.e. you can execute the following command:
$ gcloud container clusters upgrade CLUSTER_NAME --master --zone ZONE --cluster-version VERSION
Another workaround to fix this issue would be to remove the stale VolumeAttachment with the following command:
$ kubectl delete volumeattachment [volumeattachment_name]
After running the command and thus removing the VolumeAttachment, the pod should eventually pick up and retry. You can read more about this issue and its cause here.
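To find the attachment to delete (a sketch; foo-pv is the PV name used in the question), you can list the VolumeAttachment objects and match on the PV:
# Show all attachments with their attacher, PV, and node
kubectl get volumeattachments
# Narrow it down to the stuck PV
kubectl get volumeattachments | grep foo-pv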

Why are local persistent volumes not visible in EKS?

In order to test whether I can get self-written software deployed to Amazon using Docker images, I have a test EKS cluster.
I have written a small test script that reads and writes a file to see if I understand how to deploy. I have successfully deployed it in minikube, using three replicas. The replicas all use a shared directory on my local file system, and in minikube that directory is mounted into the pods with a volume.
The next step was to deploy it in the EKS cluster. However, I cannot get it working in EKS. The problem is that the pods don't see the contents of the mounted directory.
This does not completely surprise me, since in minikube I first had to create a mount to a local directory on the server. I have not done something similar on the EKS server.
My question is what I should do to make this work (if possible at all).
I use this yaml file to create a pod in EKS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  storageClassName: local-storage
  capacity:
    storage: "1Gi"
  accessModes:
  - "ReadWriteOnce"
  hostPath:
    path: /data/k8s
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: local-storage
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
So what I expect is that I have a local directory, /data/k8s, that is visible in the pods as path /config.
When I apply this yaml, I get a pod that gives an error message that makes clear the data in the /data/k8s directory is not visible to the pod.
Kubectl gives me this info after creation of the volume and claim
[rdgon@NL013-PPDAPP015 probeer]$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-volume 1Gi RWO Retain Available 15s
persistentvolume/pvc-156edfef-d272-4df6-ae16-09b12e1c2f03 1Gi RWO Delete Bound default/pv-claim gp2 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv-claim Bound pvc-156edfef-d272-4df6-ae16-09b12e1c2f03 1Gi RWO gp2 15s
Which seems to indicate everything is OK. But it seems that the filesystem of the master node, on which I run the yaml file to create the volume, is not the location where the pods look when they access the /config dir.
On EKS, there's no storage class named 'local-storage' by default.
There is only a 'gp2' storage class, which is also used when you don't specify a storageClassName.
The 'gp2' storage class creates a dedicated EBS volume and attaches it to your Kubernetes node when required, so it doesn't use a local folder. You also don't need to create the PV manually, just the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: gp2
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
If you want a folder on the Node itself, you can use a 'hostPath' volume, and you don't need a pv or pvc for that:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
This is a bad idea, since the data will be lost if another node starts up, and your pod is moved to the new node.
If it's for configuration only, you can also use a configMap, and put the files directly in your kubernetes manifest files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ruud-config
data:
  ruud.properties: |
    my ruud.properties file content...
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    configMap:
      name: ruud-config
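If you would rather not inline the file content in the manifest, the same ConfigMap can be generated from the file on disk (a sketch, assuming ruud.properties sits in the current directory):
# Creates ConfigMap ruud-config with ruud.properties as the key and the file content as the value
kubectl create configmap ruud-config --from-file=ruud.properties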
Please check whether the PV got created and is bound to the PVC by running the commands below:
kubectl get pv
kubectl get pvc
This will tell you whether the objects were created properly.
The local path you refer to is not valid. Try:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: /config
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
      type: DirectoryOrCreate # <-- You need this since the directory may not exist on the node.

PV file not saved on host

Hi all, a quick question on host paths for persistent volumes.
I created a PV and PVC here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
and I ran a sample pod
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
I exec'ed into the pod and created a file:
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# ls
tst.txt
However, when I go back to my host and try to ls the file, it is not there. Any idea why? My PV and PVC should be correct, as I can see that they have been bound.
ubuntu@ip-172-31-24-21:/home$ cd /mnt/data
ubuntu@ip-172-31-24-21:/mnt/data$ ls -lrt
total 0
A persistent volume (PV) is a Kubernetes resource which has its own lifecycle, independent of the pod (see the PV documentation). When a pod consumes a PV through a PVC, the data lives in whatever backs that PV, for example Azure Files, EBS, a server with NFS, etc. My point here is that there is no reason why the PV's data should exist on the node.
If you want your persisted data to live on the node, use the hostPath option for PVs. Check this link. Though this is not a good production practice.
First of all, you don't need to create a PV if you are creating a PVC. PVCs create PVs, if you have the right storageClass.
Second, hostPath is one delicate volume type in the Kubernetes world. It's the only one that doesn't need a PV or PVC to be mounted in a Pod. So you could have created neither the PV nor the PVC, and a hostPath volume would have worked just fine.
To make a test, delete your PV and PVC, and create your Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-volume
  labels:
    app: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    securityContext:
      privileged: true
    ports:
    - containerPort: 80
      name: nginx-http
    volumeMounts:
    - name: nginx
      mountPath: /root/nginx-volume # path in the pod
  volumes:
  - name: nginx
    hostPath:
      path: /var/test # path in the host machine
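To repeat the original test against this pod (a usage sketch; the paths are the ones from the manifest above):
# Write a file through the pod...
kubectl exec -it nginx-volume -- touch /root/nginx-volume/tst.txt
# ...then, on the node that is actually running the pod:
ls -l /var/test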
I know this is a confusing concept, but that's how it is.

How to exempt a directory when using readOnlyRootFilesystem in kubernetes?

I need to block my k8s pods from writing to root folders, with an exemption for the /tmp dir. There are two reasons I need to write to this dir:
1. Flask needs to write somewhere. It's trying to write to /tmp, /etc/..., and /opt/..., but all of them are blocked because they are under the root folder.
2. I'm going to need to write to a file for a liveness probe, but if the entire file system is blocked, then I can't do it.
I'm running kubernetes 1.13.6-gke.13 in GKE
The relevant part from the yaml file:
securityContext:
  runAsUser: 1000
  readOnlyRootFilesystem: true
  runAsNonRoot: true
I expect the pod to be able to write to a predefined folder, maybe a mounted one.
Create a volume mount for the /tmp directory:
volumeMounts:
- mountPath: /tmp
  name: tmp
And in volumes:
volumes:
- emptyDir: {}
  name: tmp
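Putting that together with the securityContext from the question, a complete sketch could look like the following (pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: flask-readonly          # placeholder name
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: app
    image: example              # placeholder image
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - mountPath: /tmp           # the writable exception to the read-only root fs
      name: tmp
  volumes:
  - name: tmp
    emptyDir: {}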
As I understand it, you would like to create a Pod with access to a local directory. You need to create a PV, a PVC, and a Pod.
PV definition:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-flaskapp
  labels:
    type: local
spec:
  storageClassName: <your-storageclass-name>
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/opt/test_flask/app"
PVC definition:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-flaskapp
spec:
  storageClassName: <your-storageclass-name>
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
POD definition:
apiVersion: v1
kind: Pod
metadata:
  name: flaskapp
spec:
  containers:
  - image: flask:latest
    name: flaskapp
    ports:
    - containerPort: 8080
      name: flaskapp
    volumeMounts:
    - mountPath: /usr/local/flask/webapps
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc-flaskapp
Now you can check if everything works fine:
$ kubectl exec -it flaskapp bash
root@flaskapp:/usr/local/flask# mkdir /usr/local/flask/webapps/sample
root@flaskapp:/usr/local/flask# touch /usr/local/flask/webapps/sample/testfile
root@flaskapp:/usr/local/flask# ls /usr/local/flask/webapps/sample/
testfile
Now when you look at the host, you will see the newly created file:
[root@master user]# ls /opt/test_flask/app/sample/
testfile
I hope this helps you.