How to exempt a directory when using readOnlyRootFilesystem in Kubernetes?

I need to block my k8s pods from writing to root folders, with an exemption for the /tmp dir. There are 2 reasons I need to write to this dir:
Flask needs to write somewhere. It tries to write to /tmp, /etc/... and /opt/..., but all of these are blocked because they are under the root folder.
I'm going to need to write to a file for a liveness probe, but if the entire file system is read-only, I can't do that.
I'm running Kubernetes 1.13.6-gke.13 on GKE.
The relevant part of the YAML file:
securityContext:
  runAsUser: 1000
  readOnlyRootFilesystem: true
  runAsNonRoot: true
I expect the pod to be able to write to a predefined folder, maybe a mounted one.

Create a volume mount for the /tmp directory.
volumeMounts:
- mountPath: /tmp
  name: tmp
And in volumes:
volumes:
- emptyDir: {}
  name: tmp
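For reference, a complete Pod spec along these lines might look roughly as follows. This is only a sketch combining the question's securityContext with the emptyDir mount; note that readOnlyRootFilesystem is a container-level field, and the image name and the /tmp/healthy probe file are placeholders, not taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: flask-app # hypothetical name
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: flask
    image: my-flask-image:latest # placeholder image
    securityContext:
      readOnlyRootFilesystem: true # container-level field
    volumeMounts:
    - mountPath: /tmp # only /tmp is writable; the rest of the root filesystem stays read-only
      name: tmp
    livenessProbe: # assumes the app writes /tmp/healthy for the probe to check
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 10
  volumes:
  - name: tmp
    emptyDir: {}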

As I understand it, you would like to create a Pod with access to a local directory. You need to create a PV, a PVC, and a Pod.
PV definition:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-flaskapp
  labels:
    type: local
spec:
  storageClassName: <your-storageclass-name>
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/test_flask/app"
PVC definition:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-flaskapp
spec:
  storageClassName: <your-storageclass-name>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
POD definition:
apiVersion: v1
kind: Pod
metadata:
  name: flaskapp
spec:
  containers:
  - image: flask:latest
    name: flaskapp
    ports:
    - containerPort: 8080
      name: flaskapp
    volumeMounts:
    - mountPath: /usr/local/flask/webapps
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc-flaskapp
Now you can check if everything works fine:
$ kubectl exec -it flaskapp bash
root@flaskapp:/usr/local/flask# mkdir /usr/local/flask/webapps/sample
root@flaskapp:/usr/local/flask# touch /usr/local/flask/webapps/sample/testfile
root@flaskapp:/usr/local/flask# ls /usr/local/flask/webapps/sample/
testfile
Now when you look at the host, you will see the newly created file:
[root@master user]# ls /opt/test_flask/app/sample/
testfile
I hope this helps.

Related

Volume Mounts for main directories of containers like /mnt, /dev, /var to volumes

Can we directly mount the main directories of containers to volumes as part of a Kubernetes pod spec?
For example:
/mnt
/dev
/var
All files and subdirectories of /mnt, /dev, and /var should be mounted to volumes as part of the pod spec.
How can we do this?
For development purposes, you can create a hostPath Persistent Volume, but if you want to implement this for production, I strongly recommend using NFS.
Here is an example of how to use an NFS share in a Pod definition:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
  - name: app
    image: alpine
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs # change the destination you like the share to be mounted to
  volumes:
  - name: nfs-volume
    nfs:
      server: nfs.example.com # change this to your NFS server
      path: /share1 # change this to the relevant share
And here is an example of a hostPath Persistent Volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
In the hostPath example, all files inside /mnt/data on the node will be available under /usr/share/nginx/html in the Pod.

Why are local persistent volumes not visible in EKS?

In order to test whether I can get self-written software deployed on Amazon using Docker images, I have a test EKS cluster.
I have written a small test script that reads and writes a file to see if I understand how to deploy. I have successfully deployed it in minikube, using three replicas. The replicas all use a shared directory on my local file system, which in minikube is mounted into the pods with a volume.
The next step was to deploy it in the EKS cluster. However, I cannot get it working in EKS. The problem is that the pods don't see the contents of the mounted directory.
This does not completely surprise me, since in minikube I first had to create a mount to a local directory on the server. I have not done anything similar on the EKS cluster.
My question is what I should do to make this work (if it is possible at all).
I use this YAML file to create a pod in EKS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  storageClassName: local-storage
  capacity:
    storage: "1Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /data/k8s
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: local-storage
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
So what I expect is that I have a local directory, /data/k8s, that is visible in the pods under the path /config.
When I apply this YAML, I get a pod that gives an error message which makes clear that the data in the /data/k8s directory is not visible to the pod.
kubectl gives me this info after creation of the volume and claim:
[rdgon@NL013-PPDAPP015 probeer]$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-volume 1Gi RWO Retain Available 15s
persistentvolume/pvc-156edfef-d272-4df6-ae16-09b12e1c2f03 1Gi RWO Delete Bound default/pv-claim gp2 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv-claim Bound pvc-156edfef-d272-4df6-ae16-09b12e1c2f03 1Gi RWO gp2 15s
Which seems to indicate everything is OK. But it seems that the filesystem of the master node, on which I apply the YAML file to create the volume, is not the location where the pods look when they access the /config dir.
On EKS, there's no storage class named 'local-storage' by default.
There is only a 'gp2' storage class, which is also used when you don't specify a storageClassName.
The 'gp2' storage class creates a dedicated EBS volume and attaches it to your Kubernetes node when required, so it doesn't use a local folder. You also don't need to create the PV manually, just the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: gp2
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
If you want a folder on the node itself, you can use a 'hostPath' volume, and you don't need a PV or PVC for that:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
This is a bad idea, though, since the data will be lost if another node starts up and your pod is moved to the new node.
If it's for configuration only, you can also use a ConfigMap and put the files directly in your Kubernetes manifest files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ruud-config
data:
  ruud.properties: |
    my ruud.properties file content...
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    configMap:
      name: ruud-config
Please check whether the PV got created and is bound to the PVC by running the commands below:
kubectl get pv
kubectl get pvc
This will tell you whether the objects were created properly.
The local path you refer to is not valid. Try:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: /config
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
      type: DirectoryOrCreate # <-- You need this since the directory may not exist on the node.

Multiple Persistent Volumes with the same mount path in Kubernetes

I have created 3 CronJobs in Kubernetes. The format is exactly the same for every one of them except the names. These are the specs:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test-job-1 # for others it's test-job-2 and test-job-3
  namespace: cron-test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: test-job-1 # for others it's test-job-2 and test-job-3
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - "/bin/sh"
            - "-c"
            args:
            - cd database-backup && touch $(date +%Y-%m-%d:%H:%M).test-job-1 && ls -la # for others the filename includes test-job-2 and test-job-3 respectively
            volumeMounts:
            - mountPath: "/database-backup"
              name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
          volumes:
          - name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
            persistentVolumeClaim:
              claimName: test-job-1-pvc # for others it's test-job-2-pvc and test-job-3-pvc
And also the following Persistent Volume Claims and Persistent Volumes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-job-1-pvc # for others it's test-job-2-pvc or test-job-3-pvc
  namespace: cron-test
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  resources:
    requests:
      storage: 1Gi
  volumeName: test-job-1-pv # depending on the name it's test-job-2-pv or test-job-3-pv
  storageClassName: manual
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
  namespace: cron-test
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/database-backup"
So all in all there are 3 CronJobs, 3 PersistentVolumes and 3 PersistentVolumeClaims. I can see that the PersistentVolumeClaims and PersistentVolumes are bound correctly to each other: test-job-1-pvc <--> test-job-1-pv, test-job-2-pvc <--> test-job-2-pv, and so on. Also, the pods associated with each PVC are the corresponding pods created by each CronJob, for example test-job-1-1609066800-95d4m <--> test-job-1-pvc, and so on. After letting the cron jobs run for a bit, I create another pod with the following spec to inspect test-job-1-pvc:
apiVersion: v1
kind: Pod
metadata:
  name: data-access
  namespace: cron-test
spec:
  containers:
  - name: data-access
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data-access-volume
      mountPath: /database-backup
  volumes:
  - name: data-access-volume
    persistentVolumeClaim:
      claimName: test-job-1-pvc
Just a simple pod that keeps running all the time. When I exec into that pod and look inside the /database-backup directory, I see the files created by the pods of all 3 CronJobs.
What did I expect to see?
I expected to see only the files created by test-job-1.
Is this expected behavior? And if so, how can you separate the PersistentVolumes to avoid something like this?
I suspect this is caused by the PersistentVolume definition: if you really only changed the name, all volumes are mapped to the same folder on the host.
hostPath:
  path: "/database-backup"
Try giving each volume a unique folder, e.g.
hostPath:
  path: "/database-backup/volume1"
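As a sketch of what that could look like for the first job (the subdirectory name here is just one option; the other two PVs would point at their own subdirectories, e.g. /database-backup/test-job-2 and /database-backup/test-job-3):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-job-1-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/database-backup/test-job-1" # unique host folder per job
    type: DirectoryOrCreate # create the subdirectory if it doesn't already exist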

Permission denied when mounting NFS in a Kubernetes pod as the root user

When I use this command in a Kubernetes v1.18 Jenkins master pod to mount an NFS file system:
root@jenkins-67fff76bb6-q77xf:/# mount -t nfs -o v4 192.168.31.2:/infrastructure/jenkins-share-workspaces /opt/jenkins
mount: permission denied
root@jenkins-67fff76bb6-q77xf:/#
Why does it show permission denied although I am using the root user? When I use this command on another machine (not in Docker), it works fine, which shows that the server side is fine. This is my Kubernetes Jenkins master pod securityContext config in YAML:
securityContext:
  runAsUser: 0
  fsGroup: 0
Today I tried another Kubernetes pod, mounted the NFS file system, and got the same error. It seems mounting NFS from the host works fine, while mounting from a Kubernetes pod has a permission problem. Why would this happen? The NFS share works fine when consumed via a PVC bound to a PV in this Kubernetes pod, so why does mounting it from inside the container fail? I am confused.
There are two ways to mount an NFS volume into a pod.
First (directly in the pod spec):
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
  - name: nfs-volume
    nfs:
      server: 192.168.31.2
      path: /infrastructure/jenkins-share-workspaces
  containers:
  - name: app
    image: example
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
Second (creating a persistent NFS volume and volume claim):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.31.2
    path: "/infrastructure/jenkins-share-workspaces"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
  - name: app
    image: example
    volumeMounts:
    - name: nfs
      mountPath: /opt/jenkins
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs
EDIT:
The solution above is the preferred one, but if you really need to run mount inside the container, you need to add capabilities to the pod:
spec:
  containers:
  - securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
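In context, that securityContext sits on the container that runs mount; a minimal sketch (the pod name and image are placeholders, not from the original spec):
kind: Pod
apiVersion: v1
metadata:
  name: jenkins-nfs-test # hypothetical name
spec:
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts # placeholder image
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"] # per the answer above, allows running mount inside the container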
Try using:
securityContext:
  privileged: true
This is needed if you are using dind (Docker in Docker) for Jenkins.

PV file not saved on host

Hi all, a quick question on host paths for persistent volumes.
I created a PV and PVC here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
and I ran a sample pod
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
I exec'd into the pod and created a file:
root@task-pv-pod:/# cd /usr/share/nginx/html
root@task-pv-pod:/usr/share/nginx/html# ls
tst.txt
However, when I go back to my host and try to ls the file, it's not appearing. Any idea why? My PV and PVC are correct, as I can see that they have been bound.
ubuntu@ip-172-31-24-21:/home$ cd /mnt/data
ubuntu@ip-172-31-24-21:/mnt/data$ ls -lrt
total 0
A persistent volume (PV) is a Kubernetes resource which has its own lifecycle, independent of the pod (see the PV documentation). Using a PVC to consume from a PV means the data lives in some other backend, for example Azure Files, EBS, or a server with NFS. My point here is that there is no reason why the PV's data should exist on the node.
If you want your data to be persisted on the node, use the hostPath option for PVs, though this is not a good production practice.
First of all, you don't need to create a PV if you are creating a PVC: PVCs create PVs automatically if you have the right storageClass.
Second, hostPath is a delicate volume type in the Kubernetes world. It's the only one that doesn't need a PV to be mounted in a Pod, so you could have created neither a PV nor a PVC and a hostPath volume would have worked just fine.
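To illustrate the first point, with dynamic provisioning you only create the PVC and let the storage class provision a PV for you. A rough sketch (the claim name is made up, and the storageClassName depends on your cluster, e.g. gp2 on EKS or standard on GKE/minikube):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim # hypothetical name
spec:
  storageClassName: standard # assumption: your cluster has a storage class with this name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi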
To make a test, delete your PV and PVC, and create your Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-volume
  labels:
    app: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    securityContext:
      privileged: true
    ports:
    - containerPort: 80
      name: nginx-http
    volumeMounts:
    - name: nginx
      mountPath: /root/nginx-volume # path in the pod
  volumes:
  - name: nginx
    hostPath:
      path: /var/test # path in the host machine
I know this is a confusing concept, but that's how it is.