mount fuse filesystem inside kubernetes pod - kubernetes

I would like to mount a Google Drive inside my pod. I add:
securityContext:
  privileged: true
  capabilities:
    add:
      - SYS_ADMIN
to the pod. google-drive-ocamlfuse mounts the data directory, but a simple command on the mounted filesystem results in an Input/output error.
Is it possible to mount a fuse filesystem inside a pod? Is there anything I need on the host?

You can use smarter-device-manager.
We have a setup similar to the one described in this blog post, and I can confirm we can mount the /dev/fuse device in unprivileged mode.
https://randomopenstackthoughts.wordpress.com/2021/03/12/how-to-mount-dev-fuse-without-privileged-mode-in-kubernetes/
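As a rough illustration, a pod spec following that approach might look like the sketch below. The resource name smarter-devices/fuse, the pod name, and the image are assumptions based on that style of setup; adapt them to your smarter-device-manager configuration. Note that the SYS_ADMIN capability may still be required by your FUSE client even though privileged: true is not.
apiVersion: v1
kind: Pod
metadata:
  name: fuse-example                  # hypothetical name
spec:
  containers:
    - name: app
      image: your-image:latest        # placeholder image
      securityContext:
        capabilities:
          add:
            - SYS_ADMIN               # possibly still needed for mounting; privileged mode is not
      resources:
        limits:
          smarter-devices/fuse: 1     # device exposed by smarter-device-manager (assumed resource name)
        requests:
          smarter-devices/fuse: 1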

Related

Persistent volume claim attach to kubernetes deployment to access files present in PVC but should not write any logs to PVC?

I have a requirement to store files in a PVC and attach that PVC to an Nginx ingress controller deployment, so that the application can access the files present in the PVC but at the same time should not write logs and configuration back to the PVC.
Can you please let me know how I can achieve this?
I created a PVC and attached it to a deployment, but it is writing logs and configurations to it.
You can use GCS Fuse and store the files in a bucket directly; that would be a little easier to manage, if it works for you.
However, if you want to go with your idea, you have to use the ReadWriteMany access mode (Read More), so that two Pods can attach to one PVC: one writes and the other one reads.
You can also use EFS or NFS file systems, GKE with Filestore from GCP, or MinIO, GlusterFS.
Ref answer: glusterfs
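To illustrate the ReadWriteMany idea, a minimal sketch of such a PVC; the claim name and the storage class filestore-rwx are placeholders that depend on how RWX-capable storage is provisioned in your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files                  # hypothetical name
spec:
  accessModes:
    - ReadWriteMany                   # allows one writer Pod and other reader Pods
  storageClassName: filestore-rwx     # placeholder; use your RWX-capable storage class
  resources:
    requests:
      storage: 10Gi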
I Created a PVC and attached it a deployment but it is writing logs
and configurations to it.
volumeMounts:
  - name: file
    mountPath: /var/data
    readOnly: true
You can set the mode when mounting the file or directory and make the mount read-only with readOnly: true.
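A minimal sketch of how this could look in the Deployment's pod template, assuming the PVC is named shared-files (a hypothetical name):
containers:
  - name: nginx
    image: nginx:1.25                 # placeholder image
    volumeMounts:
      - name: file
        mountPath: /var/data
        readOnly: true                # the container can read but not write this path
volumes:
  - name: file
    persistentVolumeClaim:
      claimName: shared-files         # hypothetical PVC name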

Airflow on Kubernetes - NFS volume won't mount onto worker

I'm fairly new to Kubernetes so apologies for any mixups in terminology.
I'm using the official Airflow Helm chart to create a development environment, and have my DAGs (and other) folders in an NFS volume on my local machine. I have configured the values.yaml like so (same for both the scheduler and worker):
# Mount additional volumes into scheduler.
extraVolumes:
  - name: dags
    nfs:
      server: '10.106.0.113'
      path: '/home/dev/projects/airflow-jobs/dags'
  - name: plugins
    nfs:
      server: '10.106.0.113'
      path: '/home/dev/projects/airflow-jobs/plugins'
  - name: scripts
    nfs:
      server: '10.106.0.113'
      path: '/home/dev/projects/airflow-jobs/scripts'
extraVolumeMounts:
  - mountPath: '/opt/airflow/dags'
    name: 'dags'
  - mountPath: '/opt/airflow/plugins'
    name: 'plugins'
  - mountPath: '/opt/airflow/scripts'
    name: 'scripts'
When I then spin this up, only one of the scheduler or worker pods will mount the volume successfully - the other will fail with the following message:
> kubectl describe pod airflow-worker-0
Warning FailedMount 2s kubelet Unable to attach or mount volumes: unmounted volumes=[dags plugins scripts], unattached volumes=[dags plugins scripts logs config kube-api-access-dnsjx]: timed out waiting for the condition
Why am I receiving this error? Is it not possible to have two pods using the same NFS store? I had this working before with the same values.yaml file, so I don't quite know what has changed!
Figured it out - it was due to the NFS mount being configured as ReadWriteOnce. As per the documentation here, this does allow multiple pods to access the volume, but only if they are located on the same node. So what was happening was that my Scheduler pod would spin up first and mount the volume, and when the Worker pod followed it would be unable to do so because the Scheduler had claimed the volume. By coincidence, the first time I deployed, these two pods must have been assigned to the same node.
The simplest solution here would be to mount this as ReadWriteMany (see the sketch at the end of this answer), but as I have limited permissions on my cluster and development environment, I simply made some changes to my deployment to ensure that the pods that need access to this volume land on the same node. Plus, learning experience!
First, get the nodes that each pod is assigned to using kubectl get pods -o wide.
Get all the nodes in the cluster: kubectl get nodes --show-labels
Pick a node to assign the two pods that need to share the NFS mount to. This was arbitrary, so let's call it "node123".
Update the labels of the node: kubectl label nodes node123 airflow=nfs
Finally, in the values.yaml file, specify the nodeSelector property for the Scheduler and Worker pods!
# Select certain nodes for airflow worker pods.
nodeSelector:
  airflow: nfs
Then re-deploy the chart, and everything works as intended!
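For completeness, the ReadWriteMany alternative mentioned above could look roughly like the sketch below, using an NFS-backed PersistentVolume and a claim bound to it; the names and size are placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: airflow-dags-pv               # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany                   # lets pods on different nodes mount the same share
  nfs:
    server: 10.106.0.113
    path: /home/dev/projects/airflow-jobs/dags
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-dags-pvc              # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                # bind to the pre-created PV above
  resources:
    requests:
      storage: 5Gi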

What are the differences between a 9P and hostPath mount in Kubernetes?

I am looking to do local dev of an app that is running in Kubernetes on minikube. I want to mount a local directory to speed up development, so I can make code changes to my app (python) without rebuilding the container.
If I understand correctly, I have two out of the box options:
9P mount which is provided by minikube
hostPath mount which comes directly from Kubernetes
What are the differences between these, and in what cases would one be appropriate over the other?
A 9P mount and hostPath are two different concepts. You cannot mount a directory into a pod using a 9P mount.
A 9P mount is used to mount a host directory into the minikube VM.
hostPath is a volume type which mounts a file or directory from the host node's (in your case, the minikube VM's) filesystem into your Pod.
Also take a look at the types of Persistent Volumes: pv-types-k8s.
If you want to mount a local directory into a pod:
First, you need to mount your directory, for example $HOME/your/path, into your minikube VM using 9P. Execute the command:
$ minikube start --mount-string="$HOME/your/path:/data"
Then, if you mount /data into your Pod using hostPath, your local directory's data will be available in the Pod.
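A minimal sketch of the Pod side of that, assuming the directory was mounted into the VM at /data as above (the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: dev-app                       # hypothetical name
spec:
  containers:
    - name: app
      image: python:3.11-slim         # placeholder image
      volumeMounts:
        - name: src
          mountPath: /app             # where the code appears inside the container
  volumes:
    - name: src
      hostPath:
        path: /data                   # the 9P mount target inside the minikube VM
        type: Directory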
Another solution:
Mount the host's $HOME directory into minikube's /hosthome directory. Check your data:
$ ls -la /hosthome/your/path
To mount this directory, you just have to change your Pod's hostPath:
hostPath:
  path: /hosthome/your/path
Take a look: minikube-mount-data-into-pod.
You also need to know that:
Minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.
More: note-persistence-minikube.
See driver-mounts as an alternative.

Kubernetes: minikube persistent volume local filesystem storage location

I've read through all the docs and a few SO posts and can't find an answer to this question:
Where does minikube persist its persistent volumes in my local Mac file system?
Thanks
First of all, keep in mind that Kubernetes is running in a Minikube cluster. Minikube itself runs in a Virtual Machine, so all the data is stored in this VM, not on your macOS host.
When you want to point to the exact place where you would like to save this data in Kubernetes, you can choose between:
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
local
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
However, Minikube supports only hostPath.
In this case, you should check the Minikube documentation about Persistent Volumes:
minikube supports PersistentVolumes of type hostPath out of the box. These PersistentVolumes are mapped to a directory inside the running minikube instance (usually a VM, unless you use --driver=none, --driver=docker, or --driver=podman). For more information on how this works, read the Dynamic Provisioning section below.
minikube is configured to persist files stored under the following
directories, which are made in the Minikube VM (or on your localhost
if running on bare metal). You may lose data from other directories on
reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
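For illustration, a minimal sketch of a hostPath PersistentVolume stored under one of those persisted directories (the name, size, and sub-path are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv                       # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv0001                # lives inside the minikube VM and survives reboots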
If you would like to mount a directory from the host, you need to use minikube mount.
$ minikube mount <source directory>:<target directory>
For more details, please check Minikube Mounting filesystems documentation.
If you are using the volume type hostPath, the files are saved on your node.
To access your node's filesystem you can use the command minikube ssh, and under your mounted path you'll find your files.

kubernetes mountPath vs hostPath

I am trying to deploy an app to a Kubernetes cluster and I want to store data in a Persistent Volume. However, I am very confused about two parameters in the setup. Can someone explain what the difference is between volumes.hostPath and volumeMounts.mountPath? I read some documentation online, but it did not help me understand.
volumeMounts:
  - mountPath: /var/lib/mysql
volumes:
  hostPath:
    path: /k8s
If my setup is as above, is the volume going to be mounted at /k8s/var/lib/mysql?
The mount path is always the destination inside the Pod that a volume gets mounted to.
I think the documentation is pretty clear on what hostPath does:
A hostPath volume mounts a file or directory from the host node’s
filesystem into your Pod. This is not something that most Pods will
need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a Container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
So your example does not do what you think it does. It would mount the node's /k8s directory into the Pod at /var/lib/mysql.
This should be done only if you fully understand the implications!
Host path: The directory in your node.
Mount path: The directory in your pod.
Your setup will mount the node's directory (/k8s) into the pod's directory (/var/lib/mysql).
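Putting the two together, a minimal sketch of a complete Pod spec that mounts the node's /k8s directory at /var/lib/mysql inside the container; the pod name, volume name, and image are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-hostpath-demo           # hypothetical name
spec:
  containers:
    - name: mysql
      image: mysql:8.0                # placeholder image
      volumeMounts:
        - name: data                  # must match the volume name below
          mountPath: /var/lib/mysql   # path inside the container
  volumes:
    - name: data
      hostPath:
        path: /k8s                    # path on the node
        type: DirectoryOrCreate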