What are the differences between a 9P and hostPath mount in Kubernetes?

I am looking to do local dev of an app that is running in Kubernetes on minikube. I want to mount a local directory to speed up development, so I can make code changes to my app (python) without rebuilding the container.
If I understand correctly, I have two out of the box options:
9P mount which is provided by minikube
hostPath mount which comes directly from Kubernetes
What are the differences between these, and in what cases would one be appropriate over the other?

A 9P mount and hostPath are two different concepts. You cannot mount a directory into a Pod using a 9P mount.
A 9P mount is used to mount a host directory into the minikube VM.
hostPath is a volume type which mounts a file or directory from the host node's filesystem (in your case, the minikube VM) into your Pod.
Also take a look at the types of Persistent Volumes: pv-types-k8s.
If you want to mount a local directory into a Pod:
First, mount your directory, for example $HOME/your/path, into your minikube VM using 9P. Execute:
$ minikube start --mount-string="$HOME/your/path:/data"
Then, if you mount /data into your Pod using hostPath, you will see your local directory's data inside the Pod.
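A minimal sketch of such a Pod (the image and names below are illustrative, and depending on your minikube version you may also need to pass --mount alongside --mount-string):
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
    - name: app
      image: python:3.11
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: src
          mountPath: /app          # where the code appears inside the container
  volumes:
    - name: src
      hostPath:
        path: /data                # the 9P mount target inside the minikube VM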
Another solution:
Mount the host's $HOME directory into minikube's /hosthome directory. Check your data:
$ ls -la /hosthome/your/path
To mount this directory, you just have to change your Pod's hostPath:
hostPath:
  path: /hosthome/your/path
Take a look: minikube-mount-data-into-pod.
Also you need to know that:
Minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
More: note-persistence-minikube.
See driver-mounts as an alternative.

Related

Minikube multi-node cluster mounting host machine filesystem to all the nodes

I am creating a minikube multi-node Kubernetes cluster with 2 nodes, mounting the $HOME/Minikube/mount directory from the host filesystem to the /data directory on the cluster nodes.
I used the following command to achieve this:
minikube start --nodes 2 --cpus 2 --memory 2048 --disk-size 10g --mount-string $HOME/Minikube/mount:/data --mount --namespace test -p multi-node
Minikube version: 1.28.0
Kubernetes client version: v1.26.0
Kubernetes server version: v1.24.3
The expectation was to find the /data directory in both nodes (multi-node, the control plane, and multi-node-m02) mounted to the $HOME/Minikube/mount directory of the host filesystem.
But when I ssh into the Minikube nodes, I only see the /data directory mounted in multi-node, which functions as the Kubernetes control plane node. The local filesystem directory is not mounted on both nodes.
$ minikube ssh -n multi-node
$ ls -la /data/
total 0

$ minikube ssh -n multi-node-m02
$ ls -la /data
ls: cannot access '/data': No such file or directory
Is there some way to achieve this requirement of mounting a local filesystem directory to all the nodes in a multi-node Minikube k8s cluster?
As mentioned in this issue, using minikube start --mount has some issues when mounting files. Try using the minikube mount command instead.
If the issue still persists, it is with the storage provisioner, which is broken for multi-node mode. For this, minikube has recently added a local path provisioner; setting it as the default storage class resolves the issue.
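For example, a minimal sketch of the mount command (the -p flag selects the profile created above; exact behaviour may vary between minikube versions):
$ minikube mount $HOME/Minikube/mount:/data -p multi-node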

Kubernetes: minikube persistent volume local filesystem storage location

I've read through all the docs and a few SO posts and can't find an answer to this question:
Where does minikube persist its persistent volumes in my local Mac file system?
Thanks
First of all, keep in mind that Kubernetes is running in a minikube cluster. Minikube itself runs in a virtual machine, so all data is stored in this VM, not on your macOS host.
When you want to specify the exact place where this data should be saved in Kubernetes, you can choose between:
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
local
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
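For context, a statically created local PersistentVolume would look roughly like this (the name, path, and node name are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:                      # ties the volume to a specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube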
However, minikube supports only hostPath.
In this case, you should check the minikube documentation about Persistent Volumes:
minikube supports PersistentVolumes of type hostPath out of the box. These PersistentVolumes are mapped to a directory inside the running minikube instance (usually a VM, unless you use --driver=none, --driver=docker, or --driver=podman). For more information on how this works, read the Dynamic Provisioning section below.
minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
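As a sketch, a hostPath PersistentVolume placed under one of those persisted directories would survive reboots (name, size, and path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/example-pv         # under /data, so it persists across reboots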
If you would like to mount a directory from the host, you need to use minikube mount.
$ minikube mount <source directory>:<target directory>
For more details, please check Minikube Mounting filesystems documentation.
If you are using the hostPath volume type, the files are saved on your node.
To access your node's filesystem, you can use the command minikube ssh; under your mounted path you will find your files.
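For example (the exact directory layout under the provisioner path can differ between minikube versions):
$ minikube ssh
$ ls -la /tmp/hostpath-provisioner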

mount fuse filesystem inside kubernetes pod

I would like to mount a Google Drive inside my Pod. I added:
securityContext:
  privileged: true
  capabilities:
    add:
      - SYS_ADMIN
to the Pod. google-drive-ocamlfuse does mount the data directory, but a simple command on the mounted filesystem results in an Input/output error.
Is it possible to mount a fuse filesystem inside a pod? Is there anything I need on the host?
You can use smarter-device-manager.
We have a setup similar to the one described in this blog post, and I can confirm we can mount the /dev/fuse device in unprivileged mode:
https://randomopenstackthoughts.wordpress.com/2021/03/12/how-to-mount-dev-fuse-without-privileged-mode-in-kubernetes/
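Assuming smarter-device-manager is deployed and configured to expose /dev/fuse as in that post, the container can request the device as an extended resource instead of running privileged (the resource name below follows the post's configuration and may differ in your setup):
resources:
  limits:
    smarter-devices/fuse: "1"    # device exposed by smarter-device-manager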

Kubernetes hostPath volume with WSL2 does not work

I have migrated to WSL2 - Windows 10.
Since then, I have the following issue:
hostPath volumes are not mounted into containers (the directories are empty).
The volumes are created correctly and the desired path is correct, e.g.:
/volumes/my-cluster/services1/www
Directory "/volumes" have 777 permissions
Volumes looks like that :
vol-www 30Mi RWO Retain Bound jeedom/pvc-www hostpath 19m
The PersistentVolumeClaims are bound to the volumes:
pvc-www Bound vol-www 30Mi RWO hostpath 19m
In WSL1, at the start of the deployment (or Helm chart installation), if the directories did not exist they were created; volumes were mounted into the containers and worked well.
Condition: volumes had to be mounted under /c/... (not under /mnt/c/...).
With WSL2, there is no need to mount volumes under /c/...
docker run -v /volumes/my-cluster/services1/www:/var/html/www my-image works well.
With Kubernetes, the local directories are not created and the directories in the container are empty. When I create a file on the WSL path, it does not appear in the container, and the other way around does not work either.
Moreover, the method that works with WSL1 (mounting volumes under /c/...) does not work with WSL2.
Thanks

kubernetes mountPath vs hostPath

I am trying to deploy an app to a Kubernetes cluster and I want to store data in a Persistent Volume. However, I am very confused about two parameters in the setup. Can someone explain the difference between volumes.hostPath and volumeMounts.mountPath? I read some documentation online but it did not help me understand.
volumeMounts:
  - mountPath: /var/lib/mysql
volumes:
  hostPath:
    path: /k8s
If my setup is as above, is the volume going to be mounted at /k8s/var/lib/mysql?
The mount path is always the destination inside the Pod a volume gets mounted to.
I think the documentation is pretty clear on what hostPath does:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a Container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
So your example does not do what you think it does. It would mount the node's /k8s directory into the Pod at /var/lib/mysql.
This should be done only if you fully understand the implications!
Host path: the directory on your node.
Mount path: the directory inside your Pod.
Your setup will mount the node's directory (/k8s) into the Pod's directory (/var/lib/mysql).
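To make the relationship concrete, here is a minimal sketch of a full Pod spec (names and image are illustrative; the volume name is what ties the two sections together):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: mysql:8
      volumeMounts:
        - name: data                  # must match the volume name below
          mountPath: /var/lib/mysql   # destination inside the container
  volumes:
    - name: data
      hostPath:
        path: /k8s                    # source directory on the node
        type: DirectoryOrCreate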