Kubernetes hostPath volume with WSL2 does not work - kubernetes

I have migrated to WSL2 - Windows 10.
Since then, I have the following issue:
hostPath volumes are not mounted into containers (the directories are empty).
The volumes are created and the desired path is correct, for example:
/volumes/my-cluster/services1/www
The /volumes directory has 777 permissions.
The volumes look like this:
vol-www 30Mi RWO Retain Bound jeedom/pvc-www hostpath 19m
The PersistentVolumeClaims are bound to the volumes:
pvc-www Bound vol-www 30Mi RWO hostpath 19m
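For reference, the PV behind vol-www looks roughly like the following (reconstructed from the kubectl output above; the exact manifest generated by the deployment may differ):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol-www
spec:
  storageClassName: hostpath
  capacity:
    storage: 30Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /volumes/my-cluster/services1/www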
In WSL1, at the start of the deployment (or Helm chart installation), the directories are created if they do not exist. The volumes are mounted into the containers and work well.
Condition: the volumes must be mounted under /c/... (not under /mnt/c/...).
With WSL2, there is no need to mount volumes under /c/...:
docker run -v /volumes/my-cluster/services1/www:/var/html/www my-image works well.
With Kubernetes, the local directories are not created and the directories in the containers are empty. A file created on the WSL path does not appear in the container, and the other way around does not work either.
Moreover, the method that works with WSL1 (mounting the volume under /c/...) does not work with WSL2.
Thanks

Related

Where can I locate the actual files of a Kubernetes PV hostPath

I just created the following PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sql-pv
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/sqldata"
Then I SSHed into the node and traversed to /var/lib, but I cannot see the sqldata directory created anywhere in it.
Where is the real directory created?
I created a Pod that mounts this volume to a path inside the container. When I SSH into the container, I can see the file in the mount path. Where are these files stored?
You have set up your cluster on Google Kubernetes Engine, which means the nodes are virtual machine instances on GCP. You have probably been connecting to the cluster using the Kubernetes Engine dashboard and the Connect to the cluster option. That does not SSH you into any of the nodes; it just starts a GCP Cloud Shell terminal instance with a command like:
gcloud container clusters get-credentials {your-cluster} --zone {your-zone} --project {your-project-name}
That command configures kubectl on GCP Cloud Shell by setting the proper cluster name, certificates, etc. in the ~/.kube/config file, so you have access to the cluster (by communicating with the cluster endpoint), but you are not SSHed into any node. That's why you can't access the path defined in the hostPath.
To find the hostPath directory, you need to:
find out on which node the pod is running
SSH into that node
Finding the node:
Run the following kubectl get pod {pod-name} -o wide command, replacing {pod-name} with your pod name:
user@cloudshell:~ (project)$ kubectl get pod task-pv-pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
task-pv-pod 1/1 Running 0 53m xx.xx.x.xxx gke-test-v-1-21-default-pool-82dbc10b-8mvx <none> <none>
SSH into the node:
Run the following gcloud compute ssh {node-name} command, replacing {node-name} with the node name from the previous command:
user@cloudshell:~ (project)$ gcloud compute ssh gke-test-v-1-21-default-pool-82dbc10b-8mvx
Welcome to Kubernetes v1.21.3-gke.2001!
You can find documentation for Kubernetes at:
http://docs.kubernetes.io/
The source for this release can be found at:
/home/kubernetes/kubernetes-src.tar.gz
Or you can download it at:
https://storage.googleapis.com/kubernetes-release-gke/release/v1.21.3-gke.2001/kubernetes-src.tar.gz
It is based on the Kubernetes source at:
https://github.com/kubernetes/kubernetes/tree/v1.21.3-gke.2001
For Kubernetes copyright and licensing information, see:
/home/kubernetes/LICENSES
user@gke-test-v-1-21-default-pool-82dbc10b-8mvx ~ $
On the node you will now find the hostPath directory (in your case /var/lib/sqldata); it will also contain files if the pod created any.
Avoid hostPath if possible
Using hostPath is not recommended. As mentioned in the comments, it will cause issues when a pod is created on a different node (although you have a single-node cluster), and it also presents many security risks:
Warning:
HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.
If restricting HostPath access to specific directories through AdmissionPolicy, volumeMounts MUST be required to use readOnly mounts for the policy to be effective.
In your case it is much better to use the gcePersistentDisk volume type - check this article.
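A minimal sketch of such a PV, assuming a GCE persistent disk named sql-disk has already been created (the disk name is illustrative, not taken from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: sql-disk   # hypothetical pre-created GCE disk
    fsType: ext4
The disk itself can be created beforehand with something like gcloud compute disks create sql-disk --size=10GB --zone={your-zone}.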

What are the differences between a 9P and hostPath mount in Kubernetes?

I am looking to do local dev of an app that is running in Kubernetes on minikube. I want to mount a local directory to speed up development, so I can make code changes to my app (python) without rebuilding the container.
If I understand correctly, I have two out of the box options:
9P mount which is provided by minikube
hostPath mount which comes directly from Kubernetes
What are the differences between these, and in what cases would one be appropriate over the other?
A 9P mount and hostPath are two different concepts. You cannot mount a directory into a pod using a 9P mount.
A 9P mount is used to mount a host directory into the minikube VM.
hostPath is a volume type that mounts a file or directory from the host node's (in your case, the minikube VM's) filesystem into your Pod.
Also take a look at the types of Persistent Volumes: pv-types-k8s.
If you want to mount a local directory into a pod:
First, you need to mount your directory, for example $HOME/your/path, into your minikube VM using 9P. Execute the command:
$ minikube start --mount-string="$HOME/your/path:/data"
Then, if you mount /data into your Pod using hostPath, you will get your local directory's data inside the Pod.
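For illustration, a Pod that picks up that directory could look like this (the pod name, image, and container path are made up for the example):
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod                   # hypothetical name
spec:
  containers:
    - name: app
      image: python:3.11          # hypothetical image
      volumeMounts:
        - name: host-code
          mountPath: /app         # where the files appear inside the container
  volumes:
    - name: host-code
      hostPath:
        path: /data               # the 9P mount target inside the minikube VM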
Another solution:
Mount the host's $HOME directory into minikube's /hosthome directory. Check your data:
$ ls -la /hosthome/your/path
To mount this directory, you just have to change your Pod's hostPath:
hostPath:
  path: /hosthome/your/path
Take a look: minikube-mount-data-into-pod.
Also you need to know that:
Minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
More: note-persistence-minikube.
See driver-mounts as an alternative.

Kubernetes: minikube persistent volume local filesystem storage location

I've read through all the docs and a few SO posts and can't find an answer to this question:
Where does minikube persist its persistent volumes in my local Mac file system?
Thanks
First of all, keep in mind that Kubernetes is running in a minikube cluster. Minikube itself runs in a virtual machine, so all data is stored in this VM, not on your macOS host.
When you want to specify the exact place where you would like to save this data in Kubernetes, you can choose between:
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
local
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
However, minikube supports only hostPath.
In this case you should check the minikube documentation about Persistent Volumes:
minikube supports PersistentVolumes of type hostPath out of the box. These PersistentVolumes are mapped to a directory inside the running minikube instance (usually a VM, unless you use --driver=none, --driver=docker, or --driver=podman). For more information on how this works, read the Dynamic Provisioning section below.
minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
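For example, a hostPath PersistentVolume whose data survives restarts could point somewhere under /data (a minimal sketch; the name, size, and subdirectory are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data              # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv-data      # under /data, so it persists across reboots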
If you would like to mount a directory from the host, you need to use minikube mount.
$ minikube mount <source directory>:<target directory>
For more details, please check Minikube Mounting filesystems documentation.
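For example, to expose a local source tree inside the minikube VM (the paths are illustrative):
$ minikube mount $HOME/projects/app:/app-src
Note that the command keeps running in the foreground for as long as the mount is active.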
If you are using the hostPath volume type, the files are saved on your node.
To access your node's filesystem you can use the command minikube ssh; under your mounted path you'll find your documents.

kubernetes mountPath vs hostPath

I am trying to deploy an app to a Kubernetes cluster and I want to store data in a Persistent Volume. However, I am very confused about two parameters in the setup. Can someone explain the difference between volumes.hostPath and volumeMounts.mountPath? I read some documentation online but it did not help me understand.
volumeMounts:
  - mountPath: /var/lib/mysql
volumes:
  hostPath:
    path: /k8s
If my setup is as above, is the volume going to be mounted at /k8s/var/lib/mysql?
The mount path is always the destination inside the Pod a volume gets mounted to.
I think the documentation is pretty clear on what hostPath does:
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some uses for a hostPath are:
- running a Container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a Container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
So your example does not do what you think it does. It mounts the node's /k8s directory into the Pod at /var/lib/mysql.
This should be done only if you fully understand the implications!
Host path: the directory on your node.
Mount path: the directory in your pod.
Your setup will mount the node's directory (/k8s) into the pod's directory (/var/lib/mysql).
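Note that the snippet in the question also omits the name field that ties a volume to its volumeMounts entry. A complete Pod spec would look roughly like this (the pod name and image are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: mysql                       # hypothetical name
spec:
  containers:
    - name: mysql
      image: mysql:8                # hypothetical image
      volumeMounts:
        - name: mysql-data          # must match the volume name below
          mountPath: /var/lib/mysql # destination inside the container
  volumes:
    - name: mysql-data
      hostPath:
        path: /k8s                  # source directory on the node
        type: DirectoryOrCreate     # create the directory on the node if it is missing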

kubernetes persistent volume accessmode

It seems that Kubernetes supports 3 kinds of access modes for persistent volumes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany.
I'm really curious about the scheduler strategy for a pod which uses a ReadWriteOnce volume. For example, I created an RC with 2 replicas; I guess the two pods will be scheduled onto the same host because they use a volume that has the ReadWriteOnce mode?
I really want to know the source code of this part.
I think the upvoted answer is wrong. As per the Kubernetes docs on Access Modes:
The access modes are:
ReadWriteOnce -- the volume can be mounted as read-write by a single node
ReadOnlyMany -- the volume can be mounted read-only by many nodes
ReadWriteMany -- the volume can be mounted as read-write by many nodes
So access modes, as defined today, only describe node attach (not pod mount) semantics, and don't enforce anything.
So, to prevent two pods from mounting the same PVC, you can use pod anti-affinity. It does not directly forbid mounting one volume into 2 pods scheduled on the same node, but anti-affinity can be used to ask the scheduler not to run the 2 pods on the same node at all, and that in turn prevents one volume from being mounted into 2 pods (see the sketch below).
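A sketch of such an anti-affinity rule, assuming the replicas carry a label like app: myapp (the label is hypothetical):
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: myapp                      # hypothetical label shared by the replicas
          topologyKey: kubernetes.io/hostname # never co-locate two such pods on one node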
If a pod mounts a volume with ReadWriteOnce access mode, no other pod can mount it. In GCE (Google Compute Engine) the only allowed modes are ReadWriteOnce and ReadOnlyMany. So either one pod mounts the volume ReadWrite, or one or more pods mount the volume ReadOnlyMany.
The scheduler (code here) will not allow a pod to schedule if it uses a GCE volume that has already been mounted read-write.
(Documentation reference for those who didn't understand the question: persistent volume access modes)
In Kubernetes you provision storage either statically (by creating a PersistentVolume yourself) or dynamically (through a StorageClass). Once the storage is available to be bound and claimed, you need to configure how your Pods or nodes connect to the storage (a persistent volume). That can be configured with the four access modes below.
ReadOnlyMany (ROX)
In this mode, multiple pods running on different nodes can connect to the storage and carry out read operations.
ReadWriteMany (RWX)
In this mode, multiple pods running on different nodes can connect to the storage and carry out read and write operations.
ReadWriteOnce (RWO)
In this mode, multiple pods running on a single node can connect to the storage and carry out read and write operations.
ReadWriteOncePod (RWOP)
In this mode, the volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+.
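A PVC requesting this mode could look like the following minimal sketch (the claim name and size are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-claim
spec:
  accessModes:
    - ReadWriteOncePod    # only one pod in the whole cluster may use this claim
  resources:
    requests:
      storage: 1Gi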
Follow the documentation to get more insight.