Ubuntu 20.04 qemu-kvm sharing folders - kubernetes

I am using minikube on Ubuntu and start it with --driver=none. Since kvm2 is the recommended driver for a Linux host and --driver=none requires running as root, I am now switching to kvm2.
I have a few Kubernetes deployments using persistent volumes and claims.
With --driver=none, minikube mounts the host paths into these volumes and everything runs as expected.
Now with kvm2, these mounts are not working because the host paths cannot be found.
Is there any way to mount the host paths into all the deployments automatically?
The idea is to have a central location, say /data, as the central persistent storage. Each deployment gets its own subfolder under /data, and each persistent volume defines a mount path that uses that subfolder. Is there any way to mount the host's /data folder into the pods automatically, through configuration, as a one-time setup?
If there is a better way to achieve persistent storage, please let me know.
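For illustration, the kind of per-deployment hostPath PersistentVolume and claim I have in mind would look roughly like the sketch below. The names, the size and the /data/myapp subfolder are only examples, and with the kvm2 driver the /data path would also have to exist inside the minikube VM (for instance via minikube mount) before the pods can use it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-pv                  # example name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /data/myapp             # per-deployment subfolder; must exist on the minikube node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pvc                 # example name, referenced by the deployment
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi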

Related

K8s - an NFS share is only mounted read-only in the pod

Environment: external NFS share for persistent storage, accessible to all, R/W, CentOS 7 VMs (NFS share and K8s cluster), NFS utils installed on all workers.
Mounting on a VM, e.g. a K8s worker node, works correctly; the share is R/W.
Deployed in the K8s cluster: PV, PVC, Deployment (volumes referencing the PVC, volumeMount).
The structure of the YAML files corresponds to the various instructions and postings, including the postings here on the site.
The pod starts and the share is mounted. Unfortunately, it is read-only. None of the suggestions from the postings I have found about this have worked so far.
Any idea what else I could look out for, what else I could try?
Thanks. Thomas
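For reference, the NFS PersistentVolume is of this general shape; the name, size and server address below are placeholders rather than the actual file from the cluster:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv                     # placeholder name
spec:
  capacity:
    storage: 10Gi                  # placeholder size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.16.6.10            # placeholder address on the export's subnet
    path: /nfsshare
    readOnly: false                # the default; a read-only mount can also be forced here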
After digging deep, I found the cause of the problem. Apparently, the syntax for the NFS export is very sensitive; a single extra space can be problematic.
On the NFS server, two export entries were stored in the kernel tables, the first R/O and the second R/W. I don't know whether this is a CentOS bug triggered by the syntax in /etc/exports.
On another CentOS machine I was able to mount the share without any problems (R/W). In the container (Debian-based image), however, it was mounted R/O only. I have not investigated whether this is due to Kubernetes or whether Debian behaves differently.
After correcting the /etc/exports file and restarting the NFS server, there was only one correct entry in the kernel table. After that, mounting R/W worked on a CentOS machine as well as in the Debian-based container inside K8s.
Here are the files / table:
previous /etc/exports:
/nfsshare 172.16.6.* (rw,sync,no_root_squash,no_all_squash,no_acl)
==> kernel:
/nfsshare 172.16.6.*(ro, ...
/nfsshare *(rw, ...
corrected /etc/exports (without the blank):
/nfsshare *(rw,sync,no_root_squash,no_all_squash,no_acl)
In principle, the idea of using an init container is a good one. Thank you for reminding me of this.
I have tried it.
Unfortunately, it doesn't change the basic problem. The file system is mounted "read-only" by Kubernetes. The init container returns the following error message (from the log):
chmod: /var/opt/dilax/enumeris: Read-only file system
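For context, the kind of init container meant here is roughly the sketch below. The busybox image, the Deployment name and the volume/claim names are assumptions made for the sketch; only the mount path is taken from the log above:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dilax-app                        # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dilax-app
  template:
    metadata:
      labels:
        app: dilax-app
    spec:
      initContainers:
        - name: fix-permissions
          image: busybox:1.36            # assumed image
          # Open up permissions on the NFS mount before the app starts;
          # this is the step that failed with "Read-only file system".
          command: ["sh", "-c", "chmod -R 0777 /var/opt/dilax/enumeris"]
          volumeMounts:
            - name: nfs-data
              mountPath: /var/opt/dilax/enumeris
      containers:
        - name: app
          image: my-app-image:latest     # assumed image
          volumeMounts:
            - name: nfs-data
              mountPath: /var/opt/dilax/enumeris
      volumes:
        - name: nfs-data
          persistentVolumeClaim:
            claimName: nfs-pvc           # assumed claim name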

Do not see created file in the host directory mounted by hostPath

I have a Kubernetes cluster with 1 master and 1 worker. I deployed a Node.js application and mounted the directory /home/user_name/data into the pod using a hostPath volume. I set the volume's type to Directory.
The Node.js application is working perfectly, writing data and returning the saved data without any error.
Initially, I also tested by deleting the data folder and applying the deployment, and I received an error because the volume type is Directory. So the mount seems to be pointing correctly at the directory on the worker node. But when I look for the file that should have been created by the Node.js app, I do not see any file in the host directory.
I also checked whether the pod is running on the worker node, and it is running on the worker node only.
My understanding of the hostPath volume type is that it is similar to Docker's bind mount.
I have no idea why I cannot see the file that the Node.js app is creating while otherwise working perfectly. Any help will be appreciated.
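For reference, the volume part of the pod spec has this general shape; the pod/container names, image and mountPath are placeholders, while the host path and type are as described above:
apiVersion: v1
kind: Pod
metadata:
  name: nodejs-app                       # placeholder name
spec:
  containers:
    - name: nodejs
      image: my-nodejs-image:latest      # placeholder image
      volumeMounts:
        - name: data
          mountPath: /usr/src/app/data   # placeholder path inside the container
  volumes:
    - name: data
      hostPath:
        path: /home/user_name/data       # directory on the worker node
        type: Directory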

How to mimic Docker ability to pre-populate a volume from a container directory with Kubernetes

I am migrating my previous deployment made with docker-compose to Kubernetes.
In my previous deployment, some containers have data created at build time under certain paths, and these paths are mounted as persistent volumes.
Therefore, as the Docker volume documentation states, the persistent volume (not a bind mount) will be pre-populated with the container directory's content.
I'd like to achieve this behavior with Kubernetes and its persistent volumes. How can I do that? Do I need to add some kind of script logic to copy my container's files to the mounted path when the data is not present the first time the container starts?
Possibly related question: Kubernetes mount volume on existing directory with files inside the container
I think your options are:
ConfigMap (is the "some data" configuration files?)
Init containers (as mentioned; a sketch follows this list)
CSI Volume Cloning (clone combining an init or your first app container)
There used to be a gitRepo volume; it was deprecated in favour of init containers, from which you can clone your config and data.
A hostPath volume mount is an option too.
An NFS volume is probably a very reasonable option, and similar from an approach point of view to your Docker volumes.
Storage types: NFS, iscsi, awsElasticBlockStore, gcePersistentDisk and others can be pre-populated. There are constraints. NFS is probably the most flexible for sharing bits and bytes.
FYI:
subPath might be of interest too, depending on your use case, and
PodPreset might help in streamlining the operation across your fleet of pods.
HTH
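A minimal sketch of the init container option, assuming an image that was built with its data under /app/data and a PVC named app-data-pvc (all names and paths here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: prepopulate-demo                 # illustrative name
spec:
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: app-data-pvc          # assumed claim name
  initContainers:
    - name: seed
      image: my-app-image:latest         # assumed: the image that has /app/data baked in
      # Mount the (possibly empty) volume at a side path and copy the baked-in
      # files into it only when it is still empty, so restarts don't overwrite data.
      command: ["sh", "-c", "[ \"$(ls -A /pvc)\" ] || cp -a /app/data/. /pvc/"]
      volumeMounts:
        - name: app-data
          mountPath: /pvc
  containers:
    - name: app
      image: my-app-image:latest         # assumed image
      volumeMounts:
        - name: app-data
          mountPath: /app/data           # the path the image populated at build time
Unlike Docker named volumes, Kubernetes mounts the volume over whatever the image already has at that path, so the copy has to be done explicitly, for example by an init container like this.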

Trying to mount a Lustre filesystem in a pod: tries and problems

For our data science platform we have user directories on a Lustre filesystem (HPC world) that we want to present inside Jupyter notebooks running on OpenShift (they are spawned on demand by JupyterHub after authentication and other custom actions).
We tried the following approaches:
Mounting the Lustre filesystem in a privileged sidecar container in the same pod as the notebook, sharing the mount through an EmptyDir volume (that way the notebook container does not need to be privileged). We have the mount/umount of Lustre happen in the sidecar as post-start and pre-stop hooks in the pod lifecycle. The problem is that sometimes when we delete the pod, umount does not work or hangs for whatever reason, and as EmptyDirs are "destroyed", everything in Lustre gets flushed because the fs is still mounted. A really bad result...
Mounting the Lustre filesystem directly on the node and creating a hostPath volume in the pod with a volumeMount in the notebook container. It kind of works, but only if the container is run as privileged, which of course we don't want.
We tried more specific SCCs which authorize the hostPath volume (hostaccess, hostmount-anyuid), and also made a custom one with hostPath volume and 'allowHostDirVolumePlugin: true', but with no success. We can see the mount from the container, but even with everything wide open (777), we get "permission denied" for whatever we try to do on the mount (ls, touch, ...). Again, it works only if the container is privileged. It does not seem to be SELinux related; at least we have no alerts.
Does anyone see where the problem is, or have another suggestion we can try?
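For reference, the first approach (privileged sidecar plus shared EmptyDir) is roughly of the following shape. The images, the Lustre device string and the paths are placeholders rather than our actual spec, and the mountPropagation settings are what let the mount made inside the sidecar become visible to the notebook container:
apiVersion: v1
kind: Pod
metadata:
  name: notebook-with-lustre                # placeholder name
spec:
  volumes:
    - name: lustre-share
      emptyDir: {}
  containers:
    - name: lustre-mounter                  # privileged sidecar that performs the mount
      image: lustre-client:latest           # placeholder image with the Lustre client tools
      securityContext:
        privileged: true
      command: ["sh", "-c", "while true; do sleep 3600; done"]   # keep the sidecar alive
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "mount -t lustre mgs@tcp:/fsname /mnt/lustre"]   # placeholder device
        preStop:
          exec:
            command: ["sh", "-c", "umount /mnt/lustre"]
      volumeMounts:
        - name: lustre-share
          mountPath: /mnt/lustre
          mountPropagation: Bidirectional   # propagate the mount into the shared volume
    - name: notebook
      image: jupyter/base-notebook          # placeholder notebook image
      volumeMounts:
        - name: lustre-share
          mountPath: /home/jovyan/work      # placeholder path inside the notebook
          mountPropagation: HostToContainer # see mounts created after container start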

Configure apiserver to use encryption config using minikube

I am trying to configure the kube-apiserver so that it encrypts Secrets at rest in my minikube cluster.
For that, I have followed the documentation on kubernetes.io but got stuck at step 3, which says
Set the --encryption-provider-config flag on the kube-apiserver to point to the location of the config file.
I have discovered the option --extra-config on minikube start and have tried starting my setup using
minikube start --extra-config=apiserver.encryption-provider-config=encryptionConf.yaml
but naturally it doesn't work, as encryptionConf.yaml is located on my local file system and not inside the pod that minikube spins up. The error minikube logs gives me is
error: error opening encryption provider configuration file "encryptionConf.yaml": open encryptionConf.yaml: no such file or directory
What is the best practice to get the encryption configuration file onto the kube-apiserver? Or is minikube perhaps the wrong tool to try out these kinds of things?
I found the solution myself in this GitHub issue, where they had a similar problem with passing a configuration file. The comment that helped me described the slightly hacky solution of making use of the fact that the directory /var/lib/localkube/certs/ from the minikube VM is mounted into the apiserver.
So my final solution was to run
minikube mount .:/var/lib/minikube/certs/hack
where the current directory contained my encryptionConf.yaml, and then start minikube like so:
minikube start --extra-config=apiserver.encryption-provider-config=/var/lib/minikube/certs/hack/encryptionConf.yaml
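For completeness, encryptionConf.yaml itself is an EncryptionConfiguration along these lines; the key name and the secret value below are placeholders, not real key material:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1                              # placeholder key name
              secret: <base64-encoded 32-byte key>    # e.g. head -c 32 /dev/urandom | base64
      - identity: {}                                  # fallback so existing plaintext secrets stay readable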
Depending on the driver used, some directories are mounted into your minikube VM.
Check this link - https://kubernetes.io/docs/setup/minikube/#mounted-host-folders
~/.minikube/files is also mounted into the VM at /files, so you can keep your file there and use that path for the API server config.
I had similar issues on Windows regarding the file path location.
Since C:\Users\%USERNAME%\ is mounted into the minikube VM by default,
I copied the files to the Desktop folder (any folder under the C drive) and ran
minikube start --extra-config=apiserver.encryption-provider-config=/c/Users/%USERNAME%/.../<file-name>
Hope this is helpful for folks facing this issue on the Windows platform.