Cannot see created file in the host directory mounted by hostPath - Kubernetes

I have a Kubernetes cluster with 1 master and 1 worker. I deployed a Node.js application and mounted the directory /home/user_name/data into the pod using a hostPath volume, with the volume type set to Directory.
The Node.js application is working perfectly: it writes data and returns the saved data without any error.
Initially, I also tested by deleting the data folder and applying the deployment again, and I received an error because the volume type is Directory. So it looks like the mount is correctly pointing to the directory on the worker node. But when I look for the file that the Node.js app should have created, I do not see any file in the host directory.
I also checked which node the pod is running on, and it is running on the worker node only.
My understanding of the hostPath volume type is that it is similar to Docker's bind mount.
I have no idea why I cannot see the file that the Node.js app is creating while otherwise working perfectly. Any help will be appreciated.
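A quick way to narrow this down is to compare the path the app actually writes to with the mountPath in the pod spec, and then look at both sides of the mount. A rough sketch of the checks (the pod name and the container-side path /data are placeholders, not taken from your manifests):

# confirm the pod runs on the worker and see which mounts it has
kubectl get pods -o wide
kubectl describe pod <pod-name> | grep -A3 Mounts

# list what the app sees inside the container (assumed mountPath /data)
kubectl exec <pod-name> -- ls -la /data

# on the worker node itself, list the hostPath directory
ls -la /home/user_name/data

If the file is visible under the mountPath inside the container but not under /home/user_name/data on the worker, the app is most likely writing to a path outside the mounted directory, or the volumeMount points somewhere other than you expect.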

Related

Apache Commons IO with Kubernetes

The Apache Commons IO library is used for file deletion on a Kubernetes persistent volume, but it is not working as expected. FileUtils.forceDelete is used to delete a file on the Kubernetes PV, but the file is not deleted. The user running this app has full read/write/execute permissions.

K8s - an NFS share is mounted read-only in the pod

Environment: an external NFS share for persistent storage, accessible to all, R/W; CentOS 7 VMs (NFS server and K8s cluster); NFS utils installed on all workers.
Mounting on a VM, e.g. a K8s worker node, works correctly; the share is R/W.
Deployed in the K8s cluster: PV, PVC, Deployment (volumes referencing the PVC, volumeMounts).
The structure of the YAML files corresponds to the various instructions and postings, including the postings here on the site.
The pod starts and the share is mounted. Unfortunately, it is read-only. None of the suggestions from the postings I have found on this have worked so far.
Any idea what else I could look out for, what else I could try?
Thanks. Thomas
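Two checks that usually narrow this down: look at the flags the kernel actually used for the mount inside the pod, and look at what the server claims to export. A sketch (the pod name, the in-pod mount point /mnt/nfs and the NFS server address are placeholders):

# inside the pod: effective mount flags (rw vs ro) and a write test
kubectl exec <pod-name> -- sh -c 'mount | grep nfs'
kubectl exec <pod-name> -- touch /mnt/nfs/write-test

# from a worker node: the export list the server is publishing
showmount -e <nfs-server-ip>

If the mount line already shows ro, the problem is below Kubernetes (export options or the PV's readOnly settings) rather than in the Deployment.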
After digging deeper, I found the cause of the problem. Apparently, the syntax of the NFS export is very sensitive; one extra space can be problematic.
On the NFS server, two export entries were stored in the kernel tables: the first R/O and the second R/W. I don't know whether this is a CentOS bug triggered by the syntax in /etc/exports.
On another CentOS machine I was able to mount the share without any problems (R/W). In the container (Debian-based image), however, it was mounted read-only. I have not investigated whether this is due to Kubernetes or whether Debian behaves differently.
After correcting the /etc/exports file and restarting the NFS server, there was only one correct entry in the kernel table. After that, mounting R/W worked on a CentOS machine as well as in the Debian-based container inside K8s.
Here are the files / table:
previous /etc/exports:
/nfsshare 172.16.6.* (rw,sync,no_root_squash,no_all_squash,no_acl)
==> kernel:
/nfsshare 172.16.6.*(ro, ...
/nfsshare *(rw, ...
corrected /etc/exports (w/o the blank):
/nfsshare *(rw,sync,no_root_squash,no_all_squash,no_acl)
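After changing /etc/exports it helps to re-export and check what the kernel actually publishes, since the duplicate R/O entry only shows up in the effective export table, not in the file itself. A short sketch, run on the NFS server:

# reload the export table from /etc/exports
exportfs -ra

# show the effective exports and their options per client
exportfs -v
cat /var/lib/nfs/etab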
In principle, the idea of using an init container is a good one. Thank you for reminding me of this.
I have tried it.
Unfortunately, it doesn't change the basic problem. The file system is mounted "read-only" by Kubernetes. The init container returns the following error message (from the log):
chmod: /var/opt/dilax/enumeris: Read-only file system

Ubuntu 20.04 qemu-kvm sharing folders

I am using minikube on Ubuntu and start it with --driver=none. Since kvm2 is recommended (for a Linux host) and --driver=none requires running as the root user, I am now switching to kvm2.
I have a few Kubernetes deployments using persistent volumes and claims.
When minikube started, it mounted the host paths for these volumes and everything ran as expected.
Now with kvm2, these mounts are not working because the host path cannot be found.
Is there any way to mount the host path into all the deployments automatically?
The idea is to have a central location, say /data, as central persistent storage. Each deployment would have its own subfolder under /data, and in the persistent volume I would define the mount path to use this subfolder. Is there any way to mount the host's /data folder into the pods automatically through configuration, as a one-time setup?
If there is a better way to achieve persistent storage, please let me know.
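With the kvm2 driver the cluster runs inside a VM, so a hostPath now refers to a path inside that VM rather than on the Ubuntu host. One way to keep the /data layout is to mount the host directory into the minikube VM once, and leave the hostPath-based PVs pointing at the same path. A sketch, assuming the host directory is /data and the same path is used inside the VM:

# expose the host's /data inside the minikube VM (9p mount; keep this running)
minikube mount /data:/data

# verify from inside the VM
minikube ssh "ls -la /data"

Because minikube mount has to stay running, some setups instead copy the data into the VM or switch to PVCs backed by minikube's default storage provisioner; which is better depends on whether the files must live on the host.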

Kubernetes persistent volume claim on /var/www/html problem

I have a Magento deployment on Nginx which uses a persistent volume and a persistent volume claim. Everything works fine, but I am struggling with one problem. I am using an initContainer to install Magento via the CLI (which works fine), but as soon as my pod starts and mounts the PVC onto /var/www/html (my webroot), the data previously installed in the initContainer is lost (or rather, replaced by the new mount). My workaround was to install Magento into /tmp/magento in the initContainer and, as soon as the "real" pod is up, copy the data from /tmp/magento to /var/www/html. As you can imagine, this takes a while and is kind of a permission hell, but it works.
Is there any way I can install my app directly into the target directory without the mount hiding my files? I have to use a PV/PVC because I am mounting the pod directory via NFS, and I also don't want to lose my files.
Update: The Magento deployment is inside a Docker image and is installed during the Docker build. So if I install the data into the target location, the Kubernetes mount replaces it with an empty mount. That's the main reason for the workaround. The goal is to have the whole installation inside the image.
If Magento is already installed inside the image and located at some path (say /tmp/magento), but you want it to be accessible at /var/www/html/magento, why don't you just create a symlink pointing to the existing location?
So Magento will be installed during the image build process, and in the entrypoint an additional command
ln -s /tmp/magento /var/www/html/magento
will be run before the Nginx server itself starts. No need for initContainers.
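A sketch of such an entrypoint (the paths are from the answer above; the foreground Nginx command is an assumption about how the image starts):

#!/bin/sh
# entrypoint.sh: expose the Magento install baked into the image under the webroot,
# then hand over to Nginx in the foreground
ln -sfn /tmp/magento /var/www/html/magento
exec nginx -g 'daemon off;'

The important part is that the symlink is created at container start, after the PVC has been mounted on /var/www/html, so the mount cannot hide it the way it hides files created during the image build.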

glusterfs volume creation failed - brick is already part of volume

In a cloud environment, we have a cluster of GlusterFS nodes (participating in a Gluster volume) and clients (that mount the Gluster volumes). These nodes are created using HashiCorp Terraform.
Once the cluster is up and running, if we want to change the Gluster machine configuration, such as increasing the compute size from 4 CPUs to 8 CPUs, Terraform can recreate the nodes with the new configuration. So the existing Gluster nodes are destroyed and new instances are created, but with the same IPs. On a newly created instance, the volume creation command fails, saying the brick is already part of a volume.
sudo gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0
volume create: VolName: failed: /mnt/ppshare/brick0 is already part of a volume
But no volumes are present in this instance.
I understand that if I have to expand or shrink a volume, I can add or remove bricks from the existing volume. Here, I'm changing the compute of the node, and hence it has to be recreated. I don't understand why it says the brick is already part of a volume, as it is a new machine altogether.
It would be very helpful if someone could explain why it says the brick is already part of a volume and where the volume/brick information is stored, so that I can recreate the volume successfully.
I also tried the steps below from this link to clear the GlusterFS volume-related attributes from the mount, but no luck.
https://linuxsysadm.wordpress.com/2013/05/16/glusterfs-remove-extended-attributes-to-completely-remove-bricks/.
apt-get install attr
cd /glusterfs
for i in $(attr -lq .); do setfattr -x trusted.$i .; done
attr -lq /glusterfs   (for testing, the output should be empty)
Simply put "force" in the end of "gluster volume create ..." command.
Please check whether the directory /mnt/ppshare/brick0 already exists.
You should have /mnt/ppshare without the brick0 folder; the create command creates those folders itself. The error indicates that the brick0 folders are already present.
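For context on where that information lives: GlusterFS marks a brick by setting extended attributes on the brick directory (e.g. trusted.glusterfs.volume-id and trusted.gfid) and by creating a hidden .glusterfs directory inside it, so if /mnt/ppshare sits on a disk that survives the Terraform recreate, the new machine still carries the old brick metadata. A cleanup sketch for each node, assuming the old brick contents do not need to be preserved:

# remove the leftover brick metadata (or simply delete the whole brick0 directory)
setfattr -x trusted.glusterfs.volume-id /mnt/ppshare/brick0
setfattr -x trusted.gfid /mnt/ppshare/brick0
rm -rf /mnt/ppshare/brick0/.glusterfs

# then retry the create, optionally with force as suggested above
gluster volume create VolName replica 2 transport tcp ip1:/mnt/ppshare/brick0 ip2:/mnt/ppshare/brick0 force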