I am creating a multi-node Minikube Kubernetes cluster with 2 nodes, mounting the $HOME/Minikube/mount directory from the host filesystem to the /data directory on the cluster nodes.
I used the following command to achieve this:
minikube start --nodes 2 --cpus 2 --memory 2048 --disk-size 10g --mount-string $HOME/Minikube/mount:/data --mount --namespace test -p multi-node
Minikube version: 1.28.0
Kubernetes client version: v1.26.0
Kubernetes server version: v1.24.3
The expectation was to find the /data directory in both nodes (multi-node, the control plane, and multi-node-m02) mounted to the $HOME/Minikube/mount directory of the host filesystem.
But when I SSH into the Minikube nodes, I only see the /data directory mounted in multi-node, which functions as the Kubernetes control-plane node. The local filesystem directory is not mounted on both nodes.
$ minikube ssh -n multi-node
$ ls -la /data/
total 0
$ minikube ssh -n multi-node-m02
$ ls -la /data
ls: cannot access '/data': No such file or directory
Is there some way to mount a local filesystem directory to all the nodes in a multi-node Minikube Kubernetes cluster?
As mentioned in this issue, using minikube start --mount has some known problems when mounting files. Try the standalone minikube mount command instead.
If the issue still persists, the problem is the storage provisioner, which is broken in multi-node mode. For this, minikube has recently added a local path provisioner; adding it to the default storage class resolves the issue.
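A minimal sketch of both suggestions (the -p flag targets the profile from the question; the addon name is an assumption and may differ by minikube version):
# run the standalone mount in a separate terminal; it stays in the foreground
$ minikube mount "$HOME/Minikube/mount:/data" -p multi-node
# assumption: newer minikube ships the local path provisioner as this addon
$ minikube addons enable storage-provisioner-rancher -p multi-node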
Related
Problem
To set up a multi-node k8s dev environment, I was trying to use an NFS persistent volume in minikube with multiple nodes, and I cannot run pods properly. It seems there's something wrong with the NFS setup. So I ran minikube ssh and first tried to mount the NFS volume manually with the mount command, but it doesn't work, which brings me here.
When I run
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node, the output is
mount.nfs: requested NFS version or transport protocol is not supported
Some relevant info:
NFS client: minikube nodes
NFS server: my Mac PC
minikube driver: docker
The cluster comprises 3 nodes (1 master and 2 worker nodes).
Currently there's no k8s resources (such as deployment, pv and pvc) in cluster.
The minikube nodes' OS is Ubuntu, so I guess "nfs-utils" is not relevant and not installed. "nfs-common" is preinstalled in minikube.
Please see the following sections for more detail.
Goal
The goal is for the mount command to succeed in the minikube nodes, so that the NFS share on my Mac mounts properly.
What I've done so far:
On the NFS server side,
created an /etc/exports file on the Mac. The content is like:
/PATH/TO/EXPORTED/DIR/ON/MACPC -mapall=user:group 192.168.xx.xx(=the output of "minikube ip")
and ran nfsd update; then the showmount -e command outputs:
Exports list on localhost:
/PATH/TO/EXPORTED/DIR/ON/MACPC 192.168.xx.xx(=the output of "minikube ip")
rpcinfo -p shows that rpcbind (= portmapper in Linux), status, nlockmgr, rquotad, nfs, and mountd are all up over both TCP and UDP
ping 192.168.xx.xx(=the output of "minikube ip") says
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
and so on.
It seems I can't reach minikube from host.
On the NFS client side,
started the nfs-common and rpcbind services with the systemctl command on all minikube nodes. By running sudo systemctl status rpcbind and sudo systemctl status nfs-common, I confirmed rpcbind and nfs-common are running.
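i.e., roughly the following on each node:
$ sudo systemctl start rpcbind nfs-common
$ sudo systemctl status rpcbind
$ sudo systemctl status nfs-common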
The minikube ssh output is:
Last login: Mon Mar 28 09:18:38 2022 from 192.168.xx.xx(=I guess my macpc's IP seen from minikube cluster)
so I ran
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node.
The output is
mount.nfs: requested NFS version or transport protocol is not supported
rpcinfo -p shows only portmapper and status running. I am not sure whether this is OK.
ping 192.168.xx.xx(=macpc's IP) works properly.
ping host.minikube.internal works properly.
nc -vz 192.168.xx.xx(=macpc's IP) 2049 outputs connection refused
nc -vz host.minikube.internal 2049 outputs succeeded!
Thanks in advance!
I decided to use another type of volume instead.
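The answer doesn't say which volume type was used; as one illustration only, a PersistentVolumeClaim against the cluster's default storage class avoids NFS entirely (the claim name and size here are made up):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi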
I am trying to integrate RStudio Workbench with Kubernetes as described in the official documentation https://docs.rstudio.com/rsw/integration/launcher-kubernetes/. In step 9 the Launcher starts a Kubernetes Job. The job is successfully assigned to a pod, but the pod gets stuck in 'ContainerCreating' status, displaying the following events:
Mounting command: mount
Mounting arguments: -t nfs MY.NFS.SERVER.IP:/home/MY_USER_DIR /var/lib/kubelet/pods/SOME_UUID/volumes/kubernetes.io~nfs/mount0
Output: mount.nfs: Connection timed out
Warning FailedMount 13m (x6 over 54m) kubelet Unable to attach or mount volumes: unmounted volumes=[mount0], unattached volumes=[kube-api-access-dllcd mount0]: timed out waiting for the condition
Warning FailedMount 2m29s (x26 over 74m) kubelet Unable to attach or mount volumes: unmounted volumes=[mount0], unattached volumes=[mount0 kube-api-access-dllcd]: timed out waiting for the condition
Configuration details:
Kubernetes is successfully installed on Amazon EKS, and I am controlling the cluster from an admin EC2 instance outside the EKS cluster, on which I am running the NFS server and RStudio
I can deploy an RStudio test job, but only without volume mounts
Both the nfs-kernel-server service and RStudio are running
Our RStudio users are able to launch jobs in Local mode
The file /etc/exports contains:
/nfsexport 127.0.0.1(rw,sync,no_subtree_check)
/home/MY_USER_DIR MY.IP.SUBNET.RANGE/16(rw,sync,no_subtree_check)
Inbound traffic to the NFS server from the Kubernetes worker nodes is allowed via port 2049
What I have tried:
Mount some folder locally on the same machine as the NFS server - that works
Mount using different IPs for the NFS server: localhost, public IPv4, and private IPv4 of the EC2 instance (with and without specifying the port 2049) - that did not work
Connect to a client machine and try to manually mount from there. Trying to mount the share on the server resulted in:
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: Operation not permitted
Trying to ping the admin instance from a worker node doesn't work even though all EKS plugins (coredns, kube-proxy, vpc-cni) are active.
Question: What could be the root problem causing the mounting issue? Thank you in advance!
I have faced a similar issue with the RStudio job launcher. The reason for this issue was a non-existent directory in the NFS export. The home directory in the pod gets mounted to the user's home directory in the NFS export.
For example, if you logged into RStudio as user1, the system expects a home directory for user1 in the NFS export. With the default configuration, this directory will be /home/user1. If this directory does not exist in the NFS export, the mount fails and the command times out.
The simple way to debug this issue is to try mounting the directories manually:
mkdir -p /tmp/home/user1
mount -v -t nfs ipaddress:/home/user1 /tmp/home/user1
The above command will fail if the /home/user1 directory does not exist in the NFS export. The solution is to ensure that the user home directories exist in the NFS export.
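A minimal fix sketch to run on the NFS server, assuming the default /home layout from the example above (user1 is illustrative):
# create the missing home directory inside the export and hand it to the user
$ sudo mkdir -p /home/user1
$ sudo chown user1:user1 /home/user1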
I am looking to do local dev of an app that is running in Kubernetes on minikube. I want to mount a local directory to speed up development, so I can make code changes to my app (python) without rebuilding the container.
If I understand correctly, I have two out of the box options:
9P mount which is provided by minikube
hostPath mount which comes directly from Kubernetes
What are the differences between these, and in what cases would one be appropriate over the other?
9P mount and hostPath are two different concepts. You cannot mount a directory into a pod using a 9P mount.
A 9P mount is used to mount a host directory into the minikube VM.
hostPath is a persistent volume type which mounts a file or directory from the host node's (in your case, the minikube VM's) filesystem into your Pod.
Take a look also at the types of Persistent Volumes: pv-types-k8s.
If you want to mount a local directory into a pod:
First, you need to mount your directory, for example $HOME/your/path, into your minikube VM using 9P. Execute the command:
$ minikube start --mount --mount-string="$HOME/your/path:/data"
Then, if you mount /data into your Pod using hostPath, you will get your local directory's data in the Pod.
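A minimal Pod sketch of that second step (the image and names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: host-volume
          mountPath: /data
  volumes:
    - name: host-volume
      hostPath:
        path: /data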
Another solution:
Mount the host's $HOME directory into minikube's /hosthome directory. Get your data:
$ ls -la /hosthome/your/path
To mount this directory, you just have to change your Pod's hostPath:
hostPath:
path: /hosthome/your/path
Take a look: minikube-mount-data-into-pod.
Also you need to know that:
Minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
More: note-persistence-minikube.
See driver-mounts as an alternative.
I am trying to find the kube-proxy logs on minikube, but they don't seem to be located where I expected:
$ sudo cat /var/log/kubeproxy.log
cat: /var/log/kubeproxy.log: No such file or directory
A more generic way (besides what hoque described) that you can use on any Kubernetes cluster is to check the logs using kubectl:
kubectl logs kube-proxy-s8lcb -n kube-system
Using this solution allows you to check logs for any K8s cluster even if you don't have access to your nodes.
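The pod name (kube-proxy-s8lcb above) differs per cluster; to find yours first, something like this should work (assuming the standard k8s-app=kube-proxy label used by minikube/kubeadm):
$ kubectl get pods -n kube-system -l k8s-app=kube-proxy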
Pod logs are located in /var/log/pods/.
Run
$ minikube ssh
$ ls /var/log/pods/
default_dapi-test-pod_1566526c-1051-4102-a23b-13b73b1dd904
kube-system_coredns-5d4dd4b4db-7ttnf_59d7b01c-4d7d-40f9-8d6a-ac62b1fa018e
kube-system_coredns-5d4dd4b4db-n8d5t_6aa36b9a-6539-4ef2-b163-c7e713861fa2
kube-system_etcd-minikube_188c8af9ff66b5060895a385b1bb50c2
kube-system_kube-addon-manager-minikube_f7d3bd9bbbbdd48d97a3437e231fff24
kube-system_kube-apiserver-minikube_b15fea5ed20174140af5049ecdd1c59e
kube-system_kube-controller-manager-minikube_d8cdb4170ab1aac172022591866bd7eb
kube-system_kube-proxy-qc4xl_30a6100a-db70-42c1-bbd5-4a818379a004
kube-system_kube-scheduler-minikube_14ff2730e74c595cd255e47190f474fd
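From there you can read the kube-proxy container log directly; a sketch using the directory listed above (the pod directory name, container subdirectory, and restart count will differ per cluster):
$ sudo tail /var/log/pods/kube-system_kube-proxy-qc4xl_30a6100a-db70-42c1-bbd5-4a818379a004/kube-proxy/0.log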
I am using AWS and kops to deploy a Kubernetes cluster and kubectl to manage it, following this tutorial: https://medium.com/#zhanghenry/how-to-deploy-hyperledger-fabric-on-kubernetes-2-751abf44c807.
But when I try to deploy pods, I get the following error:
MountVolume.SetUp failed for volume "org1-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/73568350-cfc0-11e8-ad99-0a84e5efbfb6/volumes/kubernetes.io~nfs/org1-pv --scope -- mount -t nfs nfs-server-IP:/opt/share/crypto-config/peerOrganizations/org1 /var/lib/kubelet/pods/73568350-cfc0-11e8-ad99-0a84e5efbfb6/volumes/kubernetes.io~nfs/org1-pv
Output: Running as unit run-24458.scope.
mount.nfs: Connection timed out
I have configured an external NFS server like so:
/opt/share *(rw,sync,no_root_squash,no_subtree_check)
Any kind of help is appreciated.
I think you should check the following things to verify whether NFS is mounted successfully or not.
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
In my case, $ showmount -e 172.16.10.161 outputs:
Export list for 172.16.10.161:
/opt/share *
Use the $ df -hT command to see whether NFS is mounted or not. In my case it gives the output:
172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, then use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, then check whether the firewall is allowing NFS or not:
$ sudo ufw status
If not, then allow it using:
$ sudo ufw allow from nfs-server-ip to any port nfs
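It can also help to re-export the shares if /etc/exports was edited after the NFS server started (assuming a standard nfs-kernel-server setup):
$ sudo exportfs -ra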
I made the same setup and didn't face any issues; my Fabric cluster is running successfully.