Hyperledger Fabric setup on k8s fails to deploy containers - kubernetes

I am using AWS and kops to deploy a Kubernetes cluster and kubectl to manage it, following this tutorial: https://medium.com/@zhanghenry/how-to-deploy-hyperledger-fabric-on-kubernetes-2-751abf44c807.
But when I try to deploy pods I get the following error:
MountVolume.SetUp failed for volume "org1-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/73568350-cfc0-11e8-ad99-0a84e5efbfb6/volumes/kubernetes.io~nfs/org1-pv --scope -- mount -t nfs nfs-server-IP:/opt/share/crypto-config/peerOrganizations/org1 /var/lib/kubelet/pods/73568350-cfc0-11e8-ad99-0a84e5efbfb6/volumes/kubernetes.io~nfs/org1-pv
Output: Running as unit run-24458.scope.
mount.nfs: Connection timed out
I have configured an external NFS server with the following export:
/opt/share *(rw,sync,no_root_squash,no_subtree_check)
Any kind of help is appreciated.

I think you should check the following things to verify whether the NFS share is mounted successfully.
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
In my case, $ showmount -e 172.16.10.161 gives:
Export list for 172.16.10.161:
/opt/share *
Use the $ df -hT command to see whether the NFS share is mounted. In my case it shows:
172.16.10.161:/opt/share nfs4 91G 32G 55G 37% /opt/share
If it is not mounted, use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, check whether the firewall is allowing NFS:
$ sudo ufw status
If not, allow it using:
$ sudo ufw allow from nfs-server-ip to any port nfs
I made the same setup and did not face any issues. My Fabric cluster is running successfully.
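For reference, once showmount and a manual mount work from the node, the PersistentVolume that Kubernetes mounts usually looks something like the sketch below. This is only an assumption based on the volume name and export path in the question's error message; the storage size is made up and the server IP is the one from my setup:
cat <<'EOF' > org1-pv.yaml
# Minimal sketch of the NFS-backed PersistentVolume (values assumed from the question)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: org1-pv
spec:
  capacity:
    storage: 5Gi                 # assumed size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.16.10.161        # your NFS server IP
    path: /opt/share/crypto-config/peerOrganizations/org1
EOF
kubectl apply -f org1-pv.yaml
If the manual mount from the node times out, the PV definition is not the problem; fix connectivity (firewall or security groups) between the node and the NFS server first.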

Related

Minikube multi-node cluster mounting host machine filesystem to all the nodes

I am creating a Minikube multi-node Kubernetes cluster with 2 nodes, mounting the $HOME/Minikube/mount directory from the host filesystem to the /data directory in the cluster nodes.
I used the following command to achieve this:
minikube start --nodes 2 --cpus 2 --memory 2048 --disk-size 10g --mount-string $HOME/Minikube/mount:/data --mount --namespace test -p multi-node
Minikube version: 1.28.0
Kubernetes client version: v1.26.0
Kubernetes server version: v1.24.3
The expectation was to find the /data directory in both nodes (multi-node, the control plane, and multi-node-m02) mounted to the $HOME/Minikube/mount directory of the host filesystem.
But when I SSH into the Minikube nodes, I only see the /data mount in multi-node, which functions as the Kubernetes control plane node. The local filesystem directory is not mounted on both nodes.
$ minikube ssh -n multi-node
$ ls -la /data/
total 0
$ minikube ssh -n multi-node-m02
$ ls -la /data
ls: cannot access '/data': No such file or directory
Is there some way to achieve this requirement of mounting a local filesystem directory to all the nodes in a multi-node Minikube k8s cluster?
As mentioned in this issue, using minikube start --mount has some issues when mounting files. Try using the separate minikube mount command instead.
If the issue still persists, it is because the storage provisioner is broken in multi-node mode. For this, minikube has recently added a local path provisioner; adding it as the default storage class resolves the issue.
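As a concrete sketch of that suggestion (the profile name and paths are taken from the question; whether the mount reaches every node can depend on the minikube version):
# start the cluster without the --mount flags
minikube start --nodes 2 --cpus 2 --memory 2048 --disk-size 10g --namespace test -p multi-node
# then, in a separate terminal that stays open while you work:
minikube mount $HOME/Minikube/mount:/data -p multi-node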

cannot mount NFS share on my Mac PC to minikube cluster

Problem
To build a k8s multi-node dev environment, I was trying to use an NFS persistent volume in minikube with multiple nodes and could not run pods properly. It seems there's something wrong with the NFS setting. So I ran minikube ssh and tried to mount the NFS volume manually first with the mount command, but it doesn't work, which brought me here.
When I run
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in minikube master node, the output is
mount.nfs: requested NFS version or transport protocol is not supported
Some relevant info:
NFS client: minikube nodes
NFS server: my Mac PC
minikube driver: docker
The cluster comprises 3 nodes (1 master and 2 worker nodes).
Currently there are no k8s resources (such as deployments, PVs and PVCs) in the cluster.
The minikube nodes' OS is Ubuntu, so I guess "nfs-utils" is not relevant and not installed. "nfs-common" is preinstalled in minikube.
Please see the following sections for more detail.
Goal
The goal is that the mount command in the minikube nodes succeeds and the NFS share on my Mac PC mounts properly.
What I've done so far:
On the NFS server side,
I created an /etc/exports file on the Mac PC. The content is like:
/PATH/TO/EXPORTED/DIR/ON/MACPC -mapall=user:group 192.168.xx.xx(=the output of "minikube ip")
and ran nfsd update; the showmount -e command then outputs:
Exports list on localhost:
/PATH/TO/EXPORTED/DIR/ON/MACPC 192.168.xx.xx(=the output of "minikube ip")
rpcinfo -p shows rpcbind(=portmapper in linux), status, nlockmgr, rquotad, nfs, mountd are all up in tcp and udp
ping 192.168.xx.xx(=the output of "minikube ip") says
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
and continues
It seems I can't reach minikube from host.
On NFS client side,
I started the nfs-common and rpcbind services with the systemctl command in all minikube nodes. By running sudo systemctl status rpcbind and sudo systemctl status nfs-common, I confirmed rpcbind and nfs-common are running.
The minikube ssh output is:
Last login: Mon Mar 28 09:18:38 2022 from 192.168.xx.xx(=I guess my macpc's IP seen from minikube cluster)
So I ran
sudo mount -t nfs 192.168.xx.xx(=macpc's IP):/PATH/TO/EXPORTED/DIR/ON/MACPC /PATH/TO/MOUNT/POINT/IN/MINIKUBE/NODE
in the minikube master node.
The output is
mount.nfs: requested NFS version or transport protocol is not supported
rpcinfo -p shows only portmapper and status are running. I am not sure this is ok.
ping 192.168.xx.xx(=macpc's IP) works properly.
ping host.minikube.internal works properly.
nc -vz 192.168.xx.xx(=macpc's IP) 2049 outputs connection refused
nc -vz host.minikube.internal 2049 outputs succeeded!
Thanks in advance!
I decided to use another type of volume instead.
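The answer above does not say which volume type was used instead; for a local dev cluster, a hostPath PersistentVolume is one common substitute (purely a sketch, the name, size and path below are illustrative):
cat <<'EOF' | kubectl apply -f -
# hostPath PV as a local-dev substitute for NFS (illustrative values only)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/dev-pv    # must exist on the node where the pod is scheduled
EOF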

RStudio Job Launcher - NFS mounting issue

I am trying to integrate RStudio Workbench with Kubernetes as described in the official documentation https://docs.rstudio.com/rsw/integration/launcher-kubernetes/. In step 9 the Launcher starts a Kubernetes Job. The job is successfully assigned to a pod, but the pod gets stuck in 'ContainerCreating' status, displaying the following events:
Mounting command: mount
Mounting arguments: -t nfs MY.NFS.SERVER.IP:/home/MY_USER_DIR /var/lib/kubelet/pods/SOME_UUID/volumes/kubernetes.io~nfs/mount0
Output: mount.nfs: Connection timed out
Warning FailedMount 13m (x6 over 54m) kubelet Unable to attach or mount volumes: unmounted volumes=[mount0], unattached volumes=[kube-api-access-dllcd mount0]: timed out waiting for the condition
Warning FailedMount 2m29s (x26 over 74m) kubelet Unable to attach or mount volumes: unmounted volumes=[mount0], unattached volumes=[mount0 kube-api-access-dllcd]: timed out waiting for the condition
Configuration details:
Kubernetes is successfully installed on Amazon EKS, and I am controlling the cluster from an admin EC2 instance outside the EKS cluster, on which I'm running the NFS server and RStudio
I can deploy an RStudio test job, but only without volume mounts
Both the nfs-kernel-server service and RStudio are running
Our RStudio users are able to launch jobs in Local mode
The file /etc/exports contains:
/nfsexport 127.0.0.1(rw,sync,no_subtree_check)
/home/MY_USER_DIR MY.IP.SUBNET.RANGE/16(rw,sync,no_subtree_check)
Inbound traffic to the NFS server from the Kubernetes worker nodes is allowed via port 2049
What I have tried:
Mount some folder locally on the same machine as the NFS server - that works
Mount using different IPs for the NFS server: localhost, public IPv4, and private IPv4 of the EC2 instance (with and without specifying the port 2049) - that did not work
Connect to a client machine and try to manually mount the share from there, which resulted in:
mount.nfs: rpc.statd is not running but is required for
remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: mount.nfs: Operation not permitted
Trying to ping the admin instance from a worker node doesn't work even though all EKS plugins (coredns, kube-proxy, vpc-cni) are active.
Question: What could be the root problem causing the mounting issue? Thank you in advance!
I have faced a similar issue with the RStudio Job Launcher. The reason for this issue was a non-existent directory in the NFS share. The home directory in the pod gets mounted to the user's home directory on the NFS server.
For example, if you logged into RStudio as user1, the system expects a home directory for user1 on the NFS server. With the default configuration, this directory will be /home/user1. If this directory does not exist on the NFS server, the mount will fail and the command times out.
The simple way to debug this issue is by manually trying to mount the directories
mkdir -p /tmp/home/user1
mount -v -t nfs ipaddress:/home/user1 /tmp/home/user1
The above command will fail if the /home/user1 directory does not exist in the NFS. The solution is to ensure that the user home directories exist in the NFS.
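In other words, the server-side fix is just to create the missing home directory under the exported path before launching the job. A rough sketch, using the example user1 and paths from above (the chown assumes a matching local account exists on the NFS server):
# on the NFS server
sudo mkdir -p /home/user1
sudo chown user1:user1 /home/user1   # assumption: user1 exists locally; otherwise use the right UID:GID
sudo exportfs -ra                    # re-export in case /etc/exports was edited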

Rancher Kubernetes can't create persistent volume claim

I can't create a PersistentVolumeClaim in Kubernetes (Rancher 2.3).
My storage class uses the VMware cloud provider 'vSphere Storage for Kubernetes' provided by Rancher.
In the Rancher web interface, the Events show errors like:
(combined from similar events): Failed to provision volume with StorageClass "t-a-g1000-t6m-e0fd": Post https://vsphere.exemple.com:443/sdk: dial tcp: lookup vsphere.exemple.com on [::1]:53: read udp [::1]:51136->[::1]:53: read: connection refused
I get the same error on my Kubernetes Master:
docker logs kube-controller-manager
For some reason, the DNS resolver configuration of the kube-controller-manager container on the Kubernetes master was empty:
docker exec kube-controller-manager cat /etc/resolv.conf
# Generated by dhcpcd
# /etc/resolv.conf.head can replace this line
# /etc/resolv.conf.tail can replace this line
Since the host server's resolv.conf was correct, I simply restarted the container:
docker restart kube-controller-manager
(an alternative, ugly way would have been to edit resolv.conf manually inside the container with docker exec and then run the appropriate echo XXX >> /etc/resolv.conf ... bad idea)
Some other containers may have a similar issue on this node. This is a hacky way to identify those containers:
cd /var/lib/docker/containers
ls -1 $(grep nameserver -L */resolv.conf) | sed -e 's#/.*##'
0c10e1374644cc262c8186e28787f53e02051cc75c1f943678d7aeaa00e5d450
70fd116282643406b72d9d782857bb7ec76dd85dc8a7c0a83dc7ab0e90d30966
841def818a8b4df06a0d30b0b7a66b75a3b554fb5feffe78846130cdfeb39899
ae356e26f1bf8fafe530d57d8c68a022a0ee0e13b4e177d3ad6d4e808d1b36da
d593735a2f6d96bcab3addafcfe3d44b6d839d9f3775449247bdb8801e2e1692
d9b0dfaa270d3f50169fb1aed064ca7a594583b9f71a111f653c71d704daf391
Restart affected containers:
cd /var/lib/docker/containers ; ls -1 $(grep nameserver -L */resolv.conf) | sed -e 's#/.*##' | xargs -n1 -r docker restart
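After the restarts, re-running the same check should come back empty if every container now has a nameserver configured:
cd /var/lib/docker/containers
grep nameserver -L */resolv.conf                            # no output means no container is missing a nameserver
docker exec kube-controller-manager cat /etc/resolv.conf    # should now show the host's nameservers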

kubernetes allow privileged local testing cluster

I'm busy testing out Kubernetes on my local PC using https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md,
which launches a dockerized single-node k8s cluster. I need to run a privileged container inside k8s (it runs Docker in order to build images from Dockerfiles). What I've done so far is add a security context with privileged=true to the pod config, which returns "forbidden" when trying to create the pod. I know that you have to enable privileged mode on the node with --allow-privileged=true, and I've done this by adding the argument to step two (running the master and worker node), but it still returns "forbidden" when creating the pod.
Anyone know how to enable privileged in this dockerized k8s for testing?
Here is how I run the k8s master:
docker run --privileged --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v1.0.1 /hyperkube kubelet --api-servers=http://localhost:8080 --v=2 --address=0.0.0.0 --allow-privileged=true --enable-server --hostname-override=127.0.0.1 --config=/etc/kubernetes/manifests
Update: Privileged mode is now enabled by default (both in the apiserver and in the kubelet) starting with the 1.1 release of Kubernetes.
To enable privileged containers, you need to pass the --allow-privileged flag to the Kubernetes apiserver in addition to the Kubelet when it starts up. The manifest file that you use to launch the Kubernetes apiserver in the single node docker example is bundled into the image (from master.json), but you can make a local copy of that file, add the --allow-privileged=true flag to the apiserver command line, and then change the --config flag you pass to the Kubelet in Step Two to a directory containing your modified file.
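For context, the pod-level change the question refers to looks roughly like this (a sketch; the pod name and image are made up, only the securityContext part matters, and on pre-1.1 clusters it still requires --allow-privileged=true on both the apiserver and the kubelet):
cat <<'EOF' > privileged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-builder          # hypothetical name
spec:
  containers:
  - name: builder
    image: docker:dind         # hypothetical image for building from Dockerfiles
    securityContext:
      privileged: true
EOF
kubectl create -f privileged-pod.yaml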