Setting up Kubernetes on a Linux Raspberry Pi - kubernetes

I am trying to set up a Kubernetes cluster on Raspberry Pi. I have two Pis: one of them will work as the master and the other one as a worker.
I am not using Hypriot OS; instead I am using the Raspbian Stretch image. I followed these tutorials: link1 link2. Link1 recommends using Hypriot OS, but I continued with Raspbian Stretch. This is what I have done so far on both the master and the worker:
Installed Docker
Disabled the swap file
Added the cgroup settings to /boot/cmdline.txt
Installed Kubernetes on both Pis
Initialized the master, then joined the worker to it (the commands are sketched below)
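For reference, a minimal sketch of those steps on Raspbian (the exact cgroup flags, the dphys-swapfile handling, and the placeholder values are assumptions, not taken from the question):

# disable the swap file (Raspbian manages swap via dphys-swapfile)
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo systemctl disable dphys-swapfile

# append the cgroup flags to the single line in /boot/cmdline.txt, then reboot:
# cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

# initialize the master, then join the worker using the token printed by init
sudo kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>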
So far everything seems to be working OK. But when running the command kubectl get nodes, I get:
NAME STATUS ROLES AGE VERSION
raspberrypi NotReady master 1h v1.8.4
worker NotReady <none> 40m v1.8.4
My first question is: why does it show the worker as NotReady even though my worker Pi is up and running?
My next question is: how can I access the cluster from its dashboard, and how do I install the dashboard?

Issues have been resolved in the comment section.
For debugging Kubernetes nodes in the cluster, use the following commands to get precise information.
Get the list of nodes:
kubectl get nodes
Get comprehensive info about a node:
kubectl describe node NODE_NAME
From the system information in that output, you can verify and validate the status of the kubelet, Docker, and kube-proxy.
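For example, with the node names from the question, the Conditions and Events sections of the describe output usually explain why a node is NotReady:

kubectl get nodes -o wide
kubectl describe node raspberrypi
kubectl describe node worker
# in the output, check the Conditions table (Ready, MemoryPressure, DiskPressure)
# and the Events at the bottom for kubelet, Docker, or network plugin errors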

It is showing the status as NotReady because you have not installed a networking plugin. Weave is well suited to networking on a Raspberry Pi. You can install it with the command below:
kubectl apply -f https://git.io/weave-kube-1.6
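Once the manifest is applied, a quick way to check that Weave actually came up (a hedged addition, not part of the original answer) is to watch for the weave-net pods and then re-check the nodes:

kubectl get pods -n kube-system -l name=weave-net -o wide
# one weave-net pod should be Running on every node;
# shortly afterwards the nodes should flip to Ready
kubectl get nodes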
Have a look at these tutorials:
https://www.youtube.com/watch?v=zc0sbXwONM4&list=PLWw98q-Xe7iHSVH-AE9hDGBFtC9rFxcME
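Regarding the dashboard part of the question: a commonly used approach is to apply the official dashboard manifest and reach it through kubectl proxy. The manifest URL and version below are assumptions (pick the release that matches your cluster version):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/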

Related

Kubernetes (MicroK8S) pod stuck in ContainerCreating state ("Pulling image")

In some cases, after a node reboot, a Kubernetes cluster managed by MicroK8s cannot start pods.
Describing the pod that failed to become ready, I could see that it was stuck in a "Pulling image" state for several minutes without any other event, as follows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned default/my-service-abc to my-desktop
Normal Pulling 8m kubelet Pulling image "127.0.0.1:32000/some-service"
Pulling from the node with docker pull 127.0.0.1:32000/some-service works perfectly, so it doesn't seem to be a problem with Docker.
I have upgraded Docker just in case.
I seem to be running the latest version of MicroK8s.
Running sudo microk8s inspect gives no errors or warnings; everything seemed to work perfectly.
As the docker pull did work locally, it's really the kubelet, which communicates with Docker, that seemed to be stuck.
Even a sudo service docker stop && sudo service docker start did not help.
Even a rollout did not suffice to get out of the Pulling state after restarting Docker.
Worst of all, a reboot of the server did not change anything: the pods that were up were still working, but all the other pods (70%) were down and stuck in the ContainerCreating state.
Checking the status with systemctl status snap.microk8s.daemon-kubelet did not report any error.
The only thing that seemed to work was this:
sudo systemctl reboot snap.microk8s.daemon-kubelet
However, it also rebooted the whole node, so that is something to keep as a last resort (same as rebooting the node).
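For what it's worth, a gentler variant to try before rebooting the whole node (my own assumption, not verified in the original post) is restarting only the MicroK8s services; note that newer MicroK8s releases bundle the kubelet into daemon-kubelite instead:

sudo systemctl restart snap.microk8s.daemon-kubelet
# or restart every MicroK8s service at once
sudo snap restart microk8s
# then watch whether the stuck pods move past ContainerCreating
microk8s kubectl get pods --all-namespaces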

Kubernetes issue: runtime network not ready

I am a beginner in Kubernetes and I'm trying to set up my first cluster. My worker node has joined the cluster successfully, but when I run kubectl get nodes it is in NotReady status.
And this message appears when I run
kubectl describe node k8s-node-1
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
I have run this command to install a Pod network add-on:
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
How can I solve this issue?
Adding this answer as a community wiki for better visibility. The OP already solved the problem by rebooting the machine.
It is worth remembering that going through all the steps of bootstrapping the cluster and installing all the prerequisites will get your cluster running successfully. If you had any previous installations, please remember to perform kubeadm reset and remove the .kube folder from the home or root directory.
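A minimal sketch of that cleanup before re-running kubeadm init (assuming a kubeadm-based install as in the question; kubeadm reset may leave CNI configuration behind, so that step is included as an assumption):

sudo kubeadm reset
# remove the stale kubeconfig so the new admin.conf gets picked up
rm -rf $HOME/.kube
# clear leftover CNI configuration if a previous network add-on was installed
sudo rm -rf /etc/cni/net.d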
I'm also linking this GitHub case with the same issue, where people provide solutions for this problem.

Unable to setup multiple node kubernetes cluster using kubeadm (Vagrant)

I have been setting up a multi-node Kubernetes cluster using kubeadm. The setup includes 1 master and 1 worker node. I created the VMs using Vagrant.
I followed the docs:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm
Created 2 VMs using Vagrant.
IP: Master - 192.168.33.10, Worker - 192.168.1.21 (both on host-only networks).
I have run into 2 scenarios:
Case 1:
Ran kubeadm init --pod-network-cidr=10.244.0.0/16 successfully, with all pods running.
Installed the "Canal" pod network add-on.
Followed all the instructions given at the end of the successful kubeadm init command.
SSHed into the 2nd VM, ran the kubeadm join .. command, and got stuck at "[preflight] Running pre-flight checks".
Case 2:
Did the same process with the flag --apiserver-advertise-address=192.168.33.10.
Successfully ran the command kubeadm init --apiserver-advertise-address=192.168.33.10.
But when I ran the command kubectl get nodes, it only showed the master node (I expected the worker node to show too).
Kindly help me understand how I can complete this setup. Thank you.
I have a GitHub repository which does exactly what you want. I am pretty sure you will get the idea from it. If anything is not clear, please follow up with a comment or update the original post.
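As a rough sketch of what such a Vagrant setup usually boils down to (IPs taken from the question, flags combined from both cases; treat this as an assumption rather than the repository's exact steps):

# on the master VM (192.168.33.10): advertise the host-only IP and set the pod CIDR
sudo kubeadm init --apiserver-advertise-address=192.168.33.10 --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# on the worker VM: it must be able to reach 192.168.33.10:6443;
# with the worker on 192.168.1.21 (a different subnet) the join can hang in pre-flight checks
sudo kubeadm join 192.168.33.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>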

Failed to create pod sandbox kubernetes cluster

I have the Weave network plugin.
Inside my folder /etc/cni/net.d there is a 10-weave.conf:
{
"name": "weave",
"type": "weave-net",
"hairpinMode": true
}
My Weave pods are running and the DNS pod is also running.
But when I want to run a pod, like a simple nginx which will pull the nginx image,
the pod gets stuck at ContainerCreating, and describing the pod gives me the error "failed create pod sandbox".
When I run journalctl -u kubelet I get this error:
cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Is my network plugin not configured correctly?
I used this command to configure my Weave network:
kubectl apply -f https://git.io/weave-kube-1.6
After this didn't work, I also tried this command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
I even tried Flannel, and that gives me the same error.
The system I am setting Kubernetes up on is a Raspberry Pi.
I am trying to build a Raspberry Pi cluster with 3 nodes and 1 master, running Kubernetes.
Does anyone have ideas on this?
Thank you all for responding to my question. I have solved my problem now. For anyone who comes to this question in the future, the solution was as follows.
I cloned my Raspberry Pi images because I wanted a basicConfig.img for when I need to add a new node to my cluster or when one goes down.
Weave (the network plugin I used) got confused because the OS on every node and on the master had the same machine-id. When I deleted the machine-id and created a new one (and rebooted the nodes), my error was fixed. The commands to do this were:
sudo rm /etc/machine-id
sudo rm /var/lib/dbus/machine-id
sudo dbus-uuidgen --ensure=/etc/machine-id
Once again my patience was tested, because my Kubernetes setup was normal and my Raspberry Pi OS was normal. I found this with the help of someone in the Kubernetes community, which again shows how important and great the IT community is. To the people of the future who come to this question: I hope this solution fixes your error and saves you time searching for such a small thing.
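A quick way to confirm the fix (a hedged addition, not from the original answer) is to check that every node now reports a distinct machine-id and that the Weave pods settle into Running:

# run on each node; the values must all differ after regenerating them
cat /etc/machine-id
# back on the master, weave-net should be Running on every node
kubectl get pods -n kube-system -o wide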
Looking at the pertinent code in Kubernetes and in CNI, the specific error you see seems to indicate that it cannot find any files ending in .json, .conf or .conflist in the directory given.
This makes me think it could be something such as the conf file not being present on all the hosts, so I would verify that as a first step.
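A simple first check along those lines (a sketch; the file name comes from the question):

# run on every node, master included
ls -l /etc/cni/net.d/
# the kubelet only picks up files ending in .conf, .conflist or .json,
# so 10-weave.conf must be present on each host with one of those extensions
cat /etc/cni/net.d/10-weave.conf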

Kubernetes Master Server is failing to become up and running

Installed kubeadm v1.6.0-alpha, kubectl v1.5.3, kubelet v1.5.3
Executed the command $ kubeadm init to bring the Kubernetes master up.
Issue observed: stuck with the below log message:
Created API client, waiting for the control plane to become ready
How can I make the Kubernetes master server come up and running, or how can I debug this issue?
Could you try using kubelet and kubectl 1.6 to see if it is a version mismatch?
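To see whether the versions actually line up, something like this helps (a sketch; the exact package versions below are illustrative assumptions):

kubeadm version
kubelet --version
kubectl version --client
# on Debian/Ubuntu, aligning them usually means installing matching packages, e.g.
# sudo apt-get install -y kubelet=1.6.0-00 kubectl=1.6.0-00 kubeadm=1.6.0-00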