How do I get the minikube nodes in a local cluster - Kubernetes

I'm trying to set up a local cluster using a VM and minikube. From what I've read, minikube is only meant for local use, but I'd like to join a secondary machine, and I'm looking for a way to create the join command and hash.

You can do this if your minikube machine is running on VirtualBox.
Start minikube:
$ minikube start --vm-driver="virtualbox"
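Once it is up, minikube ip shows the VM's address; this should be the same host-only address that appears in the join command later (a quick check, not strictly required):
$ minikube ip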
Check the versions of kubeadm, kubelet, and kubectl in minikube and print the join command:
$ kubectl version
$ minikube ssh
$ kubelet --version
$ kubeadm token create --print-join-command
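The last command prints a ready-to-use join command of the following form (the address, token, and hash here are placeholders; use the values printed inside your minikube VM):
kubeadm join <minikube-ip>:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>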
Create a new VM in VirtualBox. I've used Vagrant to create an Ubuntu 16.04 LTS VM for this test. Check that minikube and the new VM are on the same host-only network.
You can use whatever suits you best, but the package installation procedure will differ between Linux distributions.
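One way to confirm both VMs share a host-only network is with VBoxManage (a sketch; it assumes the minikube VM is named "minikube", which is the driver's default):
# Which host-only interface is the minikube VM attached to?
$ VBoxManage showvminfo minikube | grep -i "host-only"
# List host-only interfaces and their subnets
$ VBoxManage list hostonlyifs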
(On the new VM) Add the Kubernetes package repository:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
(On the new VM) Install the same versions of kubelet, kubeadm, and the other tools on the new VM (1.10.0 in my case):
$ apt-get -y install ebtables ethtool docker.io apt-transport-https kubelet=1.10.0-00 kubeadm=1.10.0-00
(On the new VM) Use your join command from step 2. The IP address should be from the VM host-only network; having only NAT networks didn't work well in my case.
$ kubeadm join 192.168.xx.yy:8443 --token asdfasf.laskjflakflsfla --discovery-token-ca-cert-hash sha256:shfkjshkfjhskjfskjdfhksfh...shdfk
(On the main host) Add a network (CNI) solution to the cluster:
$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
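It can take a minute or two for the Calico pods to start on both nodes. A quick way to watch just those pods (assuming the k8s-app=calico-node label used by that manifest):
$ kubectl -n kube-system get pods -l k8s-app=calico-node -o wide -w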
(On the main host) Check your nodes and pods using kubectl:
$ kubectl get nodes
NAME            STATUS    ROLES     AGE   VERSION
minikube        Ready     master    1h    v1.10.0
ubuntu-xenial   Ready     <none>    36m   v1.10.0
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-982l8 1/1 Running 0 10m 10.0.2.15 minikube
kube-system calico-kube-controllers-79dccdc4cc-66zxm 1/1 Running 0 10m 10.0.2.15 minikube
kube-system calico-node-9sgt5 1/2 Running 13 10m 10.0.2.15 ubuntu-xenial
kube-system calico-node-qtpg2 2/2 Running 0 10m 10.0.2.15 minikube
kube-system etcd-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system heapster-6hmhs 1/1 Running 0 1h 172.17.0.4 minikube
kube-system influxdb-grafana-69s5s 2/2 Running 0 1h 172.17.0.5 minikube
kube-system kube-addon-manager-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-apiserver-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-controller-manager-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-dns-86f4d74b45-tzc4r 3/3 Running 0 1h 172.17.0.2 minikube
kube-system kube-proxy-vl5mq 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kube-proxy-xhv8s 1/1 Running 2 35m 10.0.2.15 ubuntu-xenial
kube-system kube-scheduler-minikube 1/1 Running 0 1h 10.0.2.15 minikube
kube-system kubernetes-dashboard-5498ccf677-7gf4j 1/1 Running 0 1h 172.17.0.3 minikube
kube-system storage-provisioner 1/1 Running 0 1h 10.0.2.15 minikube

This isn't possible with minikube. With minikube, the operating domain is a single laptop or local machine. You can't join an additional node; you'll need to build a full cluster using something like kubeadm.
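For reference, bootstrapping a kubeadm cluster that other machines can join looks roughly like this (a minimal sketch; the pod CIDR is an illustrative choice, and a CNI add-on still has to be applied afterwards):
# On the machine that will become the control plane
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# Make kubectl work for your user
$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Print the command that other machines run to join
$ kubeadm token create --print-join-command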

Related

calico-kube-controller stays in pending state

I have a new install of Kubernetes 1.24.3 on Ubuntu 18 with Calico. The calico-kube-controllers pod will not start:
$ sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-555bc4b957-z4q2p 0/1 Pending 0 5m14s
kube-system calico-node-jz2j7 1/1 Running 0 5m15s
kube-system coredns-6d4b75cb6d-hwfx9 1/1 Running 0 5m14s
kube-system coredns-6d4b75cb6d-wdh55 1/1 Running 0 5m14s
kube-system etcd-ubuntu-18-extssd 1/1 Running 1 5m27s
kube-system kube-apiserver-ubuntu-18-extssd 1/1 Running 1 5m28s
kube-system kube-controller-manager-ubuntu-18-extssd 1/1 Running 1 5m26s
kube-system kube-proxy-t5z2r 1/1 Running 0 5m15s
kube-system kube-scheduler-ubuntu-18-extssd 1/1 Running 1 5m27s
Someone suggested setting a couple of Calico timeouts to 60 seconds, but that didn't work either.
What could be causing the calico-controller to fail to start, especially since the calico-node is running?
Also, is there a more trouble-free CNI implementation to use? Calico seems very error-prone.
I solved this by installing Weave:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
with this CIDR:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
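If a pod stays Pending like this, it is also worth asking the scheduler why before switching CNI; on a single-node cluster the control-plane taint is a common culprit. A hedged sketch (pod and node names are taken from the output above):
# Look at the Events section for the scheduling reason
$ kubectl -n kube-system describe pod calico-kube-controllers-555bc4b957-z4q2p
# If the reason is an untolerated control-plane taint on a single-node cluster,
# the taint can be removed so workloads schedule on the master
$ kubectl taint nodes ubuntu-18-extssd node-role.kubernetes.io/control-plane:NoSchedule-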

How to resolve Kubernetes DNS issues when trying to install Weave Cloud Agents for Minikube

I was trying to install the Weave Cloud Agents for my minikube. I used the provided command
curl -Ls https://get.weave.works |sh -s -- --token=xxx
but I keep getting the following error:
There was an error while performing a DNS check: checking DNS failed, the DNS in the Kubernetes cluster is not working correctly. Please check that your cluster can download images and run pods.
I have the following DNS pods:
kube-system coredns-6955765f44-7zt4x 1/1 Running 0 38m
kube-system coredns-6955765f44-xdnd9 1/1 Running 0 38m
I tried different suggestions, such as https://www.jeffgeerling.com/blog/2019/debugging-networking-issues-multi-node-kubernetes-on-virtualbox and https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/, but none of them resolved my issue.
It seems to be an issue that has happened before: https://github.com/weaveworks/launcher/issues/285.
My Kubernetes is on v1.17.3.
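(For reference, the basic check from that DNS debugging page is roughly the following; a sketch using a throwaway busybox pod:)
# Run a pod that has nslookup available (busybox 1.28 is known to work)
$ kubectl run dnsutils --image=busybox:1.28 --restart=Never -- sleep 3600
# Cluster DNS is healthy if the API service name resolves
$ kubectl exec -it dnsutils -- nslookup kubernetes.default
# Clean up
$ kubectl delete pod dnsutils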
I reproduced your issue and got the same error.
minikube v1.7.2 on Centos 7.7.1908
Docker 19.03.5
vm-driver=virtualbox
Connecting cluster to "Old Tree 34" (id: old-tree-34) on Weave Cloud
Installing Weave Cloud agents on minikube at https://192.168.99.100:8443
Performing a check of the Kubernetes installation setup.
There was an error while performing a DNS check: checking DNS failed, the DNS in the Kubernetes cluster is not working correctly. Please check that your cluster can download images and run pods.
I wasn't able to fix this problem; instead I found a workaround - use Helm. There is a second tab, 'Helm', in 'Install the Weave Cloud Agents' with a provided command, like:
helm repo update && helm upgrade --install --wait weave-cloud \
--set token=xxx \
--namespace weave \
stable/weave-cloud
Let's install Helm and use it:
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
.....
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
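(Optionally, confirm the Tiller pod is ready before continuing; tiller-deploy is the Deployment that helm init creates:)
$ kubectl -n kube-system get deploy tiller-deploy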
helm repo update
helm upgrade --install --wait weave-cloud \
> --set token=xxx \
> --namespace weave \
> stable/weave-cloud
Release "weave-cloud" does not exist. Installing it now.
NAME: weave-cloud
LAST DEPLOYED: Thu Feb 13 14:52:45 2020
NAMESPACE: weave
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME AGE
weave-agent 35s
==> v1/Pod(related)
NAME AGE
weave-agent-69fbf74889-dw77c 35s
==> v1/Secret
NAME AGE
weave-cloud 35s
==> v1/ServiceAccount
NAME AGE
weave-cloud 35s
==> v1beta1/ClusterRole
NAME AGE
weave-cloud 35s
==> v1beta1/ClusterRoleBinding
NAME AGE
weave-cloud 35s
NOTES:
Weave Cloud agents had been installed!
First, verify all Pods are running:
kubectl get pods -n weave
Next, login to Weave Cloud (https://cloud.weave.works) and verify the agents are connect to your instance.
If you need help or have any question, join our Slack to chat to us – https://slack.weave.works.
Happy hacking!
Check (wait around 10 minutes for everything to deploy):
kubectl get pods -n weave
NAME READY STATUS RESTARTS AGE
kube-state-metrics-64599b7996-d8pnw 1/1 Running 0 29m
prom-node-exporter-2lwbn 1/1 Running 0 29m
prometheus-5586cdd667-dtdqq 2/2 Running 0 29m
weave-agent-6c77dbc569-xc9qx 1/1 Running 0 29m
weave-flux-agent-65cb4694d8-sllks 1/1 Running 0 29m
weave-flux-memcached-676f88fcf7-ktwnp 1/1 Running 0 29m
weave-scope-agent-7lgll 1/1 Running 0 29m
weave-scope-cluster-agent-8fb596b6b-mddv8 1/1 Running 0 29m
[vkryvoruchko#nested-vm-image1 bin]$ kubectl get all -n weave
NAME READY STATUS RESTARTS AGE
pod/kube-state-metrics-64599b7996-d8pnw 1/1 Running 0 30m
pod/prom-node-exporter-2lwbn 1/1 Running 0 30m
pod/prometheus-5586cdd667-dtdqq 2/2 Running 0 30m
pod/weave-agent-6c77dbc569-xc9qx 1/1 Running 0 30m
pod/weave-flux-agent-65cb4694d8-sllks 1/1 Running 0 30m
pod/weave-flux-memcached-676f88fcf7-ktwnp 1/1 Running 0 30m
pod/weave-scope-agent-7lgll 1/1 Running 0 30m
pod/weave-scope-cluster-agent-8fb596b6b-mddv8 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus ClusterIP 10.108.197.29 <none> 80/TCP 30m
service/weave-flux-memcached ClusterIP None <none> 11211/TCP 30m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/prom-node-exporter 1 1 1 1 1 <none> 30m
daemonset.apps/weave-scope-agent 1 1 1 1 1 <none> 30m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-state-metrics 1/1 1 1 30m
deployment.apps/prometheus 1/1 1 1 30m
deployment.apps/weave-agent 1/1 1 1 31m
deployment.apps/weave-flux-agent 1/1 1 1 30m
deployment.apps/weave-flux-memcached 1/1 1 1 30m
deployment.apps/weave-scope-cluster-agent 1/1 1 1 30m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-state-metrics-64599b7996 1 1 1 30m
replicaset.apps/prometheus-5586cdd667 1 1 1 30m
replicaset.apps/weave-agent-69fbf74889 0 0 0 31m
replicaset.apps/weave-agent-6c77dbc569 1 1 1 30m
replicaset.apps/weave-flux-agent-65cb4694d8 1 1 1 30m
replicaset.apps/weave-flux-memcached-676f88fcf7 1 1 1 30m
replicaset.apps/weave-scope-cluster-agent-8fb596b6b 1 1 1 30m
Log in to https://cloud.weave.works/ and check the same:
Started installing agents on Kubernetes cluster v1.17.2
All Weave Cloud agents are connected!

Kubernetes: how do you list components running on master?

How do you list components running on the master Kubernetes node?
I assume there should be a kubeadm or kubectl command but can't find anything.
E.g. I'm looking to see if the Scheduler is running and I've used kubeadm config view which lists:
scheduler: {}
but I'm not sure whether that means the Scheduler is not running or there's simply no config for it.
Since you installed with kubeadm, the control plane components run as pods in the kube-system namespace, so you can run the following command to see whether the scheduler is running.
# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-4x9fp 2/2 Running 0 4d6h
coredns-86c58d9df4-bw2q9 1/1 Running 0 4d6h
coredns-86c58d9df4-gvcl9 1/1 Running 0 4d6h
etcd-k1 1/1 Running 0 4d6h
kube-apiserver-k1 1/1 Running 0 4d6h
kube-controller-manager-k1 1/1 Running 83 4d6h
kube-dash-kubernetes-dashboard-5b7cf769bc-pd2n2 1/1 Running 0 4d6h
kube-proxy-jmrrz 1/1 Running 0 4d6h
kube-scheduler-k1 1/1 Running 82 4d6h
metrics-server-8544b5c78b-k2lwt 1/1 Running 16 4d6h
tiller-deploy-5f4fc5bcc6-gvhlz 1/1 Running 0 4d6h
If you want to know all the pods running on the master node (or any particular node), you can use a field selector to select the node:
kubectl get pod --all-namespaces --field-selector spec.nodeName=<nodeName>
To filter only pods in the kube-system namespace running on a particular node:
kubectl get pod -n kube-system --field-selector spec.nodeName=<nodeName>
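For example, with the master node name from the output above (k1 is simply the node name in this particular cluster):
kubectl get pod -n kube-system --field-selector spec.nodeName=k1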
Assuming you want to check what is running on the master node and you are unable to do that via the Kubernetes API server:
For the kubelet, since it runs as a systemd service, you can check systemctl status kubelet.service.
Other components such as the scheduler run as containers managed by the kubelet, so you can check them with standard Docker commands such as docker ps.
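A sketch of both checks, run directly on the master node (this assumes a Docker-based container runtime):
# Is the kubelet service up?
$ systemctl status kubelet.service
# Are the static control-plane containers running?
$ docker ps --format '{{.Names}}' | grep -E 'kube-apiserver|kube-scheduler|kube-controller-manager|etcd'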

cilium configuration in IPv6 mode

I am using cilium in Kubernetes 1.12 in Direct Routing mode. It is working fine in IPv4 mode. We are using cilium/cilium:no-routes image and cloudnativelabs/kube-router to advertise the routes through BGP.
Now I would like to configure the same in an IPv6-only Kubernetes cluster, but I found that the kube-router pod is crashing and not creating the route entries for the --pod-network-cidr.
Following are the lab details:
Master node: IPv6 private IP - fd0c:6493:12bf:2942::ac18:1164
Worker node: IPv6 private IP - fd0c:6493:12bf:2942::ac18:1165
The public IPs for both nodes are IPv4, as I don't have public IPv6 addresses.
The IPv6-only K8s cluster is created as follows:
master:
sudo kubeadm init --kubernetes-version v1.13.2 --pod-network-cidr=2001:2::/64 --apiserver-advertise-address=fd0c:6493:12bf:2942::ac18:1164 --token-ttl 0
worker:
sudo kubeadm join [fd0c:6493:12bf:2942::ac18:1164]:6443 --token 9k9sdq.el298rka0sjqy0ha --discovery-token-ca-cert-hash sha256:b830c22dc21561c9e9287275ecc675ec6de012662fabde3bd1aba03be66562eb
kubectl get nodes -o wide
NAME      STATUS     ROLES    AGE   VERSION   INTERNAL-IP                      EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
master    NotReady   master   38h   v1.13.2   fd0c:6493:12bf:2942::ac18:1164   <none>        Ubuntu 18.10   4.18.0-13-generic   docker://18.6.0
worker1   Ready      <none>   38h   v1.13.2   fd0c:6493:12bf:2942::ac18:1165   <none>        Ubuntu 18.10   4.18.0-10-generic   docker://18.6.0
The master node is not Ready, as the CNI is not configured yet and the coredns pods are not up yet.
Now install Cilium in IPv6 mode.
1. Run etcd on the master node.
sudo docker run -d --network=host \
--name "cilium-etcd" k8s.gcr.io/etcd:3.2.24 \
etcd -name etcd0 \
-advertise-client-urls http://[fd0c:6493:12bf:2942::ac18:1164]:4001 \
-listen-client-urls http://[fd0c:6493:12bf:2942::ac18:1164]:4001 \
-initial-advertise-peer-urls http://[fd0c:6493:12bf:2942::ac18:1164]:2382 \
-listen-peer-urls http://[fd0c:6493:12bf:2942::ac18:1164]:2382 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster etcd0=http://[fd0c:6493:12bf:2942::ac18:1164]:2382 \
-initial-cluster-state new
Here [fd0c:6493:12bf:2942::ac18:1164] is the master node's IPv6 address.
2. sudo mount bpffs /sys/fs/bpf -t bpf
3. Run kube-router.
Expected Result:
Kube-router adds a routing entry for the pod CIDR corresponding to each of the other nodes in the cluster, with the node's public IP set as the gateway. The following result is obtained for IPv4: a routing entry is created on node-1 for node-2 (public IP 10.40.139.196 and pod CIDR 10.244.1.0/24). The device is the interface to which the public IP is bound.
$ ip route show
10.244.1.0/24 via 10.40.139.196 dev ens4f0.116 proto 17
Note: For IPv6 only Kubernetes, --pod-network-cidr=2001:2::/64
Actual result -
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-g7nvf 0/1 ContainerCreating 0 22h
coredns-86c58d9df4-rrtgp 0/1 ContainerCreating 0 38h
etcd-master 1/1 Running 0 38h
kube-apiserver-master 1/1 Running 0 38h
kube-controller-manager-master 1/1 Running 0 38h
kube-proxy-9xb2c 1/1 Running 0 38h
kube-proxy-jfv2m 1/1 Running 0 38h
kube-router-5xjv4 0/1 CrashLoopBackOff 15 73m
kube-scheduler-master 1/1 Running 0 38h
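(To see why kube-router is crash-looping, its logs are the usual first step; a sketch, with the pod name taken from the output above:)
$ kubectl -n kube-system logs kube-router-5xjv4 --previous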
Question -
Can kube-router use the private IPv6 address that is used by the Kubernetes cluster instead of the public IP, which in our case is IPv4?

Kubernetes v1.6.2 flannel not running on master node

I am running a 2-node cluster using Vagrant, configured with the kubeadm command. When I set up the cluster, flannel was running on all three nodes. Now I don't see flannel running on the master node, and because of this the overlay network is not working from the master node.
I used these YAML files to configure flannel:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl get pods --all-namespaces -o wide |grep fla
kube-system kube-flannel-ds-0d3bn 2/2 Running 0 20m 192.168.15.102 node-01
kube-system kube-flannel-ds-86bzs 2/2 Running 0 20m 192.168.15.103 node-02
# k get nodes -o wide
NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
master01 Ready 26d v1.6.2 <none> CentOS Linux 7 (Core) 3.10.0-514.10.2.el7.x86_64
node-01 Ready 26d v1.6.2 <none> CentOS Linux 7 (Core) 3.10.0-514.10.2.el7.x86_64
node-02 Ready 26d v1.6.2 <none> CentOS Linux 7 (Core) 3.10.0-514.10.2.el7.x86_64
How can I start the flannel pod on my master node?
I see you are using RBAC; maybe the node does not have enough rights.
Try creating a ClusterRoleBinding with the necessary rights:
$ kubectl create clusterrolebinding nodeName --clusterrole=system:node --user=nodeName
or you can use cluster-admin for testing.
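A sketch of that more permissive test-only binding (the binding name and user are placeholders for your node; cluster-admin grants full access, so delete the binding once you are done testing):
$ kubectl create clusterrolebinding node-test-admin --clusterrole=cluster-admin --user=nodeName
# Remove it again after testing
$ kubectl delete clusterrolebinding node-test-admin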