Kubectl with minikube - Error restarting cluster: kubeadm.yaml - kubernetes

I have Kubernetes + minikube installed (macOS 10.12.6), but while trying to start minikube I constantly get errors:
$: minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0601 15:24:50.571967 67567 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: Process exited with status 1
I've also tried minikube delete followed by minikube start, but that didn't help (see: Minikube never start - Error restarting cluster). I also ran kubectl config use-context minikube.
My minikube version is v0.26.1.
It looks to me like the kubeadm.yaml file is missing or misplaced.

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
In your case, the following steps should complete the initialization process successfully:
minikube stop
minikube delete
rm -fr $HOME/.minikube
minikube start
In case you have mixed Kubernetes and minikube environments, I suggest inspecting the $HOME/.kube/config file
and deleting the minikube entries to avoid problems with reinitialization.
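If you prefer not to edit that file by hand, the stale entries can also be removed with kubectl config; this is a minimal sketch, assuming the context, cluster, and user are all named "minikube" (the defaults):
kubectl config delete-context minikube
kubectl config delete-cluster minikube
kubectl config unset users.minikube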
If minikube still refuses to start, please post the logs for analysis. To get a detailed log, start minikube this way:
minikube start --v=9

Related

How to keep minikube running all the time?

Minikube stops every day and shows the error below:
ubuntu@ubuntu:~$ kubectl get pods
Unable to connect to the server: dial tcp 192.168.58.2:8443: connect: no route to host
After running the command below, it comes back to normal.
ubuntu@ubuntu:~$ minikube start
Please let me know if there is any way to keep it up all the time.
Output of the minikube start command:
kalpesh@kalpesh:~$ minikube start
😄 minikube v1.24.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image kubernetesui/dashboard:v2.3.1
▪ Using image kubernetesui/metrics-scraper:v1.0.7
🌟 Enabled addons: default-storageclass, storage-provisioner, dashboard
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Minikube config
kalpesh@kalpesh:~$ minikube config view
- cache: map[stock_updates_stock_updates:latest:<nil>]
- cpus: 4
- memory: 8192
It could be an issue with your firewall; try disabling it:
sudo ufw disable
OR
get your minikube VM's IP and run the following command to create a rich firewall rule allowing all traffic from this VM to your host:
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="YOUR.IP.ADDRESS.HERE" accept'
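For reference, a small sketch of feeding the VM's address into that rule and making it take effect, assuming firewalld is the active firewall:
MINIKUBE_IP=$(minikube ip)   # prints the VM's address
sudo firewall-cmd --permanent --add-rich-rule="rule family=\"ipv4\" source address=\"${MINIKUBE_IP}\" accept"
sudo firewall-cmd --reload   # permanent rules only take effect after a reload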

How to join worker node in existing cluster?

I was facing some issues while joining a worker node to an existing cluster.
Please find below the details of my scenario.
I've created an HA cluster with 4 masters and 3 workers.
I removed 1 master node.
The removed node is no longer part of the cluster, and the reset was successful.
Now I'm joining the removed node as a worker node in the existing cluster.
I'm running the command below:
kubeadm join --token giify2.4i6n2jmc7v50c8st 192.168.230.207:6443 --discovery-token-ca-cert-hash sha256:dd431e6e19db45672add3ab0f0b711da29f1894231dbeb10d823ad833b2f6e1b
In the above command, 192.168.230.207 is the cluster IP.
Result of the above command:
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://192.168.230.206:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.230.206:6443: connect: connection refused
Steps already tried:
Edited this ConfigMap (kubectl -n kube-system get cm kubeadm-config -o yaml) with a patch and removed references to the removed node ("192.168.230.206").
We are using an external etcd, so I checked the member list to confirm the removed node is no longer part of etcd. I ran the command below: etcdctl --endpoints=https://cluster-ip --ca-file=/etc/etcd/pki/ca.pem --cert-file=/etc/etcd/pki/client.pem --key-file=/etc/etcd/pki/client-key.pem member list
Can someone please help me resolve this issue? I'm not able to join this node.
In addition to @P Ekambaram's answer, I assume you probably need to completely dispose of all redundant data from the previous kubeadm join setup.
Remove cluster entries via the kubeadm command on the worker node: kubeadm reset;
Wipe all redundant data residing on the worker node: rm -rf /etc/kubernetes; rm -rf ~/.kube;
Try to re-join the worker node.
Use these instructions one after another to completely remove the old installation on the worker node:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker.service
yum remove kubeadm kubectl kubelet kubernetes-cni kube*
yum autoremove
rm -rf ~/.kube
Then reinstall using:
yum install -y kubelet kubeadm kubectl
reboot
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
Then use the kubeadm join command.
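If the original bootstrap token has expired by the time you re-join, a fresh join command can be printed on the master; a minimal sketch, assuming kubeadm v1.9 or newer where this flag is available:
# run on the master node
kubeadm token create --print-join-command
# then paste the printed "kubeadm join ..." line on the worker node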
Fix these problems and run the join command:
docker service is not enabled, please run 'systemctl enable docker.service'
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
kubelet service is not enabled, please run 'systemctl enable kubelet.service
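A sketch of addressing those three warnings on the worker node (the daemon.json path assumes the standard Docker packaging):
# enable the services so they survive a reboot
sudo systemctl enable docker.service
sudo systemctl enable kubelet.service
# switch Docker to the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker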
kubeadm join tries to reach 192.168.230.206, whereas the new IP is 192.168.230.207.
In addition to the change in the kubeadm-config ConfigMap, you may need to change the cluster IP address (cluster.server) in this ConfigMap:
kubectl edit cm -n kube-public cluster-info
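Before editing, you can double-check which server address the join process will pick up by printing the kubeconfig stored in that ConfigMap, for example:
kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' | grep server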

how to remove kubernetes with all it's dependencies Centos 7

I want to uninstall Kubernetes from CentOS 7 along with all its dependencies and files, like:
kube-apiserver
kube-controller-manager
kubectl
kubelet
kube-proxy
kube-scheduler
Check this thread, or else these steps should help.
First clean up the pods running in your k8s cluster using:
$ kubectl delete node --all
Then remove data volumes and backups (if not needed) from your host system. Finally, you can stop all the k8s services using this script:
$ for service in kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler; do
systemctl stop $service
done
$ yum -y remove kubernetes #if it's registered as a service
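If the cluster was set up with kubeadm and the packages came from the Kubernetes yum repo, a more thorough teardown would look roughly like this (a sketch; adjust the package list to what you actually installed):
kubeadm reset                                     # undo what kubeadm init/join configured
yum -y remove kubeadm kubectl kubelet kubernetes-cni
rm -rf /etc/kubernetes ~/.kube /var/lib/etcd /var/lib/kubelet /etc/cni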

Minikube never start - Error restarting cluster

I'm using Arch Linux.
I have VirtualBox 5.2.12 installed.
I have minikube 0.27.0-1 installed.
I have Kubernetes v1.10.0 installed.
When I try to start minikube with sudo minikube start I get this error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0527 12:58:18.929483 22672 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: running command:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: exit status 1
I have already tried starting minikube with other options, like:
sudo minikube start --kubernetes-version v1.10.0 --bootstrapper kubeadm
sudo minikube start --bootstrapper kubeadm
sudo minikube start --vm-driver none
sudo minikube start --vm-driver virtualbox
sudo minikube start --vm-driver kvm
sudo minikube start --vm-driver kvm2
I always get the same error. Can someone help me?
Minikube VM is usually started for simple experiments without any important payload.
That's why it's much easier to recreate the minikube cluster than to try to fix it.
To delete existing minikube VM execute the following command:
minikube delete
This command shuts down and deletes the minikube virtual machine. No data or state is preserved.
Check if you have all dependencies at place and run command:
minikube start
This command creates a “kubectl context” called “minikube”. This context contains the configuration to communicate with your minikube cluster. minikube sets this context to default automatically, but if you need to switch back to it in the future, run:
kubectl config use-context minikube
Or pass the context on each command like this:
kubectl get pods --context=minikube
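To see which contexts exist and which one is currently active, for example:
kubectl config get-contexts      # the active context is marked with '*'
kubectl config current-context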
More information about command line arguments can be found here.
Update:
The answer below did not work, due to what I suspect are version differences between my environment and the information I found, and I'm not willing to sink more time into this problem. The VM itself does start up, so if you have important information in it (i.e., other Docker containers), you can log in to the VM and extract such data from it before running minikube delete.
I had the same problem. I SSH'd into the VM and ran:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml
The result was
failure loading apiserver-kubelet-client certificate: the certificate has expired
So like any good engineer, I googled that and found this:
Source: https://github.com/kubernetes/kubeadm/issues/581
If you are using a version of kubeadm prior to 1.8, where I understand certificate rotation #206 was put into place (as a beta feature), or your certs already expired, then you will need to manually update your certs (or recreate your cluster, which it appears some (not just @kachkaev) end up resorting to).
You will need to SSH into your master node. If you are using kubeadm >= 1.8, skip to step 2.
1. Update kubeadm, if needed. I was on 1.7 previously.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
2. Backup the old apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old
3. Generate new apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo kubeadm alpha phase certs apiserver --apiserver-advertise-address <IP>
$ sudo kubeadm alpha phase certs apiserver-kubelet-client
$ sudo kubeadm alpha phase certs front-proxy-client
4. Backup the old configuration files.
$ sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
$ sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
$ sudo mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
$ sudo mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old
5. Generate new configuration files. There is an important note here: if you are on AWS, you will need to explicitly pass the --node-name parameter in this request. Otherwise you will get an error like Unable to register node "ip-10-0-8-141.ec2.internal" with API server: nodes "ip-10-0-8-141.ec2.internal" is forbidden: node ip-10-0-8-141 cannot modify node ip-10-0-8-141.ec2.internal in your logs (sudo journalctl -u kubelet --all | tail), and the master node will report that it is Not Ready when you run kubectl get nodes.
Please be certain to replace the values passed in --apiserver-advertise-address and --node-name with the correct values for your environment.
$ sudo kubeadm alpha phase kubeconfig all --apiserver-advertise-address 10.0.8.141 --node-name ip-10-0-8-141.ec2.internal
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
6. Ensure that your kubectl is looking in the right place for your config files.
$ mv .kube/config .kube/config.old
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo chmod 777 $HOME/.kube/config
$ export KUBECONFIG=.kube/config
7. Reboot your master node.
$ sudo /sbin/shutdown -r now
8. Reconnect to your master node, grab your token, and verify that your master node is "Ready". Copy the token to your clipboard; you will need it in the next step.
$ kubectl get nodes
$ kubeadm token list
If you do not have a valid token, you can create one with:
$ kubeadm token create
The token should look something like 6dihyb.d09sbgae8ph2atjw
9. SSH into each of the slave nodes and reconnect them to the master.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
$ sudo kubeadm join --token=<token> <master-ip>:<master-port> --node-name <node-name>
10. Repeat step 9 for each connecting node. From the master node, you can verify that all slave nodes have connected and are ready with:
$ kubectl get nodes
Hopefully this gets you where you need to be, @davidcomeyne.
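As a side note, a quick way to confirm which certificate actually expired is to check its end date with openssl; a sketch, assuming the standard kubeadm PKI layout (the path may differ inside the minikube VM):
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver-kubelet-client.crt
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt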
As root, first try a full clean-up with:
minikube delete --all --purge
Then start minikube again; you will also need to copy the root certs for other users.

Installing a Kubernetes pod network for cluster nodes hosted on VirtualBox VMs

On OS X 10.11.6, I created 4 CentOS 7 VMs in VirtualBox, each with two interfaces (one NAT and one host-only network). Each VM's host-only interface receives an IP via DHCP and DNS via dnsmasq.
OS X is running dnsmasq configured via a /usr/local/etc/dnsmasq.conf file that contains:
interface=vboxnet0
bind-interfaces
dhcp-range=vboxnet0,192.168.56.100,192.168.56.200,255.255.255.0,infinite
dhcp-leasefile=/usr/local/etc/dnsmasq.leases
local=/dev/
expand-hosts
domain=dev
address=/kube-master.dev/192.168.56.100
address=/kube-minion1.dev/192.168.56.101
address=/kube-minion2.dev/192.168.56.102
address=/kube-minion3.dev/192.168.56.103
address=/vbox-host.dev/192.168.56.1
dhcp-host=08:00:27:09:48:16,192.168.56.100
dhcp-host=0a:00:27:00:00:00,192.168.56.1
dhcp-host=08:00:27:95:AE:39,192.168.56.101
dhcp-host=08:00:27:97:C9:D4,192.168.56.102
dhcp-host=08:00:27:9B:AD:B5,192.168.56.103
I can SSH into each VM through its host-only adapter's associated address (e.g., kube-master.dev, kube-minion1.dev, kube-minion2.dev, kube-minion3.dev), and then run
yum update
Skipping a few steps, I get to the point of installing kubeadm as per http://kubernetes.io/docs/getting-started-guides/kubeadm/, that is:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni ebtables
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
Then, although it is unclear to me whether the following is correct, on kube-master.dev I execute:
kubeadm init --api-advertise-addresses=192.168.56.100 --api-external-dns-names=kube-master.dev
And then on each minion execute:
rm -Rf /etc/kubernetes/manifests/
kubeadm join --token=e7cd12.68011e93d5db7670 192.168.56.100
On kube-master.dev, I then run
kubectl get nodes
to verify that each node has joined the cluster.
The command returns:
NAME STATUS AGE
kube-master.dev Ready 44m
kube-minion1.dev Ready 40m
kube-minion2.dev Ready 39m
kube-minion3.dev Ready 39m
indicating things are groovy.
Afterward, things go entirely off the rails when I attempt to install a pod network.
On kube-master.dev, I run:
kubectl apply -f https://git.io/weave-kube
to install Weave Net, and once the pod network is installed I start monitoring whether the network is working by executing:
watch kubectl get pods --all-namespaces
And
kube-dns-654381707-05i1t 0/3
never moves off of zero.
So what am I doing wrong? I've hammered at this for days. The kubeadm documentation is a bit thin in a few places, so I'm not sure I init'ed the master correctly, and installing the pod network is a bit of conjecture on my part. Also, I haven't found a tutorial, other than the Kubernetes kubeadm guide and the associated YouTube video, documenting the use of kubeadm to set up a Kubernetes cluster.
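For anyone hitting the same symptom, a sketch of the standard first checks for a stuck kube-dns pod (the pod name is whatever kubectl get pods reports in your cluster; container names vary by version):
kubectl get pods --all-namespaces -o wide                        # see which node the pod was scheduled on
kubectl -n kube-system describe pod kube-dns-654381707-05i1t    # events: scheduling, image pulls, crashes
kubectl -n kube-system logs kube-dns-654381707-05i1t -c kubedns # container may be kubedns, dnsmasq, or sidecar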