I installed a Kubernetes cluster using kubeadm by following this guide. After some time, I decided to reinstall K8s but ran into trouble removing all the related files, and I could not find any docs on the official site about how to remove a cluster installed via kubeadm.
Has anybody run into the same problem and knows the proper way to remove all the files and dependencies? Thank you in advance.
For more information: I removed kubeadm, kubectl and kubelet using apt-get purge/remove, but when I started installing the cluster again I got the following errors:
[preflight] Some fatal errors occurred:
Port 6443 is in use
Port 10251 is in use
Port 10252 is in use
/etc/kubernetes/manifests is not empty
/var/lib/kubelet is not empty
Port 2379 is in use
/var/lib/etcd is not empty
In my "Ubuntu 16.04", I use next steps to completely remove and clean Kubernetes (installed with "apt-get"):
kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove
sudo rm -rf ~/.kube
And restart the computer.
Use the kubeadm reset command. This will un-configure the Kubernetes cluster.
If you are clearing the cluster so that you can start again, then, in addition to what #rib47 said, I also do the following to ensure my systems are in a state ready for kubeadm init again:
kubeadm reset -f
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
systemctl restart docker
You then need to re-install docker.io, kubeadm, kubectl, and kubelet to make sure they are at the latest versions for your distribution before you re-initialize the cluster.
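For instance, on a Debian/Ubuntu system the reinstall could look roughly like this (a sketch; it assumes the Kubernetes apt repository is still configured, and package names may differ on other distributions):
sudo apt-get update
sudo apt-get install -y docker.io kubeadm kubectl kubelet kubernetes-cni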
EDIT: I discovered that Calico adds firewall rules to the raw table, so that needs clearing out as well.
kubeadm reset

# On Debian-based systems:
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*

# On CentOS-based systems:
sudo yum remove kubeadm kubectl kubelet kubernetes-cni kube*

# On Debian-based systems:
sudo apt-get autoremove

# On CentOS-based systems:
sudo yum autoremove

# For all:
sudo rm -rf ~/.kube
The guide you linked now has a Tear Down section:
Talking to the master with the appropriate credentials, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset all kubeadm installed state:
kubeadm reset
I use the following script to completely uninstall an existing Kubernetes cluster and its running Docker containers:
sudo kubeadm reset
sudo apt purge kubectl kubeadm kubelet kubernetes-cni -y
sudo apt autoremove
sudo rm -fr /etc/kubernetes/; sudo rm -fr ~/.kube/; sudo rm -fr /var/lib/etcd; sudo rm -rf /var/lib/cni/
sudo systemctl daemon-reload
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# remove all running docker containers
docker rm -f `docker ps -a | grep "k8s_" | awk '{print $1}'`
If you want to make this easily repeatable, it makes sense to turn it into a script. This assumes you are using a Debian-based OS:
#!/bin/sh
# Kube Admin Reset
kubeadm reset
# Remove all packages related to Kubernetes
apt remove -y kubeadm kubectl kubelet kubernetes-cni
apt purge -y kube*
# Remove docker containers/images (optional, if using docker)
docker image prune -a
systemctl restart docker
apt purge -y docker-engine docker docker.io docker-ce docker-ce-cli containerd containerd.io runc --allow-change-held-packages
# Remove parts
apt autoremove -y
# Remove all folder associated to kubernetes, etcd, and docker
rm -rf ~/.kube
rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/lib/etcd2/ /var/run/kubernetes ~/.kube/*
rm -rf /var/lib/docker /etc/docker /var/run/docker.sock
rm -f /etc/apparmor.d/docker /etc/systemd/system/etcd*
# Delete docker group (optional)
groupdel docker
# Clear the iptables
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t raw -F && iptables -t raw -X
iptables -t mangle -F && iptables -t mangle -X
NOTE:
This will destroy everything related to Kubernetes, etcd, and docker on the Node/server this command is run against!
I am setting up Kubernetes for the first time on my local machine using Minikube.
I installed kubectl on my local machine using:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
However, when I run the command:
minikube start
I get the following error:
minikube 1.12.3 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.12.3
To disable this notice, run: 'minikube config set WantUpdateNotification false'
minikube v1.5.2 on Ubuntu 18.04
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Starting existing virtualbox VM for "minikube" ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.2 on Docker '18.09.9' ...
Relaunching Kubernetes using kubeadm ...
Waiting for: apiserver
Done! kubectl is now configured to use "minikube"
/usr/local/bin/kubectl is version 1.19.0, and is incompatible with Kubernetes 1.16.2. You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster
I don't seem to understand what the error means by:
/usr/local/bin/kubectl is version 1.19.0, and is incompatible with Kubernetes 1.16.2. You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster
I can't remember installing minikube or Kubernetes before now on my local machine.
I finally figured out what the issue was.
I ran the command kubectl version and I got the following output:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0",
GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean",
BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2",
GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean",
BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
It showed me the date when I installed Kubernetes using the minikube installer, which was sometime in 2019 when I was trying out Kubernetes initially.
Here's how I fixed it:
There are 2 solutions:
Solution 1
Uninstall older/previous versions of Kubernetes using minikube on the Linux machine:
minikube stop; minikube delete
docker stop $(docker ps -aq)
rm -r ~/.kube ~/.minikube
sudo rm /usr/local/bin/localkube /usr/local/bin/minikube
systemctl stop '*kubelet*.mount'
sudo rm -rf /etc/kubernetes/
docker system prune -af --volumes
Or on Mac:
minikube stop; minikube delete &&
docker stop $(docker ps -aq) &&
rm -rf ~/.kube ~/.minikube &&
sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &&
launchctl stop '*kubelet*.mount' &&
launchctl stop localkube.service &&
launchctl disable localkube.service &&
sudo rm -rf /etc/kubernetes/ &&
docker system prune -af --volumes
Reinstall a new copy of minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
And then run the command below to pull a new base image of Kubernetes:
minikube start
This pulls a new image of Kubernetes, and also configures kubectl to use minikube.
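To verify the result, a couple of standard checks (assuming the start succeeded):
kubectl config current-context   # should print "minikube"
kubectl get nodes                # the single minikube node should eventually report Ready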
Solution 2:
Run the command below to downgrade kubectl to the same version as Kubernetes on the local machine:
minikube kubectl
This will install the kubectl version compatible with the Kubernetes version minikube is running.
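For example, you can then invoke that bundled kubectl by passing the usual arguments after -- (the specific arguments here are only illustrations):
minikube kubectl -- version --client
minikube kubectl -- get pods --all-namespaces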
That's all.
I hope this helps
Installing kubectl again solved my issue.
In my case there was no need to uninstall the older version; I just followed the installation steps again and it worked for me.
Update the apt package index and install packages needed to use the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
If you use Debian 9 (stretch) or earlier you would also need to install apt-transport-https:
sudo apt-get install -y apt-transport-https
Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update apt package index with the new repository and install kubectl:
sudo apt-get update
sudo apt-get install -y kubectl
I hope this works for you too.
Good afternoon. I am new to Kubernetes and I am installing it for a development environment. I have a Red Hat server that will act as node/master. I followed these steps to install it:
Following the tutorial on the page:
https://www.linuxtechi.com/install-kubernetes-k8s-minikube-centos-8/
sudo dnf update -y
sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo systemctl start docker
sudo systemctl enable docker
sudo dnf install conntrack -y
#Installing Kubectl
sudo cat <<EOF > /etc/yum.repos.d/kubernetes.repo (root)
yum install -y kubectl (root)
#Installing Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
mkdir -p /usr/local/bin/
install minikube /usr/local/bin/
However, when I configure minikube, it does not point to the IP of my server, but to the IP 172.17.0.2:
minikube ip
172.17.0.2
kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 172.17.0.2:8443 was refused - did you specify the right host or port?
My IP is 10.154.7.209
What could I be doing wrong? If I can't use minikube to set up a server as master/node, what can I use?
You are not doing anything wrong. minikube is basically using the docker driver, so 172.17.0.2 is the IP address of the container where your cluster is running.
But it looks like you are using a proxy somewhere in your system, so you need to include the 172.17.0.0/24 range in the NO_PROXY environment variable.
Something like this:
export HTTP_PROXY=http://<proxy hostname:port>
export HTTPS_PROXY=https://<proxy hostname:port>
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.0/24
minikube start
# get pods
minikube kubectl -- get pods --all-namespaces
As I am new to Kubernetes, I still haven't figured out how to deploy an application on my local network using one or more physical machines.
Usually the tutorials on the internet describe situations using minikube, but that is only for local machine tests, isn't it? Or situations where the deployment is performed on cloud platforms, like Google.
I would really appreciate some guidance on where to begin. In my case, will I only need to install Kubernetes on the machines? Is it a trivial task?
One way to install Kubernetes is via kubeadm.
I've done this with my own VM. But should also work for your physical machines.
You can use kubeadm, here is a doc which will help you to install it.
Each server will need to have kubeadm, kubelet and kubectl installed.
On the master node you will run kubeadm init, and once the whole process is finished it will provide you with a command you can run to add worker nodes.
Of course those servers need to see each other on your network.
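As a minimal sketch of that flow (the pod CIDR, IP, token, and hash below are placeholders, not values from this thread):
# on the master
sudo kubeadm init --pod-network-cidr=<pod-cidr>
# kubeadm then prints a join command similar to this; run it on each worker
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>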
There are also other options to install Kubernetes. For example using kops or kubespray.
From https://kubernetes.io/docs/setup/learning-environment/minikube/
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
If you want to set up your own cluster, there are quite a few options, including kubeadm or kubespray.
Personally I've used Vagrant and Ansible to set up a local Kubernetes cluster using kubeadm - there's a good tutorial here... and here's my implementation :)
Kubeadm allows you to create a cluster using any machine that runs Docker.
This guide will help you create your first cluster with 1 master and 1 worker node. (This is based on Debian.)
Execute the following commands on both machines:
Installing Docker from official repository
$ sudo apt-get update
$ sudo apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Install kubeadm, kubectl and kubelet from official repository
$ sudo apt-get update && sudo apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo su -c "cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF"
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
Execute the following command only on your master node
Initialize your cluster
$ sudo kubeadm init
When the init finishes, kubeadm will give you the command to add your workers to your cluster. Save this command for later use.
EXAMPLE:
kubeadm join 10.128.0.17:6443 --token 27evky.hoel95h16poqici6 \
--discovery-token-ca-cert-hash sha256:521f69cb935951bbfee142432108caeaeaf1682d8647208ba2f558624890ab63
After the kubeadm init command completes, run the following commands to start using your new cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Your master is not ready yet; we need to install a CNI network add-on (you can choose among different CNIs):
$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
Check if your master node is ready. It needs a while to start all dependencies
$ kubectl get nodes
EXAMPLE:
user#kubemaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster NotReady master 32s v1.16.2
To check more details about your node:
$ kubectl describe node <NODENAME>
When your master node is ready you can proceed and run the kubeadm join command (saved previously) on your worker nodes:
$ sudo kubeadm join 10.128.0.17:6443 --token 27evky.hoel95h16poqici6 --discovery-token-ca-cert-hash sha256:521f69cb935951bbfee142432108caeaeaf1682d8647208ba2f558624890ab63
Check if your worker node is ready (command must be executed on master node)
$ kubectl get nodes
If you want to have more workers, just repeat the kubeadm join command on the new workers.
I have a Kubernetes cluster with one master and 3 workers. I want to create another Kubernetes cluster on top of this one, meaning the other cluster's master and workers should run as containers. For example, I would build a Kubernetes cluster with 1 master and 5 workers inside it.
Since all Kubernetes components except the kubelet can run as pods, you can deploy the Kubernetes API server, controller manager, and scheduler as pods in another Kubernetes cluster.
You will need to expose the API server through a Service of type NodePort.
In the next step, this NodePort can be used as the API server URL for the second cluster's kubelets.
The only challenge you will face: if you are running Calico, there can be only one instance of Calico on the master node.
So if you are using operators, your API server pod will not be able to reach the operator's controller pod.
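As a rough illustration of the NodePort exposure mentioned above (a sketch; the Deployment name nested-apiserver and port 6443 are hypothetical):
kubectl expose deployment nested-apiserver --port=6443 --target-port=6443 --type=NodePort
# the node port printed here is what the second cluster's kubelets would point at
kubectl get svc nested-apiserver -o jsonpath='{.spec.ports[0].nodePort}'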
That is going to be very difficult. What are you using to run Kubernetes (minikube, Google Kubernetes Engine, etc.)?
If you are using minikube (local), minikube makes use of VirtualBox's hypervisor to create new "containers". If you create a cluster, you are already using that hypervisor. What hypervisor would your 'new' cluster use?
Right now, I don't think the question makes much sense either: you want to deploy a 'new' Kubernetes cluster as a pod? Where would this pod get its resources? Let's say you used GKE (Google Cloud) and had a really big node (100vCPUs, 1000RAM). Once you're in this node as a pod and you create another cluster (within this pod), would this pod act as the master node? What if this pod goes down? If the master node goes down, the cluster is lost. Pods are ephemeral. It is theoretically possible, but there is absolutely no logical reason to implement this. This isn't even an answer, but more of a probe to help answer our questions, so we can attempt answering yours.
1. apt-get update
2. apt-get install apt-transport-https ca-certificates curl software-properties-common -y
3. curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
4. apt-get update -y
5. apt-get install docker.io -y
6. curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
7. echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
8. apt-get update -y
9. apt-get install kubelet kubeadm kubectl -y
10. kubeadm init
11. mkdir -p $HOME/.kube
12. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
13. sudo chown $(id -u):$(id -g) $HOME/.kube/config
14. mkdir -p /root/.kube
15. sudo cp -i /etc/kubernetes/admin.conf /root/.kube/config
16. sudo chown $(id -u):$(id -g) /root/.kube/config
17. kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
18. kubectl taint nodes --all node-role.kubernetes.io/master-
Use the above commands to install Kubernetes and make it run as a single-node cluster for demo purposes. OS family: Ubuntu 16.04/18.04.
I'm using Arch Linux.
I had VirtualBox 5.2.12 installed.
I had minikube 0.27.0-1 installed.
I had Kubernetes v1.10.0 installed.
When I try to start minikube with sudo minikube start I get this error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0527 12:58:18.929483 22672 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: running command:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: exit status 1
I already tried starting minikube with other options like:
sudo minikube start --kubernetes-version v1.10.0 --bootstrapper kubeadm
sudo minikube start --bootstrapper kubeadm
sudo minikube start --vm-driver none
sudo minikube start --vm-driver virtualbox
sudo minikube start --vm-driver kvm
sudo minikube start --vm-driver kvm2
I always get the same error. Can someone help me?
Minikube VM is usually started for simple experiments without any important payload.
That's why it's much easier to recreate minikube cluster than trying to fix it.
To delete existing minikube VM execute the following command:
minikube delete
This command shuts down and deletes the minikube virtual machine. No data or state is preserved.
Check if you have all dependencies at place and run command:
minikube start
This command creates a "kubectl context" called "minikube". This context contains the configuration to communicate with your minikube cluster. minikube sets this context to default automatically, but if you need to switch back to it in the future, run:
kubectl config use-context minikube
Or pass the context on each command like this:
kubectl get pods --context=minikube
More information about command line arguments can be found here.
Update:
The answer below did not work, due to what I suspect are differences in versions between my environment and the information I found, and I'm not willing to sink more time into this problem. The VM itself does start up, so if you have important information in it (e.g. other docker containers), you can log in to the VM and extract such data from it before running minikube delete.
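If you do need to pull data out first, a rough sequence could be (minikube ssh is a standard subcommand; what you inspect inside the VM is up to you):
minikube ssh        # open a shell inside the minikube VM
docker ps -a        # inside the VM: find the containers holding data you care about
exit
minikube delete     # only once you've copied out what you need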
I had the same problem. I SSH'ed into the VM and ran:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml
The result was
failure loading apiserver-kubelet-client certificate: the certificate has expired
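If you want to confirm the expiry yourself, a quick check (hedged; the path assumes kubeadm's default PKI location inside the VM) is:
sudo openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver-kubelet-client.crt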
So like any good engineer, I googled that and found this:
Source: https://github.com/kubernetes/kubeadm/issues/581
If you are using a version of kubeadm prior to 1.8, where I understand certificate rotation #206 was put into place (as a beta feature), or your certs already expired, then you will need to manually update your certs (or recreate your cluster, which it appears some (not just #kachkaev) end up resorting to).

You will need to SSH into your master node. If you are using kubeadm >= 1.8, skip to step 2.

1. Update kubeadm, if needed. I was on 1.7 previously.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm

2. Backup old apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old

3. Generate new apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo kubeadm alpha phase certs apiserver --apiserver-advertise-address <IP address of your master server>
$ sudo kubeadm alpha phase certs apiserver-kubelet-client
$ sudo kubeadm alpha phase certs front-proxy-client

4. Backup old configuration files.
$ sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
$ sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
$ sudo mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
$ sudo mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old

5. Generate new configuration files. There is an important note here: if you are on AWS, you will need to explicitly pass the --node-name parameter in this request. Otherwise you will get an error like: Unable to register node "ip-10-0-8-141.ec2.internal" with API server: nodes "ip-10-0-8-141.ec2.internal" is forbidden: node ip-10-0-8-141 cannot modify node ip-10-0-8-141.ec2.internal in your logs (sudo journalctl -u kubelet --all | tail), and the Master Node will report that it is Not Ready when you run kubectl get nodes.

Please be certain to replace the values passed in --apiserver-advertise-address and --node-name with the correct values for your environment.

$ sudo kubeadm alpha phase kubeconfig all --apiserver-advertise-address 10.0.8.141 --node-name ip-10-0-8-141.ec2.internal
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"

6. Ensure that your kubectl is looking in the right place for your config files.
$ mv .kube/config .kube/config.old
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo chmod 777 $HOME/.kube/config
$ export KUBECONFIG=.kube/config

7. Reboot your master node.
$ sudo /sbin/shutdown -r now

8. Reconnect to your master node, grab your token, and verify that your Master Node is "Ready". Copy the token to your clipboard; you will need it in the next step.
$ kubectl get nodes
$ kubeadm token list
If you do not have a valid token, you can create one with:
$ kubeadm token create
The token should look something like 6dihyb.d09sbgae8ph2atjw

9. SSH into each of the slave nodes and reconnect them to the master.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
$ sudo kubeadm join --token=<token> <master-ip>:<master-port> --node-name <node-name>

10. Repeat Step 9 for each connecting node. From the master node, you can verify that all slave nodes have connected and are ready with:
$ kubectl get nodes

Hopefully this gets you where you need to be #davidcomeyne.
As root try first a full clean up with:
minikube delete --all --purge
Then start up minikube; you will also need to copy the root certs for other users.
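A rough sketch of what that copy could look like (the user name and paths are placeholders; adjust to your setup):
sudo cp -r /root/.minikube /home/<user>/.minikube
sudo mkdir -p /home/<user>/.kube && sudo cp /root/.kube/config /home/<user>/.kube/config
sudo chown -R <user>:<user> /home/<user>/.minikube /home/<user>/.kube
# point the copied kubeconfig at the new certificate locations
sudo sed -i 's|/root/.minikube|/home/<user>/.minikube|g' /home/<user>/.kube/config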