Kubernetes cluster recreated from snapshots issue

OVERVIEW:: I am studying for the Kubernetes Administrator certification. To complete the training course, I created a two-node Kubernetes cluster on Google Cloud: one master and one worker. As I don't want to leave the instances alive all the time, I took snapshots of them so I can deploy new instances with the Kubernetes cluster already set up. I was aware that I would need to update the ens4 IP used by kubectl, as this would have changed, which I did.
ISSUE:: When I run "kubectl get pods --all-namespaces" I get the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
QUESTION:: Has anyone had similar issues, and do you know if it's possible to recreate a Kubernetes cluster from snapshots?
Adding -v=10 to the command shows that the URL matches the info in the .kube/config file:
kubectl get pods --all-namespaces -v=10
I0214 17:11:35.317678 6246 loader.go:375] Config loaded from file: /home/student/.kube/config
I0214 17:11:35.321941 6246 round_trippers.go:423] curl -k -v -XGET -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Accept: application/json, */*" 'https://k8smaster:6443/api?timeout=32s'
I0214 17:11:35.333308 6246 round_trippers.go:443] GET https://k8smaster:6443/api?timeout=32s in 11 milliseconds
I0214 17:11:35.333335 6246 round_trippers.go:449] Response Headers:
I0214 17:11:35.333422 6246 cached_discovery.go:121] skipped caching discovery info due to Get https://k8smaster:6443/api?timeout=32s: dial tcp 10.128.0.7:6443: connect: connection refused
I0214 17:11:35.333858 6246 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" 'https://k8smaster:6443/api?timeout=32s'
I0214 17:11:35.334234 6246 round_trippers.go:443] GET https://k8smaster:6443/api?timeout=32s in 0 milliseconds
I0214 17:11:35.334254 6246 round_trippers.go:449] Response Headers:
I0214 17:11:35.334281 6246 cached_discovery.go:121] skipped caching discovery info due to Get https://k8smaster:6443/api?timeout=32s: dial tcp 10.128.0.7:6443: connect: connection refused
I0214 17:11:35.334303 6246 shortcut.go:89] Error loading discovery information: Get https://k8smaster:6443/api?timeout=32s: dial tcp 10.128.0.7:6443: connect: connection refused

I replicated your issue and wrote up this step-by-step debugging process so you can see what my thinking was.
I created a two-node cluster (master + worker) with kubeadm and made a snapshot of each node.
Then I deleted all the nodes and recreated them from the snapshots.
After recreating the master node from its snapshot, I started seeing the same error you are seeing:
[user@kmaster ~]$ kubectl get po -v=10
I0217 11:04:38.397823 3372 loader.go:375] Config loaded from file: /home/user/.kube/config
I0217 11:04:38.398909 3372 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.17.3 (linux/amd64) kubernetes/06ad960" 'https://10.156.0.20:6443/api?timeout=32s'
^C
The connection was hanging, so I interrupted it (Ctrl+C).
The first thing I noticed was that the IP address kubectl was connecting to was different from the node's IP, so I modified the .kube/config file to provide the proper IP.
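By the way, you don't have to edit the file by hand; a one-line sketch that does the same thing, assuming the default kubeadm cluster name kubernetes and my new master IP:
$ kubectl config set-cluster kubernetes --server=https://10.156.0.23:6443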
After doing this, here is what running kubectl showed:
$ kubectl get po -v=10
I0217 11:26:57.020744 15929 loader.go:375] Config loaded from file: /home/user/.kube/config
...
I0217 11:26:57.025155 15929 helpers.go:221] Connection error: Get https://10.156.0.23:6443/api?timeout=32s: dial tcp 10.156.0.23:6443: connect: connection refused
F0217 11:26:57.025201 15929 helpers.go:114] The connection to the server 10.156.0.23:6443 was refused - did you specify the right host or port?
As you can see, the connection to the apiserver was being refused, so I checked whether the apiserver was running:
$ sudo docker ps -a | grep apiserver
5e957ff48d11 90d27391b780 "kube-apiserver --ad…" 24 seconds ago Exited (2) 3 seconds ago k8s_kube-apiserver_kube-apiserver-kmaster_kube-system_997514ff25ec38012de6a5be7c43b0ae_14
d78e179f1565 k8s.gcr.io/pause:3.1 "/pause" 26 minutes ago Up 26 minutes k8s_POD_kube-apiserver-kmaster_kube-system_997514ff25ec38012de6a5be7c43b0ae_1
The apiserver was exiting for some reason.
I checked its logs (I am only including the relevant lines for readability):
$ sudo docker logs 5e957ff48d11
...
W0217 11:30:46.710541 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
panic: context deadline exceeded
Notice that the apiserver was trying to connect to etcd (note the port: 2379) and receiving connection refused.
My first guess was that etcd wasn't running, so I checked the etcd container:
$ sudo docker ps -a | grep etcd
4a249cb0743b 303ce5db0e90 "etcd --advertise-cl…" 2 minutes ago Exited (1) 2 minutes ago k8s_etcd_etcd-kmaster_kube-system_9018aafee02ebb028a7befd10063ec1e_19
b89b7e7227de k8s.gcr.io/pause:3.1 "/pause" 30 minutes ago Up 30 minutes k8s_POD_etcd-kmaster_kube-system_9018aafee02ebb028a7befd10063ec1e_1
I was right: Exited (1) 2 minutes ago. I checked its logs:
$ sudo docker logs 4a249cb0743b
...
2020-02-17 11:34:31.493215 C | etcdmain: listen tcp 10.156.0.20:2380: bind: cannot assign requested address
etcd was trying to bind to the old IP address.
I modified /etc/kubernetes/manifests/etcd.yaml, changing the old IP address to the new IP everywhere in the file.
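Since the old IP appears in several places in that manifest, a sed one-liner can do it; a sketch using my old and new IPs (adjust them for your setup):
$ sudo sed -i 's/10.156.0.20/10.156.0.23/g' /etc/kubernetes/manifests/etcd.yaml
The kubelet watches the manifests directory, so it recreates the etcd static pod on its own once the file changes.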
A quick sudo docker ps | grep etcd showed it was running.
After a while, the apiserver also started running.
Then I tried running kubectl:
$ kubectl get po
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.156.0.20, not 10.156.0.23
Invalid apiserver certificate: the SSL certificate was generated for the old IP, which means I need to generate a new certificate with the new IP.
$ sudo kubeadm init phase certs apiserver
...
[certs] Using existing apiserver certificate and key on disk
That's not what I expected: I wanted to generate new certificates, not use the old ones.
I deleted the old certificates:
$ sudo rm /etc/kubernetes/pki/apiserver.crt \
/etc/kubernetes/pki/apiserver.key
And tried to generate certificates one more time:
$ sudo kubeadm init phase certs apiserver
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.156.0.23]
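(One note: if the apiserver doesn't pick up the new certificate on its own, killing its container forces the kubelet to recreate it with the fresh certificate; a sketch, where the container ID is whatever docker ps shows for kube-apiserver:
$ sudo docker ps | grep apiserver
$ sudo docker kill <container-id>
)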
Looks good. Now let's try using kubectl:
$ kubectl get no
NAME          STATUS   ROLES    AGE    VERSION
instance-21   Ready    master   102m   v1.17.3
instance-22   Ready    <none>   95m    v1.17.3
As you can see, it's working now.

Related

Kubernetes: can't join on different subnet - TLS Bootstrap timeout

I have two Ubuntu 18.04 Server machines on AWS (the network configuration is okay; I'm even able to connect through SSH between them, but they are on different subnets of the same LAN). The Ubuntu firewall is also disabled.
M1: 172.31.32.210/255.255.240.0 -> 172.31.32.0/20 
M2: 172.31.20.59/255.255.240.0 -> 172.31.16.0/20
The command I execute on the master:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=172.31.32.210
# After that
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
I noticed that, as they are on different subnets, I need to deploy Calico to enable communication between them: https://docs.projectcalico.org/getting-started/kubernetes/quickstart
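At the time of writing, that quickstart essentially boils down to applying the Calico manifest on the master, roughly like this (the exact manifest URL may have changed since):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml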
After doing all that, I run the kubeadm join command returned by the init procedure, and the following messages appear... No way to make the connection:
ubuntu@ip-175-31-20-59:~$ sudo kubeadm join 172.31.45.77:6443 --token yht6uv.zrynwczvad9ra5e4 --discovery-token-ca-cert-hash sha256:6f4f3e98067151768d1339b52159b5469cb83511ad6ea31dc26e15e8631074f6
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
I have followed many tutorials (this or this, for example), but I always hit the same problem: the TLS bootstrap when I run the join command on the worker. Any idea?
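A natural first step when the TLS bootstrap hangs like this is to check the kubelet itself on the joining node, for example:
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 50
If the kubelet is crash-looping, its logs usually name the cause directly.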

Cannot log in to kubernetes dashboard: dial tcp 172.17.0.6:8443: connect: connection refused

I deployed Kubernetes v1.15.2 dashboard successfully. Checking the cluster info:
$ kubectl cluster-info
Kubernetes master is running at http://172.19.104.231:8080
kubernetes-dashboard is running at http://172.19.104.231:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
When I access the dashboard, the result is:
[root@ops001 ~]# curl -L http://172.19.104.231:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Error: 'dial tcp 172.17.0.6:8443: connect: connection refused'
Trying to reach: 'https://172.17.0.6:8443/'
This is dashboard status:
[root@ops001 ~]# kubectl get pods --namespace kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-74d7cc788-mk9c7   1/1     Running   0          92m
What should I do to access the dashboard? When I use kubectl proxy to access the dashboard UI:
$ kubectl proxy --address='localhost' --port=8086 --accept-hosts='^*$'
Starting to serve on 127.0.0.1:8086
the result is:
[root@ops001 ~]# curl -L http://127.0.0.1:8086/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Error: 'dial tcp 172.17.0.6:8443: connect: connection refused'
Trying to reach: 'https://172.17.0.6:8443/'
What should I do to fix this problem?
I finally found the problem: the kubernetes dashboard pod container cannot communicate with the proxy nginx container, because the proxy container was deployed before kubernetes flannel and is not in the same network. Adding the proxy nginx container to the flannel network solves the problem. Check the current flannel network:
[root@ops001 conf.d]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.224.1/21
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Generate the docker start parameters:
./mk-docker-opts.sh -d /run/docker_opts.env -c
Check the parameters:
[root@ops001 conf.d]# cat /run/docker_opts.env
DOCKER_OPTS=" --bip=172.30.224.1/21 --ip-masq=false --mtu=1450"
Add the parameters to the docker service:
# vim /lib/systemd/system/docker.service
EnvironmentFile=/run/docker_opts.env
ExecStart=/usr/bin/dockerd $DOCKER_OPTS -H fd://
Restart docker, and the containers will join the flannel network and be able to communicate with each other:
systemctl daemon-reload
systemctl restart docker
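To verify that a container has joined the flannel network afterwards, check that its address falls inside FLANNEL_SUBNET; a sketch (substitute a real container name or ID for the placeholder):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-name>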
Hope this may help you!

Issue while joining the master nodes using k8s on GCP

In GCP we are setting up Kubernetes 1.14 HA with a stacked etcd topology.
We have created an image with the kubernetes binaries installed.
We have a terraform script that creates an instance group of 3 master and 5 worker node instances using the above image.
Also in the terraform script, we have created a TCP load balancer with port 6443 enabled.
I am able to bootstrap one master by running kubeadm init --config=. However, joining the 2nd master fails with the error below.
kubeadm join XX.XX.XX.XX:6443 --token 9a08jv.c0izixklcxtmnze7 --discovery-token-ca-cert-hash sha256:73390a94962247546282a0954cb46f2a282b00534c06aff93773f3fc50aee562 --experimental-control-plane -v 8
Logs
I0423 09:50:33.623004 21078 checks.go:382] validating the presence of executable touch
I0423 09:50:33.623063 21078 checks.go:524] running all checks
I0423 09:50:33.656532 21078 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0423 09:50:33.656705 21078 checks.go:622] validating kubelet version
I0423 09:50:33.716178 21078 checks.go:131] validating if the service is enabled and active
I0423 09:50:33.723119 21078 checks.go:209] validating availability of port 10250
I0423 09:50:33.723377 21078 checks.go:439] validating if the connectivity type is via proxy or direct
I0423 09:50:33.723445 21078 join.go:441] [preflight] Fetching init configuration
I0423 09:50:33.723486 21078 join.go:474] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0423 09:50:33.725538 21078 round_trippers.go:416] GET https://XX.XX.XX.XX:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config
I0423 09:50:33.725564 21078 round_trippers.go:423] Request Headers:
I0423 09:50:33.725570 21078 round_trippers.go:426] Accept: application/json, */*
I0423 09:50:33.725594 21078 round_trippers.go:426] User-Agent: kubeadm/v1.14.0 (linux/amd64) kubernetes/641856d
I0423 09:50:33.725886 21078 round_trippers.go:441] Response Status: in 0 milliseconds
I0423 09:50:33.725903 21078 round_trippers.go:444] Response Headers:
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://XX.XX.XX.XX:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp XX.XX.XX.XX:6443: connect: connection refused
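A quick sanity check for whether it is the load balancer path that is refusing connections (keeping the question's masked address) is to test the port from the joining master and compare with the same test from the bootstrapped one:
nc -vz XX.XX.XX.XX 6443
If this succeeds from the first master but fails from the second, the problem is in the load balancer or firewall rules rather than in kubeadm itself.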
Note: we had faced the same issue in AWS with an NLB load balancer; we were able to overcome it by using a Classic Load Balancer.
Thanks in advance for your help.

kubernetes master 6443 connection refused from other hosts

I can't seem to get a node to join the cluster.
[discovery] Trying to connect to API Server "10.0.2.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
I0702 11:09:08.268102 10342 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info'
I0702 11:09:08.268676 10342 round_trippers.go:405] GET https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info in 0 milliseconds
I0702 11:09:08.268873 10342 round_trippers.go:411] Response Headers:
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: connect: connection refused]
The port seems closed (from the node):
telnet 10.0.2.15 6443
Trying 10.0.2.15...
telnet: Unable to connect to remote host: Connection refused
While on the master:
telnet 10.0.2.15 6443
Trying 10.0.2.15...
Connected to 10.0.2.15.
Escape character is '^]'.
^CConnection closed by foreign host.
What may be the cause of this?
Both machines are virtual machines and 10.0.2.15 is the NAT IP, which is the same for both machines (they are independent)...
Sigh...
In the event it is helpful to someone else: the following iptables TRACE rules log how packets to port 6443 traverse the tables, so you can follow them in the kernel log and see where they are dropped:
iptables -t raw -A OUTPUT -p tcp --dport 6443 -j TRACE
iptables -t raw -A PREROUTING -p tcp --dport 6443 -j TRACE
tail -f /var/log/kern.log
If you are running on VMs (for example using Vagrant and VirtualBox), run the init command with the private IP that is used in the Vagrantfile. That way a node running the join command is able to reach the master; otherwise it is not.
Syntax: kubeadm init --apiserver-advertise-address=<private-ip-address>
Example: kubeadm init --apiserver-advertise-address=192.168.33.50
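For context, the private IP referred to here typically comes from a line like this in the Vagrantfile (a sketch; 192.168.33.50 is just the example address above):
config.vm.network "private_network", ip: "192.168.33.50"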

Minikube does not start, kubectl connection to server was refused

Scouring Stack Overflow solutions for similar problems did not resolve my issue, so I'm sharing what I'm currently experiencing in the hope of getting help debugging this.
A small preface: I initially installed minikube/kubectl a couple of days back. I went ahead and tried following the minikube tutorial today and am now experiencing issues. I'm following the minikube getting started guide.
I am on MacOS. My versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: net/http: TLS handshake timeout
$ minikube version
minikube version: v0.26.1
$ vboxmanage --version
5.1.20r114629
The following is a string of commands I've tried, with their responses:
$ minikube start
Starting VM...
Getting VM IP address...
Moving files into cluster...
E0503 11:08:18.654428 20197 start.go:234] Error updating cluster: downloading binaries: transferring kubeadm file: &{BaseAsset:{data:[] reader:0xc4200861a8 Length:0 AssetName:/Users/philipyoo/.minikube/cache/v1.10.0/kubeadm TargetDir:/usr/bin TargetName:kubeadm Permissions:0641}}: Error running scp command: sudo scp -t /usr/bin output: : wait: remote command exited without exit status or exit signal
$ minikube status
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.103
Edit:
I don't know what happened, but checking the status again returned "Misconfigured". I ran the recommended command $ minikube update-context, and now $ minikube ip points to "172.17.0.1". Pinging this IP returns request timeouts, 100% packet loss. I double-checked the context and I'm still using "minikube" both for context and cluster:
$ kubectl config get-clusters
$ kubectl config get-contexts
$ kubectl get pods
The connection to the server 192.168.99.103:8443 was refused - did you specify the right host or port?
Reading github issues, I ran into this one: kubernetes#44665. So...
$ ls /etc/kubernetes
ls: /etc/kubernetes: No such file or directory
Only the last few entries
$ minikube logs
May 03 18:10:48 minikube kubelet[3405]: E0503 18:10:47.933251 3405 event.go:209] Unable to write event: 'Patch https://192.168.99.103:8443/api/v1/namespaces/default/events/minikube.152b315ce3475a80: dial tcp 192.168.99.103:8443: getsockopt: connection refused' (may retry after sleeping)
May 03 18:10:49 minikube kubelet[3405]: E0503 18:10:49.160920 3405 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.99.103:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:51 minikube kubelet[3405]: E0503 18:10:51.670344 3405 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.103:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:53 minikube kubelet[3405]: W0503 18:10:53.017289 3405 status_manager.go:459] Failed to get status for pod "kube-controller-manager-minikube_kube-system(c801aa20d5b60df68810fccc384efdd5)": Get https://192.168.99.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:53 minikube kubelet[3405]: E0503 18:10:52.595134 3405 rkt.go:65] detectRktContainers: listRunningPods failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I'm not exactly sure how to ping an https URL, but if I ping the IP:
$ ping 192.168.99.103
PING 192.168.99.103 (192.168.99.103): 56 data bytes
64 bytes from 192.168.99.103: icmp_seq=0 ttl=64 time=4.632 ms
64 bytes from 192.168.99.103: icmp_seq=1 ttl=64 time=0.363 ms
64 bytes from 192.168.99.103: icmp_seq=2 ttl=64 time=0.826 ms
^C
--- 192.168.99.103 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.363/1.940/4.632/1.913 ms
Looking at kube config file...
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
- cluster:
    certificate-authority: /Users/philipyoo/.minikube/ca.crt
    server: https://192.168.99.103:8443
  name: minikube
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: <removed>
    client-key-data: <removed>
- name: minikube
  user:
    client-certificate: /Users/philipyoo/.minikube/client.crt
    client-key: /Users/philipyoo/.minikube/client.key
And to make sure my keys/certs are there:
$ ls ~/.minikube
addons/ ca.pem* client.key machines/ proxy-client.key
apiserver.crt cache/ config/ profiles/
apiserver.key cert.pem* files/ proxy-client-ca.crt
ca.crt certs/ key.pem* proxy-client-ca.key
ca.key client.crt logs/ proxy-client.crt
Any help in debugging is super appreciated!
For posterity, the solution to this problem was to delete the .minikube directory in the user's home directory and then try again. This often fixes strange minikube problems.
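In shell terms (a sketch; note this throws away all local minikube state, including cached images and certificates):
$ minikube delete
$ rm -rf ~/.minikube
$ minikube start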
I had the same issue when I started minikube.
OS
macOS High Sierra
Minikube
minikube version: v0.33.1
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Solution 1
I just changed the permissions of the kubeadm file and started minikube as below. Then it worked fine.
sudo chmod 777 /Users/buddhi/.minikube/cache/v1.13.2/kubeadm
In general, you have to do
sudo chmod 777 <PATH_TO_THE_KUBEADM_FILE>
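Strictly speaking, the file only needs to be executable, so a more conservative sudo chmod 755 <PATH_TO_THE_KUBEADM_FILE> achieves the same thing without making the file world-writable.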
Solution 2
If you no longer need the existing minikube cluster, you can try this:
minikube stop
minikube delete
minikube start
Here you stop and delete the existing minikube cluster and create another one.
Hope this might help someone.