kubectl cannot detect localhost:8080 with minikube locally

I am trying to run the tutorial at https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/ locally on my Ubuntu 18 machine.
$ minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
🔥 Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.39.247
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🚀 Launching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!
So far, so good.
Next, I try to run
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Similar response for
$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Likewise for
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What am I missing?

OK, so I was able to find the answer myself.
An old ~/.kube/config was already present, so I removed it first.
When I ran the commands again, a fresh config file was created, and it points at port 8443 rather than 8080.
So, make sure there is no old ~/.kube/config file present before starting minikube.
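A minimal sketch of that fix, assuming the default file locations (back the file up instead of deleting it if it also contains other clusters):
$ mv ~/.kube/config ~/.kube/config.bak
$ minikube start
$ kubectl config view --minify | grep server
The last command should now print the minikube API server address on port 8443 rather than localhost:8080.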

Related

Resolving Minikube MetalLB ImagePullBackOff

I am moving from Docker Desktop to Minikube and have been having some trouble getting MetalLB to work properly. I am starting Minikube on macOS Monterey.
I've started a Minikube profile using the command below:
minikube start -p myprofile --cpus=4 --memory='32g' --disk-size='100000mb'
--driver=hyperkit --kubernetes-version=v1.21.8 --addons=metallb
When I check the pods for MetalLB, they are in an ImagePullBackOff status. The pods are trying to pull images docker.io/metallb/controller:v0.9.6 and docker.io/metallb/speaker:v0.9.6 respectively.
NAME READY STATUS RESTARTS AGE
controller-5fd6788656-jvj4m 0/1 ImagePullBackOff 0 26m
speaker-ctdmw 0/1 ImagePullBackOff 0 37m
After running eval $(minikube -p myprofile docker-env) and manually pulling through docker pull docker.io/metallb/speaker:v0.9.6, I get the error:
Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on <ip-address>:53: read udp <ip-address>:49978-><ip-address>:53: i/o timeout
I'm not certain if it's useful, but after SSHing into the Minikube node, I've also verified ping google.com does not return a result.
When starting my Minikube profile, I had the following output:
😄 [myprofile] minikube v1.28.0 on Darwin 12.3.1
🆕 Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node myprofile in cluster myprofile
🔄 Restarting existing hyperkit VM for "myprofile" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.21.8 on Docker 20.10.20 ...
🔎 Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    ▪ Using image metallb/speaker:v0.9.6
    ▪ Using image metallb/controller:v0.9.6
🌟 Enabled addons: storage-provisioner, metallb, default-storageclass
❗ /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.21.8.
    ▪ Want kubectl v1.21.8? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "myprofile" cluster and "default" namespace by default

Unable to start Kube cluster

I am trying to set up a Kubernetes cluster using Oracle VM VirtualBox. The kubeadm command is failing to start the cluster.
It waits at the following step:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
Then it fails with the following:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
OS: Ubuntu 16.04-xenial, Docker version: 18.09.7, Kube version: Kubernetes v1.23.5, Cluster type: Flannel
OS: Ubuntu 16.04-xenial, Docker version: 20.10.7, Kube version: Kubernetes v1.23.5, Cluster type: Calico
What I tried so far, with help of Google:
turned off swap - which was already done
the Docker/Kubernetes version combinations above
restarted the kubelet service
ensured that the static IPs have been allocated, and other prerequisites
other bits I do not remember
Can anyone assist? I am new to Kube.
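Not a confirmed answer, but a common cause of exactly this kubelet health-check failure with Kubernetes v1.22+ on Docker is a cgroup driver mismatch (kubelet defaults to systemd while Docker defaults to cgroupfs). A minimal check and fix, assuming Docker is the container runtime and that /etc/docker/daemon.json can be overwritten:
$ sudo journalctl -xeu kubelet | tail -n 50
If the log complains about the cgroup driver, switch Docker to systemd and retry:
$ echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl restart docker
$ sudo kubeadm reset -f
$ sudo kubeadm init   # re-run with your original flags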

Kubernetes HA master set up

I have made an HA Kubernetes cluster. First I added a node and joined the other node in the master role.
I basically did the multi-etcd setup. This worked fine for me. I did the failover testing, which also worked fine. Now the problem: once I was done working, I drained and deleted the other node and then shut down the other machine (a VM on GCP). But then my kubectl commands don't work... Let me share the steps:
kubectl get nodes (when multi-node is set up)
NAME STATUS ROLES AGE VERSION
instance-1 Ready <none> 17d v1.15.1
instance-3 Ready <none> 25m v1.15.1
masternode Ready master 18d v1.16.0
kubectl get nodes (when I shut down my other node)
root@masternode:~# kubectl get nodes
The connection to the server k8smaster:6443 was refused - did you specify the right host or port?
Any clue?
After rebooting the server you need to do the steps below:
sudo -i
swapoff -a
exit
strace -eopenat kubectl version
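Note that swapoff -a only lasts until the next reboot. A sketch of making it permanent, assuming the swap entry lives in /etc/fstab:
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab
This comments out the swap line so kubelet does not hit the same problem after the next reboot.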

Minikube does not start, kubectl connection to server was refused

Scouring Stack Overflow solutions for similar problems did not resolve my issue, so I'm hoping that sharing what I'm currently experiencing will help with debugging.
A small preface: I initially installed minikube/kubectl a couple of days back. I went ahead and tried following the minikube tutorial today and am now experiencing issues. I'm following the minikube getting started guide.
I am on macOS. My versions:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: net/http: TLS handshake timeout
$ minikube version
minikube version: v0.26.1
$ vboxmanage --version
5.1.20r114629
The following is a string of commands I've run to check responses.
$ minikube start
Starting VM...
Getting VM IP address...
Moving files into cluster...
E0503 11:08:18.654428 20197 start.go:234] Error updating cluster: downloading binaries: transferring kubeadm file: &{BaseAsset:{data:[] reader:0xc4200861a8 Length:0 AssetName:/Users/philipyoo/.minikube/cache/v1.10.0/kubeadm TargetDir:/usr/bin TargetName:kubeadm Permissions:0641}}: Error running scp command: sudo scp -t /usr/bin output: : wait: remote command exited without exit status or exit signal
$ minikube status
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.103
Edit:
I don't know what happened, but checking the status again returned "Misconfigured". I ran the recommended command minikube update-context, and now minikube ip points to "172.17.0.1". Pinging this IP returns request timeouts, 100% packet loss. I double-checked the context and I'm still using "minikube" for both context and cluster:
$ kubectl config get-clusters
$ kubectl config get-contexts
$ kubectl get pods
The connection to the server 192.168.99.103:8443 was refused - did you specify the right host or port?
Reading github issues, I ran into this one: kubernetes#44665. So...
$ ls /etc/kubernetes
ls: /etc/kubernetes: No such file or directory
Only the last few entries
$ minikube logs
May 03 18:10:48 minikube kubelet[3405]: E0503 18:10:47.933251 3405 event.go:209] Unable to write event: 'Patch https://192.168.99.103:8443/api/v1/namespaces/default/events/minikube.152b315ce3475a80: dial tcp 192.168.99.103:8443: getsockopt: connection refused' (may retry after sleeping)
May 03 18:10:49 minikube kubelet[3405]: E0503 18:10:49.160920 3405 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.99.103:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:51 minikube kubelet[3405]: E0503 18:10:51.670344 3405 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.103:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:53 minikube kubelet[3405]: W0503 18:10:53.017289 3405 status_manager.go:459] Failed to get status for pod "kube-controller-manager-minikube_kube-system(c801aa20d5b60df68810fccc384efdd5)": Get https://192.168.99.103:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube: dial tcp 192.168.99.103:8443: getsockopt: connection refused
May 03 18:10:53 minikube kubelet[3405]: E0503 18:10:52.595134 3405 rkt.go:65] detectRktContainers: listRunningPods failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I'm not exactly sure how to ping an https URL, but if I ping the IP:
$ ping 192.168.99.103
PING 192.168.99.103 (192.168.99.103): 56 data bytes
64 bytes from 192.168.99.103: icmp_seq=0 ttl=64 time=4.632 ms
64 bytes from 192.168.99.103: icmp_seq=1 ttl=64 time=0.363 ms
64 bytes from 192.168.99.103: icmp_seq=2 ttl=64 time=0.826 ms
^C
--- 192.168.99.103 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.363/1.940/4.632/1.913 ms
Looking at the kubeconfig file...
$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
- cluster:
    certificate-authority: /Users/philipyoo/.minikube/ca.crt
    server: https://192.168.99.103:8443
  name: minikube
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: <removed>
    client-key-data: <removed>
- name: minikube
  user:
    client-certificate: /Users/philipyoo/.minikube/client.crt
    client-key: /Users/philipyoo/.minikube/client.key
And to make sure my keys/certs are there:
$ ls ~/.minikube
addons/ ca.pem* client.key machines/ proxy-client.key
apiserver.crt cache/ config/ profiles/
apiserver.key cert.pem* files/ proxy-client-ca.crt
ca.crt certs/ key.pem* proxy-client-ca.key
ca.key client.crt logs/ proxy-client.crt
Any help in debugging is super appreciated!
For posterity, the solution to this problem was to delete the .minikube directory in the user's home directory and then try again. This often fixes strange minikube problems.
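A minimal sketch of that reset, assuming nothing else in ~/.minikube or the minikube kubectl context needs to be kept:
$ minikube stop
$ rm -rf ~/.minikube
$ kubectl config delete-context minikube
$ minikube start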
I had the same issue when I started minikube.
OS: macOS High Sierra
Minikube version: v0.33.1
kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Solution 1
I just changed the permissions of the kubeadm file and started minikube as below. Then it worked fine.
sudo chmod 777 /Users/buddhi/.minikube/cache/v1.13.2/kubeadm
In general, you have to do
sudo chmod 777 <PATH_TO_THE_KUBEADM_FILE>
Solution 2
If you no longer need the existing minikube cluster, you can try this.
minikube stop
minikube delete
minikube start
Here you stop and delete the existing minikube cluster and create another one.
Hope this might help someone.

How to resolve Kubernetes API from minikube?

I can't seem to resolve the Kubernetes API address from within the minikube machine (using minikube ssh).
nslookup kubernetes.default.svc.cluster.local 10.0.0.1
Server: 10.0.0.1
Address 1: 10.0.0.1
nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
Same thing happens with just kubernetes. This happens even though it seems that the kube-dns pods are up-and-running just fine:
# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
kube-addon-manager-minikube 1/1 Running 0 9m
kube-dns-910330662-16bvw 3/3 Running 0 8m
kubernetes-dashboard-vbn9m 1/1 Running 0 8m
There is no KubeDNS info in cluster-info (but nothing major in the dump):
# kubectl cluster-info
Kubernetes master is running at https://192.168.64.26:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Version for kubectl:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-07-26T00:12:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
and minikube:
# minikube version
minikube version: v0.21.0
This happens on macOS, both with xhyve and VirtualBox as drivers.
There are quite a few minikube/dns issues in the minikube issue tracker, but none of them quite like this. What should the kubectl cluster-info output look like for minikube that has kube-dns working? What about the nslookup, shouldn't that work?
EDIT. It seems like kubectl cluster-info for minikube is returning what it should, but I can't even resolve kubernetes from inside the kube-dns pod either:
# kubectl -n kube-system exec -it kube-dns-910330662-1n64f -c kubedns -- nslookup kubernetes.default.svc.cluster.local localhost
Server: 127.0.0.1
Address 1: 127.0.0.1 localhost
nslookup: can't resolve 'kubernetes.default.svc.cluster.local': Name does not resolve
which really should work?
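For what it's worth, the node that minikube ssh drops you into resolves names through the host/NAT DNS, not through kube-dns, so cluster service names are normally checked from inside a pod instead; also, the cluster DNS service in minikube of this era usually sits at 10.0.0.10, while 10.0.0.1 is the kubernetes API service. A sketch, assuming the busybox image can be pulled:
$ kubectl run dnstest --image=busybox --restart=Never -it --rm -- nslookup kubernetes.default.svc.cluster.local
If that resolves, kube-dns is healthy and only lookups from the VM itself are expected to fail.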