minikube stops randomly and can't run kubectl command - kubernetes

Sometimes when Minikube is already running and I try to run any kubectl command (like kubectl get pods) I get this error:
Unable to connect to the server: dial tcp 192.168.99.101:8443
So I stop Minikube and start it again, and all kubectl commands work fine; but after a while, if I try to run any kubectl command, I get the same error as above.
If I type minikube ip I get 192.168.99.100. Why does kubectl try to connect to 192.168.99.101 (as mentioned in the error) when Minikube is running on 192.168.99.100?
Note that I'm very new to Kubernetes.
kubectl config get-contexts gives me this output:
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube
This is minikube logs output https://pastebin.com/kb5jNRyW

This usually happens when the IP of your VM has changed and your minikube kubeconfig is still pointing to the previous IP. You can check with minikube ip and compare it against the IP of the VM that was actually created; they will be different.
You can also run minikube status; your output will be:
minikube: Running
cluster: Stopped
kubectl: Misconfigured: pointing to stale minikube-vm.
To fix the kubectl context, run minikube update-context
You can try minikube update-context. If it still doesn't work, try minikube start followed by minikube update-context; it won't download everything again, it will only start the VM if it is shut down.
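As a rough sketch of the check-and-repair sequence (assuming the VM driver really has handed out a new IP):
# IP the cluster VM is actually using
minikube ip
# IP kubectl is configured to talk to
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# if the two differ, re-point the kubeconfig at the running VM
minikube update-context
# if the VM itself is down, start it first (no re-download happens)
minikube start && minikube update-context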

Related

`kubectl` not found. If you need it, try: 'minikube kubectl -- get pods -A'

I installed minikube on Windows 10. I am able to start minikube:
C:\WINDOWS\system32>minikube start
* minikube v1.15.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the hyperv driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
But there is a warning in the above output (second-to-last line) that says
kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
After that I also executed minikube kubectl -- get pods -A.
I still get the error below when trying to run kubectl:
C:\WINDOWS\system32>kubectl
'kubectl' is not recognized as an internal or external command,
operable program or batch file.
Minikube bundles its own copy of kubectl.
So to use the kubectl that ships with minikube, you have to prefix the command arguments with minikube kubectl --. For example:
# the same as `kubectl version --client`
minikube kubectl -- version --client
For convenience, you may want to add an alias in your shell configuration.
Source: https://minikube.sigs.k8s.io/docs/handbook/kubectl/
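On Linux/macOS shells, the handbook suggests an alias along these lines (a sketch; adapt it to your own shell configuration):
# route plain `kubectl` invocations through minikube's bundled kubectl
alias kubectl="minikube kubectl --"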
Here kubectl is wrapped by minikube.
Don't forget to add a -- after minikube kubectl:
minikube kubectl -- describe pod kube-scheduler-minikube --namespace kube-system
minikube kubectl -- get pods --namespace kube-system
You have installed minikube, but kubectl is not part of the minikube package.
When you run minikube start, it tells you that kubectl is not present and that, if you need it, you can use minikube kubectl instead.
This is also mentioned here
If you already have kubectl installed, you can now use it to access your shiny new cluster
It means that kubectl might not be present on your machine, or that it has not been added to your PATH.
You can follow these instructions to install it, either by downloading the executable or by using curl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/windows/amd64/kubectl.exe
After that, add the binary to your PATH.
You can run kubectl version --client to ensure the correct version was downloaded.
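A quick sketch from a cmd prompt (C:\kubectl is just an assumed location, and setx has quirks such as a path-length limit; editing PATH through the System Properties dialog works just as well):
REM assumed layout: kubectl.exe was downloaded into the current directory
mkdir C:\kubectl
move kubectl.exe C:\kubectl\
REM append the folder to the user PATH permanently
setx PATH "%PATH%;C:\kubectl"
REM open a new terminal, then verify
kubectl version --client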
Use doskey.exe to create an alias for kubectl.
Example:
doskey kubectl="%PROGRAMFILES%\Kubernetes\Minikube\minikube.exe" kubectl -- $*
You might need to update the path if you've installed minikube somewhere else.

I cannot load the node information on kubernetes

When I run the command below, I get the following message:
bistel@BISTelResearchDev-DN03:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
While on the master node, I get the information below:
bistel@BISTelResearchDev-NN:/etc/kubernetes$ kubectl get nodes
NAME                     STATUS     ROLES    AGE   VERSION
bistelresearchdev-dn03   NotReady   <none>   62s   v1.19.3
bistelresearchdev-nn     Ready      master   57m   v1.19.3
bistel@BISTelResearchDev-NN:/etc/kubernetes$
bistelresearchdev-dn03 is the worker node, and the message The connection to the server localhost:8080 was refused - did you specify the right host or port? appears whenever I run any kubectl command on it.
I googled it a lot, but none of the suggestions worked for me.
Thanks,
kubectl is only set up to work on the master node of the cluster by default, so if you are getting this error on the worker node there is no real issue.
I can see that the issue here is the node being in NotReady status; for that you can check the following things (see the sketch after this list):
Check that kubelet is running on node bistelresearchdev-dn03 with systemctl status kubelet.
Check that a network plugin (CNI) is installed on your cluster.
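A minimal check sequence, assuming a systemd-managed kubelet and that the CNI pods live in kube-system (names vary by plugin):
# on the NotReady node: is kubelet alive, and what is it complaining about?
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50
# on the master: are the CNI / kube-proxy pods healthy, and why is the node NotReady?
kubectl get pods -n kube-system -o wide
kubectl describe node bistelresearchdev-dn03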
The first computer you ran on is missing the kube config file.
Normally kubectl expects to find it at
~/.kube/config
If you get the one off the master node and copy it onto your machine, your kubectl will see it and be able to use it.
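For example (a sketch; the user name is a placeholder, and depending on how the cluster was set up the file on the master may live at ~/.kube/config or /etc/kubernetes/admin.conf):
# on the worker / client machine
mkdir -p ~/.kube
scp <user>@bistelresearchdev-nn:~/.kube/config ~/.kube/config
kubectl get nodes   # should now reach the API server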

Execute a command on Kubernetes node from the master

I would like to execute a command on a node from the master. For example, let's say I have a worker node: kubenode01.
Now a pod (pod-test) is running on this node. Using "kubectl get pods --output=wide" on the master shows that the pod is running on this node.
Trying to execute a command on that pod from the master results in an error, e.g.:
kubectl exec -ti pod-test -- cat /etc/resolv.conf
The result is:
Error from server: error dialing backend: dial tcp 10.0.22.131:10250: i/o timeout
Any idea?
Thanks in advance
You can execute kubectl commands from anywhere as long as your kubeconfig is configured to point to the right cluster URL (kube-apiserver), with the right credentials, and the firewall allows connecting to the kube-apiserver port.
In your case, 10.0.22.131:10250 is the node's kubelet endpoint (10250 is the kubelet port), and the timeout means the kube-apiserver cannot reach the kubelet on that node, so I'd check that this IP:PORT is correct and reachable from the control plane.
Note that kubectl exec -ti pod-test -- cat /etc/resolv.conf runs on the Pod and not on the Node. If you'd like to run on the Node just simply use SSH.
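For instance, a hedged one-liner (the user name is a placeholder, and it assumes you have SSH access to the node):
# run the command on the node itself rather than inside the pod
ssh <user>@kubenode01 "cat /etc/resolv.conf"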
Update:
There are a few other alternatives here:
You can create a pod (or debug pod) with a nodeSelector that specifically makes that pod run on the specific node.
If you are trying to debug something on a pod already running on a specific node, you can also try creating a debug ephemeral container.
On newer versions of Kubernetes you can use a debug pod to run something on a specific node (see the sketches below).
✌️
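Two hedged sketches of those alternatives (the image and pod names are arbitrary, and kubectl debug node requires a reasonably recent kubectl):
# 1. a throwaway pod pinned to the node via nodeSelector
kubectl run node-shell --image=busybox --restart=Never \
  --overrides='{"spec":{"nodeSelector":{"kubernetes.io/hostname":"kubenode01"}}}' \
  -- sleep 3600
kubectl exec -it node-shell -- cat /etc/resolv.conf
# 2. on newer clusters, a debug pod that lands on the node directly
kubectl debug node/kubenode01 -it --image=busybox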

Run Kubernetes api server in minikube in verbose mode

Is it possible to run the kubernetes api-server in minikube with maximum log verbosity?
$ minikube start --v 4
didn't work for me. When I exec into the api-server container and run ps, the api-server command line doesn't have --v=4 in it. So minikube is not passing --v=4 down to the api-server.
Thanks.
There is an error in the parameters; try this instead:
minikube start --v=7
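Note that --v controls minikube's own log verbosity. If the goal is verbose logging from the kube-apiserver itself, minikube can pass flags through to individual components via --extra-config; a sketch (check minikube start --help for the exact syntax in your version):
# raise the kube-apiserver's own log level to 4
minikube start --extra-config=apiserver.v=4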

Error while running kubectl commands

I have recently installed minikube and kubectl. However, when I run kubectl get pods or any other kubectl command, I get the error
Unable to connect to the server: unexpected EOF
Does anyone know how to fix this? I am using Ubuntu Server 16.04. Thanks in advance.
The following steps can be used for further debugging.
Check the minikube local cluster status using minikube status command.
$: minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 172.0.x.y
If there is a problem with the kubectl configuration, then configure it using the kubectl config use-context minikube command.
$: kubectl config use-context minikube
Switched to context "minikube".
Check the cluster status, using kubectl cluster-info command.
$: kubectl cluster-info
Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Note: it can even be due to a very simple reason: internet speed (it happened to me just now).
I had the same problem too. I solved it after changing the server address to localhost:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/minikube/certs/ca.crt
    server: https://localhost:8443   # check it
  name: m01
...
users:
- name: m01
  user:
    client-certificate: /var/lib/minikube/certs/apiserver.crt
    client-key: /var/lib/minikube/certs/apiserver.key
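Equivalently, the address can be changed without hand-editing the file; a sketch, assuming the cluster entry is named m01 as above:
# point the m01 cluster entry at the local API server port
kubectl config set-cluster m01 --server=https://localhost:8443
kubectl cluster-info   # should now connect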
I think your Kubernetes master is not set up properly. You can check that by verifying that the following services on the master node are active and running:
etcd2.service
kube-apiserver.service Kubernetes API Server
kube-controller-manager.service Kubernetes Controller Manager
kube-scheduler.service Kubernetes Scheduler
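A quick hedged check, assuming the control-plane components run as systemd services under these names (on kubeadm-based clusters they run as static pods in kube-system instead):
# report active/inactive for each control-plane service
for svc in etcd2 kube-apiserver kube-controller-manager kube-scheduler; do
  echo -n "$svc: "; systemctl is-active "$svc"
done
# on kubeadm-based setups, check the static pods instead
kubectl get pods -n kube-system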