How to keep minikube running all the time? - minikube

The Minikube application stops every day and shows the error below:
ubuntu@ubuntu:~$ kubectl get pods
Unable to connect to the server: dial tcp 192.168.58.2:8443: connect: no route to host
After running the command below, it comes back to normal:
ubuntu@ubuntu:~$ minikube start
Please let me know if there is any way to keep it up all the time.
Output of the minikube start command:
kalpesh@kalpesh:~$ minikube start
😄 minikube v1.24.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image kubernetesui/dashboard:v2.3.1
▪ Using image kubernetesui/metrics-scraper:v1.0.7
🌟 Enabled addons: default-storageclass, storage-provisioner, dashboard
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Minikube config
kalpesh@kalpesh:~$ minikube config view
- cache: map[stock_updates_stock_updates:latest:<nil>]
- cpus: 4
- memory: 8192

This can be an issue with your firewall; try disabling it:
sudo ufw disable
OR
Get your minikube VM's IP and run the following command to create a rich firewall rule allowing all traffic from that VM to your host:
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="YOUR.IP.ADDRESS.HERE" accept'
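If you prefer to script this, the rich rule can be built from the VM's IP instead of pasted by hand. A minimal sketch, assuming the IP shown in the error above (on a real host, take it from minikube ip):

```shell
# Build a firewalld rich rule from the minikube VM's IP.
# The IP below is an example; on a real host use: MINIKUBE_IP=$(minikube ip)
MINIKUBE_IP="192.168.58.2"
RULE="rule family=\"ipv4\" source address=\"$MINIKUBE_IP\" accept"
echo "$RULE"
# Apply it (requires root):
#   sudo firewall-cmd --permanent --add-rich-rule="$RULE"
#   sudo firewall-cmd --reload
```

Allowing just the VM's address this way is less drastic than sudo ufw disable, which turns the firewall off entirely.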

Related

The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

I have set up a Kubernetes cluster with Kubespray.
After I restart the node and check its status, I get the following:
$ kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Environment:
OS : CentOS 7
Kubespray
kubelet version: 1.22.3
Need your help on this.
Regards,
Zain
This worked for me (I'm using minikube).
When checking the status by running the command minikube status, you'll probably get something like this:
E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got:
127.0.0.1:55900, want: 127.0.0.1:49736
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
To fix it, I just followed the next steps:
minikube update-context
minikube start
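The "got/want" mismatch in that error can also be pulled apart mechanically before deciding to refresh. A sketch that extracts both endpoints from the sample message above (the sed patterns are my own, not part of minikube):

```shell
# Extract the stale and expected endpoints from the status error message.
STATUS_ERR="kubeconfig endpoint: got: 127.0.0.1:55900, want: 127.0.0.1:49736"
GOT=$(echo "$STATUS_ERR" | sed -n 's/.*got: \([0-9.:]*\),.*/\1/p')
WANT=$(echo "$STATUS_ERR" | sed -n 's/.*want: \([0-9.:]*\).*/\1/p')
echo "stale endpoint: $GOT, expected: $WANT"
# When they differ, refresh the kubeconfig:
#   minikube update-context
#   minikube start
```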
The steps below can solve your issue.
kubelet may be down; run the following commands on the master node.
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
Then try using kubectl get nodes.
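One note on step 2: swapoff -a only lasts until the next reboot, which is often why the node breaks again after restarting (kubelet refuses to run with swap enabled by default). To make it permanent, the swap entry in /etc/fstab also has to be commented out. A sketch, demonstrated on a throwaway copy of fstab so nothing on your system is touched:

```shell
# kubelet won't start with swap on, and `swapoff -a` does not survive a reboot;
# commenting the swap entry in /etc/fstab makes the change permanent.
# Demonstrated on a copy; on a real master, apply the sed to /etc/fstab with sudo.
cat > /tmp/fstab.demo <<'EOF'
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/ swap / s/^/#/' /tmp/fstab.demo
grep '^#' /tmp/fstab.demo
```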
Thank you Sai for your inputs. The journalctl -xeu kubelet output was Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory, so I enabled and restarted the cri-dockerd service:
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
Then sudo systemctl start kubelet. Finally it works for me.
#kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
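A quick sanity check for this class of failure is to test for the socket kubelet is dialing before restarting it. A sketch (the socket path comes from the error message above; the messages are illustrative, not kubelet output):

```shell
# Check whether the CRI socket kubelet is trying to dial actually exists.
SOCK=/var/run/cri-dockerd.sock
if [ -S "$SOCK" ]; then
  MSG="CRI socket present; safe to start kubelet"
else
  MSG="CRI socket missing; enable and restart cri-dockerd first"
fi
echo "$MSG"
```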
This link gives more info: https://github.com/kubernetes-sigs/kubespray/issues/8734
Regards, Zain

minikube and how to debug api server error

I don't get what is going on with minikube. Below are the steps I undertook to fix the problem with a stopped apiserver.
1) I don't know why the API server stopped. How do I debug? This folder is empty:
--> EMPTY ~/.minikube/logs/
2) After a stop I start again and minikube says all is well. I do a status check and I get apiserver: Error. So... no logs. How do I debug?
3) And finally, what would cause an apiserver error?
Thanks
~$ minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
~$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
~$ minikube start
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Machine exists, restarting cluster components...
Verifying kubelet health ...
Verifying apiserver health .....Kubectl is now configured to use the cluster.
Loading cached images from config file.
Everything looks great. Please enjoy minikube!
~$ minikube status
host: Running
kubelet: Running
apiserver: Error

minikube stops randomly and can't run kubectl command

Sometimes when Minikube is already running and I try to run any kubectl command (like kubectl get pods) I get this error:
Unable to connect to the server: dial tcp 192.168.99.101:8443
So I stop Minikube and start it again and all kubectl commands work fine, but then after a while if I try to run any kubectl command I get the same error as above.
If I type minikube ip I get 192.168.99.100. Why does kubectl try to connect to 192.168.99.101 (as mentioned in the error) when Minikube is running on 192.168.99.100?
Note that I'm very new to Kubernetes.
kubectl config get-contexts gives me this output:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
This is minikube logs output https://pastebin.com/kb5jNRyW
This usually happens when the IP of your VM has changed and minikube is still pointing to the previous IP. You can check with minikube ip and compare it to the IP of the created VM; they will be different.
You can also try minikube status; your output will be:
minikube: Running
cluster: Stopped
kubectl: Misconfigured: pointing to stale minikube-vm.
To fix the kubectl context, run minikube update-context
You can try minikube update-context, and if it still doesn't work, try minikube start followed by minikube update-context. It won't download everything again; it will only start the VM if it is shut down.
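The stale-IP diagnosis above can be scripted: pull the server host out of the kubeconfig entry and compare it to minikube ip. A sketch using a sample server value (on a real machine, take it from kubectl config view as shown in the comment):

```shell
# Extract the host from the kubeconfig server URL to compare with `minikube ip`.
# Sample value; on a real setup:
#   SERVER=$(kubectl config view -o jsonpath='{.clusters[?(@.name=="minikube")].cluster.server}')
SERVER="https://192.168.99.101:8443"
HOST=${SERVER#https://}   # strip scheme
HOST=${HOST%:*}           # strip port
echo "$HOST"
# If $HOST differs from `minikube ip`, run: minikube update-context
```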

Kubectl with minikube - Error restarting cluster: kubeadm.yaml

I have Kubernetes + minikube installed (macOS 10.12.6), but while trying to start minikube I get constant errors:
$: minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0601 15:24:50.571967 67567 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: Process exited with status 1
I've also tried minikube delete followed by minikube start, which didn't help (Minikube never start - Error restarting cluster). kubectl config use-context minikube was also done.
I have minikube version: v0.26.1
It looks to me like the kubeadm.yaml file is missing or misplaced.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
In your case, the steps below should complete the initialization process successfully:
minikube stop
minikube delete
rm -fr $HOME/.minikube
minikube start
In case you mixed Kubernetes and minikube environments, I suggest inspecting the $HOME/.kube/config file and deleting the minikube entries to avoid problems with reinitialization.
If minikube still refuses to start, please post the logs for analysis. To get a detailed log, start minikube this way:
minikube start --v=9

Unable to connect to the server: dial tcp i/o time out

When I run the kubectl version command, I get the following error message:
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
How do I resolve this?
You can get relevant information about the client-server status by using the following command:
kubectl config view
Now you can update or set the k8s context accordingly with the following command:
kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT
You can take further action on the kubeconfig file; the following command will provide you with all the necessary information:
kubectl config --help
You first have to run
minikube start
in your terminal. This will do the following for you:
Restarting existing virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.3-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.14.1 ...
🔄 Relaunching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
📯 Updating kube-proxy configuration ...
🤔 Verifying component health ......
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!
If you use minikube then you should run, kubectl config use-context minikube
If you use latest docker for desktop that comes with kubernetes then you should run, kubectl config use-context docker-for-desktop
I was facing the same issue on Ubuntu 18.04.1 LTS.
The solution provided here worked for me.
Just putting the same data here:
Get current cluster name and Zone:
gcloud container clusters list
Configure Kubernetes to use your current cluster:
gcloud container clusters get-credentials [cluster name] --zone [zone]
Hope it helps.
I had the same issue when I tried to use Kubernetes installed with Docker. It turned out that it was not enabled by default.
First I enabled Kubernetes in the Docker options, and then I changed the context to docker-for-desktop:
kubectl config get-contexts
kubectl config use-context docker-for-desktop
It solved the issue.
This problem occurs because of minikube. Restarting minikube will solve it. Run the commands below and it will work:
minikube stop
minikube delete
minikube start
I was facing the same problem accessing the GKE master from Google Cloud Shell. I followed this GCloud doc to solve it.
Open GCloud Shell
Get External IP of the current GCloud Shell with:
dig +short myip.opendns.com @resolver1.opendns.com
Add this External IP into the "Master authorized networks" section of the GKE cluster - with a CIDR suffix of /32
After that, running kubectl get nodes from the GCloud Shell worked right away.
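For reference, the authorized-network entry is just the external IP with a /32 suffix (a single-host CIDR). A small sketch of building it (the IP here is a documentation placeholder, not a real address):

```shell
# Turn the shell's external IP into a single-host CIDR for the
# "Master authorized networks" list. Example IP; on Cloud Shell use:
#   EXTERNAL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
EXTERNAL_IP="203.0.113.7"
echo "${EXTERNAL_IP}/32"
```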
I got a similar problem when I ran
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
And here's what I tried and what finally worked.
I installed Docker Desktop on Mac (Version 2.0.0.3) first.
Then I installed kubectl with the command:
$ brew install kubectl
.....
==> Pouring kubernetes-cli-1.16.0.high_sierra.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
rm '/usr/local/bin/kubectl'
To force the link and overwrite all conflicting files:
brew link --overwrite kubernetes-cli
To list all files that would be deleted:
brew link --overwrite --dry-run kubernetes-cli
Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
.....
That doesn't matter; we have already got kubectl.
Then I installed minikube with the command:
$ brew cask install minikube
...
==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
🍺 minikube was successfully installed!
Start minikube the first time (VirtualBox not installed):
$ minikube start
😄 minikube v1.4.0 on Darwin 10.13.6
💿 Downloading VM boot image ...
> minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [-] 100.00% 7.75 MiB p/s 18s
🔥 Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🔄 Retriable failure: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
...
💣 Unable to start VM
❌ Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
💡 Suggestion: Install VirtualBox, or select an alternative value for --vm-driver
📘 Documentation: https://minikube.sigs.k8s.io/docs/start/
⁉️ Related issues:
▪ https://github.com/kubernetes/minikube/issues/3784
Install VirtualBox, then start minikube a second time (VirtualBox installed):
$ minikube start
😄 minikube v1.4.0 on Darwin 10.13.6
E1002 13:37:01.006849 35511 cache_images.go:79] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 failed: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
🐳 Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
E1002 13:37:33.632298 35511 start.go:706] Error caching images: Caching images for kubeadm: caching images: caching image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
❌ Unable to load cached images: loading cached images: loading image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: stat /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: no such file or directory
🔥 Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
💾 Downloading kubeadm v1.16.0
💾 Downloading kubelet v1.16.0
🚜 Pulling images ...
🚀 Launching Kubernetes ...
💣 Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new/choose
❌ Problems detected in kube-addon-manager [b17d460ddbab]:
error: no objects passed to apply
error: no objects passed to apply
Start minikube a third time:
$ minikube start
😄 minikube v1.4.0 on Darwin 10.13.6
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Using the running virtualbox "minikube" VM ...
⌛ Waiting for the host to be provisioned ...
🐳 Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
🔄 Relaunching Kubernetes using kubeadm ...
...but it still got stuck on Relaunching.
I enabled Kubernetes in the Docker Preferences settings, restarted my Mac, and switched the Kubernetes context to docker-for-desktop.
Now kubectl version works, but with the docker-for-desktop context:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Start minikube a fourth time (after a system restart, maybe):
$ minikube start
😄 minikube v1.4.0 on Darwin 10.13.6
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Starting existing virtualbox VM for "minikube" ...
⌛ Waiting for the host to be provisioned ...
🐳 Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
🔄 Relaunching Kubernetes using kubeadm ...
⌛ Waiting for: apiserver proxy etcd scheduler controller dns
🏄 Done! kubectl is now configured to use "minikube"
Finally, it works with the minikube context:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
I checked the firewall port and it was closed; I opened it and it started working.
If you are using azure and have recently changed your password try this:
az account clear
az login
After logging in successfully:
az aks get-credentials --name project_name --resource-group resource_group_name
Now when you run
kubectl get nodes
you should see something. Also, make sure you are using the correct kubectl context.
My problem was that I use two virtual networks on my VMs. The network Kubernetes uses is always that of the default gateway, but the communication network between my VMs was the other one.
You can force Kubernetes to use a different network with the following flags:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=xxx.xxx.xxx.xxx --apiserver-advertise-address=xxx.xxx.xxx.xxx
Replace xxx.xxx.xxx.xxx with the communication IP address of your K8s master.
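To find the value to substitute, you need the IP bound to the interface of the communication network rather than the default gateway's. A sketch of extracting it from ip addr output, run here on a canned sample line (the interface name and addresses are made-up examples):

```shell
# Extract the host IP from an `ip -4 -o addr show dev eth1` style line.
# Sample line; on a real node use: LINE=$(ip -4 -o addr show dev eth1)
LINE="3: eth1    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth1"
ADDR=$(echo "$LINE" | awk '{print $4}' | cut -d/ -f1)
echo "$ADDR"
# Then pass it along:
#   sudo kubeadm init --apiserver-advertise-address="$ADDR" ...
```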
I have two contexts, and I got this error when I was in the incorrect one of the two, so I switched the context and the error was resolved.
To see your current context: kubectl config current-context
To see the contexts you have: kubectl config view
To switch context: kubectl config use-context context-cluster-name
Adding this here so it can help someone with a similar problem.
In our case, we had to configure our VPC network to export its custom routes for the VPC peering "gke-jn7hiuenrg787hudf-77h7-peer" in project "" to the control plane's VPC network.
The control plane's VPC network is already configured to import custom routes. This provides a path for the control plane to send packets back to on-premises resources.
Step-1: Run this command to see the list of contexts:
kubectl config view
Step-2: Now switch to the context where you want to work:
kubectl config use-context [context-name]
For example:
kubectl config use-context docker-desktop
I faced the same issue; it might be that your IP was not added to the authorized networks list of the Kubernetes cluster. Simply navigate to:
GCP console -> Kubernetes Engine -> Click into the Clusters you wish to interact with
On the target cluster's page, look for:
Control plane authorized networks -> click pencil icon -> Add Authorized Network
Add your external IP with a CIDR suffix of /32 (xxx.xxx.xxx.xxx/32).
One way to get your external IP on terminal / CMD:
curl -4 ifconfig.co