I don't get what is going on with minikube. Below are the steps I took to fix the problem with a stopped apiserver.
1) I don't know why the apiserver stopped. How do I debug? This folder is empty:
--> EMPTY ~/.minikube/logs/
2) After the stop I start again and minikube says all is well. I do a status check and I get apiserver: Error. So... no logs... how do I debug?
3) And finally, what would cause an apiserver error?
Thanks
~$ minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
~$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
~$ minikube start
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Machine exists, restarting cluster components...
Verifying kubelet health ...
Verifying apiserver health .....
Kubectl is now configured to use the cluster.
Loading cached images from config file.
Everything looks great. Please enjoy minikube!
~$ minikube status
host: Running
kubelet: Running
apiserver: Error
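For reference, two commands that usually give more detail than the empty ~/.minikube/logs/ directory (a sketch, not from the original post, assuming the VM is reachable over minikube ssh):
$ minikube logs                                   # pulls component logs straight from the VM
$ minikube ssh "docker ps -a | grep apiserver"    # see whether the apiserver container keeps exiting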
I have set up a Kubernetes cluster with Kubespray.
After I restart the node and check its status, I get the output below:
$ kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Environment:
OS : CentOS 7
Kubespray
kubelet version: 1.22.3
Need your help on this.
Regards,
Zain
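A minimal first check (not part of the original post, assuming a systemd-managed kubelet as Kubespray sets up) is to look at the kubelet service and the control-plane containers on the master node:
$ sudo systemctl status kubelet                   # is the kubelet running at all?
$ sudo journalctl -u kubelet -n 50                # recent kubelet log lines
$ sudo crictl ps -a | grep kube-apiserver         # is the apiserver container up or crash-looping?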
This worked for me; I'm using minikube.
When checking the minikube status by running the command minikube status, you'll probably get something like this:
E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got: 127.0.0.1:55900, want: 127.0.0.1:49736
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
To fix it, I just followed these steps:
minikube update-context
minikube start
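To confirm the kubeconfig endpoint now matches the running cluster (a small follow-up check, not from the original answer):
$ kubectl config view --minify | grep server      # the endpoint kubectl will use
$ minikube status                                 # kubeconfig should now report Configured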
The steps below can solve your issue.
The kubelet may be down; run the following commands on the master node (a note on making the swap change persistent follows the list):
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
Then try using kubectl get nodes.
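Note that swapoff -a only disables swap until the next reboot. A common way to make it persistent (an assumption about your setup, since /etc/fstab isn't shown in the thread) is to comment out the swap entry:
$ sudo swapoff -a
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab        # comment out the swap line so it stays off after reboot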
Thank you Sai for your inputs. In the journalctl -xeu kubelet output I was getting Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory, so I enabled and restarted the cri-dockerd service:
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
Then sudo systemctl start kubelet, and finally it works for me.
#kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
This link will give more info: https://github.com/kubernetes-sigs/kubespray/issues/8734
Regards, Zain
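If the same error comes back, a quick way to confirm the runtime socket actually exists (a sketch, using the socket path from the error message) is:
$ sudo systemctl status cri-dockerd.service       # should be active (running)
$ ls -l /var/run/cri-dockerd.sock                 # the socket the kubelet is trying to dial
$ sudo journalctl -xeu kubelet | tail -n 20       # re-check kubelet errors after the restart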
My minikube instance stops every day and shows the error below:
ubuntu@ubuntu:~$ kubectl get pods
Unable to connect to the server: dial tcp 192.168.58.2:8443: connect: no route to host
After running the command below, it comes back to normal:
ubuntu@ubuntu:~$ minikube start
Please let me know if there is any way to keep it up all the time.
Output of the minikube start command:
kalpesh@kalpesh:~$ minikube start
😄 minikube v1.24.0 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image kubernetesui/dashboard:v2.3.1
▪ Using image kubernetesui/metrics-scraper:v1.0.7
🌟 Enabled addons: default-storageclass, storage-provisioner, dashboard
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Minikube config:
kalpesh@kalpesh:~$ minikube config view
- cache: map[stock_updates_stock_updates:latest:<nil>]
- cpus: 4
- memory: 8192
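One way to bring minikube back up automatically after a reboot (not from the thread; this assumes the docker driver shown above and that the Docker daemon itself starts on boot) is a cron @reboot entry for your user:
$ crontab -e
# add this line, adjusting the path to match the output of `which minikube`:
@reboot /usr/local/bin/minikube start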
It can be an issue with your firewall; try disabling it:
sudo ufw disable
OR
Get your minikube VM's IP and run the following command to create a rich firewall rule that allows all traffic from this VM to your host:
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="YOUR.IP.ADDRESS.HERE" accept'
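Because the rule above is added with --permanent, it only takes effect after the firewall configuration is reloaded (a small follow-up, assuming firewalld is the active firewall):
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-rich-rules             # confirm the new rule is present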
I have installed a Kubernetes cluster using kubeadm. It works fine, but I accidentally restarted the host machine where the master was running. After the restart the kubelet was not running, and I had to do
kubeadm reset
and
kubeadm init
What should I do to bring my cluster up automatically after a host machine restart?
There is no need to do kubeadm reset and kubeadm init in such cases.
To start kubelet service during the current session use:
systemctl start kubelet
To start service automatically during the boot, you must enable it using:
systemctl enable kubelet
Keep in mind that if you are running the above commands as a non-root user, you will have to use sudo.
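Putting that together, a minimal sequence on the master node (assuming a systemd-based distribution, which kubeadm installs normally use) would be:
$ sudo systemctl enable --now kubelet             # start the kubelet now and on every boot
$ systemctl is-enabled kubelet                    # should print "enabled"
$ kubectl get nodes                               # the control plane comes back without kubeadm reset/init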
I have Kubernetes + minikube installed (macOS 10.12.6), but while trying to start minikube I get constant errors:
$: minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0601 15:24:50.571967 67567 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: Process exited with status 1
I've also tried minikube delete and then minikube start, but that didn't help (Minikube never starts - Error restarting cluster). kubectl config use-context minikube was also done.
I have minikube version: v0.26.1
It looks to me like the kubeadm.yaml file is missing or misplaced.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
In your case, the steps below should complete the initialization process successfully:
minikube stop
minikube delete
rm -fr $HOME/.minikube
minikube start
In case you mixed Kubernetes and minikube environments, I suggest inspecting the $HOME/.kube/config file
and deleting the minikube entries to avoid problems with reinitialization (see the sketch below).
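A minimal sketch of that cleanup, assuming the default entry names that minikube writes into kubeconfig:
$ kubectl config delete-context minikube
$ kubectl config delete-cluster minikube
$ kubectl config unset users.minikube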
If minikube still refuses to start, please post logs to analyze. To get a detailed log, start minikube this way:
minikube start --v=9
When I do a minikube start I get:
Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
However, if I do this again I get the same output.
Is this creating a new cluster, reprovisioning the existing cluster or just doing nothing?
minikube start resumes the old cluster that is paused/stopped.
But if you delete the minikube machine with minikube delete, or manually delete it from VirtualBox, minikube start will create a new cluster.
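A quick way to see that state is preserved across stop/start (a sketch, using a hypothetical nginx deployment purely as a marker):
$ kubectl create deployment probe --image=nginx   # marker object
$ minikube stop
$ minikube start
$ kubectl get deployment probe                    # still there: the cluster was resumed, not recreated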
Minikube is a lightweight Kubernetes implementation that creates a VM on your
local machine and deploys a cluster containing only one node.
So basically the minikube VM is the only node in the Kubernetes cluster,
and repeated attempts to start minikube still target that single node.
$ kubectl get nodes
This returns the minikube node.
But I came across this link, which provides info on creating multiple nodes: multi node cluster (see the sketch below).
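A minimal sketch of such a multi-node start, assuming minikube v1.10.1 or later (the release that added the --nodes flag) and a hypothetical profile name:
$ minikube start --nodes 2 -p multinode-demo      # one control-plane node plus one worker
$ kubectl get nodes                               # should now list two nodes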