I installed minikube and VirtualBox on OS X and everything was working fine until I executed
minikube delete
After that I tried
minikube start
and got the following
😄 minikube v1.5.2 on Darwin 10.15.1
✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
...
I do not want to use a different driver, so why is this happening? I reinstalled minikube, but the problem persisted. I could set which driver to use with:
minikube start --vm-driver=virtualbox
But I would rather have the default behavior after a fresh install. How can I set the default driver?
After googling a bit, I found how to do it here:
minikube config set vm-driver virtualbox
The output of this command is:
⚠️ These changes will take effect upon a minikube delete and then a minikube start
So make sure to run
minikube delete
and
minikube start
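To double-check that the new default took effect, you can read the setting back (minikube config get is the read counterpart of config set):
$ minikube config get vm-driver
virtualbox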
Related
I am trying to run minikube v0.22.1 and kubectl v1.7.5 on macOS with VirtualBox.
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
However, all kubectl commands fail with "connection refused - did you specify the right host or port?"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
The solution proposed here (sudo ifconfig vboxnet0 up) did not help; the vboxnet0 interface is up.
Any ideas or suggestions are highly appreciated.
If you run
kubectl config get-contexts
Do you get the following?
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube
If not, your kubectl context is not set up correctly. To set up the context correctly, run this:
kubectl config use-context minikube
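As a quick sanity check afterwards, you can confirm the active context and that the API server actually answers (both are standard kubectl commands):
$ kubectl config current-context
minikube
$ kubectl cluster-info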
The VM may be stopped or saved for some reason. Sometimes, after you enable/disable addons, you may need to restart it.
1) Restart the minikube VM. First, stop it:
$ minikube stop
2) Start it again, making sure you assign enough CPU/memory (the following is just an example of how to pass it; adjust it based on the resources available on your machine):
$ minikube start --memory=10000 --cpu 4
If this didn't work out, the following will help you learn more about the underlying cause of the problem:
Check the minikube status and make sure the status is Running:
$ minikube status
Or, check the minikube logs:
minikube logs
Finally, if you couldn't fix it, you may need to delete it and start from scratch:
$ minikube delete && minikube start
Ref: https://github.com/kubernetes/minikube/issues/1498
I will just drop this in here in case anyone finds this question.
As of right now I don't know the versions of the OP's setup, so I'm going to assume they had the latest version available when they posted, which was 0.22.1.
Description
I had a similar issue. The cluster was timing out irregularly. One moment I got answers using kubectl cluster-info dump, the next I didn't. Then it worked again, and then it didn't. I found a GitHub bug report with a solution.
Solution
Remove your VirtualBox VM.
Remove the .minikube folder (in your home directory).
Remove the minikube executable.
Install version 0.19.0.
Verify that minikube is working with kubectl.
Versions
OS: Windows 10 (Home edition)
Minikube bugged version: 0.22.2
Minikube working version: 0.19.0
Kubectl (client): v1.7.0
Kubectl (server): v1.6.0
EDIT:
I kept having some issues with minikube after I posted this original answer. I found something that fixed the issue completely.
It's related to the dynamic memory setting in Hyper-V.
Solution
1. Turn off the Hyper-V minikube VM.
2. Go to the VM's settings.
3. Turn off dynamic memory allocation.
4. Assign a decent amount of memory.
5. Save and turn the VM on again.
This should work with any minikube version. See this GitHub issue for progress on an automated solution.
When debugging the minikube commands, e.g.
$ minikube dashboard --loglevel 0 --logtostderr
some proxy issues became visible and could be solved.
I ran into this situation this morning (another Monday!) on macOS 11.3 with minikube v1.19.0.
I ran minikube status and got the following:
E0503 14:15:43.912005 7308 status.go:412] kubeconfig endpoint: got: 127.0.0.1:64041, want: 127.0.0.1:56537
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
Seemed like good advice, so I ran minikube update-context and got this:
🎉 "minikube" context has been updated to point to 127.0.0.1:56537
💗 Current context is "minikube"
After which everything worked like it did on Friday.
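If you want to verify the fix yourself, you can print the server endpoint that kubectl is using and compare it against the port in the warning (a standard kubectl query; the port shown below is just the one from this session):
$ kubectl config view --minify --output 'jsonpath={..server}'
https://127.0.0.1:56537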
After Linux security OS patching and a reboot, we were unable to start the Kubernetes service and received the error below.
Error message: "The connection to the server 192.168.1.101:8443 was refused" received while starting the Kubernetes service.
This issue happened because the systemd package was updated during the security patching.
So we took the below actions on each master node to bring up the application:
1. Update the /usr/lib/systemd/system/kubelet.service file by removing the below two lines:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
2. Update the /usr/lib/systemd/system/kube-proxy.service file by removing the below two lines:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
3. Run the kube-restart.sh on the master nodes.
4. Run the kube-restart.sh on the worker nodes.
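For reference, kube-restart.sh is site-specific and not shown in this answer; assuming the components run as systemd units, the rough equivalent after editing the unit files would be:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet kube-proxy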
Update: I am using minikube version: v1.25.2
The command mentioned in this thread did NOT work:
minikube start --memory=10000 --cpu 4 #this will FAIL
This, however, DID WORK (use --cpus instead; I also changed the values to show the minimum requirements for Docker):
minikube start --memory=1800 --cpus=2 # this will work
minikube start --memory=1800 --cpus 2 # this will also work
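If you don't want to pass these flags on every start, the same values can be persisted as defaults (memory and cpus are valid minikube config properties; they take effect on the next minikube start):
minikube config set memory 1800
minikube config set cpus 2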
minikube delete && minikube start
sudo minikube start --vm-driver=none (to start minikube again)
This solved my problem.
minikube delete
minikube start
Just restarting the container worked for me.
I am trying to run minikube start --vm-driver=virtualbox or minikube start --vm-driver=hyperv after enabling the hypervisor, but I am getting the error below.
Can someone please help me out of this?
For the future, please try to post text instead of a picture in your question. The second thing is that you are using quite old versions.
Usually errors like:
status error: host: state: machine does not exist minikube windows
The "minikube" host does not exist
are shown when you have some "leftovers" from a previous minikube cluster.
It's hard to determine the root cause exactly, as that would require the exact steps you performed.
1. Delete the previous minikube and start again
You have to use a command like:
minikube delete
or, if you specified the name of the cluster, use:
minikube delete -p <your-cluster-name>
After that you should start minikube again.
minikube start --vm-driver=<depends-on-your-needs>
Here you have all the drivers you can use with minikube.
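For example, to recreate the cluster on VirtualBox, which is one of the drivers discussed in this thread:
minikube delete
minikube start --vm-driver=virtualbox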
2. Use the --force flag.
It would look like:
minikube start --vm-driver=hyperv --force
but it's also recommended to run minikube delete before this command.
3. Steps to run Minikube on Windows.
As you mentioned, you already have Docker, but if you need to reinstall it, you can find a good tutorial in the official Docker docs.
A good tutorial on how to run Minikube on Windows 10 can be found here.
You can also check this StackOverflow thread for more updated version.
4. Further issues with starting minikube
If you still have issues running minikube, please update your question with debug logs. Instructions can be found here.
minikube start --vm-driver=hyperv --v=7
or
minikube logs
I use minikube v1.6.2 and kubectl 1.17.
I start minikube without VirtualBox, with:
sudo minikube start --vm-driver none
Now, to stop it, I do:
sudo minikube stop
minikube stop # I don't know which one is the good one, but I do both
but, after that, when I do:
kubectl get po
I still get the pods listing. The only way to stop it is to actually reboot my machine.
Why is this happening, and how should I fix it?
minikube stop when used with --vm-driver=none does not do any cleanup of the pods. As mentioned here:
When minikube starts without a hypervisor, it installs a local kubelet
service on your host machine, which is important to know for later.
Right now it seems that minikube start is the only command aware of
--vm-driver=none. Running minikube stop keeps resulting in errors related to docker-machine, and as luck would have it also results in
none of the Kubernetes containers terminating, nor the kubelet service
stopping.
Of course, if you wish to actually terminate minikube, you will need
to execute service kubelet stop and then ensure the k8s containers are
removed from the output in docker ps.
If you want an overview of the none (bare-metal) driver, you can find it here.
Also, as a workaround, you can stop and remove all Docker containers that have 'k8s' in their name by executing the following commands: docker stop $(docker ps -q --filter name=k8s) and docker rm $(docker ps -aq --filter name=k8s).
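Put together, a minimal cleanup sequence for the none driver might look like this (bash syntax; whether kubelet is managed by systemd or SysV init depends on your distribution):
$ sudo systemctl stop kubelet    # or: sudo service kubelet stop
$ docker stop $(docker ps -q --filter name=k8s)
$ docker rm $(docker ps -aq --filter name=k8s)
$ docker ps --filter name=k8s    # should now list nothing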
Please let me know if that helped.
When trying to run minikube with hyperkit, I was getting errors about xhyve not being installed. I installed that and reran minikube start --vm-driver hyperkit with no issues.
I was under the impression that hyperkit was a replacement for xhyve, not a supplement to it.
When I run ps I see both com.docker.hyperkit and docker-machine-driver-xhyve running.
How can I confirm that minikube is correctly using hyperkit?
Docker for Mac has changed its virtualization layer a few times in recent years, and it can confuse users after environment updates.
If the process list shows both com.docker.hyperkit and xhyve processes, it is probably due to a docker-machine environment that was previously set up using docker-machine-driver-xhyve.
You may consider cleaning up the installation by
stopping Docker (from the command line or from the tray icon),
then removing the machines created by the docker-machine tool, for example:
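(docker-machine ls and docker-machine rm are the standard commands; substitute the machine name your list actually shows)
$ docker-machine ls
$ docker-machine rm <machine-name>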
I can also suggest removing the current minikube installation using
minikube stop && minikube delete
and start fresh one with:
minikube start --v=10 --vm-driver=hyperkit
That will add additional verbose output while building the minikube environment.
This will give you the current driver for the current machine. Replace the second "minikube" with the name of your profile if you're using the --profile flag.
$ grep DriverName ~/.minikube/machines/minikube/config.json
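On a hyperkit-based profile, the matching line should look roughly like this (the exact layout of config.json may vary between minikube versions):
"DriverName": "hyperkit",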
Strange, considering Hyperkit is supposed to replace xhyve eventually.
Make sure Hyperkit is built/installed and referenced by your PATH.
And that you are using the latest docker-ce for Mac.
Use this command to get a list of each hypervisor instance that's running with hyperkit:
$ ps -ef | grep hyperkit
If minikube is running in hyperkit then the name 'minikube' should show up in the output:
0 29305 1 0 Tue06PM ?? 515:01.32 /usr/local/bin/hyperkit -A -u -F /Users/me/.minikube/machines/minikube/hyperkit.pid -c 2 -m 2000M -s 0:0,...
The instance labeled as 'com.docker.hyperkit' is the process that's being used by Docker and is NOT the minikube instance.