Unable to start minikube on macOS Big Sur (Apple M1 chip) - minikube

I am running Docker Desktop 20.10.8 on my iMac. I tried to install minikube and am getting the following error when I start it. Do I need to do anything for this to work on the new Apple M1 chip?
Here is the error:
❌ Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
💡 Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
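The commonly suggested workaround on Apple Silicon is the docker driver with an ARM-native Docker Desktop; a minimal sketch, assuming Docker Desktop for Apple Silicon is installed and running:

# Clear any half-initialized cluster state, then retry with the docker driver.
minikube delete
minikube start --driver=docker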

Related

Minikube installation failed on new macOS M1 chip - Big Sur

I am trying to install minikube on my Mac with the M1 chip, following the guideline below:
https://minikube.sigs.k8s.io/docs/start/
I am running Docker Desktop 20.10.8 on my Mac and am getting the following error when I start minikube. Do I need to do anything for this to work on the new macOS M1 chip?
Here is the error:
❌ Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
💡 Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled

Monitor daemon running but not in quorum

I'm currently testing OS and version upgrades for a Ceph cluster. Starting info:
The cluster is currently on CentOS 7 with Ceph Nautilus. I'm trying to change the OS to Ubuntu 20.04 and the Ceph version to Octopus. I started by upgrading mon1 first. I will write down the things done, in order.
First off, I stopped the monitor service - systemctl stop ceph-mon@mon1
Then I removed the monitor from the cluster - ceph mon remove mon1
Then I installed Ubuntu 20.04 on mon1, updated the system, and configured ufw.
Installed the Ceph Octopus packages.
Copied ceph.client.admin.keyring and ceph.conf to /etc/ceph/ on mon1.
Copied ceph.mon.keyring to a temporary folder on mon1 and changed its ownership to ceph:ceph.
Got the monmap with ceph mon getmap -o ${MONMAP} - the thing is, I did this after removing the monitor.
Created the /var/lib/ceph/mon/ceph-mon1 folder and changed its ownership to ceph:ceph.
Created the filesystem for the monitor - sudo -u ceph ceph-mon --mkfs -i mon1 --monmap /folder/monmap --keyring /folder/ceph.mon.keyring
After noticing I had fetched the monmap after the monitor's removal, I added the monitor back manually - ceph mon add mon1 <ip> --fsid <fsid>
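For clarity, the same steps consolidated into one sequence (paths, the IP, and the fsid are placeholders from the steps above):

systemctl stop ceph-mon@mon1            # on mon1: stop the monitor
ceph mon remove mon1                    # from a healthy node: drop it from the monmap
# ...reinstall the OS, install the Ceph packages, copy ceph.conf and the keyrings...
ceph mon getmap -o /folder/monmap       # fetch a monmap that reflects the current membership
mkdir /var/lib/ceph/mon/ceph-mon1 && chown ceph:ceph /var/lib/ceph/mon/ceph-mon1
sudo -u ceph ceph-mon --mkfs -i mon1 --monmap /folder/monmap --keyring /folder/ceph.mon.keyring
ceph mon add mon1 <ip> --fsid <fsid>    # re-add the monitor, then start the daemon
systemctl start ceph-mon@mon1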
After starting it manually and checking the cluster state with ceph -s, I can see mon1 is listed but not in quorum. The monitor daemon runs fine on the mon1 node. I noticed in the logs that mon1 is stuck in the "probing" state, and the other monitors' logs show output such as mon1 (rank 2) addr [v2:<ip>:3300/0,v1:<ip>:6789/0] is down (out of quorum). As I said, the monitor daemon is running on mon1 without any visible errors, just stuck in the probing state.
I wondered if it was caused by the OS and version change, so I first tried configuring the manager, MDS, and radosgw daemons by creating the respective folders in /var/lib/ceph/... and copying the keyrings. All these services work fine: I was able to reach my buckets, open the Octopus dashboard, and the metadata server is listed as active in ceph -s. So evidently my problem is only with the monitor configuration.
After doing some checking I found this in the Red Hat Ceph documentation:
If the Ceph Monitor is in the probing state longer than expected, it cannot find the other Ceph Monitors. This problem can be caused by networking issues, or the Ceph Monitor can have an outdated Ceph Monitor map (monmap) and be trying to reach the other Ceph Monitors on incorrect IP addresses. Alternatively, if the monmap is up-to-date, Ceph Monitor’s clock might not be synchronized.
There is no network error on the monitor; I can reach all the other machines in the cluster. The clocks are synchronized. If this problem is caused by the monmap situation, how can I fix it?
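If the monmap really is stale, the Ceph troubleshooting guide describes injecting a current one into the stuck monitor; a hedged sketch (the path is a placeholder):

ceph mon getmap -o /tmp/monmap                  # on a monitor that is in quorum
systemctl stop ceph-mon@mon1                    # on mon1: the daemon must be stopped first
ceph-mon -i mon1 --inject-monmap /tmp/monmap    # overwrite mon1's local monmap
chown -R ceph:ceph /var/lib/ceph/mon/ceph-mon1  # injection can leave root-owned files behind
systemctl start ceph-mon@mon1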
OK, so as a result: going directly from CentOS 7/Nautilus to Ubuntu 20.04/Octopus is not possible for the monitor services only; apparently the issue is hostname resolution differing between operating systems. The rest of the services are fine. There is a longer way to do this without issues, and it is the correct solution: first change the OS from CentOS 7 to Ubuntu 18.04, install the Ceph Nautilus packages, and add the machines back to the cluster (no issues at all). Then update and upgrade the system and run "do-release-upgrade". Works like a charm. I think this is what eblock mentioned.
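A sketch of that longer path, assuming the matching Ceph apt repository is configured for each release:

# On the freshly installed Ubuntu 18.04 node: install Nautilus and rejoin the cluster.
apt update && apt install -y ceph-mon
# ...recreate the monitor as above and confirm it joins quorum with ceph -s...
# Then upgrade the OS in place before moving to the Octopus packages.
apt update && apt full-upgrade -y
do-release-upgrade    # 18.04 -> 20.04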

Minikube: Unable to start minikube - Exiting due to DRV_NO_IP:

I am trying to create a minikube cluster, but it always fails.
Any suggestions are very welcome:
C:\WINDOWS\system32>minikube start --driver=vmware
minikube v1.16.0 on Microsoft Windows 10 Home 10.0.19042 Build 19042
Using the vmware driver based on user configuration
Starting control plane node minikube in cluster minikube
Creating vmware VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
Deleting "minikube" in vmware ...
! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting
Creating vmware VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
Failed to start vmware VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting
X Exiting due to DRV_NO_IP: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting
Suggestion: Check your firewall rules for interference, and run 'virt-host-validate' to check for KVM configuration issues. If you are running minikube within a VM, consider using --driver=none
Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
Related issues:
https://github.com/kubernetes/minikube/issues/4249
https://github.com/kubernetes/minikube/issues/3566
I had a similar error when setting up minikube on macOS.
When I ran the command minikube start I got the error below:
😄 minikube v1.22.0 on Darwin 11.4
✨ Using the vmware driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running vmware "minikube" VM ...
🤦 StartHost failed, but will try again: provision: IP not found for MAC 00:0c:29:41:e9:b9 in DHCP leases
🏃 Updating the running vmware "minikube" VM ...
😿 Failed to start vmware VM. Running "minikube delete" may fix it: provision: IP not found for MAC 00:0c:29:41:e9:b9 in DHCP leases
❌ Exiting due to GUEST_PROVISION: Failed to start host: provision: IP not found for MAC 00:0c:29:41:e9:b9 in DHCP leases
The issue was caused by an interruption while I was creating the VMware VM for minikube.
I tried fixing it by deleting the existing minikube VM and creating another one using:
minikube delete
minikube start
But then I ran into a different issue this time:
Exiting due to DRV_NO_IP: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting.
Here's how I fixed it:
After a lot of research, it turned out to be best to set up minikube with the docker driver, which saves you all this hassle.
For Mac OS:
If you already have Docker or docker-compose installed via Homebrew, uninstall them first using:
brew uninstall docker
brew uninstall docker-compose
Next, install Docker Desktop, which bundles the Docker engine, docker-compose, and other dependencies, using:
brew install --cask docker
Next, start the Docker engine by opening the Docker application. After that, you can confirm the Docker version using:
docker --version
Finally, set up minikube using the docker driver:
minikube start --driver=docker
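To persist the choice and verify the cluster afterwards (all standard minikube/kubectl commands):

minikube config set driver docker   # make docker the default driver for future starts
minikube status                     # host, kubelet, and apiserver should report Running
kubectl get nodes                   # the minikube node should be Ready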

Minikube start automatically selects hyperkit as driver

I installed minikube and VirtualBox on OS X and everything was working fine until I executed
minikube delete
After that I tried
minikube start
and got the following
😄 minikube v1.5.2 on Darwin 10.15.1
✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])
🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
...
I do not want to use a different driver. Why is this happening? I reinstalled minikube but the problem persisted. I could set which driver to use with:
minikube start --vm-driver=virtualbox
But I would rather have this be the default behavior after a fresh install. How can I set the default driver?
After googling a bit I found how to do it here
minikube config set vm-driver virtualbox
The output of this command is:
⚠️ These changes will take effect upon a minikube delete and then a minikube start
So make sure to run
minikube delete
and
minikube start
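Note that newer minikube releases renamed the config key from vm-driver to driver; a quick check, assuming a current minikube version:

minikube config set driver virtualbox   # newer syntax for the same setting
minikube config view                    # confirm the stored default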

How to fix VM issue with minikube start?

I am a beginner to Kubernetes, starting off with this tutorial. I installed VirtualBox and expected to be able to start a cluster by using the command:
minikube start
But I get the error:
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0911 13:34:45.394430 41676 start.go:174] Error starting host: Error creating host: Error executing step: Creating VM.
: Error setting up host only network on machine start: The host-only adapter we just created is not visible. This is a well known VirtualBox bug. You might want to uninstall it and reinstall at least version 5.0.12 that is is supposed to fix this issue.
It says that this is a well-known bug in VirtualBox, but I installed its latest version. Any ideas?
Figured out the issue: VirtualBox was not installed correctly because macOS had blocked it. It wasn't obvious at first.
Restarting won't work if VirtualBox isn't installed correctly.
System Preferences -> Security & Privacy -> Allow -> then allow the software vendor (in this case Oracle)
Restart
Now it worked as expected.
Have you tried restarting your computer after installing VirtualBox?
(This also seems to be a known bug in docker-machine, which minikube uses to initialize your local environment.)
This definitely worked for me: starting minikube while specifying the vm-driver and kubernetes-version explicitly:
minikube start --vm-driver=hyperkit --kubernetes-version v1.16.0
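If the hyperkit driver itself is missing, it could at the time be installed via Homebrew; worth checking against current minikube docs:

brew install hyperkit   # Intel Macs only; hyperkit does not run on Apple Silicon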
I faced a similar issue on Mac after upgrading to Big Sur: the running minikube instance started giving the same error.
The solution that worked for me was to run minikube delete, followed by minikube start.
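In shell form, the sequence is simply:

minikube delete   # remove the broken instance and its state
minikube start    # recreate the cluster from scratch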
More combinations of these options can be found in the thread below:
https://github.com/kubernetes/minikube/issues/3614