minikube detecting old version when kubectl is up to date - kubernetes

I am installing minikube again on my Windows machine (I did this a couple of years ago but hadn't used it in over a year), and the installation of the most recent kubectl and minikube went well. That is, up until I tried to start minikube with:
minikube start --vm-driver=virtualbox
Which gives the error:
C:\>minikube start --vm-driver=virtualbox
* minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
* Selecting 'virtualbox' driver from user configuration (alternates: [])
! Specified Kubernetes version 1.10.0 is less than the oldest supported version: v1.11.10
X Sorry, Kubernetes 1.10.0 is not supported by this release of minikube
This doesn't make sense, since kubectl version --client reports v1.17.0:
C:\>kubectl version --client
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}
I did find that, for some reason, when the downloaded kubectl.exe sits in the kubectl folder under Program Files (x86) (which my PATH environment variable already pointed to), it reports version v1.14.3. But when I copied that same file to the root of the C drive and ran it from there, it reported v1.17.0.
I assume that is just because running it from C:\ picks up the copy in the current directory, but that means something on my PATH still has an old v1.14.3 kubectl file, even though there aren't any other kubectl files in that folder.
So basically, I'm not sure if something needs to be set in minikube (I haven't seen a reference to it in the documentation), but somehow minikube is detecting an old kubectl that I need to get rid of.

Since you already had minikube installed before and then updated the installation, the best thing to do is execute minikube delete to clean up all of the previous configuration.
The minikube delete command can be used to delete your cluster. This command shuts down and deletes the Minikube Virtual Machine. No data or state is preserved.
After that, execute minikube start --vm-driver=virtualbox and wait for the cluster to come up.
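As for why two copies of kubectl report different versions: cmd.exe resolves a command from the current directory first and then walks the PATH, so the first match wins. To list every kubectl.exe Windows can find, in resolution order, use the built-in where command:
where kubectl
The first entry is the one the shell (and minikube) will actually use; if it is a stale copy, for example one bundled with another tool such as Docker, remove it or reorder your PATH.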
References:
https://kubernetes.io/docs/setup/learning-environment/minikube/#deleting-a-cluster

Related

Has to restart minikube to make my service successfully requested [duplicate of: Minikube: kubectl connection refused - did you specify the right host or port?]

I am trying to run minikube v0.22.1 and kubectl v1.7.5 on macOS with VirtualBox.
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
However, all kubectl commands fail with "connection refused - did you specify the right host or port?"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
The solution proposed here (sudo ifconfig vboxnet0 up) did not help; the vboxnet0 interface is up.
Any ideas or suggestions are highly appreciated.
If you run
kubectl config get-contexts
do you get the following?
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
If not, that means your kubectl context is not set up correctly. To set up the context correctly, run this:
kubectl config use-context minikube
Your minikube VM may have been stopped or saved for some reason. Sometimes, after you enable/disable addons, you may need to restart it.
1) Restart the minikube VM; first, stop it:
$ minikube stop
2) Start it again, making sure you assign enough CPU/memory (the following is just an example of how to pass the flags; adjust the values based on the resources available on your machine):
$ minikube start --memory=10000 --cpu 4
If this doesn't work, the following will help you learn more about the underlying cause of the problem:
Check minikube status and make sure the status is Running
$ minikube status
Or, check the minikube logs:
minikube logs
Finally, if you can't fix it, you may need to delete the cluster and start from scratch:
$ minikube delete && minikube start
Ref: https://github.com/kubernetes/minikube/issues/1498
I will just drop this in here in case anyone finds this question.
I don't know the versions of the OP's setup, so I'm going to assume they had the latest version available at the time of posting, which was 0.22.1.
Description
I had a similar issue. The cluster was timing out irregularly: one moment I got answers using kubectl cluster-info dump, the next I didn't. Then it worked again, and then it didn't. I found a GitHub bug report with a solution.
Solution
Remove your VirtualBox VM.
Remove the ./minikube folder.
Remove the minikube executable.
Install version 0.19.0.
Verify that minikube is working with kubectl.
Versions
OS: Windows 10 (Home edition)
Minikube bugged version: 0.22.2
Minikube working version: 0.19.0
Kubectl (client): v1.7.0
Kubectl (server): v1.6.0
EDIT:
I kept having issues with minikube after I posted this original answer. I found something that fixed the issue completely.
It's related to the dynamic memory setting in Hyper-V.
Solution
1. Turn off the hyper-v minikube VM.
2. Go to the VM's settings.
3. Turn off dynamic memory allocation.
4. Assign a decent amount of memory.
5. Save and turn the VM on again.
This should work with any minikube version. See this GitHub issue for progress on an automated solution.
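If you prefer to script steps 1-5, the same change can be made from an elevated PowerShell prompt with the standard Hyper-V cmdlets. This is just a sketch; it assumes the VM is named minikube and that 4 GB is a sensible amount for your machine:
Stop-VM minikube
Set-VMMemory minikube -DynamicMemoryEnabled $false -StartupBytes 4GB
Start-VM minikube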
When debugging the minikube commands, e.g.
$ minikube dashboard --loglevel 0 --logtostderr
some proxy issues became visible and could be solved.
I ran into this situation this morning (another Monday!) on macOS 11.3 with minikube v1.19.0.
I ran minikube status and got the following:
E0503 14:15:43.912005 7308 status.go:412] kubeconfig endpoint: got: 127.0.0.1:64041, want: 127.0.0.1:56537
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
That seemed like good advice, so I ran minikube update-context and got this:
🎉 "minikube" context has been updated to point to 127.0.0.1:56537
💗 Current context is "minikube"
After which everything worked like it did on Friday.
After Linux security OS patching and a reboot, we were unable to start the Kubernetes service and received the error below.
Error message: "The connection to the server 192.168.1.101:8443" received while starting the Kubernetes service.
This issue happened because the systemd package got updated during the security patching.
So we took the actions below to bring up the application on each master node:
1. Update the /usr/lib/systemd/system/kubelet.service file by removing the two lines below (a scripted version of these edits follows the steps):
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
2. Update the /usr/lib/systemd/system/kube-proxy.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
3. Run the kube-restart.sh script on the master nodes.
4. Run the kube-restart.sh script on the worker nodes.
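If you have many nodes to patch, the edits in steps 1 and 2 can be scripted. This is only a sketch, assuming GNU sed and the unit file paths above; it edits the files in place, so review them before and after:
sudo sed -i '/ExecStartPost=.*kubelet.pid/d; /ExecStopPost=.*kubelet.pid/d' /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/kube-proxy.service
sudo systemctl daemon-reload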
Update: I am using minikube version: v1.25.2
The command mentioned in this thread did NOT work:
minikube start --memory=10000 --cpu 4 #this will FAIL
This, however, DID work (use --cpus instead; I also changed the values to show the minimum requirements for Docker):
minikube start --memory=1800 --cpus=2 # this will work
minikube start --memory=1800 --cpus 2 # this will also work
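To make these values stick across future minikube delete / minikube start cycles, minikube can also store them as defaults via its config subcommand:
minikube config set memory 1800
minikube config set cpus 2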
minikube delete && minikube start
sudo minikube start --vm-driver=none (start minikube again)
This solved my problem
minikube delete
minikube start
This just restarted the container, which fixed it for me.

Kubernetes create deployment unexpected SchemaError

I'm following this tutorial (https://www.baeldung.com/spring-boot-minikube).
I want to create a Kubernetes deployment from a YAML file (simple-crud-dpl.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-crud
spec:
  selector:
    matchLabels:
      app: simple-crud
  replicas: 3
  template:
    metadata:
      labels:
        app: simple-crud
    spec:
      containers:
      - name: simple-crud
        image: simple-crud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
But when I run kubectl create -f simple-crud-dpl.yaml, I get:
error: SchemaError(io.k8s.api.autoscaling.v2beta2.MetricTarget): invalid object doesn't have additional properties
I'm using the newest version of kubectl:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I'm also using minikube locally, as described in the tutorial. Everything works until the deployment and service steps; those I'm not able to complete.
After installing kubectl with brew you should run:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
And also optionally:
brew link --overwrite --dry-run kubernetes-cli
I second #rennekon's answer. I found that I had Docker running on my machine, which also installs kubectl. That installation of kubectl causes this issue.
I took the following steps:
uninstalled it using brew uninstall kubectl
reinstalled it using brew install kubectl
(due to symlink creation failure) I forced brew to create symlinks using brew link --overwrite kubernetes-cli
I was then able to run my kubectl apply commands successfully.
I had the same problem too. On my Mac, kubectl runs from Docker, which installs it as part of its own installation. You can check this with the command below:
ls -l $(which kubectl)
which returns:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
Now we have to overwrite the symlink so it points to the kubectl installed using brew:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
(optional)
brew unlink kubernetes-cli && brew link kubernetes-cli
To Verify
ls -l $(which kubectl)
I encountered the same issue with minikube on Windows 10 after installing Docker.
It was caused by the kubectl version mismatch mentioned a couple of times already in this thread: Docker installs version 1.10 of kubectl.
You have a couple of options:
1) Make sure the path to your k8s bin directory comes before Docker's in your PATH (see the check below)
2) Replace the kubectl in 'c:\Program Files\Docker\Docker\resources\bin' with the correct one
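To inspect the current ordering for option 1, you can print the PATH entries in the order Windows searches them (plain PowerShell, nothing minikube-specific) and check that your k8s directory appears before Docker's:
$env:Path -split ';'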
Your client version is too old. In my environment this version comes with Docker. I had to download the new client from https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe, and now it works fine:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
You can use "--validate=false" in your command. For example:
kubectl create -f simple-crud-dpl.yaml --validate=false
You are using the wrong kubectl version.
kubectl is compatible one minor version up and down, as described in the official docs.
The error is confusing, but it simply means that your 1.10 client isn't sending all the parameters the 1.14 API requires.
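A quick way to see the skew at a glance (the --short flag exists on the kubectl versions discussed in this thread):
kubectl version --short
With the versions from this question, it would print roughly Client Version: v1.10.11 and Server Version: v1.14.0, making the gap obvious.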
I am on Windows 10 with the Docker client and minikube both installed. I was getting the error below:
error: SchemaError(io.k8s.api.core.v1.Node): invalid object doesn't have additional properties
I resolved it by updating kubectl.exe to the version being used by minikube. Here are the steps:
Note: minikube tends to use the latest version of Kubernetes, so it is advisable to grab the latest kubectl.
Download the matching version of kubectl.exe.
Navigate to your Docker path where your kubectl is located, e.g.
C:\Program Files\Docker\Docker\resources\bin
Place your downloaded kubectl.exe there. If it asks you to replace it, please do.
Now type refreshenv in PowerShell.
Check that the version is now the one you placed there: kubectl version.
Now you are good; retry whatever task you were doing.
I was getting the error below while running kubectl explain pod on Windows 10:
error: SchemaError(io.k8s.api.core.v1.NodeCondition): invalid object doesn't have additional properties
I had both minikube and Docker Desktop installed. The reason for this error, as mentioned in earlier answers, was a mismatch between the server version (major 1, minor 15) and the client version (major 1, minor 10). The client version was coming from Docker Desktop.
To fix it, I upgraded the kubectl client to v1.15.1, as described here:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/windows/amd64/kubectl.exe
Mac users!!! This is for those who installed Docker Desktop first. The error shows up when you use the apply command. It comes from a version mismatch, as others have said here. I did not install kubectl using Homebrew; rather, kubectl gets installed automatically when you install Docker Desktop for Mac.
To fix this, here is what I did:
Remove the kubectl executable file
rm /usr/local/bin/kubectl
Download kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
Change the permission:
chmod +x ./kubectl
Move the executable file:
sudo mv ./kubectl /usr/local/bin/kubectl
That is it, folks!
Just to show it worked, here is the output:
kubectl apply -f ./deployment.yaml
deployment.apps/tomcat-deployment created
Make sure the YAML file is correct. I downloaded a valid file from here to test:
https://github.com/LevelUpEducation/kubernetes-demo/tree/master/Introduction%20to%20Kubernetes/Your%20First%20k8s%20App
I was running into the same issue after installing kubectl on my Mac today. Uninstalling kubectl (via brew uninstall kubectl) and reinstalling (brew install kubectl) resolved the issue for me.
According to the kubectl docs,
You must use a kubectl version that is within one minor version difference of your cluster.
The kubectl v1.10 client apparently makes requests to the v1.14 server without some parameters that became required over those four minor versions.
For brew users, reinstall kubernetes-cli. It's also worth checking what installed the incompatible version; inspect the command's symlink with ls -l $(which kubectl).
For me, the Docker installation was the problem. As Docker now comes with Kubernetes support, it installs kubectl along with its own installation. I had downloaded kubectl and minikube without knowing that, so my minikube was being used with Docker's kubectl installation.
Make sure the same is not happening to you.
A second cause would be a deprecated apiVersion in your .yaml files.
I had a similar problem, with the error:
error: SchemaError(io.k8s.api.storage.v1beta1.CSIDriverList): invalid object doesn't have additional properties
My issue was that my Mac was using Google's kubectl, which was installed with the GCP tools. My PATH looks there first before going to /usr/local/bin/.
Once I ran kubectl from /usr/local/bin, my problem went away.
In my case, kubectl was always resolving to Google's kubectl from the gcloud tool; most probably there was a conflict between the Homebrew-installed and gcloud-installed kubectl. I uninstalled the Homebrew kubectl and upgraded the gcloud tool to the latest version, which upgrades its bundled kubectl in the process. That resolved my issue.
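For reference, the gcloud-side upgrade is done with the standard components command, which also refreshes any components it ships, such as kubectl:
gcloud components update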
I don't think the problem is with imagePullPolicy, unless you don't have the image locally. The error is about autoscaling, which means it's not able to create replicas of the container.
Can you set replicas: 1 and give it a try?
On Windows 10, uninstalling Docker helped me get rid of this problem. I was working with kubectl and minikube.
I know this has already been answered, but I thought I should post my response, since the responses above were helpful yet it took me a while to relate them to Azure DevOps.
I was getting this error when trying to deploy an app to an AKS cluster from Azure DevOps. As mentioned above, one cause of this error is a version mismatch, which was the cause in my case. I fixed it by updating the AKS version in the kubectl advanced configuration section of the deployment task.

Minikube Not Starting on Ubuntu 18.04

Minikube is not starting, with several error messages.
kubectl version gives the following output with a port-related error:
iqbal#ThinkPad:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
You didn't give many details, but there are some concerns I resolved a few days ago regarding minikube issues with Kubernetes 1.12.
Indeed, the compatibility matrix between Kubernetes and Docker recommends running Docker 18.06 with Kubernetes 1.12 (Docker 18.09 is not supported yet).
Thus, make sure your Docker version is NOT above 18.06 (a quick check is shown after the commands below). Then, run the following:
# clean up
minikube delete
minikube start --vm-driver="none"
kubectl get nodes
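The quick Docker version check mentioned above (docker version and its --format flag are standard Docker CLI):
docker version --format '{{.Server.Version}}'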
If you are still encountering issues, please give more details, namely the minikube logs.
If you want to change the VM driver, add the appropriate --vm-driver=xxx flag to minikube start. Minikube supports the following drivers:
virtualbox
vmwarefusion
KVM2
KVM (deprecated in favor of KVM2)
hyperkit
xhyve
hyperv
none (Linux-only) - this driver can be used to run the Kubernetes cluster components on the host instead of in a VM. This can be useful for CI workloads that do not support nested virtualization. For example, if your VM driver was virtualbox, then use:
$ minikube delete
$ minikube start --vm-driver=virtualbox

kubernetes time-out after re-installation on macOS

I followed the instructions on https://kubernetes.io/docs/tasks/tools/install-kubectl/ on my Mac and installed the Kubernetes CLI using brew.
brew install kubernetes-cli
kubectl and minikube were already installed some time ago, so I was expecting an update. Now kubectl version and kubectl cluster-info time out.
pa-demo jps$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
When I try to install kubernetes-cli again, I get:
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core, homebrew/cask).
==> New Formulae
topgrade
==> Updated Formulae
bison ✔ azure-cli bwfmetaedit erlang#20 ghostscript jfrog-cli-go ldc p11-kit smlnj youtube-dl
sphinx-doc ✔ babel crystal fauna-shell helmfile juju mkvtoolnix pyside tarsnap-gui
alexjs bat doctl fortio influxdb kore nginx re2c thors-serializer
Warning: kubernetes-cli 1.11.2 is already installed and up-to-date
To reinstall 1.11.2, run `brew reinstall kubernetes-cli`
You may have installed Minikube, but that doesn't necessarily mean it's actively running. You'd need to run minikube start to actually start the cluster on your machine. This also configures your kubeconfig file to point at the cluster it built.
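A minimal verification sequence, using standard commands:
minikube start
kubectl config current-context
kubectl cluster-info
The second command should print minikube, confirming that the kubeconfig now points at the freshly built cluster.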
