kubectl : connection refused - kubernetes

I am installing minikube 0.19.1 on Ubuntu 16.04, following the Kubernetes documentation. As prerequisites, I have installed kubectl and Oracle VirtualBox.
When I check kubectl with kubectl version, it gives the following:
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:34:20Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
But when I run netstat against the port to check for the process, I get no results.
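For reference, the check I ran looked roughly like this (port taken from the error above):
$ sudo netstat -tlnp | grep 8080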
I have set up the Google Cloud SDK as well.
I have searched for and tried many solutions, including this one, but was not able to resolve my issue.
Here are my gcloud config and info results.
$gcloud config list
[compute]
zone = asia-southeast1-a
[core]
account = userName@mail.com
disable_usage_reporting = False
project = sampleproject1990
$gcloud info
Google Cloud SDK [159.0.0]
Platform: [Linux, x86_64] ('Linux', 'userName', '4.8.0-54-generic', '#57~16.04.1-Ubuntu SMP Wed May 24 16:22:28 UTC 2017', 'x86_64', 'x86_64')
Python Version: [2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]]
Python Location: [/usr/bin/python2]
Site Packages: [Disabled]
Installation Root: [/home/userName/products/google-cloud-sdk]
Installed Components:
kubectl: []
core: [2017.06.09]
gcloud: []
gsutil: [4.26]
bq: [2.0.24]
alpha: [2017.03.24]
System PATH: [PATH=/usr/lib/jvm/java-8-oracle/bin:/home/userName/bin:/home/userName/.local/bin:/usr/local/maven/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/usr/local/apache-maven-3.3.9/bin]
Python PATH: [/home/userName/products/./google-cloud-sdk/lib/third_party:/home/userName/products/google-cloud-sdk/lib:/usr/lib/python2.7/:/usr/lib/python2.7/plat-x86_64-linux-gnu:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
WARNING: There are old versions of the Google Cloud Platform tools on your system PATH.
/usr/local/bin/kubectl
Installation Properties: [/home/userName/products/google-cloud-sdk/properties]
User Config Directory: [/home/userName/.config/gcloud]
Active Configuration Name: [my-configuration]
Active Configuration Path: [/home/userName/.config/gcloud/configurations/config_my-configuration]
Account: [userName@mail.com]
Project: [sampleproject1990]
Current Properties:
[core]
project: [sampleproject1990]
account: [userName@mail.com]
disable_usage_reporting: [False]
[compute]
zone: [asia-southeast1-a]
Logs Directory: [/home/userName/.config/gcloud/logs]
Last Log File: [/home/userName/.config/gcloud/logs/2017.06.21/12.39.23.391849.log]
git: [git version 2.7.4]
ssh: [OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016]
Can anyone tell me how I can fix this issue?

I had similar issues with Minikube and the VirtualBox driver. Please ensure that the interface VirtualBox is configured to use is up.
I did a sudo ifconfig vboxnet0 up and my issue was resolved.

I faced the same issue. It turns out that I was running the command without being the root user. So, if you log in as the superuser (sudo -i), it might work.

This issue occurs because the kubelet is not running or is unhealthy.
One way to resolve this issue:
$ sudo swapoff -a
$ sudo systemctl enable kubelet
$ sudo systemctl start kubelet
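Note that swapoff -a only disables swap until the next reboot; to make it persistent, also comment out the swap entry in /etc/fstab, roughly like this:
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab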
After this, deploy Kubernetes with kubeadm as given below:
$ sudo kubeadm init --ignore-preflight-errors=all
After loading the kubeadm credentials, untaint the master node and join worker nodes if you are working on a cluster.
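Loading the credentials and untainting usually look like this (a sketch; the master taint key shown is the one used by Kubernetes versions of that era):
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl taint nodes --all node-role.kubernetes.io/master-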
And now give the command:
$ sudo kubectl cluster-info
The server and the client should be running with the same Kubernetes version.
If this solution doesn't work, remove Kubernetes, kubectl, kubeadm, and kubelet, and follow only the Kubernetes installation steps from this guide.

Related

Had to restart minikube to make my service successfully requested [duplicate]


How to migrate kubeconfig on windows 10

I have been trying to install minikube for two days now. I have run into issue after issue. This one has me stumped.
Install minikube on Windows 10.
Docker has been running with Hyper-V for months.
I followed the Windows instructions using choco, ignoring everything related to the Hyper-V install.
W0107 08:23:27.485052 3337 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
From what I understand, there is a new config that I need to migrate to. To do that I need to use kubeadm, but I haven't been able to find any information on where to find these files or how to do the migration. Here is what I have tried.
From an elevated command prompt I ran:
minikube ssh
Then I found kubeadm in the following directory:
cd /var/lib/minikube/binaries/v1.17.0
whereupon I started throwing some random commands at it in hopes of getting some help.
$ ./kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:17:50Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
$ ./kubeadm config view
failed to load admin kubeconfig: open /home/docker/.kube/config: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
$ ./kubeadm init --config defaults
unable to read config from "defaults" : open defaults: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
As the error message said to run this, I gave that a go. That didn't work either:
$ ./kubeadm config migrate --old-config old.yaml --new-config new.yaml
open old.yaml: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
After digging around in the logs, I found that it was trying to load the following config file, so I tried to load that as the old one in hopes it was smart enough to make its own new one.
./kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config new.yaml
open /var/tmp/minikube/kubeadm.yaml: permission denied
To see the stack trace of this error execute with --v=5 or higher
OK, then let's check the permissions on the file:
$ ls -la /var/tmp/minikube/kubeadm.yaml
-rw-r----- 1 root root 1156 Jan 1 0001 /var/tmp/minikube/kubeadm.yaml
Well, that's not good; let's try to update it:
$ chmod u=r /var/tmp/minikube/kubeadm.yaml
chmod: changing permissions of '/var/tmp/minikube/kubeadm.yaml': Operation not permitted
ok
$ sudu chmod u=r /var/tmp/minikube/kubeadm.yaml
-bash: sudu: command not found
Edit: sudo
$ sudo chmod u=r /var/tmp/minikube/kubeadm.yaml
$ ls -la /var/tmp/minikube/kubeadm.yaml
-r--r----- 1 root root 1156 Jan 1 0001 /var/tmp/minikube/kubeadm.yaml
Update the permissions to 777:
$ sudo chmod 777 /var/tmp/minikube/kubeadm.yaml
$ ls -la /var/tmp/minikube/kubeadm.yaml
-rwxrwxrwx 1 root root 1156 Jan 1 0001 /var/tmp/minikube/kubeadm.yaml
$ ./kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config new.yaml
W0107 13:19:23.298409 4361 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0107 13:19:23.298437 4361 validation.go:28] Cannot validate kubelet config - no validator is available
failed to write the new configuration to the file "new.yaml": open new.yaml: permission denied
To see the stack trace of this error execute with --v=5 or higher
Still no dice. This appears to be a very limited bash shell.
Have a file
OK, thanks to some chmod 777s I now have a file, but what do I do with it?
./kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /home/docker/new.yaml
W0107 13:22:21.615314 6352 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0107 13:22:21.615375 6352 validation.go:28] Cannot validate kubelet config - no validator is available
There seems to be little or no documentation on how to deal with this. I have cross-posted the issue on the forum (#6227); any help would be greatly appreciated. I have tried to remove minikube and add it again, with the same results.
Current status:
sudo chmod 777 /var/tmp/minikube/kubeadm.yaml
ls -la /var/tmp/minikube/kubeadm.yaml
cd /var/lib/minikube/binaries/v1.17.0
./kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /home/docker/new.yaml
sudo chmod 777 /var/tmp/minikube
mv /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadmold.yaml
mv /home/docker/new.yaml /var/tmp/minikube/kubeadm.yaml
minikube start --vm-driver=hyperv --v=7 --alsologtostderr
No change; same error message.
Trying Kubernetes version 1.16.0:
C:\Windows\system32>minikube start --vm-driver=hyperv --kubernetes-version=1.16.0
* minikube v1.6.0 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Selecting 'hyperv' driver from user configuration (alternates: [])
* Creating hyperv VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
! Unable to verify SSH connectivity: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address. Will retry...
! Unable to verify SSH connectivity: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address. Will retry...
! Unable to verify SSH connectivity: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address. Will retry...
! Unable to verify SSH connectivity: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address. Will retry...
! Unable to verify SSH connectivity: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address. Will retry...
! Unable to verify SSH connectivity: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address. Will retry...
X minikube is unable to connect to the VM: dial tcp: address fe80::215:5dff:fe37:c505:22: too many colons in address
This is likely due to one of two reasons:
- VPN or firewall interference
- hyperv network configuration issue
Suggested workarounds:
- Disable your local VPN or firewall software
- Configure your local VPN or firewall to allow access to fe80::215:5dff:fe37:c505
- Restart or reinstall hyperv
- Use an alternative --vm-driver
As you are able to see the kubeadm version, it suggests that you used Kubernetes kubeadm, which is a bit different from Minikube.
Kubeadm is a tool to get Kubernetes working on an existing machine. It will configure and start all required Kubernetes components. Using kubeadm you are able to create a cluster with multiple nodes (kubeadm join).
Minikube is a tool which starts a single-node Kubernetes cluster locally.
There is already good explanation in this Stackoverflow question.
I don't think that a kubeadm configuration from Linux will work on Windows. As you mentioned in a comment that you want to run Minikube on Windows to learn Kubernetes, I will provide a step-by-step guide on how to run Minikube on Windows 10.
Installing Minikube on Windows 10
As you already have Docker, I will skip those installation steps.
1. Download kubectl and minikube.
The newest version is provided in the official Kubernetes docs:
kubectl v1.17 and minikube from GitHub (minikube-windows-amd64.exe). An example download follows below.
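For example, from a shell with curl, the downloads look roughly like this (a sketch; the URLs follow the patterns the official docs used at the time, so verify the versions you need):
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/windows/amd64/kubectl.exe
$ curl -LO https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-windows-amd64.exe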
2. Add the folder to PATH in Environment Variables.
Create a folder where you will paste kubectl.exe and the renamed minikube.exe file.
Add this folder to PATH. (If anyone needs it, here is a tutorial.)
3. Create external Virtual Switch Manager in Hyper-V.
Go to Hyper-V. From the right-hand menu choose Virtual Switch Manager. Choose External and name it Primary Virtual Switch, then apply.
4. Verify kubectl and minikube versions.
PS C:\WINDOWS\system32> kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
PS C:\WINDOWS\system32> minikube version
minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392
5. Create Minikube Cluster.
Run PowerShell as Administrator.
minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
PS C:\WINDOWS\system32> minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
* minikube v1.6.2 on Microsoft Windows 10 Enterprise 10.0.17134 Build 17134
* Selecting 'hyperv' driver from user configuration (alternates: [])
* Creating hyperv VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
* Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
* Pulling images ...
* Launching Kubernetes ...
* Waiting for cluster to come online ...
* Done! kubectl is now configured to use "minikube"
Now you can use kubectl commands, and you already have the default resources required for running Minikube.
PS C:\WINDOWS\system32> kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-c4cbj 1/1 Running 0 31m
kube-system coredns-6955765f44-rqfth 1/1 Running 0 31m
kube-system etcd-minikube 1/1 Running 0 31m
kube-system kube-addon-manager-minikube 1/1 Running 0 31m
kube-system kube-apiserver-minikube 1/1 Running 0 31m
kube-system kube-controller-manager-minikube 1/1 Running 0 31m
kube-system kube-proxy-j6q29 1/1 Running 0 31m
kube-system kube-scheduler-minikube 1/1 Running 0 31m
kube-system storage-provisioner 1/1 Running 0 31m
In addition, you can check this article about running Minikube on Windows.
You can also consider Docker for Windows, which will do many things automatically; however, it will install an older version of Kubernetes (1.14 at the moment).

The connection to the server xxxx:6443 was refused - did you specify the right host or port?

I followed this to install Kubernetes on my cloud server.
When I run command kubectl get nodes I get this error:
The connection to the server localhost:6443 was refused - did you specify the right host or port?
How can I fix this?
If you followed only the docs mentioned, it means that you have only installed kubeadm, kubectl, and kubelet.
If you want to run kubeadm properly, you need to do 3 more steps.
1. Install Docker
Install the Docker Ubuntu version. If you are using another system, choose it from the left-side menu. A minimal sketch for Ubuntu follows below.
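This uses the docker.io package from the Ubuntu repositories (the Docker docs describe the docker-ce alternative):
$ sudo apt-get update
$ sudo apt-get install -y docker.io
$ sudo systemctl enable --now docker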
Why:
If you do not install Docker, you will receive an error like the one below:
[preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
2. Initialization of kubeadm
You have properly installed kubeadm and Docker, but now you need to initialize kubeadm. Docs can be found here.
In short, you have to run the command:
$ sudo kubeadm init
After initialization you will receive instructions to run commands like:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
and a token to join another VM to the cluster. It looks like:
kubeadm join 10.166.XX.XXX:6443 --token XXXX.XXXXXXXXXXXX \
--discovery-token-ca-cert-hash sha256:aXXXXXXXXXXXXXXXXXXXXXXXX166b0b446986dd05c1334626aa82355e7
If you want to run some special action in the init phase, please check these docs.
3. Change node status to Ready
After the previous step you will be able to execute:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu-kubeadm NotReady master 4m29s v1.16.2
But your node will be in NotReady status. If you describe it ($ kubectl describe node) you will see the error:
Ready False Wed, 30 Oct 2019 09:55:09 +0000 Wed, 30 Oct 2019 09:50:03 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
It means that you have to install one of the CNIs. A list of them can be found here; a sketch of a typical install follows below.
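Installing a CNI is typically a single kubectl apply. A hedged sketch using flannel (the manifest URL may have moved since this was written):
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml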
EDIT
Also, one thing comes to mind: sometimes after you turn the VM off and on, you need to restart the kubelet and docker services. You can do it by using:
$ service docker restart
$ systemctl restart kubelet
Hope it helps.
Looks like the kubeconfig file is missing. Did you copy the admin.conf file to ~/.kube/config?
Check whether any proxies are set, such as "http_proxy" or "https_proxy"; these are usually set as environment variables. If so, remove the proxies and it should work for you.
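A quick way to check for and clear them in the current shell (a sketch; unsetting only affects the current session):
$ env | grep -i proxy
$ unset http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY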
I did the following 2 steps, and kubectl works now.
$ service docker restart
$ systemctl restart kubelet

Minikube: kubectl connection refused - did you specify the right host or port?

I am trying to run minikube v0.22.1 and kubectl v1.7.5 on macOS with VirtualBox.
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
However, all kubectl commands fail with "connection refused - did you specify the right host or port?"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
The solution proposed here (sudo ifconfig vboxnet0 up) did not help; the vboxnet0 interface is up.
Any ideas or suggestions are highly appreciated.
If you run
kubectl config get-contexts
do you get the following?
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
If not, that means your kubectl context is not correctly set up. To set up the context correctly, run this:
kubectl config use-context minikube
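You can confirm the active context afterwards; this should print minikube:
$ kubectl config current-context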
You may have stopped or suspended it for some reason. Sometimes, after you enable/disable addons, you may need to restart it.
1) Restart the minikube VM. First, stop it:
$ minikube stop
2) Start it again, making sure you assign enough CPU/memory (the following is just an example of how to pass it; you need to adjust it based on the available resources on your machine):
$ minikube start --memory=10000 --cpu 4
If this doesn't work, you can do the following, which will help you learn more about the underlying cause of the problem:
Check the minikube status and make sure the status is Running:
$ minikube status
Or, check the minikube logs:
minikube logs
Finally, if you can't fix it, you may need to delete it and start from scratch:
$ minikube delete && minikube start
Ref: https://github.com/kubernetes/minikube/issues/1498
I will just drop this in here in case anyone finds this question.
As of right now I don't know the versions of the OP's setup, so I'm going to assume they have the latest version that was available when they posted, which was 0.22.1.
Description
I had a similar issue. The cluster was timing out irregularly. One moment I got answers using kubectl cluster-info dump, the next I didn't. Then it worked again, and then it didn't. I found a GitHub bug report with a solution.
Solution
Remove your VirtualBox VM.
Remove the ./minikube folder.
Remove the minikube executable.
Install version 0.19.0.
Verify that minikube is working with: kubectl
Versions
OS: Windows 10 (Home edition)
Minikube bugged version: 0.22.2
Minikube working version: 0.19.0
Kubectl (client): v1.7.0
Kubectl (server): v1.6.0
EDIT:
I kept having issues with minikube after I posted the original answer. I found something that fixed the issue completely.
It's related to the dynamic memory setting in Hyper-V.
Solution
1. Turn off the Hyper-V minikube VM.
2. Go to the VM's settings.
3. Turn off dynamic memory allocation.
4. Assign a decent amount of memory.
5. Save and turn the VM on again.
This should work with any minikube version. See this GitHub issue for progress on an automated solution.
When debugging the minikube commands, e.g.
$ minikube dashboard --loglevel 0 --logtostderr
some proxy issues became visible and could be solved.
I ran into this situation this morning (another Monday!) on macOS 11.3 with minikube v1.19.0.
I ran minikube status and got the following:
E0503 14:15:43.912005 7308 status.go:412] kubeconfig endpoint: got: 127.0.0.1:64041, want: 127.0.0.1:56537
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
Seemed like good advice, so I did run minikube update-context and got this:
πŸŽ‰ "minikube" context has been updated to point to 127.0.0.1:56537
πŸ’— Current context is "minikube"
After which everything worked like it did on Friday.
After Linux security OS patching and a reboot, we were unable to start the Kubernetes service and received the error below.
Error message: "The connection to the server 192.168.1.101:8443 was refused" received while starting the Kubernetes service.
This issue happened because the systemd package got updated during the security patching.
So we took the below actions on each master node to bring up the application:
1. Update the /usr/lib/systemd/system/kubelet.service file by removing the below two lines:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
2. Update the /usr/lib/systemd/system/kube-proxy.service file by removing the below two lines:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
3. Run kube-restart.sh on the master nodes.
4. Run kube-restart.sh on the worker nodes.
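Note that after editing systemd unit files, systemd must reload them before a restart takes effect. If you are not using the kube-restart.sh script, the equivalent is roughly:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet kube-proxy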
Update: I am using minikube version: v1.25.2
The command mentioned in this thread did NOT work:
minikube start --memory=10000 --cpu 4 #this will FAIL
This, however, DID WORK (use --cpus instead; I also changed the values to show the minimum requirements for Docker):
minikube start --memory=1800 --cpus=2 # this will work
minikube start --memory=1800 --cpus 2 # this will also work
minikube delete && minikube start
sudo minikube start --vm-driver=none (start minikube again)
This solved my problem
minikube delete
minikube start
This just restarted the container.

Kubernetes configuration step 2 CentOS 7

From http://kubernetes.io/docs/getting-started-guides/kubeadm/
CentOS Linux release 7.2.1511 (Core)
(1/4) Installing kubelet and kubeadm on your hosts
.....
It's OK.
$sudo docker -v
Docker version 1.10.3, build cb079f6-unsupported
$sudo kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.0.1534+cf7301f16c0363-dirty", GitCommit:"cf7301f16c036363c4fdcb5d4d0c867720214598", GitTreeState:"dirty", BuildDate:"2016-09-27T18:10:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
$sudo systemctl enable docker && systemctl start docker
$sudo systemctl enable kubelet && systemctl start kubelet
It's OK again.
$ sudo kubeadm init
<master/tokens> generated token: "15a340.9910f948879b5d99"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
And at that point the process stopped.
Probably I don't understand something, but Red Hat OpenShift version 3 uses Kubernetes + Docker. I tried the OpenShift v3 Docker version download, and it was OK.
I fixed that issue on a similar setup by declaring the private IP address as localhost in the /etc/hosts file.
Example: /etc/hosts
10.0.0.2 localhost
Then I ran into a problem where kubectl get nodes threw:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This I fixed by copying the generated conf to the local kube config:
cp /etc/kubernetes/kubelet.conf ~/.kube/config
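Note that the kubeadm docs usually suggest copying admin.conf rather than kubelet.conf for this, and fixing ownership; a sketch:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config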
There are a couple of possibilities here:
1) In older kubeadm versions, SELinux blocks access at this point.
2) If you are behind a proxy, you will need to add the usual variables to the kubeadm environment:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Plus the following, which I have not seen documented anywhere:
KUBERNETES_HTTP_PROXY
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY
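A hedged sketch of setting these before running kubeadm init (proxy.example.com and the port are placeholders; sudo -E preserves the environment):
$ export HTTP_PROXY=http://proxy.example.com:3128
$ export HTTPS_PROXY=http://proxy.example.com:3128
$ export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12
$ sudo -E kubeadm init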
.....
<master/apiclient> all control plane components are healthy after 20.585964 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 8.259447 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 66.415198 seconds
kubeadm: I am an alpha version, my authors welcome your feedback and bug reports
kubeadm: please create an issue using https://github.com/kubernetes/kubernetes/issues/new
kubeadm: and make sure to mention #kubernetes/sig-cluster-lifecycle. Thank you!
failed creating essential kube-proxy addon [Timeout: request did not complete within allowed duration]
Fixed. But I installed and configured version 1.2.0 successfully... Oh.