I'm a total beginner with Kubernetes and have tried various methods from Stack Overflow, YouTube, and blogs. None of them worked for me.
When running kubectl commands, I randomly get the following error:
E.g. kubectl get nodes
The connection to the server 192.168.1.105:6443 was refused - did you specify the right host or port?
Name: kubelet
Version: 1.25.3
OS: Rocky Linux (under VMware Player)
Worker nodes: 2x Rocky Linux with the same software versions and specs
What I've tried:
swapoff -a
Restarting services (docker, kubelet)
Completely re-installing the cluster
Adding the config file: ~/.kube/config
Adding daemon.json
The command below will help to identify the issue:
crictl info
Install a CRI
The root cause is that we need to tell kubeadm which CRI (container runtime) the cluster should use.
Every node should use the same CRI. I prefer Docker Engine (via cri-dockerd).
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
https://github.com/Mirantis/cri-dockerd
kubeadm init --cri-socket=unix:///var/run/cri-dockerd.sock
kubeadm join 192.168.1.200:6443 --token 49tl1j.0txot1q9s94o6wu7 --discovery-token-ca-cert-hash sha256:98a02f3a60fc4c00f399726a5d2b61213a714f8c9be3700eb648315094fa9ee0 --cri-socket=unix:///var/run/cri-dockerd.sock
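As a quick sanity check, you can verify the runtime socket before running kubeadm. This is a sketch assuming cri-dockerd was installed with its standard systemd units; the socket path matches the one passed above:
$ systemctl status cri-docker.socket cri-docker.service   # both should be active
$ crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info   # should dump runtime status, not a connection error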
I'm trying to run minikube v0.22.1 and kubectl v1.7.5 on macOS with VirtualBox.
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
However, all kubectl commands fail with "connection refused - did you specify the right host or port?"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
The solution proposed here (sudo ifconfig vboxnet0 up) did not help; the vboxnet0 interface is up.
Any ideas or suggestions are highly appreciated.
If you run
kubectl config get-contexts
Do you get the following?
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
If not, your kubectl context is not set up correctly. To set the context correctly, run:
kubectl config use-context minikube
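To confirm the switch took effect, these standard read-only checks help:
$ kubectl config current-context   # should print: minikube
$ kubectl cluster-info             # should reach the API server rather than being refused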
You may have stopped or saved it for some reason. Sometimes, after you enable/disable addons, you may need to restart it.
1) Restart the minikube VM; stop it first:
$ minikube stop
2) Start it again, making sure you assign enough CPU/memory (the following is just an example of how to pass the flags; adjust the values to the resources available on your machine):
$ minikube start --memory=10000 --cpu 4
If this didn't work, the following will help you learn more about the underlying cause of the problem:
Check minikube status and make sure the status is Running
$ minikube status
Or, check the minikube logs:
$ minikube logs
Finally, if you couldn't fix it, you may need to delete the cluster and start from scratch:
$ minikube delete && minikube start
Ref: https://github.com/kubernetes/minikube/issues/1498
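On newer minikube releases there is also a fuller reset that wipes all profiles plus the local .minikube state; the --all and --purge flags are an assumption for anything as old as v0.22.x, so check minikube delete --help first:
$ minikube delete --all --purge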
I'll just drop this in here in case anyone finds this question.
I don't know the versions of the OP's setup, so I'm going to assume they had the latest version available when they posted, which was 0.22.1.
Description
I had a similar issue. The cluster was timing out intermittently: one moment I got answers from kubectl cluster-info dump, the next I didn't, then it worked again, and then it didn't. I found a GitHub bug report with a solution.
Solution
Remove your VirtualBox VM.
Remove the ~/.minikube folder.
Remove the minikube executable.
Install version 0.19.0.
Verify that minikube is working with kubectl.
Versions
OS: Windows 10 (Home edition)
Minikube bugged version: 0.22.2
Minikube working version: 0.19.0
Kubectl (client): v1.7.0
Kubectl (server): v1.6.0
EDIT:
I kept having issues with minikube after I posted this original answer. I found something that fixed the issue completely.
It's related to the dynamic memory setting in Hyper-V.
Solution
1. Turn off the Hyper-V minikube VM.
2. Go to the VM's settings.
3. Turn off dynamic memory allocation.
4. Assign a decent amount of memory.
5. Save and turn the VM on again.
This should work with any minikube version. See this GitHub issue for progress on an automated solution.
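For reference, steps 1-5 can also be scripted with the standard Hyper-V PowerShell cmdlets. This is a sketch assuming the VM is named minikube; 4GB is a placeholder, not a recommendation:
PS C:\> Stop-VM -Name minikube
PS C:\> Set-VMMemory -VMName minikube -DynamicMemoryEnabled $false -StartupBytes 4GB
PS C:\> Start-VM -Name minikube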
When debugging the minikube commands, e.g.
$ minikube dashboard --loglevel 0 --logtostderr
some proxy issues became visible and could be solved.
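If the logs point at a proxy, one documented fix is to exclude the VM's IP from proxying before using kubectl (the export assumes a Unix shell):
$ export NO_PROXY=$(minikube ip)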
I ran into this situation this morning (another Monday!) on macOS 11.3 with minikube v1.19.0.
I ran minikube status and got the following:
E0503 14:15:43.912005 7308 status.go:412] kubeconfig endpoint: got: 127.0.0.1:64041, want: 127.0.0.1:56537
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
Seemed like good advice, so I did run minikube update-context and got this:
🎉 "minikube" context has been updated to point to 127.0.0.1:56537
💗 Current context is "minikube"
After which everything worked like it did on Friday.
After Linux security OS patching and a reboot, we were unable to start the Kubernetes service and received the error below.
Error message: "The connection to the server 192.168.1.101:8443 was refused" while starting the Kubernetes service.
This issue happened because the systemd package was updated during the security patching.
So we took the actions below on each master node to bring the application back up:
1. Update the /usr/lib/systemd/system/kubelet.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
2. Update the /usr/lib/systemd/system/kube-proxy.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
3. Run kube-restart.sh on the master nodes.
4. Run kube-restart.sh on the worker nodes.
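kube-restart.sh is a site-specific script whose contents aren't shown here; with plain systemd, the equivalent after editing unit files would be a daemon reload followed by service restarts (a sketch, assuming the unit names above):
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet kube-proxy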
Update: I am using minikube version: v1.25.2
The command mentioned in this thread did NOT work:
minikube start --memory=10000 --cpu 4 #this will FAIL
This, however, DID work (use --cpus instead; I also changed the values to show the minimum requirements for Docker):
minikube start --memory=1800 --cpus=2 # this will work
minikube start --memory=1800 --cpus 2 # this will also work
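These values can also be persisted so a plain minikube start picks them up every time (minikube config set is a standard subcommand; the values mirror the example above):
$ minikube config set cpus 2
$ minikube config set memory 1800
$ minikube start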
minikube delete && minikube start
sudo minikube start --vm-driver=none (starts minikube again)
This solved my problem.
minikube delete
minikube start
This just restarted the container, which fixed it for me.
I've installed docker on Debian 9.3 and created a swarm using 4 computers.
Now I am trying to install Kubernetes locally and am having some trouble getting things to work.
$ uname -a
Linux tma 4.9.0-4-amd64 #1 SMP Debian 4.9.65-3 (2017-12-03) x86_64 GNU/Linux
I'm trying to follow this guide:
https://kubernetes.io/docs/getting-started-guides/minikube/
I want to use Deb9 since that is what I use in our lab.
I am using KVM as the hypervisor.
Has anyone installed Kubernetes locally via Minikube successfully?
I get the following error when I issue kubectl cluster-info, both as sudo and non-sudo:
$ kubectl cluster-info
Kubernetes master is running at localhost:8080
To further debug and diagnose cluster problems, use kubectl cluster-info dump.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have Minikube running on Debian.
I can reproduce your error if I don't have anything running. Most probably your cluster didn't start; you'll need to debug further.
This is a great Debian 9 + Minikube resource: https://medium.com/linagora-engineering/install-k8s-minikube-on-top-of-kvm-on-debian-9-9cd5b646063c
I've got Kubernetes Minikube on my laptop (4 cores, 8 GB RAM). I just performed the basic installation steps (got minikube and kubectl, enabled virtualization in the BIOS) and I am able to start the cluster:
C:\Users\me>minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
However, when I try to interact with the cluster, I always get the same error, for example:
C:\Users\me>kubectl get pods --context=minikube
Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: A connection attempt failed because the connected party
did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
I ran minikube ip, pinged the resulting IP, and got a response. I also tried giving it more memory (3 GB vs. the standard 2 GB) and nothing changed.
Am I doing something wrong here?
Thanks!
I had the same issue. I found out that kubectl couldn't connect to the cluster and would throw this error whenever I was on a VPN connection. When I turned off my VPN client, it started working fine.
I think it could be a problem with the cluster itself; when I run minikube status I get mixed results of the cluster running and the cluster stopped:
First run:
c:\> minikube status
minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
Second run:
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
Third run:
minikube: Running
cluster: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
The service is flapping.
UPDATED:
Connecting to the minikube VM using minikube ssh, I realized the kubeconfig file has the wrong path separators for the certificates generated by minikube's automatic configuration. The paths in the kubeconfig file read \var\lib\localkube\certs\ca.cert, but they have to be /var/lib/localkube/certs/ca.cert, and so on.
To update the file I had to copy the contents of the original file to my desktop, fix the directory separators, save the corrected file to /var/lib/localkube/kubeconfig, and restart the service using:
sudo systemctl restart localkube
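A quicker way to make the same edit in place from inside the VM (a sketch, assuming the layout described above; the cp makes a backup first):
$ sudo cp /var/lib/localkube/kubeconfig /var/lib/localkube/kubeconfig.bak
$ sudo sed -i 's|\\|/|g' /var/lib/localkube/kubeconfig   # replace every backslash with a forward slash
$ sudo systemctl restart localkube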
I hope everyone can use minikube with this tip.
If you keep hitting the 8443 connection issue after changing work environments and there is no other clue, it may be simplest to turn off TLS verification for the local minikube cluster:
https://github.com/robertluwang/docker-hands-on-guide/blob/master/minikube-no-tls-verify.md
Hope it is helpful for you.
BR,
Robert
From the documentation, for troubleshooting:
Run minikube start --alsologtostderr -v=7 to debug crashes
I had the same problem. Check whether a VPN service is running by looking at the task manager; in my case, a service of my VPN was running. Kill the task and try re-running the command shown above.
From http://kubernetes.io/docs/getting-started-guides/kubeadm/
CentOS Linux release 7.2.1511 (Core)
(1/4) Installing kubelet and kubeadm on your hosts
.....
it's ok
$ sudo docker -v
Docker version 1.10.3, build cb079f6-unsupported
$ sudo kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.0.1534+cf7301f16c0363-dirty", GitCommit:"cf7301f16c036363c4fdcb5d4d0c867720214598", GitTreeState:"dirty", BuildDate:"2016-09-27T18:10:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
$ sudo systemctl enable docker && sudo systemctl start docker
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
it's ok again
$ sudo kubeadm init
<master/tokens> generated token: "15a340.9910f948879b5d99"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
And at that point the process stopped.
Probably I don't understand something, but Red Hat OpenShift version 3 uses Kubernetes + Docker. I tried the OpenShift v3 Docker version download - it was OK.
I fixed that issue on a similar setup by declaring the private IP address as localhost in the /etc/hosts file.
Example: /etc/hosts
10.0.0.2 localhost
Then I ran into a problem where kubectl get nodes threw:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I fixed this by copying the generated conf to the local kube config:
cp /etc/kubernetes/kubelet.conf ~/.kube/config
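For comparison, the standard post-kubeadm-init setup documented upstream copies admin.conf rather than kubelet.conf, and fixes ownership so kubectl works without sudo:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config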
There are a couple of possibilities here:
1) In older kubeadm versions, SELinux blocks access at this point.
2) If you are behind a proxy, you will need to add the usual proxy variables to the kubeadm environment:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Plus, which I have not seen documented anywhere:
KUBERNETES_HTTP_PROXY
KUBERNETES_HTTPS_PROXY
KUBERNETES_NO_PROXY
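A sketch of exporting these before running kubeadm; the proxy address and CIDRs are placeholders, not values from this thread:
$ export HTTP_PROXY=http://proxy.example.com:3128
$ export HTTPS_PROXY=$HTTP_PROXY
$ export NO_PROXY=127.0.0.1,localhost,10.96.0.0/12,192.168.0.0/16
$ export KUBERNETES_HTTP_PROXY=$HTTP_PROXY
$ export KUBERNETES_HTTPS_PROXY=$HTTPS_PROXY
$ export KUBERNETES_NO_PROXY=$NO_PROXY
$ sudo -E kubeadm init   # -E preserves the exported environment for kubeadm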
.....
<master/apiclient> all control plane components are healthy after 20.585964 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 8.259447 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 66.415198 seconds
kubeadm: I am an alpha version, my authors welcome your feedback and bug reports
kubeadm: please create an issue using https://github.com/kubernetes/kubernetes/issues/new
kubeadm: and make sure to mention #kubernetes/sig-cluster-lifecycle. Thank you!
failed creating essential kube-proxy addon [Timeout: request did not complete within allowed duration]
Fixed. But I had installed and configured version 1.2.0 successfully... Oh