I have set up a Kubernetes cluster with Kubespray.
After I restart the node and check its status, I get the following:
$ kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Environment:
OS : CentOS 7
Kubespray
kubelet version: 1.22.3
Need your help on this.
Regards,
Zain
This worked for me; I'm using minikube.
When you check the minikube status by running minikube status, you'll probably get something like this:
E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got:
127.0.0.1:55900, want: 127.0.0.1:49736
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
To fix it, I just followed these steps:
minikube update-context
minikube start
The steps below may solve your issue.
The kubelet may be down; use the following commands on the master node:
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version   (traces the openat syscalls, so you can see which kubeconfig files kubectl actually tries to read)
Then try using kubectl get nodes.
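If kubectl still cannot reach 127.0.0.1:6443 after that, it is worth checking whether the kubelet and the API server container are actually running. A minimal sketch, assuming a systemd-managed kubelet on the master node and that crictl is installed (plain systemctl/journalctl/crictl usage, nothing specific to this cluster):
sudo systemctl status kubelet                  # is the kubelet active (running)?
sudo journalctl -u kubelet -n 50 --no-pager    # recent kubelet logs usually show the real cause
sudo crictl ps -a | grep apiserver             # is the kube-apiserver container up?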
Thank you Sai for your inputs. The journalctl -xeu kubelet output showed: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory. I enabled and restarted the cri-dockerd service:
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
then sudo systemctl start kubelet, and finally it works for me.
#kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
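For anyone hitting the same cri-dockerd error, a quick way to confirm the socket and the services are healthy (a sketch; assumes the cri-dockerd.service unit name used above):
ls -l /var/run/cri-dockerd.sock        # the socket the kubelet was failing to dial
sudo systemctl status cri-dockerd.service
sudo systemctl status kubelet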
This link gives more info: https://github.com/kubernetes-sigs/kubespray/issues/8734
Regards, Zain
Sometimes when Minikube is already running and I try to run any kubectl command (like kubectl get pods) I get this error:
Unable to connect to the server: dial tcp 192.168.99.101:8443
So I stop Minikube and start it again and all kubectl commands work fine, but then after a while if I try to run any kubectl command I get the same error as above.
If I type minikube ip I get 192.168.99.100. Why does kubectl try to connect to 192.168.99.101 (as mentioned in the error) when Minikube is running on 192.168.99.100?
Note that I'm very new to Kubernetes.
kubectl config get-contexts gives me this output:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
This is minikube logs output https://pastebin.com/kb5jNRyW
This usually happens when the IP of your VM has changed and minikube is still pointing to the previous IP. You can check with minikube ip and then compare it with the IP of the VM that was created; they will be different.
You can also try minikube status; your output will be:
minikube: Running
cluster: Stopped
kubectl: Misconfigured: pointing to stale minikube-vm.
To fix the kubectl context, run minikube update-context
You can try minikube update-context, and if that still doesn't work, try minikube start followed by minikube update-context. It won't download everything again; it will only start the VM if it is shut down.
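If you want to see the mismatch yourself before fixing it, you can compare the VM's IP with the API server address stored in your kubeconfig (a sketch using standard minikube/kubectl commands):
minikube ip                                        # actual IP of the minikube VM
kubectl config view -o jsonpath='{.clusters[?(@.name=="minikube")].cluster.server}'   # address kubectl is using
minikube update-context                            # rewrites the kubeconfig entry to match the current IP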
I have Kubernetes + minikube installed (macOS 10.12.6), but while trying to start minikube I get constant errors:
$: minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0601 15:24:50.571967 67567 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: Process exited with status 1
I've also tried minikube delete followed by minikube start, but that didn't help (Minikube never starts - Error restarting cluster). kubectl config use-context minikube was also done.
I have minikube version: v0.26.1
It looks to me like the kubeadm.yaml file is missing or misplaced.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
In your case, the steps below should complete the initialization process successfully:
minikube stop
minikube delete
rm -fr $HOME/.minikube
minikube start
In case you mixed Kubernetes and minikube environments, I suggest inspecting the $HOME/.kube/config file
and deleting the minikube entries to avoid problems with reinitialization.
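If you prefer not to hand-edit the file, kubectl can remove the stale minikube entries for you (a sketch; these are standard kubectl config subcommands and only touch $HOME/.kube/config):
kubectl config delete-context minikube    # remove the context entry
kubectl config delete-cluster minikube    # remove the cluster entry
kubectl config unset users.minikube       # remove the user/credential entry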
If minikube still refuses to start, please post the logs so they can be analyzed. To get a detailed log, start minikube this way:
minikube start --v=9
When I run the kubectl version command, I get the following error message:
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
How do I resolve this?
You can get relevant information about the current client/server configuration by using the following command:
kubectl config view
Now you can update or set the Kubernetes context accordingly with the following command:
kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT
You can take further action on the kubeconfig file; the following command will provide you with all the necessary information:
kubectl config --help
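For example, with a hypothetical kubeconfig that contains a minikube entry (the context name is illustrative; use whatever kubectl config get-contexts shows in your environment), the flow would look like:
kubectl config get-contexts            # lists all contexts; the current one is marked with *
kubectl config use-context minikube    # switch to the context you actually want
kubectl config current-context         # confirm the switch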
First you have to run
minikube start
in your terminal. This will do the following for you:
Restarting existing virtualbox VM for "minikube" ...
Waiting for SSH access ...
"minikube" IP address is 192.168.99.100
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.3-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Pulling images required by Kubernetes v1.14.1 ...
Relaunching Kubernetes v1.14.1 using kubeadm ...
Waiting for pods: apiserver proxy etcd scheduler controller dns
Updating kube-proxy configuration ...
Verifying component health ......
kubectl is now configured to use "minikube"
Done! Thank you for using minikube!
If you use minikube, then you should run kubectl config use-context minikube.
If you use the latest Docker Desktop that comes with Kubernetes, then you should run kubectl config use-context docker-for-desktop.
I was facing the same issue on Ubuntu 18.04.1 LTS.
The solution provided here worked for me.
Just putting the same data here:
Get current cluster name and Zone:
gcloud container clusters list
Configure Kubernetes to use your current cluster:
gcloud container clusters get-credentials [cluster name] --zone [zone]
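For example, with a hypothetical cluster named my-cluster in zone us-central1-a (substitute the values reported by the list command above):
gcloud container clusters list
gcloud container clusters get-credentials my-cluster --zone us-central1-a   # writes credentials into ~/.kube/config
kubectl get nodes                                                            # should now reach the GKE control plane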
Hope it helps.
I had the same issue when I tried to use Kubernetes installed with Docker. It turned out that it was not enabled by default.
First I enabled Kubernetes in the Docker options, and then I changed the context to docker-for-desktop:
kubectl config get-contexts
kubectl config use-context docker-for-desktop
It solved the issue.
This problem occurs because of minikube. Restarting minikube will solve it. Run the commands below and it will work:
minikube stop
minikube delete
minikube start
I was facing the same problem accessing the GKE master from Google Cloud Shell.
Then I followed this GCloud doc to solve it.
Open GCloud Shell
Get External IP of the current GCloud Shell with:
dig +short myip.opendns.com @resolver1.opendns.com
Add this External IP into the "Master authorized networks" section of the GKE cluster - with a CIDR suffix of /32
After that, running kubectl get nodes from the GCloud Shell worked right away.
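The same change can be made from the command line instead of the console. A sketch using gcloud (the cluster name and zone are placeholders, and the flag replaces the entire authorized-network list, so include any existing CIDRs as well):
EXTERNAL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)
gcloud container clusters update my-cluster --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks "${EXTERNAL_IP}/32"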
I got a similar problem when I ran
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
And here's what I tried and what finally worked.
First, I installed Docker Desktop on Mac (Version 2.0.0.3).
Then I installed the kubectl with command
$ brew install kubectl
.....
==> Pouring kubernetes-cli-1.16.0.high_sierra.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
rm '/usr/local/bin/kubectl'
To force the link and overwrite all conflicting files:
brew link --overwrite kubernetes-cli
To list all files that would be deleted:
brew link --overwrite --dry-run kubernetes-cli
Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
.....
That doesn't matter; we have already got kubectl.
Then I installed minikube with the command
$ brew cask install minikube
...
==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
minikube was successfully installed!
Start minikube the first time (VirtualBox not installed):
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Downloading VM boot image ...
> minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
> minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [-] 100.00% 7.75 MiB p/s 18s
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
Retriable failure: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
...
Unable to start VM
Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
Suggestion: Install VirtualBox, or select an alternative value for --vm-driver
Documentation: https://minikube.sigs.k8s.io/docs/start/
Related issues:
https://github.com/kubernetes/minikube/issues/3784
Install VirtualBox, then start minikube a second time (VirtualBox installed):
$ minikube start
13:37:01.006849 35511 cache_images.go:79] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 failed: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
E1002 13:37:33.632298 35511 start.go:706] Error caching images: Caching images for kubeadm: caching images: caching image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
Unable to load cached images: loading cached images: loading image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: stat /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: no such file or directory
minikube v1.4.0 on Darwin 10.13.6
Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
E1002
Downloading kubeadm v1.16.0
Downloading kubelet v1.16.0
Pulling images ...
Launching Kubernetes ...
Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new/choose
Problems detected in kube-addon-manager [b17d460ddbab]:
error: no objects passed to apply
error: no objects passed to apply
Start minikube a third time:
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Using the running virtualbox "minikube" VM ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Relaunching Kubernetes using kubeadm ...
It still got stuck on Relaunching.
I enabled Kubernetes in the Docker Preferences settings, restarted my Mac, and switched the Kubernetes context to docker-for-desktop.
Oh, kubectl version works this time, but with the docker-for-desktop context:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Start minikube a fourth time (after restarting the system, maybe):
$ minikube start
minikube v1.4.0 on Darwin 10.13.6
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
Starting existing virtualbox VM for "minikube" ...
Waiting for the host to be provisioned ...
Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
Relaunching Kubernetes using kubeadm ...
Waiting for: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"
Finally, it works with the minikube context:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
I checked the firewall port and it was closed; I opened it and it started working.
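On CentOS 7 (as in the original question) with firewalld, opening the relevant ports would look roughly like this (a sketch; 6443 is the default kube-apiserver port shown in the error above, and 10250 is the kubelet port):
sudo firewall-cmd --permanent --add-port=6443/tcp    # kube-apiserver
sudo firewall-cmd --permanent --add-port=10250/tcp   # kubelet API
sudo firewall-cmd --reload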
If you are using Azure and have recently changed your password, try this:
az account clear
az login
After logging in successfully:
az aks get-credentials --name project_name --resource-group resource_group_name
Now when you run
kubectl get nodes
you should see something. Also, make sure you are using the correct kubectl context.
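A quick way to double-check the context after pulling the AKS credentials (standard kubectl commands; the context name az aks get-credentials writes is usually the cluster name passed as --name above):
kubectl config get-contexts
kubectl config use-context project_name   # the cluster name used in az aks get-credentials
kubectl get nodes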
My problem was that I use two virtual networks on my VMs. The network Kubernetes uses is always the one of the default gateway; however, the communication network between my VMs was the other one.
You can force Kubernetes to use a different network by using the following flags:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=xxx.xxx.xxx.xxx --apiserver-advertise-address=xxx.xxx.xxx.xxx
Replace xxx.xxx.xxx.xxx with the communication IP address of your K8s master.
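To find that communication IP, you can list the addresses on the interface your nodes use to talk to each other (standard iproute2 commands; eth1 is just a placeholder for your inter-VM interface):
ip -4 addr show eth1    # eth1 is a placeholder; pick the interface on the inter-VM network
ip route                # shows which interface carries the default gateway (the one kubeadm would pick by default)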
I have two contexts, and I got this error when I was in the wrong one of the two; switching the context resolved the error.
To see your current context: kubectl config current-context
To see the contexts you have: kubectl config view
To switch context: kubectl config use-context context-cluster-name
Adding this here so it can help someone with a similar problem.
In our case, we had to configure our VPC network to export its custom routes for the VPC peering "gke-jn7hiuenrg787hudf-77h7-peer" in our project to the control plane's VPC network.
The control plane's VPC network is already configured to import custom routes. This provides a path for the control plane to send packets back to on-premises resources.
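For reference, exporting custom routes on the peering can be done with gcloud (a sketch; the network name here is a placeholder, and the peering name is the one shown by the list command in your project):
gcloud compute networks peerings list
gcloud compute networks peerings update gke-jn7hiuenrg787hudf-77h7-peer \
  --network my-vpc-network \
  --export-custom-routes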
Step 1: Run this command to see the list of contexts:
kubectl config view
Step 2: Now switch to the context where you want to work:
kubectl config use-context [context-name]
For example:
kubectl config use-context docker-desktop
I faced the same issue; it might be that your IP was not added to the authorized networks list in the Kubernetes cluster. Simply navigate to:
GCP console -> Kubernetes Engine -> Click into the Clusters you wish to interact with
On the target cluster's page, look for:
Control plane authorized networks -> click pencil icon -> Add Authorized Network
Add your external IP with a CIDR suffix of /32 (xxx.xxx.xxx.xxx/32).
One way to get your external IP from the terminal/CMD:
curl -4 ifconfig.co