Remote kubernetes server unreachable

I tried to install Kubernetes with Docker Desktop. However, as soon as I type
kubectl get nodes
I get a "Remote kubernetes server unreachable" error.
I0217 23:42:56.224000 26220 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: dial tcp 172.28.112.98:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Any ideas on how to fix this?

Have you got more than one context in your kubeconfig?
You can check this with kubectl config get-contexts.
If necessary, change your context to Docker Desktop's Kubernetes using kubectl config use-context docker-desktop.
Is it possible that you tried minikube at some point and it has left a cluster/context behind in your .kube\config?
Configure Access to Multiple Clusters
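For example (a minimal sketch; docker-desktop and minikube are the typical default context names, so yours may differ):

# list all contexts known to kubectl; the current one is marked with *
kubectl config get-contexts

# switch to the Docker Desktop cluster
kubectl config use-context docker-desktop

# confirm the switch
kubectl config current-context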

Go to your HOME/.kube directory and check the config file. There is a possibility that the server listed there is old or no longer reachable.
You can copy the new config file (from the remote server, or from the config directory of tools like k3s) and add it to or replace your HOME/.kube/config file.
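For k3s specifically, this might look like the following (a sketch; /etc/rancher/k3s/k3s.yaml is the default k3s kubeconfig path, and the server address inside it must be edited if you access the cluster from another machine):

# back up the existing kubeconfig before overwriting it
cp ~/.kube/config ~/.kube/config.bak

# copy the k3s-generated kubeconfig into place
sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config

# if connecting from another machine, replace 127.0.0.1 in the file
# with the server's reachable IP first
kubectl get nodes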

The same error message may also appear when you switch from a local k8s cluster to a remote cluster that requires a VPN to connect, and you are not connected to the VPN.

Related

Can't connect to my private cluster using kubectl (Unable to connect to the server: dial tcp )

I have 2 clusters with the same config. Until yesterday I was able to connect to both of them, but today I could not connect to one of them. All apps are working fine; I just get this error when using kubectl:
Unable to connect to the server: dial tcp IP:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because the connected host has failed to respond
I tried scaling my cluster up and down from the interface (the scaling works, but I still can't use kubectl).
Can someone help please?
It was a network conflict with the Docker bridge network: Docker's default bridge network overlapped with my GKE endpoint, so when I installed Docker on my deployment machine (to automatically build and push images) I lost the connection to this Kubernetes cluster.
So all I had to do was disable the bridge network, and I got my connection back to this cluster.
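One way to do that (a sketch, assuming a Linux host where the Docker daemon reads /etc/docker/daemon.json; "bridge": "none" disables the default docker0 bridge, while "bip" would instead move it onto a non-conflicting subnet):

# disable Docker's default bridge so it cannot clash with the cluster endpoint
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bridge": "none"
}
EOF

# restart the Docker daemon to apply the change
sudo systemctl restart docker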

Telepresence with SSH Tunnel for Kubernetes

I have a remote, privately managed Kubernetes cluster that I reach by going via an intermediary VM. To use kubectl from my machine I have set up an SSH tunnel that hops onto my VM and then onto my master node - this works fine.
I am trying to configure Telepresence (https://www.telepresence.io/), which attempts to start up (correctly detecting that kubectl works) but then fails due to a timeout.
subprocess.TimeoutExpired: Command '['ssh', '-F', '/dev/null', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-q', '-p', '65367', 'telepresence@127.0.0.1', '/bin/true']' timed out after 5 seconds
Is this a setup that telepresence should support or is the presence of an intermediary VM going to be a roadblock for me?
Telepresence 2 should support this better as it installs a sidecar container that makes it more resilient to interrupted connections. I would give the new version a try to see if you're still seeing timeout errors.
https://www.getambassador.io/docs/latest/telepresence/quick-start/
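For reference, the kind of tunnel the question describes can be set up roughly like this (a sketch; the host names, user names, and ports are placeholders, not taken from the original post):

# forward a local port to the master's API server, hopping through the jump VM
ssh -N -L 6443:localhost:6443 -J user@INTERMEDIARY_VM user@MASTER_NODE &

# point kubectl at the local end of the tunnel (the API server's certificate
# must cover 127.0.0.1, or TLS verification has to be relaxed)
kubectl --server=https://127.0.0.1:6443 get nodes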

kubectl doesn't allow viewing resources in the cluster from a node in GCP

I have a problem that I can't solve. I have a k8s cluster on GCP. I can use kubectl from the Cloud Shell that opens directly to the cluster, but when I use kubectl from a node I get "The connection to the server localhost:8080 was refused - did you specify the right host or port?".
I also tried using the .kube/config file; it works for about 5 minutes and then fails again.
Maybe someone who uses GCP can help me. Thank you.
@irvifa I use a Kubernetes cluster provided by GCP. I can connect directly from Cloud Shell, but when I connect from an instance, kubectl reports a client version, while for the server it gives the error "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
If you didn't use COS (Container-Optimized OS), you need to run
gcloud container clusters get-credentials "CLUSTER NAME"
By default, COS will get credentials when you access the node.
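In full, the command looks something like this (a sketch; the cluster name, zone, and project are placeholders):

# fetch credentials for the cluster and write them into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project

# kubectl should now target the GKE cluster instead of localhost:8080
kubectl get nodes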

Unable to communicate between pods on Kubernetes on Rancher using Google Cloud?

I have created orderer and cli pods. When I go to the cli shell and create a channel, it is not able to connect to the orderer.
Error: failed to create deliver client: orderer client failed to connect to orderer:7050: failed to create new connection: context deadline exceeded
The orderer's port, i.e. 7050, is open, and when I go to the orderer shell and do telnet localhost 7050 it connects, but when I specify the IP of the pod it does not work.
I am using Google Cloud for deployment. I have also added firewall rules for ingress and egress for all IPs and all ports.
Any help will be much appreciated.
I was missing this variable
ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
After adding this variable it worked
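Without it the orderer binds only to localhost, which matches the symptom above: telnet localhost 7050 works inside the pod, but connections to the pod IP fail. One way to apply the variable (a sketch; deployment/orderer is an assumed resource name, so adjust it to however your orderer is deployed):

# set the listen address on the orderer's pod template; this triggers a rollout
kubectl set env deployment/orderer ORDERER_GENERAL_LISTENADDRESS=0.0.0.0

# verify the variable is now present in the pod spec
kubectl set env deployment/orderer --list | grep ORDERER_GENERAL_LISTENADDRESS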

Kubernetes is running but not listing the worker node

I have set up a Kubernetes cluster on my Ubuntu machine. It was working before,
but since I restarted the machine it is not working properly.
I am getting the following error.
root@master:~# kubectl get nodes
The connection to the server 192.*.*.*:6443 was refused - did you specify the right host or port?
root@master:~#
Can you explain how you installed Kubernetes? For instance, are you using minikube?
Perhaps you just need to start minikube:
minikube start
This takes a few minutes to start up, so be patient.
An alternative cause is that your IP may have changed. Does ip addr match the IP kubectl is trying to connect to? If it doesn't, you may need to edit your ~/.kube/config file.
Is anything listening on port 6443?
netstat -pant | grep 6443
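If the cluster was set up with kubeadm (an assumption; the question doesn't say), the API server runs as a static pod managed by the kubelet, so after a reboot the following checks are worth running (a sketch):

# swap must stay off for the kubelet to start; a reboot can re-enable it
sudo swapoff -a

# make sure the kubelet came back up after the restart
sudo systemctl status kubelet
sudo systemctl restart kubelet

# watch the kubelet logs while it tries to start the control-plane pods
sudo journalctl -u kubelet -f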