I have a remote, privately managed Kubernetes cluster that I reach via an intermediary VM. To use kubectl from my machine I have set up an SSH tunnel that hops onto my VM and then onto my master node, and this works fine.
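For reference, the tunnel is roughly along these lines (a sketch with hypothetical host and user names; the real setup differs):
# forward a local port through the jump VM to the API server on the master
# node; kubectl is then pointed at https://localhost:6443
ssh -N -L 6443:localhost:6443 -J user@jumpvm user@master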
I am trying to configure Telepresence (https://www.telepresence.io/), which starts up (correctly detecting that kubectl works) but then fails with a timeout:
subprocess.TimeoutExpired: Command '['ssh', '-F', '/dev/null', '-oStrictHostKeyChecking=no', '-oUserKnownHostsFile=/dev/null', '-q', '-p', '65367', 'telepresence@127.0.0.1', '/bin/true']' timed out after 5 seconds
Is this a setup that Telepresence should support, or is the presence of an intermediary VM going to be a roadblock for me?
Telepresence 2 should support this better, as it installs a sidecar container that makes it more resilient to interrupted connections. I would give the new version a try and see whether you still hit the timeout errors.
https://www.getambassador.io/docs/latest/telepresence/quick-start/
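If it helps, the Telepresence 2 flow is roughly the following (a sketch based on the quick start; it assumes kubectl is already pointed at the cluster):
# connect the local machine to the cluster network
telepresence connect
# confirm the connection is up
telepresence status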
I'm trying to investigate an issue with random 'Connection reset by peer' errors and long (up to 2 minutes) PDO connection initializations, but I'm failing to find a solution.
Similar issue: https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/, but that is supposed to be fixed in the Kubernetes version I'm running.
GKE config details:
GKE is running version 1.20.12-gke.1500, with a NAT network configuration and a router. The cluster has 2 nodes, and the router has 2 static IPs assigned, with dynamic port allocation and a range of 32728-65536 ports per VM.
On the Kubernetes side:
deployments: a Docker image with local nginx, php-fpm, and the Google Cloud SQL proxy
services: a LoadBalancer to expose the deployment
To reproduce the issue, I created a simple script that connects to the database in a loop and runs a simple count query. I ruled out problems with the database server itself by running the script on a standalone GCE VM, where I saw no issues. When I run the script on any of the application pods in the cluster, I get random 'Connection reset by peer' errors. I have tested the script both through the Google Cloud SQL proxy service and against the database IP directly, with the same random connection issues.
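For illustration, the test amounted to a loop like this (a shell sketch of the actual PHP script, assuming a MySQL backend; DB_HOST, DB_USER, DB_PASS, and the table name are placeholders):
# connect and run a simple count query in a loop, logging any failures
for i in $(seq 1 1000); do
  mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" -e 'SELECT COUNT(*) FROM items;' mydb > /dev/null || echo "iteration $i failed"
done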
Any help would be appreciated.
Update
On https://cloud.google.com/kubernetes-engine/docs/release-notes I can see that a fix has been released that potentially addresses what I'm seeing: "The following GKE versions fix a known issue in which random TCP connection resets might happen for GKE nodes that use Container-Optimized OS with Docker (cos). To fix the issue, upgrade your nodes to any of these versions:"
I'm updating the nodes this evening, so I hope that will solve the issue.
Update
Updating the cluster and nodes to version 1.20.15-gke.3400 via the Google Cloud console resolved the random connection resets.
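For anyone preferring the CLI over the console, the equivalent is roughly (a sketch; CLUSTER_NAME, POOL_NAME, and ZONE are placeholders):
# upgrade the control plane first
gcloud container clusters upgrade CLUSTER_NAME --zone ZONE --master --cluster-version 1.20.15-gke.3400
# then upgrade the node pool
gcloud container clusters upgrade CLUSTER_NAME --zone ZONE --node-pool POOL_NAME --cluster-version 1.20.15-gke.3400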
I tried to install Kubernetes with Docker Desktop. However, as soon as I type in
kubectl get nodes
I get a "Remote kubernetes server unreachable" error:
I0217 23:42:56.224000 26220 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: dial tcp 172.28.112.98:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Any ideas on how to fix this?
Have you got more than one context in your kubeconfig?
You can check this with kubectl config get-contexts.
If necessary change your context to Docker Desktop Kubernetes using kubectl config use-context docker-desktop.
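For example (the second context here is hypothetical; your list will differ):
kubectl config get-contexts
# CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
#           docker-desktop   docker-desktop   docker-desktop
# *         minikube         minikube         minikube
kubectl config use-context docker-desktop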
Is it possible that you tried minikube at some point and it left a cluster/context in your .kube\config?
See Configure Access to Multiple Clusters in the Kubernetes docs.
Go to your HOME/.kube directory and check the config file. The server mentioned there may be old or unreachable.
You can copy the new config file (from the remote server, or from the config directory of tools like k3s) and add it to, or replace, your HOME/.kube/config file.
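For example, with k3s the generated kubeconfig lives at its default path (a sketch; adjust paths and ownership to your setup):
# replace the default kubectl config with the one k3s generated
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER" ~/.kube/config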
The same error message may also happen when you switch from a local k8s cluster to a remote cluster that requires a VPN to connect, and you are not connected to the VPN.
My OKD 4.8 single-node deployment has been up for more than a month. Today it started acting up (pods were not being created), so I thought I would reboot the node: I shut it down via the AWS console and then started it again.
However, after the restart it is not responding. The node is running, but OKD is not accessible: neither the console nor the API can be reached. Any oc command results in "The connection to the server api.api1.hostname.info:6443 was refused - did you specify the right host or port?"
The domain name and all zones are hosted by AWS.
Any troubleshooting ideas?
I am new to Kubernetes and trying to throw together a quick learning project, but I am confused about how to connect a pod to a local service.
I am storing some config in a ZooKeeper instance running on my host machine, and I am trying to connect a pod to it to grab that config.
However, I cannot get it to work. I've tried the magic 10.0.2.2 address that I've read about, but that did not work. I also tried creating a Service and an Endpoints object (see the sketch below), but again to no avail. Any help would be appreciated, thanks!
Also, for background I'm using minikube on macOS with the hyperkit vm-driver.
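For reference, the Service/Endpoints attempt looked roughly like the following (a sketch: 192.168.64.1 is an assumed host IP as seen from the hyperkit VM, and 2181 is ZooKeeper's default client port):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  # no selector, so Kubernetes will not create Endpoints automatically
  ports:
    - port: 2181
---
apiVersion: v1
kind: Endpoints
metadata:
  name: zookeeper         # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.64.1  # assumed host IP from inside the hyperkit VM
    ports:
      - port: 2181
EOF
Pods could then reach the host ZooKeeper at zookeeper:2181, provided the VM can actually route to the host at that address.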
I have set up a Kubernetes cluster on my Ubuntu machine. It was working before, but after I restarted the machine it is no longer working properly.
I am getting the following error:
root@master:~# kubectl get nodes
The connection to the server 192...*:6443 was refused - did you specify the right host or port?
root@master:~#
Can you explain how you installed Kubernetes? For instance, are you using minikube?
Perhaps you just need to start minikube:
minikube start
This takes a few minutes to start up, so be patient.
An alternative cause is that your IP may have changed. Does the output of ip addr match the IP it is trying to connect to? If it doesn't, you may need to edit your ~/.kube/config file.
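A quick way to compare the two:
# the address kubectl is trying to reach
grep server ~/.kube/config
# the addresses currently assigned on this machine
ip addr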
Is anything listening on port 6443?
netstat -pant | grep 6443
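If nothing shows up, the API server itself isn't running; on a kubeadm-style install (an assumption here) the next step would be to check kubelet, which manages the API server's static pod:
# check that kubelet is running and look at its recent logs
sudo systemctl status kubelet
sudo journalctl -u kubelet --since "10 minutes ago"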