Unable to connect to the server: dial tcp accounts.google.com:443: getsockopt: operation timed out - kubernetes

I'm trying to get the pod list from my gcloud project.
I created the gcloud project in GCP using a different laptop.
Now I'm using a different machine, but I'm logged into the same GCP account and using the same project.
When I run the command kubectl get pods I get the error below:
Unable to connect to the server: dial tcp a.b.c.d:443: getsockopt: operation timed out
I tried to add a --verbose argument, but that doesn't seem to be valid.
How can I proceed further in resolving this error?

gcloud container clusters get-credentials my-cluster-name will log you into your cluster locally
From the docs:
"updates a kubeconfig file with appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine." - src

Related

Helm: how set Kubernetes cluster Endpoint

I have two containers:
one hosting the cluster (minikube)
one where the deployment is triggered (with helm)
When running helm install I get
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
This is clear, because my cluster is running on a different host. How/where can I set the Kubernetes cluster IP address? When I run helm install, my app should be deployed on the remote cluster.
It can be done with
helm --kube-context=
The steps to create the context are described here.
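A rough sketch of wiring up such a context for a remote cluster and pointing Helm at it; the server address, token, context name, and release/chart names below are placeholders:
# Register the remote cluster and credentials in your kubeconfig (values are placeholders)
kubectl config set-cluster remote-minikube --server=https://<cluster-ip>:8443 --insecure-skip-tls-verify=true
kubectl config set-credentials remote-user --token=<token>
kubectl config set-context remote --cluster=remote-minikube --user=remote-user
# Point Helm at that context for the install
helm install my-release ./my-chart --kube-context remote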

Connection to the Kubernetes container refused from endpoint

I created a cluster (a sample, for learning) on GCP and created a Helloworld container pulled from Docker Hub. I can see the container running in the pod, but when I click on the endpoint, the Chrome browser says "connection refused". I have an active internet connection without any issues.
Any suggestions, please?
(Screenshots in the original post: the container running, the endpoint refusing the connection, and the endpoints from the cluster.)

k8s: Unable to delete deployment due to lack of RAM

I got into a vicious circle. I was trying to deploy a few services on an AWS Ubuntu machine with 1 GB of RAM. By the end of the deployment all of the RAM was used. I decided to delete some of the deployments, but I was unable even to check the status of pods and deployments:
$ kubectl delete -f test.yaml
unable to recognize "test.yaml": Get https://172.31.38.138:6443/api?timeout=32s: dial tcp 172.31.38.138:6443: connect: connection refused
$ kubectl get deployments
Unable to connect to the server: dial tcp 172.31.38.138:6443: i/o timeout
I do understand that the issue is lack of memory; hence kube-dns, kube-proxy, etc. cannot work correctly. The question is:
How can I delete my test deployments without kubectl delete...?
Thanks
Stop the kubelet service, then run docker system prune to delete all the pod containers, and finally restart kubelet.
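As a rough sketch of those steps (assuming a systemd host and the Docker container runtime; the explicit docker stop is an extra step, since docker system prune only removes containers that are already stopped):
# Assumes systemd and the Docker runtime; adjust for your setup
sudo systemctl stop kubelet
# Stop the running pod containers so the prune can remove them
sudo docker stop $(sudo docker ps -q)
sudo docker system prune --force
# Bring kubelet back once memory has been freed
sudo systemctl start kubelet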

Why tiller connect to localhost 8080 for kubernetes api?

When using Helm for Kubernetes package management, after installing the Helm client and running
helm init
I can see the Tiller pods running on the Kubernetes cluster, but when I run helm ls, it gives an error:
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
and with kubectl logs I can see a similar message:
[storage/driver] 2017/08/28 08:08:48 list: failed to list: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I can see that the Tiller pod is running on one of the nodes instead of the master, and there is no API server running on that node. Why does it connect to 127.0.0.1 instead of my master IP?
Run this before doing helm init. It worked for me:
kubectl config view --raw > ~/.kube/config
First delete the Tiller deployment and stop the Tiller service by running the commands below:
kubectl delete deployment tiller-deploy --namespace=kube-system
kubectl delete service tiller-deploy --namespace=kube-system
rm -rf $HOME/.helm/
By default, helm init installs the Tiller pod into the kube-system namespace, with Tiller configured to use the default service account.
Configure Tiller with cluster-admin access with the following command:
kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Then install helm server (Tiller) with the following command:
helm init
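To sanity-check the result afterwards (a small verification sketch):
# Tiller should be running in kube-system, and client/server versions should both print
kubectl get pods --namespace kube-system | grep tiller
helm version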
So I had been having this problem for a couple of weeks on my workstation, and none of the answers provided (here or on GitHub) worked for me.
What did work is this:
sudo kubectl proxy --kubeconfig ~/.kube/config --port 80
Notice that I am using port 80, so I needed to use sudo to be able to bind the proxy there, but if you are using 8080 you won't need that.
Be careful with this, because the kubeconfig file that the command above points to is /root/.kube/config rather than the one in your usual $HOME. You can either use an absolute path to point to the config you want to use, create one in root's home, or use the sudo flag that preserves your original HOME env var (--preserve-env=HOME).
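For instance, keeping your own HOME while still binding to port 80 could look like this (a sketch following the command above):
# Preserve HOME so kubectl's default kubeconfig lookup is not redirected to /root
sudo --preserve-env=HOME kubectl proxy --kubeconfig "$HOME/.kube/config" --port 80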
Now, if you are using Helm by itself, I guess this is it. In my setup, where I use Helm through the Terraform provider on GKE, this was a pain in the ass to debug, as the message I was getting doesn't even mention Helm and is returned by Terraform when planning. For anybody that may be in a similar situation:
The errors when doing a plan/apply operation in Terraform in any cluster with Helm releases in the state:
Error: error installing: Post "http://localhost/apis/apps/v1/namespaces/kube-system/deployments": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/system/secrets/apigee-secrets": dial tcp [::1]:80: connect: connection refused
You get one of these errors for every Helm release in the cluster, or something like that. In this case, for a GKE cluster, I had to ensure that the env var GOOGLE_APPLICATION_CREDENTIALS pointed to a key file with valid credentials (the application-default credentials, unless you are not using the default setup for application auth):
gcloud auth application-default login
export GOOGLE_APPLICATION_CREDENTIALS=/home/$USER/.config/gcloud/application_default_credentials.json
With the kube proxy in place and the correct credentials I am able to use Terraform (and Helm) as usual again. I hope this is helpful for anybody experiencing this.
kubectl config view --raw > ~/.kube/config
export KUBECONFIG=~/.kube/config
This worked for me.

kubectl can't connect to Google Container Engine

I have followed the installation steps:
https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl
A Google Container Engine cluster is up and running and gcloud CLI is authenticated and works.
But kubectl says:
"couldn't read version from server: Get http://local host:8080/api: dial tcp 127.0.0.1:8080: connection refused"
I think I need to use kubectl config set-cluster to setup the connection to my cluster on GCE.
Where do I find the address of the Kubernetes master of my GCE cluster?
With gcloud beta container clusters list I seemingly get the master IP of my cluster.
I used that with kubectl config set-cluster.
Now it says:
"error: couldn't read version from server: Get http:// 104.197.49.119/api: dial tcp 104.197.49.119:80: i/o timeout"
Am I on the right track with this?
Additional strangeness:
gcloud container and gcloud preview container don't work for me; only gcloud beta container does.
The MASTER_VERSION of my cluster is 0.21.4, while my kubectl client reports GitVersion:"v0.20.2", even though it was freshly installed with gcloud.
Run
gcloud container clusters get-credentials my-cluster-name
to update the kubeconfig file and point kubectl at a cluster on Google Container Engine.
As #ScyDev stated, run:
gcloud container clusters get-credentials <cluster_name>
But you may have to set your compute zone first, in case you initialized a new Cloud Shell terminal. That was my case.
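For example, setting the zone before fetching credentials might look like this (the zone is a placeholder):
# Set the default compute zone, then fetch the cluster credentials
gcloud config set compute/zone us-central1-a
gcloud container clusters get-credentials <cluster_name>
# Confirm kubectl now points at the GKE master rather than localhost
kubectl cluster-info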
If you're working on Windows (e.g. PowerShell), you need to check this out:
https://github.com/kubernetes/kubernetes/issues/34395