I am following the hellonode tutorial on kubernetes.io
http://kubernetes.io/docs/hellonode/
I am getting an error when trying to do the 'Create your pod' section.
When I run this command (replacing PROJECT_ID with the one I created) I get the following:
$ kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I get a similar error just typing kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I'm not sure what to do since I have no experience with kubernetes other than following the steps of this tutorial.
I figured out the issue.
In the Create your cluster section I missed a critical step.
The step I missed was: "Please ensure that you have configured kubectl to use the cluster you just created." The word "configured" is a link to instructions on how to do this:
The steps are as follows:
gcloud config set project PROJECT
gcloud config set compute/zone ZONE
gcloud config set container/cluster CLUSTER_NAME
gcloud container clusters get-credentials CLUSTER_NAME
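For example, with purely hypothetical values (substitute your own project, zone, and cluster name), the whole sequence plus a quick check that kubectl now talks to the cluster looks like this:
# hypothetical values; replace with your own project, zone, and cluster name
gcloud config set project my-gcp-project
gcloud config set compute/zone us-central1-b
gcloud config set container/cluster hello-node-cluster
gcloud container clusters get-credentials hello-node-cluster
# kubectl should now report the cluster instead of refusing localhost:8080
kubectl cluster-info
kubectl get nodes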
Related
Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just created a Kubernetes cluster on Google Cloud Platform and then ran the following command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials at %UserProfile%\.kube\config
But when I try kubectl get pods it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a cert file for kubectl so that it uses the cert with all requests? Thanks.
You would get EOF if there is no response to the kubectl API calls within a certain time (the idle timeout is set to 300 seconds by default).
Try increasing the cluster idle timeout, or you might need a VPN to reach those pods (something along those lines).
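If the corporate proxy is what is cutting the connection, one thing you could try (a sketch only; the proxy address, port, cluster entry name, and CA file path below are placeholders, not from the answer above) is to route kubectl through the proxy via environment variables and, if the proxy re-signs TLS, attach its CA certificate to the cluster entry in your kubeconfig (shown in bash syntax; use set or $env: on Windows):
# placeholders: proxy host/port, cluster entry name, and CA file path
export HTTPS_PROXY=http://proxy.corp.example.com:3128
export NO_PROXY=localhost,127.0.0.1
# embed the corporate CA so kubectl trusts certificates re-signed by the proxy
kubectl config set-cluster gke_my-project_europe-west1-b_my-cluster \
    --certificate-authority=corp-ca.pem --embed-certs=true
kubectl get pods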
I have deployed Google cloud Kubernetes cluster. The cluster has internal IP only.
In order to access it, I created a virtual machine bastion-1 which has an external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
Connecting to the bastion and opening a SOCKS proxy:
$ ssh bastion -D 1080
Now using kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to ssh a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How can I get an ssh-like connection (kubectl exec) to a pod via the bastion? What am I doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
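As a rough sketch of that workaround (the proxy software and port here are my assumptions, not part of the answer): run an HTTP proxy on the bastion, forward a local port to it over SSH, and point kubectl at that instead of the SOCKS tunnel:
# on bastion-1: run any HTTP proxy, e.g. tinyproxy (listens on port 8888 by default)
sudo apt-get install tinyproxy
# on your machine: forward a local port to the bastion's HTTP proxy
ssh bastion -L 8888:127.0.0.1:8888
# use the HTTP proxy for exec instead of the SOCKS one
HTTPS_PROXY=http://127.0.0.1:8888 kubectl exec -it "my-pod" -- /bin/bash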
When using Helm for Kubernetes package management, after installing the Helm client and running
helm init
I can see the Tiller pod running on the Kubernetes cluster, but then when I run helm ls, it gives an error:
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
and with kubectl logs I can see a similar message:
[storage/driver] 2017/08/28 08:08:48 list: failed to list: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I can see the Tiller pod is running on one of the nodes rather than on the master, and there is no API server running on that node. Why does it connect to 127.0.0.1 instead of my master IP?
Run this before doing helm init. It worked for me.
kubectl config view --raw > ~/.kube/config
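In full, the sequence would look something like this (assuming kubectl itself can already reach the cluster):
# write out the merged kubeconfig, credentials included, and point KUBECONFIG at it
kubectl config view --raw > ~/.kube/config
export KUBECONFIG=~/.kube/config
# then install Tiller and check that helm can list releases
helm init
helm ls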
First delete the Tiller deployment and the Tiller service by running the commands below:
kubectl delete deployment tiller-deploy --namespace=kube-system
kubectl delete service tiller-deploy --namespace=kube-system
rm -rf $HOME/.helm/
By default, helm init installs the Tiller pod into the kube-system namespace, with Tiller configured to use the default service account.
Configure Tiller with cluster-admin access with the following command:
kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Then install the Helm server (Tiller) with the following command:
helm init
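As a variation on the above (not part of this answer, just a common pattern), you can bind cluster-admin to a dedicated Tiller service account instead of default; the names here are only illustrative:
# dedicated service account for Tiller instead of 'default'
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
# have Tiller run under that service account
helm init --service-account tiller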
So I had been having this problem for a couple of weeks on my workstation and none of the answers provided (here or on GitHub) worked for me.
What worked is this:
sudo kubectl proxy --kubeconfig ~/.kube/config --port 80
Notice that I am using port 80, so I needed to use sudo to be able to bind the proxy there, but if you are using 8080 you won't need that.
Be careful with this, because the kubeconfig file that the command above points to is in /root/.kube/config rather than in your usual $HOME. You can either use an absolute path to point to the config you want to use, or create one in root's home (or use the sudo flag --preserve-env=HOME to preserve your original HOME env var).
Now, if you are using Helm by itself, I guess this is it. In my case I am using Helm through the Terraform provider on GKE, and this was a pain in the ass to debug, as the message I was getting doesn't even mention Helm and is returned by Terraform when planning. For anybody who may be in a similar situation:
The errors when doing a plan/apply operation in Terraform in any cluster with Helm releases in the state:
Error: error installing: Post "http://localhost/apis/apps/v1/namespaces/kube-system/deployments": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/system/secrets/apigee-secrets": dial tcp [::1]:80: connect: connection refused
You get one of these errors for every Helm release in the cluster, or something like that. In this case, for a GKE cluster, I had to ensure that the env var GOOGLE_APPLICATION_CREDENTIALS was pointing to a key file with valid credentials (the application-default credentials, unless you are using a non-default setup for application auth):
gcloud auth application-default login
export GOOGLE_APPLICATION_CREDENTIALS=/home/$USER/.config/gcloud/application_default_credentials.json
With the kube proxy in place and the correct credentials, I am again able to use Terraform (and Helm) as usual. I hope this is helpful for anybody experiencing this.
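Putting the pieces of this answer together, the workflow looks roughly like this (ports and paths as discussed above; adjust them to your own setup):
# 1. expose the API server on localhost via kubectl proxy (port 80 needs sudo)
sudo --preserve-env=HOME kubectl proxy --kubeconfig ~/.kube/config --port 80 &
# 2. make sure application-default credentials exist and are exported
gcloud auth application-default login
export GOOGLE_APPLICATION_CREDENTIALS=/home/$USER/.config/gcloud/application_default_credentials.json
# 3. run Terraform as usual
terraform plan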
kubectl config view --raw > ~/.kube/config
export KUBECONFIG=~/.kube/config
worked for me
I created a Kubernetes cluster using the kube-up script. If I ssh into the instances, kubectl is configured for the local cluster. My question: how is kubectl picking up the kubeconfig when the cluster is created using the kube-up script?
I tried to do this using a cluster built from HEAD on GCE and didn't have the same experience. On the master instance, kubectl works. But on the nodes, it isn't configured to communicate with the master:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.8.82+c9d33ec1b4044e", GitCommit:"c9d33ec1b4044e2a330a9b8b7a9204a99b6c6eec", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason that it works out of the box on the master is that by default kubectl tries to connect to port 8080 on localhost, which is also the insecure port used on the master (until kubernetes#13598 is resolved).
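In other words, on a node kubectl has nothing telling it where the master is; to make it work there you would have to supply the master address and credentials yourself, roughly like this (MASTER_IP and the kubeconfig path are placeholders, and the kubeconfig must actually contain credentials for the master):
# check which server (if any) kubectl is currently configured to use
kubectl config view
# everything below is a placeholder; the node needs both the master address and credentials
kubectl --server=https://MASTER_IP --kubeconfig=/path/to/kubeconfig-with-credentials get nodes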
I have followed the installation steps:
https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl
A Google Container Engine cluster is up and running and gcloud CLI is authenticated and works.
But kubectl says:
"couldn't read version from server: Get http://local host:8080/api: dial tcp 127.0.0.1:8080: connection refused"
I think I need to use kubectl config set-cluster to set up the connection to my cluster on GCE.
Where do I find the address of the Kubernetes master of my GCE cluster?
With gcloud beta container clusters list I seemingly get the master IP of my cluster.
I used that with kubectl config set-cluster.
Now it says:
"error: couldn't read version from server: Get http:// 104.197.49.119/api: dial tcp 104.197.49.119:80: i/o timeout"
Am I on the right track with this?
Additional strangeness:
gcloud container or gcloud preview container doesn't work for me; only gcloud beta container does.
MASTER_VERSION of my cluster is 0.21.4, while the version of my kubectl client is GitVersion:"v0.20.2", even though freshly installed with gcloud.
Run
gcloud container clusters get-credentials my-cluster-name
to update the kubeconfig file and point kubectl at a cluster on Google Container Engine.
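For example (the cluster name, zone, and project below are placeholders):
gcloud container clusters get-credentials my-cluster-name --zone us-central1-a --project my-project
# kubectl should now point at the GKE master instead of localhost:8080
kubectl cluster-info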
as #ScyDev stated
Run:
gcloud container clusters get-credentials <cluster_name>
But you may have to set your compute zone first, in case you initialized a new Cloud Shell terminal. That was my case.
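That is, something along these lines (the zone is just an example):
# set the compute zone first, then fetch the cluster credentials
gcloud config set compute/zone europe-west1-b
gcloud container clusters get-credentials <cluster_name>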
If you're working on Windows (e.g. PowerShell), you need to check this out:
https://github.com/kubernetes/kubernetes/issues/34395