kubectl can't connect to Google Container Engine - kubernetes

I have followed the installation steps:
https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl
A Google Container Engine cluster is up and running and gcloud CLI is authenticated and works.
But kubectl says:
"couldn't read version from server: Get http://local host:8080/api: dial tcp 127.0.0.1:8080: connection refused"
I think I need to use kubectl config set-cluster to set up the connection to my cluster on GCE.
Where do I find the address of the Kubernetes master of my GCE cluster?
With gcloud beta container clusters list I seemingly get the master IP of my cluster.
I used that with kubectl config set-cluster.
Now it says:
"error: couldn't read version from server: Get http:// 104.197.49.119/api: dial tcp 104.197.49.119:80: i/o timeout"
Am I on the right track with this?
Additional strangeness:
gcloud container or gcloud preview container doesn't work for me; only gcloud beta container does.
The MASTER_VERSION of my cluster is 0.21.4, while my kubectl client is GitVersion:"v0.20.2", even though it was freshly installed with gcloud.

Run
gcloud container clusters get-credentials my-cluster-name
to update the kubeconfig file and point kubectl at a cluster on Google Container Engine.
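For illustration, here is roughly the kind of cluster entry get-credentials writes into the kubeconfig; the project, zone, cluster name, and IP below are placeholders, not values from a real cluster:

```shell
# Hypothetical kubeconfig resembling what get-credentials produces;
# all names and the IP address are placeholders.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: gke_my-project_us-central1-a_my-cluster-name
  cluster:
    server: https://104.197.49.119
current-context: gke_my-project_us-central1-a_my-cluster-name
EOF
# Check which API server kubectl would talk to (instead of localhost:8080):
grep 'server:' /tmp/sample-kubeconfig
```

Once the kubeconfig contains a cluster entry like this, kubectl stops falling back to the localhost:8080 default that produced the "connection refused" error.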

As @ScyDev stated, run:
gcloud container clusters get-credentials <cluster_name>
But you may have to set your compute zone first, in case you initialized a new Cloud Shell terminal. That was my case.
If you're working on Windows (e.g. PowerShell), you need to check this out:
https://github.com/kubernetes/kubernetes/issues/34395

Related

Using Helm and a Kubernetes cluster with MicroK8s on one or two local physical Ubuntu servers

I installed MicroK8s on a local physical Ubuntu 20.04 server (without a GUI):
microk8s status --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
helm # Helm 2 - the package manager for Kubernetes
disabled:
When I try to install something with helm it says:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
What configuration has to be done to use the MicroK8s Kubernetes cluster for helm installations?
Do I have to enable more MicroK8s services for that?
Can I run a Kubernetes cluster on one or two single local physical Ubuntu server with MicroK8s?
Searching for a solution to your issue, I found this one. Try running:
microk8s kubectl config view --raw > ~/.kube/config
Helm interacts directly with the Kubernetes API server, so it needs to be able to connect to a Kubernetes cluster. Helm reads the same configuration files used by kubectl to do this automatically.
Based on Learning Helm by O'Reilly Media:
Helm will try to find this information by reading the environment variable $KUBECONFIG. If that is not set, it will look in the same default locations that kubectl looks in.
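That lookup order can be sketched in shell; the fallback path below is the standard kubectl default location:

```shell
# Resolve the kubeconfig path the way kubectl and Helm do: honor
# $KUBECONFIG if it is set, otherwise fall back to the default file.
resolve_kubeconfig() {
  echo "${KUBECONFIG:-$HOME/.kube/config}"
}

unset KUBECONFIG
resolve_kubeconfig                                  # default path under $HOME
KUBECONFIG=/tmp/custom-config resolve_kubeconfig    # env var takes precedence
```

This is why redirecting `microk8s kubectl config view --raw` into ~/.kube/config fixes the Helm error: Helm then finds the MicroK8s cluster in the default location.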
See also:
This discussion about similar issue on Github
This similar issue

Kubernetes, Unable to connect to the server: EOF

Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just installed Kubernetes cluster at Google Cloud Platform. Then applied the next command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials at %UserProfile%\.kube\config
But when I try kubectl get pods it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a cert file for kubectl so it uses the cert with all requests? Thanks.
You would get EOF if there is no response to kubectl API calls within a certain time (the idle timeout is 300 seconds by default).
Try increasing the cluster idle timeout, or you might need a VPN to reach those pods.
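One more thing worth checking behind a corporate proxy: kubectl is a Go program and honors the standard proxy environment variables, so pointing them at the proxy may help. The proxy host and port below are made-up examples, not real values:

```shell
# Assumed corporate proxy address, for illustration only.
export HTTPS_PROXY="http://proxy.corp.example.com:3128"
# Keep local traffic off the proxy.
export NO_PROXY="localhost,127.0.0.1"
echo "$HTTPS_PROXY"
echo "$NO_PROXY"
```

If the proxy re-signs TLS traffic, the proxy's CA certificate also needs to be trusted by the machine, which is a separate OS-level trust-store configuration.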

How do I modify kube-apiserver parameters when provisioning a cluster using kops?

the kube-apiserver isn't running
/var/log/kube-apiserver.log has the following:
Flag --address has been deprecated, see --insecure-bind-address instead.
Where are these values stored / configured?
I mean, yes, they originate from my kops config, which I've now modified. But I'm not able to get these changes reflected:
kops rolling-update cluster
Using cluster from kubectl context: uuuuuuuuuuuuuuuuuuuuuu
Unable to reach the kubernetes API.
Use --cloudonly to do a rolling-update without confirming progress with the k8s API
error listing nodes in cluster: Get https://api.uuuuuuuuuu/api/v1/nodes: dial tcp eeeeeeeeeeeeeee:443: connect: connection refused
https://stackoverflow.com/a/50356764/1663462
Modify /etc/kubernetes/manifests/kube-apiserver.manifest
And then restart kubelet: systemctl restart kubelet
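To make the deprecation message above concrete, here is an illustrative rewrite of the deprecated flag; the manifest fragment is made up, and on a real master you would edit /etc/kubernetes/manifests/kube-apiserver.manifest in place:

```shell
# Made-up manifest fragment containing the deprecated flag.
cat > /tmp/kube-apiserver.manifest <<'EOF'
command:
- /usr/local/bin/kube-apiserver
- --address=127.0.0.1
EOF
# Replace the deprecated --address flag with --insecure-bind-address.
sed -i 's/--address=/--insecure-bind-address=/' /tmp/kube-apiserver.manifest
grep 'insecure-bind-address' /tmp/kube-apiserver.manifest
```

kubelet watches the static-manifest directory, so after editing the real file and restarting kubelet, the kube-apiserver pod is recreated with the new flags.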

Unable to connect to the server: dial tcp accounts.google.com:443: getsockopt: operation timed out

I'm trying to get the pods list from the gcloud project.
I created the gcloud project in GCP using a different laptop.
Now I'm on a different machine, but logged into the same GCP account and using the same project.
When I run the command kubectl get pods I get the below error.
Unable to connect to the server: dial tcp a.b.c.d:443: getsockopt: operation timed out
I tried to add a --verbose argument, but that doesn't seem to be valid.
How can I proceed in resolving this error?
gcloud container clusters get-credentials my-cluster-name will log you into your cluster locally
From the docs:
"updates a kubeconfig file with appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine." - src

kubectl run command is failing with a connection refused error

I am following the hellonode tutorial on kubernetes.io
http://kubernetes.io/docs/hellonode/
I am getting an error when trying to do the 'Create your pod' section.
When I run this command (replacing PROJECT_ID with the one I created) I get the following:
$ kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I get a similar error just typing kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I'm not sure what to do since I have no experience with kubernetes other than following the steps of this tutorial.
I figured out the issue.
In the Create your cluster section I missed a critical step.
The step I missed was: "Please ensure that you have configured kubectl to use the cluster you just created." The word "configured" is a link explaining how to do this:
The steps are as follows:
gcloud config set project PROJECT
gcloud config set compute/zone ZONE
gcloud config set container/cluster CLUSTER_NAME
gcloud container clusters get-credentials CLUSTER_NAME
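Put together as a script, the steps above look like this; the project, zone, and cluster names are placeholders, and since these commands only set gcloud configuration against a real account, no output is asserted:

```shell
# Placeholders throughout: my-project, us-central1-a, my-cluster.
gcloud config set project my-project
gcloud config set compute/zone us-central1-a
gcloud config set container/cluster my-cluster
gcloud container clusters get-credentials my-cluster
# Afterwards, kubectl should target the GKE master instead of localhost:8080:
kubectl config current-context
```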