Unable to connect with kubectl to a GKE cluster - kubernetes

I am currently trying to connect with kubectl to a GKE cluster.
I followed the steps in the documentation and successfully executed the following:
gcloud container clusters get-credentials <cluster_name> --zone <zone>
A few days ago it worked perfectly fine; I was able to set up a connection with kubectl.
The configuration has not changed in any way, and I am still accessing the cluster through the same network. The cluster itself is running stably. Whatever I try, I run into a timeout.
I have already had a look into the kubectl configuration:
kubectl config view
It seems to be that the access token is expired.
...
expiry: "2022-08-01T12:12:35Z"
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
...
Is there any chance to update the token? I am not able to update it with the get-credentials command. I have already deleted the configuration completely and re-run the command afterwards, but the token is still the same.
I am very thankful for any hints or ideas on this.

Have you tried re-running your get-credentials command to refresh your local kubeconfig?
gcloud container clusters get-credentials <cluster_name> --zone <zone>
Alternatively, try the beta variant:
gcloud beta container clusters get-credentials <cluster_name> --zone <zone>
(You may need to install the beta package using gcloud components install beta)
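If it is unclear whether the cached token has actually expired, the expiry field can be checked directly. A minimal sketch, using a fabricated sample file (the real file is $HOME/.kube/config, and the values below are invented for illustration):

```shell
# Fabricated kubeconfig fragment for illustration; the real file is $HOME/.kube/config.
cat > /tmp/sample-kubeconfig <<'EOF'
users:
- name: gke_my-project_europe-west1-b_my-cluster
  user:
    auth-provider:
      config:
        access-token: ya29.EXAMPLE
        expiry: "2022-08-01T12:12:35Z"
EOF

# Both timestamps are ISO 8601 in UTC, so a plain string comparison works.
expiry=$(sed -n 's/.*expiry: "\([^"]*\)".*/\1/p' /tmp/sample-kubeconfig)
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
if [[ "$expiry" < "$now" ]]; then
  echo "token expired at $expiry - re-run gcloud container clusters get-credentials"
fi
```

If the expiry is in the past, re-running get-credentials should replace the cached token.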

Related

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with a proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with the relevant parameters.
To do that I'm running a docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
Unfortunately it returns:
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built into kubectl itself, but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig file (${HOME}/.kube/config) into the container, as it should (!) then authenticate just as kubectl does, which will not only use the access token correctly but will also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE It appears there are several other local paths (.helm, .config and .cache) that may be required too.
Problem solved! A more experienced colleague found the solution.
I should have used the address including the protocol specification (https:// for the GKE API server). That alone, however, still kept returning the "Kubernetes cluster unreachable" error, now with "unknown" details instead.
I had also been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used, in the form system:serviceaccount:<namespace>:<service_account>. That alone did not change the error either.
Finally, the service account lacked a proper role; the following command did the job: kubectl create rolebinding <binding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
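Putting the fixes together, the corrected invocation looks roughly like the sketch below. Everything here is a placeholder (endpoint, chart, namespace, service-account and binding names), and the rolebinding command is commented out because it has to be run once against a live cluster:

```shell
# Placeholder values for illustration only.
ENDPOINT="https://203.0.113.10"                         # fix 1: include the protocol (https for the GKE API)
SA_USER="system:serviceaccount:default:helm-deployer"   # fix 2: service-account username

# fix 3 (run once against the cluster, needs kubectl access):
# kubectl create rolebinding helm-deployer-admin \
#   --clusterrole=cluster-admin --serviceaccount=default:helm-deployer

# Assemble the corrected helm-in-docker command; $TOKEN is left for the caller to supply.
CMD="docker run --rm -v \$(pwd):/chart alpine/helm install my-chart /chart \
  --kube-apiserver ${ENDPOINT} --kube-ca-file /chart/ca.crt \
  --kube-as-user ${SA_USER} --kube-token \$TOKEN"
echo "$CMD"
```

Note that cluster-admin is used here only because the original poster did; a narrower role is preferable in practice.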

error: failed to discover supported resources kubernetes google cloud platform

I was working through a practical where I was deploying a containerised sample application using Kubernetes.
I was trying to run the container on Google Cloud Platform using Kubernetes Engine, deploying it with the "kubectl run" command from Google Cloud Shell.
It shows the error "error: failed to discover supported resources: Get https://35.240.145.231/apis/extensions/v1beta1: x509: certificate signed by unknown authority".
From the error, I can tell that it is because the "SSL certificate" is not authorised.
I even exported the config file that resides at "$HOME/.kube/config", but I am still getting the same error.
Please can anyone help me understand the real issue behind this?
Best,
Swapnil Pawar
You may try the following steps.
List all the available clusters:
$ gcloud container clusters list
Depending upon how you have configured the cluster: if the cluster location is configured for a specific zone, then
$ gcloud container clusters get-credentials <cluster_name> --zone <location>
or, if the location is configured for a region, then
$ gcloud container clusters get-credentials <cluster_name> --region <location>
The above command will update your kubectl config file $HOME/.kube/config
Now, the tricky part.
If you have more than one cluster that you have configured, then your $HOME/.kube/config will have two or more entries. You can verify it by doing a cat command on the config file.
To select a particular context/cluster, you need to run the following commands:
$ kubectl config get-contexts -o=name  # lists the available context names
$ kubectl config use-context <CONTEXT_NAME>  # switches the active context
(kubectl config set-context <CONTEXT_NAME> only modifies a context entry; use-context is what actually switches.)
Now, you may run the kubectl run.
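For reference, a kubeconfig holding two clusters looks roughly like the fabricated sample below; the context names that get-contexts reports can even be listed with plain shell:

```shell
# Fabricated example of a kubeconfig with two clusters; all values are invented.
cat > /tmp/multi-kubeconfig <<'EOF'
contexts:
- name: gke_my-project_europe-west1-b_cluster-a
  context:
    cluster: cluster-a
    user: user-a
- name: gke_my-project_us-central1_cluster-b
  context:
    cluster: cluster-b
    user: user-b
current-context: gke_my-project_europe-west1-b_cluster-a
EOF

# kubectl config get-contexts -o=name reads these entries; plain shell can
# list the same names directly:
sed -n 's/^- name: //p' /tmp/multi-kubeconfig
```

The current-context line at the bottom is what use-context rewrites.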

How to refresh kubernetes config access-token in GKE

How to refresh the token in gcloud or a GKE cluster in Spinnaker
The kubeconfig expires after 1 hour; how do I refresh the token in gcloud or a GKE cluster in Spinnaker?
This appears to be a known issue. Could you try updating the credential file using the following command:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
This GitHub thread suggested the above solution and it worked for other users. Additionally, review this thread to get an overview of the issue.

How to access a Google Kubernetes cluster without the Google Cloud SDK?

I'm having trouble figuring out how I can set my kubectl context to connect to a Google Cloud cluster without using the gcloud SDK (to run in a controlled CI environment).
I created a service account in Google Cloud
Generated a secret from that (JSON format)
From there, how do I configure the kubectl context to be able to interact with the cluster?
Right in the Cloud Console you can find the connect link
gcloud container clusters get-credentials "your-cluster-name" --zone "desired-zone" --project "your_project"
But before this you should configure the gcloud tool.
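If the gcloud SDK really cannot be used, the kubeconfig can also be assembled by hand from the service-account details. A minimal sketch; the endpoint, CA data, token, and all names below are invented placeholders:

```shell
# All values below are invented placeholders; substitute your cluster's details.
CLUSTER_ENDPOINT="https://203.0.113.10"     # cluster endpoint, visible in the Cloud Console
CA_CERT_B64="LS0tLS1FWEFNUExFLS0tLS0="      # base64-encoded cluster CA certificate
SA_TOKEN="eyJhbGciOiJFWEFNUExFIn0"          # service-account bearer token

KUBECONFIG_FILE=/tmp/ci-kubeconfig
cat > "${KUBECONFIG_FILE}" <<EOF
apiVersion: v1
kind: Config
clusters:
- name: gke-ci
  cluster:
    server: ${CLUSTER_ENDPOINT}
    certificate-authority-data: ${CA_CERT_B64}
users:
- name: ci-sa
  user:
    token: ${SA_TOKEN}
contexts:
- name: gke-ci
  context:
    cluster: gke-ci
    user: ci-sa
current-context: gke-ci
EOF

# kubectl can then talk to the cluster without gcloud, e.g.:
# KUBECONFIG=${KUBECONFIG_FILE} kubectl get pods
echo "wrote ${KUBECONFIG_FILE}"
```

The trade-off is that a static service-account token does not auto-renew the way the gcloud auth plugin's hourly token does.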

How to access GKE kubectl proxy dashboard?

I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.
I tried this command to get the token and entered it in:
gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
And it shows some things, but not others (services are missing, says it's forbidden).
How do I use kubectl proxy or show that dashboard with GKE?
Provided you are authenticated with gcloud auth login and the current project and k8s cluster are configured to the ones you need, authenticate kubectl to the cluster (this will write ~/.kube/config):
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
Retrieve the auth token that kubectl itself uses to authenticate as you:
gcloud config config-helper --format=json | jq -r '.credential.access_token'
run
kubectl proxy
Then open a local machine web browser on
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
(This will only work if you checked the checkbox Deploy Dashboard in GCP console)
and use the token from the second command to log in with your Google Account's permissions.
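For reference, the config-helper output that the jq filter above parses looks roughly like the fabricated sample below; a plain-sed equivalent extracts the same field:

```shell
# Fabricated sample of `gcloud config config-helper --format=json` output; values are invented.
cat > /tmp/config-helper.json <<'EOF'
{
  "credential": {
    "access_token": "ya29.EXAMPLE_TOKEN",
    "token_expiry": "2022-08-01T12:12:35Z"
  }
}
EOF

# Same extraction as jq -r '.credential.access_token', using only sed:
token=$(sed -n 's/.*"access_token": "\([^"]*\)".*/\1/p' /tmp/config-helper.json)
echo "$token"
```

The token_expiry field shows why the dashboard login stops working after about an hour.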
The Dashboard is disabled and deprecated in GKE as of September 2017. GKE provides a built in dashboard through the Management Console GUI.
You can disable it from the Google Cloud Console (UI).
Edit your cluster
Go to "Add-ons" section
Find "Kubernetes dashboard"
Choose "disabled" from the dropdown
Save it.
Also, according to the documentation, this will be removed starting with GKE 1.15:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. It is recommended to use the alternative GCP Console dashboards described on this page.
At the time of writing, the dashboard is not deployed by default (neither in the standard Kubernetes distribution, nor as part of a GKE cluster). In order to get it up and running, you have to first follow the instructions from the Kubernetes site, summarized here:
Within the proper kubectl context, run the following: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (nb: this url is obviously subject to change, so do check the official site to obtain the most recent version).
Then do what @Alexander outlines:
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
gcloud config config-helper --format=json
kubectl proxy
You'll be prompted for either the access token displayed in the second step or a kubeconfig file:
Pasting the access token in the field provided will gain you entry to the dashboard.