How to refresh the Kubernetes config access token in GKE

The kubeconfig token expires after one hour. How do I refresh the token for a gcloud/GKE cluster used by Spinnaker?

This appears to be a known issue. Could you try updating the credentials file using the following command:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
A GitHub thread suggested the above solution, and it worked for other users. Additionally, review that thread to get an overview of this issue.

Related

Unable to connect with kubectl to a GKE cluster

I am currently trying to connect to a GKE cluster with kubectl.
I followed the steps in the documentation and successfully executed the following:
gcloud container clusters get-credentials <cluster_name> --zone <zone>
A few days ago it worked perfectly fine, and I was able to set up a connection with kubectl.
The configuration has not changed in any way, and I am still accessing the cluster through the same network. The cluster itself is running stably. Whatever I try, I run into a timeout.
I have already had a look into the kubectl configuration:
kubectl config view
It seems to be that the access token is expired.
...
expiry: "2022-08-01T12:12:35Z"
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
...
Is there any way to update the token? I am not able to update it with the get-credentials command. I already deleted the configuration completely and ran the command again afterwards, but the token is still the same.
I am very thankful for any hints or ideas on this.
Have you tried rerunning your credentials command to refresh your local kubeconfig?
gcloud container clusters get-credentials <cluster_name> --zone <zone>
Alternatively, try the beta variant:
gcloud beta container clusters get-credentials <cluster_name> --zone <zone>
(You may need to install the beta package using gcloud components install beta)
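Before rerunning the command, you can confirm that the cached token really has expired by comparing the expiry timestamp from kubectl config view against the current time. A minimal sketch (the expiry value is hard-coded from the question; ISO 8601 UTC timestamps compare correctly as plain strings):

```shell
# The expiry value is copied from the question above; in practice you would
# read it out of `kubectl config view`.
expiry="2022-08-01T12:12:35Z"
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)

# ISO 8601 UTC timestamps sort lexicographically, so a plain string
# comparison is enough.
if [ "$expiry" \< "$now" ]; then
  echo "token expired; run get-credentials again"
else
  echo "token still valid"
fi
```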

"permanent" GKE kubectl service account authentication

I deploy apps to Kubernetes running on Google Cloud from CI. The CI makes use of a kubectl config that contains auth information (either directly in the VCS or templated from env vars during the build).
The CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets up the kubectl config, but the token expires after a few hours.
What are my options for having a 'permanent' kubectl config, other than providing the CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates via a role and avoids expiration, in contrast to certificates, which currently expire as mentioned.
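A sketch of that approach: a Kubernetes ServiceAccount plus Role/RoleBinding whose token can back the CI kubeconfig instead of a short-lived gcloud token. The names (ci-deployer, the default namespace) and the exact verbs are assumptions, not the asker's actual setup:

```yaml
# Hypothetical CI deployer: a ServiceAccount whose token can be used in a
# kubeconfig instead of a short-lived gcloud access token.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer-role
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: default
roleRef:
  kind: Role
  name: ci-deployer-role
  apiGroup: rbac.authorization.k8s.io
```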
For those asking the same question and upvoting.
This is my current solution:
For some time I treated key-file.json as an identity token, put it into the CI config, and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config, the same approach used for GCP Container Registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container-registry push events. This is relatively easy to do in k8s with Keel, Flux, etc. So the CI only has to push the Docker image to the repo, and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.
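To illustrate the push-based pattern: with Keel (one of the tools mentioned), the deployment itself declares when to pick up new images. A sketch, assuming Keel is installed in the cluster and using its keel.sh/policy annotation; the app name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical app name
  annotations:
    keel.sh/policy: minor      # auto-update on new minor/patch tags
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:1.0.0
```

With this in place, CI only pushes a new tag to the registry and Keel rolls out the update inside the cluster.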

How to access a Google Kubernetes cluster without the Google Cloud SDK?

I'm having trouble figuring out how to set my kubectl context to connect to a Google Cloud cluster without using the gcloud SDK (to run in a controlled CI environment).
I created a service account in Google Cloud
Generated a secret for it (JSON format)
From there, how do I configure kubectl context to be able to interact with the cluster ?
Right in the Cloud Console you can find the connect link
gcloud container clusters get-credentials "your-cluster-name" --zone "desired-zone" --project "your_project"
But before this you should configure the gcloud tool.
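If you want to skip the gcloud tool entirely, as the question asks, a kubeconfig can be written by hand from the cluster endpoint, the CA certificate, and a service-account token. A sketch with placeholder values (all three values here are stand-ins you would fill in from the GKE console/API and a Kubernetes ServiceAccount secret):

```yaml
# Hand-written kubeconfig sketch: no gcloud involved, authentication is a
# static ServiceAccount token. CLUSTER_ENDPOINT_IP, BASE64_CA_CERT and
# SERVICE_ACCOUNT_TOKEN are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: my-gke-cluster
  cluster:
    server: https://CLUSTER_ENDPOINT_IP
    certificate-authority-data: BASE64_CA_CERT
users:
- name: ci-user
  user:
    token: SERVICE_ACCOUNT_TOKEN
contexts:
- name: my-gke-context
  context:
    cluster: my-gke-cluster
    user: ci-user
current-context: my-gke-context
```

Point kubectl at it with the KUBECONFIG environment variable or the --kubeconfig flag.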

How to access GKE kubectl proxy dashboard?

I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.
I tried this command to get the token and entered it in:
gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
And it shows some things but not others (services are missing; it says access is forbidden).
How do I use kubectl proxy or show that dashboard with GKE?
Provided you are authenticated with gcloud auth login and the current project and k8s cluster are configured to the one you need, authenticate kubectl to the cluster (this will write ~/.kube/config):
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
Retrieve the auth token that kubectl itself uses to authenticate as you:
gcloud config config-helper --format=json | jq -r '.credential.access_token'
run
kubectl proxy
Then open a web browser on your local machine at
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
(This will only work if you checked the checkbox Deploy Dashboard in GCP console)
and use the token from the second command to log in with your Google Account's permissions.
The Dashboard is disabled and deprecated in GKE as of September 2017. GKE provides a built-in dashboard through the Management Console GUI.
You can disable it from the Google Cloud Console (UI).
Edit your cluster
Go to "Add-ons" section
Find "Kubernetes dashboard"
Choose "disabled" from the dropdown
Save it.
Also, according to the documentation, this will be removed starting with GKE 1.15:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. It is recommended to use the alternative GCP Console dashboards described on this page.
At the time of writing, the dashboard is not deployed by default (neither in the standard Kubernetes distribution, nor as part of a GKE cluster). In order to get it up and running, you have to first follow the instructions from the Kubernetes site, summarized here:
Within the proper kubectl context, run the following: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (nb: this url is obviously subject to change, so do check the official site to obtain the most recent version).
Then do what @Alexander outlines:
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
gcloud config config-helper --format=json
kubectl proxy
You'll be prompted for either the access token displayed in the second step or a kubeconfig file:
Pasting the access token in the field provided will gain you entry to the dashboard.

Kubernetes unable to pull images from gcr.io

I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes add-ons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation. So how do I fix this?
This happened due to a recent outage of GCE storage, as a result of which everyone hit this error while pulling images from GCR (which uses GCE storage on the backend).
Are you still seeing this error?
As the message says, you need credentials. Are you using Google Container Engine? Then you need to run
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
Then your GKE cluster will have the credentials.
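For clusters outside GKE (like the Fedora manual install in the question), registry credentials can instead be supplied explicitly through an image pull secret referenced from the pod spec. A sketch, assuming a secret named gcr-pull-secret has already been created (for example with kubectl create secret docker-registry); the pod details are placeholders:

```yaml
# Hypothetical pod that pulls from gcr.io using an image pull secret.
# The secret gcr-pull-secret is assumed to exist in the same namespace.
apiVersion: v1
kind: Pod
metadata:
  name: gcr-pull-test
spec:
  imagePullSecrets:
  - name: gcr-pull-secret
  containers:
  - name: pause
    image: gcr.io/google_containers/pause:0.8.0
```

Alternatively, the secret can be attached to the namespace's default service account so every pod picks it up automatically.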