How to programmatically generate a Kubernetes config from a GCP service account using the Python API - kubernetes

I already found the way using gcloud CLI.
gcloud auth activate-service-account --key-file=serviceaccount.json
gcloud container clusters get-credentials $clusterName \
--zone=$zone --project=$project
kubectl config view --minify --flatten
However, to eliminate the dependency on the gcloud CLI, is there a programmatic way to achieve a similar result, preferably using the APIs exposed in Google's Python client libraries?
My expected result is a portable config file that can be passed to any kubectl --kubeconfig=... command.
Update: I have found that the commands shown above result in a kubeconfig file that still depends on the gcloud CLI as an auth helper, probably to handle token refresh automatically. So any workarounds are welcome.

I wrote a shell script which does basically what you are expecting:
https://gitlab.com/workshop21/open-source/rbac
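For a fully programmatic route without the gcloud binary, one option is to look up the cluster via the GKE API with google-auth and emit the kubeconfig yourself. Below is a minimal Python sketch, assuming the google-auth, google-cloud-container, and PyYAML packages are installed; the project, zone, and cluster names are placeholders. Note the caveat from the update above: a service-account access token expires after roughly an hour, so the resulting file is portable but short-lived.

# Minimal sketch: build a self-contained kubeconfig from a service account
# key, without the gcloud CLI. Project/zone/cluster names are placeholders.
import yaml
import google.auth.transport.requests
from google.oauth2 import service_account
from google.cloud import container_v1

project, zone, cluster_name = "my-project", "europe-west1-d", "my-cluster"

creds = service_account.Credentials.from_service_account_file(
    "serviceaccount.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
creds.refresh(google.auth.transport.requests.Request())  # obtain a bearer token

# Look up the cluster endpoint and CA certificate via the GKE API
gke = container_v1.ClusterManagerClient(credentials=creds)
cluster = gke.get_cluster(
    name=f"projects/{project}/locations/{zone}/clusters/{cluster_name}"
)

kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [{
        "name": cluster_name,
        "cluster": {
            "server": f"https://{cluster.endpoint}",
            # the API returns this already base64-encoded
            "certificate-authority-data": cluster.master_auth.cluster_ca_certificate,
        },
    }],
    "users": [{
        "name": "sa-user",
        # a static bearer token: portable, but expires after ~1 hour
        "user": {"token": creds.token},
    }],
    "contexts": [{
        "name": "sa-context",
        "context": {"cluster": cluster_name, "user": "sa-user"},
    }],
    "current-context": "sa-context",
}

with open("kubeconfig.yaml", "w") as f:
    yaml.safe_dump(kubeconfig, f, default_flow_style=False)

The resulting file works with kubectl --kubeconfig=kubeconfig.yaml get pods until the token expires; long-lived automation would need to refresh the token and rewrite the file.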

Related

Easiest and quickest way to authenticate GKE when creating GKE cluster via terraform

I'm trying to create a GKE cluster via Terraform on a test project, but it now asks me for a more complicated means of authentication. Is there a nice, quick, and easy way to authenticate, like a copy-paste password or similar?
Assuming you are just running this from the command line, you'll need to install the gcloud CLI and then run the following commands prior to running Terraform:
gcloud init
gcloud auth application-default login
You can check out https://learn.hashicorp.com/tutorials/terraform/gke for more info.
Note: for a real automated deployment, you would not want to use gcloud auth application-default login; rather, you'd set up a service account.
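If you want to check which credentials the Terraform Google provider will actually pick up, google-auth resolves Application Default Credentials the same way; a minimal Python sketch, assuming the google-auth package is installed:

# Minimal sketch: inspect which Application Default Credentials will be used.
# google.auth.default() follows the ADC lookup order:
#   1. GOOGLE_APPLICATION_CREDENTIALS pointing at a service account key file
#   2. credentials written by `gcloud auth application-default login`
#   3. the attached service account on GCE/GKE metadata
import google.auth

credentials, project = google.auth.default()
print("ADC resolved; project:", project)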

Switching gcloud accounts for Terraform and Kubernetes

I have two emails associated with two separate gcloud projects.
I can easily switch the projects with:
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
first#project1
* second#project2
$ gcloud config set account first#project1
I can then see that gcloud did change the active account. I can also do this with:
$ gcloud config configurations list
...
$ gcloud config configurations activate project1
And I can see the active configuration changes.
However, it does not seem to have any effect on kubectl and terraform commands, as they still use the previous configuration.
What am I doing wrong? How should I switch between the projects? It seems to have something to do with the application-default credentials, but it looks like they cannot be easily switched without logging in again.
Edit: to make the question more precise:
What would be the correct sequence of commands to switch between gcloud auths (e.g. first#project1, second#project2) so that the switch applies to Kubernetes, Terraform, and other tools?
kubectl and Terraform each have their own config, or we can say context.
For kubectl, you need to change the cluster context using:
kubectl config get-contexts
kubectl config use-context <cluster-name>
Alternatively, you can set the Kubernetes cluster context via gcloud each time, and it will be changed automatically for kubectl:
gcloud container clusters get-credentials cluster-name
which also takes the --project flag.
Read more at: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
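The same contexts are also visible programmatically; a minimal Python sketch, assuming the kubernetes package is installed, that mirrors kubectl config get-contexts and use-context:

# Minimal sketch: list and select kubeconfig contexts from Python,
# mirroring `kubectl config get-contexts` and `use-context`.
from kubernetes import client, config

contexts, active = config.list_kube_config_contexts()
print("available:", [c["name"] for c in contexts])
print("active:", active["name"])

config.load_kube_config(context=contexts[0]["name"])  # pick a context
pods = client.CoreV1Api().list_pod_for_all_namespaces()
print(len(pods.items), "pods visible in that context")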
For changing the project in Terraform, there are different ways:
using different service account key JSON files
setting the GOOGLE_APPLICATION_CREDENTIALS environment variable
setting the project inside the provider block
For example, setting the project inside the provider:
provider "google" {
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference
The best approach to use: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#credentials-1
Since you are writing IaC, all the config stays in code.
A list of all possible authentication methods for Terraform:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#authentication
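For switching projects in automation, one pattern is to export the matching key before invoking Terraform; a minimal Python sketch, where the key path and project are placeholders:

# Minimal sketch: run Terraform against a specific project by pointing
# GOOGLE_APPLICATION_CREDENTIALS at that project's service account key.
# The key path below is a placeholder.
import os
import subprocess

env = dict(os.environ, GOOGLE_APPLICATION_CREDENTIALS="/keys/project1-sa.json")
subprocess.run(["terraform", "plan"], env=env, check=True)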
The Cloud SDK provides the following command, which helps apply credentials to all API calls that make use of the Application Default Credentials client libraries.
Terraform is one of the classic applications with this dependency.
gcloud auth application-default login
Here is the documentation for the above command.

Kubernetes suddenly stopped being able to connect to server

I was successfully able to connect to the Kubernetes cluster and work with the services and pods. At one point this changed, and every time I try to connect to the cluster I get the following error:
PS C:\Users\xxx> kubectl get pods
Unable to connect to the server: error parsing output for access token command "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": yaml: line 4: could not find expected ':'
I am unsure what the issue is, and googling unfortunately doesn't yield any results for me either.
I have not changed any config files or anything; it was a matter of it working one second and not working the next.
Thanks.
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run; run it yourself to see if/why it fails.
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built-in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
Try running this, or set the environment variable %CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
Referenced from here
The workaround for this issue is:
gcloud container clusters get-credentials <cluster-name>
If you don't know your cluster name, find it with gcloud container clusters list. Finally, if those run without issues, do gcloud auth application-default login and log in with the relevant details.

Automatically get cluster credentials when activating gcloud configuration

Working in GCP with several Kubernetes clusters, I would like to automatically get cluster credentials when switching gcloud configurations.
I have created several configurations for gcloud with gcloud config configurations create [config-name] and I have set what I need, specifically gcloud config set container/cluster [cluster-name].
When I switch configurations with gcloud config configurations activate [config-name], everything goes OK, except that I do not get the credentials for the cluster I have configured for that configuration. Instead, I need to run gcloud container clusters get-credentials [cluster-name].
Is there any way to automatically get credentials for a cluster when activating a gcloud configuration?
I think not.
gcloud and kubectl are distinct tools and each maintains its own configuration.
gcloud container clusters get-credentials is a bridging helper that populates kubectl's configuration (conventionally located in the ~/.kube/config file) with a gcloud auth helper to facilitate accessing Kubernetes Engine clusters. But otherwise the two tools are unrelated.
Have a look at this post I wrote that covers using different configurations with kubectl. It's not exactly what you want, but I hope it will be useful:
https://medium.com/google-cloud/context-light-gcloud-and-kubectl-89185d38ce82
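That said, the two steps are easy to script together. A minimal Python sketch of such a wrapper, assuming the cluster name was stored in the configuration via gcloud config set container/cluster; the configuration name is a placeholder:

# Minimal sketch: activate a gcloud configuration, then fetch kubectl
# credentials for the cluster recorded in that configuration.
import subprocess

def activate_with_credentials(config_name: str) -> None:
    subprocess.run(
        ["gcloud", "config", "configurations", "activate", config_name],
        check=True,
    )
    # Read the cluster name stored via `gcloud config set container/cluster ...`
    cluster = subprocess.run(
        ["gcloud", "config", "get-value", "container/cluster"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    subprocess.run(
        ["gcloud", "container", "clusters", "get-credentials", cluster],
        check=True,
    )

activate_with_credentials("my-config")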

What's the CLI authentication process as of Google Container Engine/Kubernetes 1.4.5?

Which steps must one currently go through in order to authenticate against Google Container Engine/Kubernetes 1.4.5?
While setting up a third Google Cloud project today, I found that my previous GKE cluster setup flow no longer worked. My flow was the following:
gcloud auth login
gcloud config set compute/region europe-west1
gcloud config set compute/zone europe-west1-d
gcloud config set project myproject
gcloud container clusters get-credentials staging
# An example of a typical kubectl command to see that you've got the right cluster
kubectl get pods --all-namespaces
Whereas this used to work perfectly, I was now getting permission errors while trying to query the cluster, e.g. kubectl get pods would emit the following error message: the server does not allow access to the requested resource (get pods)
After googling back and forth, I realized that kubectl depends on something called Application Default Credentials. At some point I also noticed by chance that gcloud auth login emits the following:
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
gcloud auth application-default --help
So I eventually realized that with the current gcloud/Kubernetes version I also need to call gcloud auth application-default login in order to use the credentials of my current account rather than those of the previously activated project.
So, I am hoping someone can clarify: what is the actual authentication workflow for Google Container Engine/Kubernetes version 1.4.5?
You found out the right answer. kubectl's GCP authentication plugin only supports Application Default Credentials, which were recently decoupled from gcloud's standard credentials. So, in 1.4.5 you need to run gcloud auth application-default login to ensure that kubectl is using the credentials you expect.
We think that most users just expect to use the same credentials as gcloud, with ADC being useful for some service account scenarios where gcloud might not even be installed. So, there is a pull request to Kubernetes to add a "use gcloud credentials" option to the kubectl gcp authentication plugin. This should be available in kubectl 1.5.