CLI command ordering for toggling between two GKE/kubectl accounts w/ two emails - kubernetes

Several weeks ago I asked this question and received a very helpful answer. The gist of that question was: "how do I switch back and forth between two different K8s/GCP accounts on the same machine?" I have 2 different K8s projects with 2 different emails (gmails) that live on 2 different GKE clusters in 2 different GCP accounts. And I wanted to know how to switch back and forth between them so that when I run kubectl and gcloud commands, I don't inadvertently apply them to the wrong project/account.
The answer was to basically leverage kubectl config set-context along with a script.
This question (today) is an extension of that question, a "Part 2" so to speak.
I am confused about the order in which I:
Set the K8s context (again via kubectl config set-context ...); and
Run gcloud init; and
Run gcloud auth; and
Can safely run kubectl and gcloud commands and be sure that I am hitting the right GKE cluster
My understanding is that gcloud init only has to be run once to initialize the gcloud CLI on your system. Which I have already done.
So my thinking here is that I should be able to do the following:
# 1. switch K8s context to Project 1
kubectl config set-context <context for GKE project 1>
# 2. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 1 (and GCP Account 1)
gcloud auth
# 3. run a bunch of kubectl and gcloud commands for Project/GCP Account 1
# 4. switch K8s context to Project 2
kubectl config set-context <context for GKE project 2>
# 5. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 2 (and GCP Account 2)
gcloud auth
# 6. run a bunch of kubectl and gcloud commands for Project/GCP Account 2
Is my understanding here correct or is it more involved/complicated than this (and if so, why)?

I'll assume familiarity with the earlier answer
gcloud
gcloud init need only be run once per machine, and again only if you really want to re-initialize the CLI (gcloud).
gcloud auth login ${ACCOUNT} authenticates a (Google) (user or service) account and persists (on Linux by default in ${HOME}/.config/gcloud) and renews the credentials.
gcloud auth list lists the accounts that have been authenticated with gcloud auth login. The results show which account is being used by default (ACTIVE with *).
Somewhat inconveniently, the way to switch the currently ACTIVE account is to change gcloud's global configuration (it applies to every instance on the machine) using gcloud config set account ${ACCOUNT}.
kubectl
To facilitate using previously authenticated (i.e. gcloud auth login ${ACCOUNT}) credentials with Kubernetes Engine, Google provides the command gcloud container clusters get-credentials. This uses the currently ACTIVE gcloud account to create a kubectl context that joins a Kubernetes Cluster with a User and possibly with a Kubernetes Namespace too. gcloud container clusters get-credentials makes changes to kubectl config (on Linux by default in ${HOME}/.kube/config).
What is a User? See Users in Kubernetes. Kubernetes Engine (via kubectl) wants (OpenID Connect) Tokens. And, conveniently, gcloud can provide these tokens for us.
How? Per the previous answer:
user:
  auth-provider:
    config:
      access-token: [[redacted]]
      cmd-args: config config-helper --format=json
      cmd-path: path/to/google-cloud-sdk/bin/gcloud
      expiry: "2022-02-22T22:22:22Z"
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
kubectl uses the configuration file to invoke gcloud config config-helper --format=json and extracts the access_token and token_expiry from the result. GKE can then use the access_token to authenticate the user. And, if necessary can renew the token using Google's token endpoint after expiry (token_expiry).
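You can run the same helper yourself and pull out the two fields kubectl reads (a sketch; requires jq):
# The token kubectl extracts via token-key '{.credential.access_token}'
gcloud config config-helper --format=json | jq -r '.credential.access_token'
# The expiry kubectl extracts via expiry-key '{.credential.token_expiry}'
gcloud config config-helper --format=json | jq -r '.credential.token_expiry'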
Scenario
So, how do you combine all of the above?
Authenticate gcloud with all your Google accounts
ACCOUNT="client1#gmail.com"
gcloud auth login ${ACCOUNT}
ACCOUNT="client2#gmail.com"
gcloud auth login ${ACCOUNT} # Last will be the `ACTIVE` account
Enumerate these
gcloud auth list
Yields:
ACTIVE ACCOUNT
client1@gmail.com
* client2@gmail.com # This is ACTIVE
To set the active account, run:
$ gcloud config set account `ACCOUNT`
Switch between users for gcloud commands
NOTE This doesn't affect kubectl
Either
gcloud config set account client1@gmail.com
gcloud auth list
Yields:
ACTIVE ACCOUNT
* client1@gmail.com # This is ACTIVE
client2@gmail.com
Or you can explicitly add --account=${ACCOUNT} to any gcloud command, e.g.:
# Explicitly unset your account
gcloud config unset account
# This will work and show projects accessible to client1
gcloud projects list --account=client1@gmail.com
# This will work and show projects accessible to client2
gcloud projects list --account=client2@gmail.com
Create kubectl contexts for any|all your Google accounts (via gcloud)
Either
ACCOUNT="client1#gmail.com"
PROJECT="..." # Project accessible to ${ACCOUNT}
gcloud container clusters get-credentials ${CLUSTER} \
--account=${ACCOUNT} \
--project=${PROJECT} \
...
Or equivalently using kubectl config set-context directly:
kubectl config set-context ${CONTEXT} \
--cluster=${CLUSTER} \
--user=${USER}
But the gcloud command avoids having to kubectl config get-clusters, kubectl config get-users etc.
NOTE gcloud container clusters get-credentials uses derived names for contexts and GKE uses derived names for clusters. If you're confident, you can edit kubectl config directly (or using kubectl config commands) to rename these cluster, context and user references to suit your needs.
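For example, a sketch of such a rename; the long context name below is the GKE-derived form and "client1" is a hypothetical friendlier name:
kubectl config get-contexts
kubectl config rename-context gke_my-project_us-central1-a_my-cluster client1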
List kubectl contexts
kubectl config get-contexts
Yields:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* client1 a-cluster client1
client2 b-cluster client2
Switch between kubectl contexts (cluster/user pairs)
NOTE This doesn't affect gcloud
Either
kubectl config use-context ${CONTEXT}
Or you can explicitly add the --context flag to any kubectl command:
# Explicitly unset default|current context
kubectl config unset current-context
# This will work and list deployments accessible to ${CONTEXT}
kubectl get deployments --context=${CONTEXT}

Related

Switching gcloud accounts for Terraform and Kubernetes

I have two emails associated with two separate gcloud projects.
I can easily switch the projects with:
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
first@project1
* second@project2
$ gcloud config set account first@project1
I can then see that gcloud did change the active account. I can also do this with:
$ gcloud config configurations list
...
$ gcloud config configurations activate project1
And I can see the active configuration changes.
However, it does not seem to have any effect on kubectl and terraform commands, as they still use the previous configuration.
What am I doing wrong? How should I switch between the projects? It seems it has something to do with the application-default account, but it looks like that cannot easily be switched without logging in again?
Edit: to make the question more precise:
What would be the correct sequence of commands to switch between gcloud auths (e.g. first@project1, second@project2) so that it is usable in Kubernetes, Terraform and others?
kubectl and Terraform each have their own config, or we can say context.
For kubectl you need to change the cluster config using:
kubectl config get-contexts
kubectl config use-context <context-name>
Or else, each time you can set the context for a Kubernetes cluster using gcloud and it will get changed automatically for kubectl:
gcloud container clusters get-credentials cluster-name, which takes the --project flag also (see the sketch below).
Read more at: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
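A minimal sketch of that combined switch; the account, project, cluster and zone values are placeholders:
# Point gcloud at the other account/project
gcloud config set account second@project2
gcloud config set project project2-id
# Regenerate kubectl credentials; this also switches the current kubectl context
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project project2-id
kubectl config current-context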
For changing the project in Terraform there are different ways:
Using different service account key JSON files
Changing the project config inside the Terraform provider
Setting the GOOGLE_APPLICATION_CREDENTIALS environment variable
Setting the project inside the provider:
provider "google" {
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference
Best approach to use: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#credentials-1
As you are writing IaC, all the config stays in code.
List of all possible methods for authenticating Terraform:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#authentication
The SDK provides the following command, which applies credentials to all API calls that make use of the Application Default Credentials client libraries.
Terraform is one of the classic applications that has this dependency.
gcloud auth application-default login
Here is the documentation for the above command.
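A sketch for keeping one ADC file per account so you can swap without logging in again each time; Linux default paths are assumed and the stashed file names are hypothetical:
# Log in as the first account and stash the ADC file it writes
gcloud auth application-default login
cp ~/.config/gcloud/application_default_credentials.json ~/adc-project1.json
# Repeat for the second account
gcloud auth application-default login
cp ~/.config/gcloud/application_default_credentials.json ~/adc-project2.json
# Point Terraform (or any ADC client) at whichever account you need
export GOOGLE_APPLICATION_CREDENTIALS=~/adc-project1.json
terraform plan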

How to easily switch gcloud / kubectl credentials

At work we use Kubernetes hosted in GCP. I also have a side project hosted in my personal GCP account using Google App Engine (deploy using gcloud app deploy).
Often when I try to run a command such as kubectl logs -f service-name, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email@gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various kubectl and gcloud documentation but it doesn't make much sense or talks in circles.
Edit: this is on Linux.
Had the same problem and doing all of the:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
didn't help. I knew gcloud switched fine as I was able to list other resources with it directly.
But it seems kubectl can't pick those changes up automatically, as the kubectl/gcloud integration relies on a pre-generated key, which has a 1h expiration (not sure if it's the default, but that's what it is on my machine right now).
So, on top of setting the right user/project/account with gcloud, you should re-generate the creds:
gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone>
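To see the cached expiry kubectl is working from, a sketch assuming the auth-provider kubeconfig layout shown elsewhere on this page:
kubectl config view --raw -o jsonpath='{.users[*].user.auth-provider.config.expiry}'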
I'm in the same boat as you - apps deployed in GKE for work and personal projects deployed in my personal GCP account.
gcloud stores a list of logged in accounts that you can switch between to communicate with associated projects. Take a look at these commands:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
To work with a specific project under one of your accounts you need to set that configuration via gcloud config set project PROJECT_ID
kubectl has a list of "contexts" on your local machine in ~/.kube/config. Your current context is the cluster you want to run commands against - similar to the active account/project in gcloud.
Unlike gcloud these are cluster specific and store info on cluster endpoint, default namespaces, the current context, etc. You can have contexts from GCP, AWS, on-prem...anywhere you have a cluster. We have different clusters for dev, qa, and prod (thus different contexts) and switch between them a ton. Take a look at the kubectx project (https://github.com/ahmetb/kubectx) for an easier way to switch between contexts and namespaces.
kubectl will use the keys from whatever GCP account you are logged in with against the cluster that is set as your current context. I.e., from your error above, if your active account for gcloud is your personal one but you try to list pods from a cluster at work, you will get an error. You either need to set the active account/project for gcloud to your work email or change the kubectl context to a cluster that is hosted in your personal GCP account/project.
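For example, a sketch of switching everything over to the personal side; every name below is a placeholder for your own accounts, projects and clusters:
# 1. Point gcloud at the personal account/project
gcloud config set account my_personal_email@gmail.com
gcloud config set project my-personal-project
# 2. Point kubectl at a cluster in that project
kubectl config use-context gke_my-personal-project_us-central1-a_my-cluster
# ...or regenerate the context via gcloud
gcloud container clusters get-credentials my-cluster --zone us-central1-a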
For me, updating ~/.kube/config and setting the expiry to a date in the past fixed it.
TL;DR
Use gcloud config configurations to manage your separate profiles with Google Cloud Platform.
Add an explicit configuration argument to the cmd-args of your kubeconfig's user to prevent gcloud from producing an access token for an unrelated profile.
users:
- user:
    auth-provider:
      config:
        cmd-args: config --configuration=work config-helper --format=json
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects?
Sure! By following Google's suggested instructions that lead to running gcloud container clusters get-credentials ... when configuring a kubernetes cluster, you will end up with a section of your kubeconfig that contains information on what kubectl should do to acquire an access token when communicating with a cluster that is configured with a given user. That will look something like this:
users:
- name: gke_project-name_cluster-zone_cluster-name
  user:
    auth-provider:
      config:
        access-token: &Redacted
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2022-12-25T01:02:03Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Basically, this tells kubectl to run gcloud config config-helper --format=json when it needs a new token, and to parse out the access_token using the json-path .credential.access_token from the response of that command. This is the crux of understanding how kubectl communicates with gcloud.
Like you, I use google cloud both personally and at work. The issue is that this user configuration block does not take into account the fact that it shouldn't use the currently active gcloud account when generating a credential. Even if you don't use kubernetes in either one of your two projects, extensions in vscode for example might try to run a kubectl command when you're working on something in a different project. If this were to happen after your current token is expired, gcloud config config-helper might get invoked to generate a token using a personal account.
To prevent this from happening, I suggest using gcloud config configurations. Configurations are global configuration profiles that you can quickly switch between. For example, I have two configurations that look like:
> gcloud config configurations list
NAME IS_ACTIVE ACCOUNT PROJECT COMPUTE_DEFAULT_ZONE COMPUTE_DEFAULT_REGION
work False zev@work.email work-project us-west1-a us-west1
personal True zev@personal.email personal-project northamerica-northeast1-a northamerica-northeast1
With configurations in place, you can alter your kubeconfig to specify which configuration to always use when creating an access token for a given kubernetes user, by changing that user's auth-provider.config.cmd-args to name one of your gcloud configurations. With a value like config --configuration=work config-helper --format=json, whenever kubectl needs a new access token it will use the account from your work configuration, regardless of which account is currently active in the gcloud tool.
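For completeness, a sketch of creating the two configurations shown above; account and project values are placeholders:
gcloud config configurations create work      # creating also activates it
gcloud config set account zev@work.email
gcloud config set project work-project
gcloud config configurations create personal
gcloud config set account zev@personal.email
gcloud config set project personal-project
# Switch the active configuration when needed
gcloud config configurations activate work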

"permanent" GKE kubectl service account authentication

I deploy apps to Kubernetes running on Google Cloud from CI. CI makes use of a kubectl config which contains auth information (either directly in CVS or templated from env vars during the build).
CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates a role without expiration, in contrast to the certificates, which, as mentioned, currently expire.
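A sketch of that direction, assuming a pre-1.24-style cluster that still auto-creates a long-lived token Secret per ServiceAccount; all names here are hypothetical:
# Create a ServiceAccount for CI and grant it the built-in "edit" role
kubectl create serviceaccount ci-deployer -n default
kubectl create rolebinding ci-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=default:ci-deployer \
  -n default
# Extract its token and hand that to CI instead of a short-lived gcloud token
SECRET=$(kubectl get serviceaccount ci-deployer -n default -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n default -o jsonpath='{.data.token}' | base64 --decode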
For those asking the same question and upvoting.
This is my current solution:
For some time I treated key-file.json as an identity token, put it in the CI config and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config - the same approach used for GCP container registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events. This is relatively easy to do in k8s with keel, flux, etc. So CI only has to push the Docker image to the repo and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.

How to access GKE kubectl proxy dashboard?

I would imagine the interface would have some button I could click to launch the kubectl proxy dashboard, but I could not find it.
I tried this command to get the token and entered it in:
gcloud container clusters get-credentials mycluster
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
kubectl proxy
And it shows some things, but not others (services are missing, says it's forbidden).
How do I use kubectl proxy or show that dashboard with GKE?
Provided you are authenticated with gcloud auth login, and the current project and k8s cluster are configured to the ones you need, authenticate kubectl to the cluster (this will write to ~/.kube/config):
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
Retrieve the auth token that kubectl itself uses to authenticate as you:
gcloud config config-helper --format=json | jq -r '.credential.access_token'
Run:
kubectl proxy
Then open a local machine web browser on
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
(This will only work if you checked the checkbox Deploy Dashboard in GCP console)
and use the token from the second command to log in with your Google Account's permissions.
The Dashboard is disabled and deprecated in GKE as of September 2017. GKE provides a built-in dashboard through the Management Console GUI.
You can disable it from the Google Cloud Console (UI).
Edit your cluster
Go to "Add-ons" section
Find "Kubernetes dashboard"
Chose "disabled" from dropdown
Save it.
Also, according to the documentation, this will be removed starting with GKE 1.15:
Warning: The open source Kubernetes Dashboard addon is deprecated for clusters on GKE and will be removed as an option in version 1.15. It is recommended to use the alternative GCP Console dashboards described on this page.
At the time of writing, the dashboard is not deployed by default (neither in the standard Kubernetes distribution, nor as part of a GKE cluster). In order to get it up and running, you have to first follow the instructions from the Kubernetes site, summarized here:
Within the proper kubectl context, run the following: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (nb: this url is obviously subject to change, so do check the official site to obtain the most recent version).
Then do what @Alexander outlines:
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project>
gcloud config config-helper --format=json
kubectl proxy
You'll be prompted for either the access token displayed in the second step or a kubeconfig file:
Pasting the access token in the field provided will gain you entry to the dashboard.

Run kubectl on a Virtual Machine

I'm trying to get kubectl running on a VM. I followed the steps given here and could go through with the installation. I copied my local kubernetes config (from /Users/me/.kube/config) to the VM into the .kube directory. However, when I run any command such as kubectl get nodes, it returns: error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information
Is there a way I can run kubectl on a VM ?
To use kubectl to talk to a Google Container Engine cluster from a non-Google VM, you can create a user-managed IAM Service Account and use it to authenticate to your cluster:
# Set these variables for your project
PROJECT_ID=my-project
SA_NAME=my-new-serviceaccount
SA_EMAIL=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
KEY_FILE=~/serviceaccount_key.json
# Create a new GCP IAM service account.
gcloud iam service-accounts create $SA_NAME
# Download a json key for that service account.
gcloud iam service-accounts keys create $KEY_FILE --iam-account $SA_EMAIL
# Give that service account the "Container Engine Developer" IAM role for your project.
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/container.developer
# Configure Application Default Credentials (what kubectl uses) to use that service account.
export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE
And then go ahead and use kubectl as you normally would.
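For example, a quick smoke test (a sketch; assumes the role binding above and that the kubeconfig copied from your workstation already names the cluster):
kubectl get nodes
kubectl get pods --all-namespaces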