Run kubectl on a Virtual Machine - kubernetes

I'm trying to get kubectl running on a VM. I followed the steps given here and was able to complete the installation. I copied my local Kubernetes config (from /Users/me/.kube/config) to the .kube directory on the VM. However, when I run any command, such as kubectl get nodes, it returns:
error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information
Is there a way I can run kubectl on a VM?

To use kubectl to talk to a Google Container Engine cluster from a non-Google VM, you can create a user-managed IAM service account and use it to authenticate to your cluster:
# Set these variables for your project
PROJECT_ID=my-project
SA_NAME=my-new-serviceaccount
SA_EMAIL=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
KEY_FILE=~/serviceaccount_key.json
# Create a new GCP IAM service account.
gcloud iam service-accounts create $SA_NAME
# Download a json key for that service account.
gcloud iam service-accounts keys create $KEY_FILE --iam-account $SA_EMAIL
# Give that service account the "Container Engine Developer" IAM role for your project.
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/container.developer
# Configure Application Default Credentials (what kubectl uses) to use that service account.
export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE
And then go ahead and use kubectl as you normally would.
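For example, to make the credential survive new shell sessions on the VM and verify the setup (a minimal sketch; it assumes the kubeconfig you copied over is already in place):
# Persist the Application Default Credentials setting across logins
echo "export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE" >> ~/.bashrc
# Verify: this should now authenticate via the service account
kubectl get nodes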

Related

CLI command ordering for toggling between two GKE/kubectl accounts w/ two emails

Several weeks ago I asked this question and received a very helpful answer. The gist of that question was: "how do I switch back and forth between two different K8s/GCP accounts on the same machine?" I have 2 different K8s projects with 2 different emails (gmails) that live on 2 different GKE clusters in 2 different GCP accounts. And I wanted to know how to switch back and forth between them so that when I run kubectl and gcloud commands, I don't inadvertently apply them to the wrong project/account.
The answer was to basically leverage kubectl config set-context along with a script.
This question (today) is an extension of that question, a "Part 2" so to speak.
I am confused about the order in which I:
Set the K8s context (again via kubectl config set-context ...); and
Run gcloud init; and
Run gcloud auth; and
Can safely run kubectl and gcloud commands and be sure that I am hitting the right GKE cluster
My understanding is that gcloud init only has to be run once to initialize the gcloud CLI on your system, which I have already done.
So my thinking here is that I should be able to do the following:
# 1. switch K8s context to Project 1
kubectl config set-context <context for GKE project 1>
# 2. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 1 (and GCP Account 1)
gcloud auth
# 3. run a bunch of kubectl and gcloud commands for Project/GCP Account 1
# 4. switch K8s context to Project 2
kubectl config set-context <context for GKE project 2>
# 5. authenticate w/ GCP so that now gcloud commands will only hit the GCP
# resources associated with Project 2 (and GCP Account 2)
gcloud auth
# 6. run a bunch of kubectl and gcloud commands for Project/GCP Account 2
Is my understanding here correct or is it more involved/complicated than this (and if so, why)?
I'll assume familiarity with the earlier answer.
gcloud
gcloud init need only be run once per machine, and again only if you really want to re-initialize the CLI (gcloud).
gcloud auth login ${ACCOUNT} authenticates a Google (user or service) account; the credentials persist (on Linux, by default in ${HOME}/.config/gcloud) and are renewed as needed.
gcloud auth list lists the accounts that have been authenticated with gcloud auth login. The results show which account is being used by default (ACTIVE, marked with *).
Somewhat inconveniently, the way to switch the ACTIVE account is to change gcloud's global (i.e., affecting every shell on the machine) configuration using gcloud config set account ${ACCOUNT}.
kubectl
To facilitate using previously authenticated (i.e. gcloud auth login ${ACCOUNT}) credentials with Kubernetes Engine, Google provides the command gcloud container clusters get-credentials. This uses the currently ACTIVE gcloud account to create a kubectl context that joins a Kubernetes Cluster with a User and possibly with a Kubernetes Namespace too. gcloud container clusters get-credentials makes changes to kubectl config (on Linux by default in ${HOME}/.kube/config).
What is a User? See Users in Kubernetes. Kubernetes Engine (via kubectl) wants (OpenID Connect) Tokens. And, conveniently, gcloud can provide these tokens for us.
How? Per the previous answer:
user:
  auth-provider:
    config:
      access-token: [[redacted]]
      cmd-args: config config-helper --format=json
      cmd-path: path/to/google-cloud-sdk/bin/gcloud
      expiry: "2022-02-22T22:22:22Z"
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
kubectl uses this configuration to invoke gcloud config config-helper --format=json and extracts the access_token and token_expiry from the result. GKE can then use the access_token to authenticate the user, and, if necessary, gcloud renews the token using Google's token endpoint after expiry (token_expiry).
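You can see exactly what kubectl extracts by invoking the helper yourself (a quick sketch; jq is assumed to be installed and is not part of the original answer):
# Print the fields addressed by token-key and expiry-key above
gcloud config config-helper --format=json | jq '.credential.access_token, .credential.token_expiry'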
Scenario
So, how do you combine all of the above?
Authenticate gcloud with all your Google accounts
ACCOUNT="client1@gmail.com"
gcloud auth login ${ACCOUNT}
ACCOUNT="client2@gmail.com"
gcloud auth login ${ACCOUNT} # Last will be the `ACTIVE` account
Enumerate these
gcloud auth list
Yields:
ACTIVE  ACCOUNT
        client1@gmail.com
*       client2@gmail.com   # This is ACTIVE

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
Switch between users for gcloud commands
NOTE This doesn't affect kubectl
Either
gcloud config set account client1@gmail.com
gcloud auth list
Yields:
ACTIVE  ACCOUNT
*       client1@gmail.com   # This is ACTIVE
        client2@gmail.com
Or you can explicitly add --account=${ACCOUNT} to any gcloud command, e.g.:
# Explicitly unset your account
gcloud config unset account
# This will work and show projects accessible to client1
gcloud projects list --account=client1@gmail.com
# This will work and show projects accessible to client2
gcloud projects list --account=client2@gmail.com
Create kubectl contexts for any|all your Google accounts (via gcloud)
Either
ACCOUNT="client1@gmail.com"
PROJECT="..." # Project accessible to ${ACCOUNT}
gcloud container clusters get-credentials ${CLUSTER} \
  --account=${ACCOUNT} \
  --project=${PROJECT} \
  ...
Or equivalently, using kubectl config set-context directly:
kubectl config set-context ${CONTEXT} \
  --cluster=${CLUSTER} \
  --user=${USER}
But the gcloud approach avoids having to run kubectl config get-clusters, kubectl config get-users, etc. to discover the names.
NOTE gcloud container clusters get-credentials uses derived names for contexts, and GKE uses derived names for clusters. If you're confident, you can edit the kubectl config directly (or use kubectl config commands) to rename these cluster, context, and user references to suit your needs.
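For example, renaming a derived context to something shorter (a sketch; the long context name is a placeholder for whatever get-credentials generated):
kubectl config rename-context gke_my-project_us-central1-a_my-cluster client1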
List kubectl contexts
kubectl config get-contexts
Yields:
CURRENT   NAME      CLUSTER     AUTHINFO   NAMESPACE
*         client1   a-cluster   client1
          client2   b-cluster   client2
Switch between kubectl contexts (cluster and user pairs)
NOTE This doesn't affect gcloud
Either
kubectl config use-context ${CONTEXT}
Or you can explicitly add the --context flag to any kubectl command:
# Explicitly unset default|current context
kubectl config unset current-context
# This will work and list deployments accessible to ${CONTEXT}
kubectl get deployments --context=${CONTEXT}

How to easily switch gcloud / kubectl credentials

At work we use Kubernetes hosted in GCP. I also have a side project hosted in my personal GCP account using Google App Engine (deploy using gcloud app deploy).
Often when I try to run a command such as kubectl logs -f service-name, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email@gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various kubectl and gcloud documentation but it doesn't make much sense or talks in circles.
Edit: this is on Linux.
I had the same problem, and doing all of:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
didn't help. I knew gcloud switched fine as I was able to list other resources with it directly.
But it seems kubectl can't pick those changes up automatically, as the kubectl/gcloud integration relies on a pre-generated key, which has a 1h expiration (not sure if that's the default, but it's what it is on my machine right now).
So, on top of setting the right user/project/account with gcloud, you should re-generate the credentials:
gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone>
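Putting the whole switch together (a sketch; the account, project, cluster, and zone are placeholders):
# 1. Point gcloud at the right account and project
gcloud config set account client1@gmail.com
gcloud config set project my-project-1
# 2. Re-generate the kubectl credentials for that cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# 3. kubectl now talks to the right cluster as the right user
kubectl get pods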
I'm in the same boat as you - apps deployed in GKE for work and personal projects deployed in my personal GCP account.
gcloud stores a list of logged in accounts that you can switch between to communicate with associated projects. Take a look at these commands:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
To work with a specific project under one of your accounts you need to set that configuration via gcloud config set project PROJECT_ID
kubectl has a list of "contexts" on your local machine in ~/.kube/config. Your current context is the cluster you want to run commands against - similar to the active account/project in gcloud.
Unlike gcloud, these are cluster-specific and store info on the cluster endpoint, default namespaces, the current context, etc. You can have contexts from GCP, AWS, on-prem... anywhere you have a cluster. We have different clusters for dev, qa, and prod (thus different contexts) and switch between them a ton. Take a look at the kubectx project (https://github.com/ahmetb/kubectx) for an easier way to switch between contexts and namespaces.
kubectl will use the keys from whatever GCP account you are logged in with against the cluster that is set as your current context. I.e., per your error above, if your active gcloud account is your personal one but you try to list pods from a cluster at work, you will get an error. You either need to set the active account/project for gcloud to your work email or change the kubectl context to a cluster that is hosted in your personal GCP account/project.
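A quick, read-only way to check for that mismatch before running anything:
# Which account will gcloud (and the gcp auth-provider) use?
gcloud config get-value account
# Which cluster will kubectl talk to?
kubectl config current-context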
For me, updating ~/.kube/config and setting the expiry to a date in the past fixes it.
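A sketch of that trick (my own rendering, assuming every expiry line in the file belongs to a GKE auth-provider entry; sed -i.bak keeps a backup of the original file):
# Mark every cached token as expired so kubectl fetches a fresh one
sed -i.bak 's/expiry: .*/expiry: "1970-01-01T00:00:00Z"/' ~/.kube/config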
TL;DR
Use gcloud config configurations to manage your separate profiles with Google Cloud Platform.
Add an explicit configuration argument to the cmd-args of your kubeconfig's user to prevent gcloud from producing an access token for an unrelated profile.
users:
- user:
    auth-provider:
      config:
        cmd-args: config --configuration=work config-helper --format=json
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects?
Sure! By following Google's suggested instructions that lead to running gcloud container clusters get-credentials ... when configuring a kubernetes cluster, you will end up with a section of your kubeconfig that contains information on what kubectl should do to acquire an access token when communicating with a cluster that is configured with a given user. That will look something like this:
users:
- name: gke_project-name_cluster-zone_cluster-name
  user:
    auth-provider:
      config:
        access-token: <redacted>
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2022-12-25T01:02:03Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Basically, this tells kubectl to run gcloud config config-helper --format=json when it needs a new token, and to parse out the access_token using the json-path .credential.access_token in the response from that command. This is the crux of understanding how kubectl communicates with gcloud.
Like you, I use Google Cloud both personally and at work. The issue is that this user configuration block does not take into account that it shouldn't use the currently active gcloud account when generating a credential. Even if you don't use kubernetes in one of your two projects, extensions in VS Code, for example, might try to run a kubectl command while you're working on something in a different project. If this were to happen after your current token has expired, gcloud config config-helper might get invoked to generate a token using a personal account.
To prevent this from happening, I suggest using gcloud config configurations. Configurations are global configuration profiles that you can quickly switch between. For example, I have two configurations that look like:
> gcloud config configurations list
NAME      IS_ACTIVE  ACCOUNT             PROJECT           COMPUTE_DEFAULT_ZONE       COMPUTE_DEFAULT_REGION
work      False      zev@work.email      work-project      us-west1-a                 us-west1
personal  True       zev@personal.email  personal-project  northamerica-northeast1-a  northamerica-northeast1
With configurations, you can alter your kubeconfig to specify which configuration to always use when creating an access token for a given kubernetes user, by changing the kubeconfig user's auth-provider.config.cmd-args to reference one of your gcloud configurations. With a value like config --configuration=work config-helper --format=json, whenever kubectl needs a new access token it will use the account from your work configuration, regardless of which account is currently active in the gcloud tool.
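Creating the two configurations in the first place looks like this (a sketch; the names, emails, and projects mirror the listing above):
# Create and populate the "work" profile (create also activates it)
gcloud config configurations create work
gcloud config set account zev@work.email
gcloud config set project work-project
# Create and populate the "personal" profile
gcloud config configurations create personal
gcloud config set account zev@personal.email
gcloud config set project personal-project
# Switch back whenever needed
gcloud config configurations activate work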

"permanent" GKE kubectl service account authentication

I deploy apps to Kubernetes running on Google Cloud from CI. CI makes use of a kubectl config which contains auth information (either stored directly in version control or templated from env vars during the build).
CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates via a role and avoids expiration, in contrast to the certificates, which, as you mentioned, currently expire.
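One way that can play out (a hedged sketch, not spelled out in the answer above: a Kubernetes service account whose token, on the Kubernetes versions current at the time, was long-lived):
# Create a service account for CI and grant it deploy rights in the namespace
kubectl create serviceaccount ci-deployer --namespace default
kubectl create rolebinding ci-deployer-edit --clusterrole=edit \
  --serviceaccount=default:ci-deployer --namespace default
# Extract the token from the auto-created secret (pre-Kubernetes-1.24 behaviour)
SECRET=$(kubectl get serviceaccount ci-deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret $SECRET -o jsonpath='{.data.token}' | base64 --decode)
# Wire the token into a kubeconfig user for CI to use
kubectl config set-credentials ci-deployer --token="$TOKEN"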
For those asking the same question and upvoting:
This is my current solution:
For some time I treated key-file.json as an identity token, put it into the CI config, and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config, the same approach used for GCP container registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events. This is relatively easy to do in k8s with Keel, Flux, etc. So CI only has to push the Docker image to the repo, and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.

How to access a Google Kubernetes cluster without the Google Cloud SDK?

I'm having trouble figuring out how I can set my kubectl context to connect to a Google Cloud cluster without using the gcloud SDK (to run in a controlled CI environment).
I created a service account in Google Cloud
Generated a secret from that (JSON format)
From there, how do I configure the kubectl context to be able to interact with the cluster?
Right in the Cloud Console you can find the Connect link:
gcloud container clusters get-credentials "your-cluster-name" --zone "desired-zone" --project "your_project"
But before this you should configure the gcloud tool.
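If you want to avoid gcloud entirely, kubectl can also be configured by hand (a sketch; the endpoint, CA file, and token are placeholders you would obtain from the cluster and your service account):
kubectl config set-cluster my-gke-cluster \
  --server=https://CLUSTER_ENDPOINT_IP \
  --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials ci-user --token="$SA_TOKEN"
kubectl config set-context ci --cluster=my-gke-cluster --user=ci-user
kubectl config use-context ci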

GCE/GKE Kubectl: the server doesn't have a resource type "services"

I have two kubernetes clusters on Google Container Engine but on separate Google accounts (one using my company's email and another using my personal email). I attempted to switch from one cluster to another. I did this by:
Logging in with my other email address
$ gcloud init
Getting new kubectl credentials
gcloud container clusters get-credentials
Test to see if connected to new cluster
$ kubectl get po
However, I was still not able to get the kubernetes resources in the cluster. The error I received was:
the server doesn't have a resource type "pods"
This occurs because although I logged in with the new credentials... kubectl isn't using the new credentials. In order to change the login/access credentials that kubectl will use to access your cluster you need to run the following command:
gcloud auth application-default login
You will then get the following response:
Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=...&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform&access_type=offline

Credentials saved to file: [/Users/.../.config/gcloud/application_default_credentials.json]

These credentials will be used by any library that requests Application Default Credentials.
Then get cluster credentials
gcloud container clusters get-credentials [cluster name/id]
You should now be able to access the cluster using kubectl.