I'm having trouble figuring out how to set my kubectl context to connect to a Google Cloud (GKE) cluster without using the gcloud SDK (to run in a controlled CI environment).
I created a service account in Google Cloud
Generated a key from that service account (JSON format)
From there, how do I configure the kubectl context to be able to interact with the cluster?
Right in the Cloud Console you can find the connect link:
gcloud container clusters get-credentials "your-cluster-name" --zone "desired-zone" --project "your_project"
But before this you should configure the gcloud tool.
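In a CI environment you would normally do that configuration non-interactively with the service account key. A minimal sketch, with the key file, project, zone and cluster names as placeholders:
# Authenticate gcloud with the service account key, then let it write the kubectl config
gcloud auth activate-service-account --key-file=key-file.json
gcloud config set project your_project
gcloud container clusters get-credentials your-cluster-name --zone desired-zone
# Verify that kubectl can now reach the cluster
kubectl get pods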
Related
Working in GCP with several kubernetes clusters, I would like to automatically get cluster credentials when switching gcloud configurations.
I have created several configurations for gcloud with gcloud config configurations create [config-name] and I have set what I need, specifically gcloud config set container/cluster [cluster-name].
When I switch configurations with gcloud config configurations activate [config-name], everything goes ok, except I do not get the credentials for the cluster I have configured for that configuration. Instead I need to run gcloud container clusters get-credentials [cluster-name].
Is there any way to automatically get credentials for a cluster when activating a gcloud configuration?
I think not.
gcloud and kubectl are distinct tools and each maintains its own configuration.
gcloud container clusters get-credentials is a bridging helper that configures kubectl's configuration (conventionally located in the ~/.kube/config file) with a gcloud auth helper to facilitate accessing Kubernetes Engine clusters. But otherwise the two tools are unrelated.
Have a look at this post I wrote that covers using different configurations with kubectl. It's not exactly what you want, but I hope it will be useful:
https://medium.com/google-cloud/context-light-gcloud-and-kubectl-89185d38ce82
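As a workaround, you could wrap the two steps in a small shell function. This is just a sketch with hypothetical names; it assumes the configuration has container/cluster (and compute/zone) set so get-credentials can resolve the cluster:
# Activate a gcloud configuration and fetch credentials for its configured cluster
kube-activate() {
  gcloud config configurations activate "$1"
  gcloud container clusters get-credentials "$(gcloud config get-value container/cluster)"
}
# Usage: kube-activate my-config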
I deploy apps to Kubernetes running on Google Cloud from CI. CI makes use of a kubectl config which contains the auth information (either stored directly in version control or templated from environment variables during the build).
CI has a separate Google Cloud service account, and I generate the kubectl config via
gcloud auth activate-service-account --key-file=key-file.json
and
gcloud container clusters get-credentials <cluster-name>
This sets the kubectl config, but the token expires in a few hours.
What are my options for having a 'permanent' kubectl config, other than providing CI with the key file during the build and running gcloud container clusters get-credentials?
You should look into RBAC (role-based access control), which authenticates against a role and so avoids the expiration issue, in contrast to the certificates, which, as mentioned, currently expire.
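One way to sketch this (not from the original answer; names are hypothetical and it assumes a cluster version that still auto-creates a token secret for service accounts): create an in-cluster Kubernetes service account for CI, bind it to a role, and put its long-lived token into the kubeconfig that CI uses.
# Create a service account for CI and give it edit rights in the default namespace
kubectl create serviceaccount ci-deployer
kubectl create rolebinding ci-deployer-edit --clusterrole=edit --serviceaccount=default:ci-deployer
# Extract the service account's token from its auto-created secret
SECRET=$(kubectl get serviceaccount ci-deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)
# Add a kubeconfig user that authenticates with that token instead of a gcloud-issued one
kubectl config set-credentials ci-deployer --token="$TOKEN"
kubectl config set-context ci --cluster="$(kubectl config view -o jsonpath='{.clusters[0].name}')" --user=ci-deployer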
For those asking the same question and upvoting.
This is my current solution:
For some time I treated key-file.json as an identity token, put it in the CI config and used it within a container with the gcloud CLI installed. I used the key file/token to log in to GCP and let gcloud generate the kubectl config - the same approach used for GCP Container Registry login.
This works fine, but using kubectl in CI is kind of an antipattern. I switched to deploying based on container registry push events. This is relatively easy to do in k8s with Keel, Flux, etc. So CI only has to push the Docker image to the repo and its job ends there. The rest is taken care of within k8s itself, so there is no need for kubectl and its config in the CI jobs.
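For illustration, the CI side then reduces to a build-and-push step (registry path and tag variable are hypothetical), with an in-cluster operator such as Keel or Flux reacting to the new image:
# Build the image and push it to the registry; the in-cluster operator handles the rollout
docker build -t gcr.io/your-project/your-app:$GIT_SHA .
docker push gcr.io/your-project/your-app:$GIT_SHA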
I am able to create a google dataproc cluster from the command line using a custom image:
gcloud beta dataproc clusters create cluster-name --image=custom-image-name
as specified in https://cloud.google.com/dataproc/docs/guides/dataproc-images, but I am unable to find information about how to do the same using the v1beta2 REST API in order to create a cluster from within Airflow. Any help would be greatly appreciated.
Since custom images can theoretically reside in a different project (if you grant read/use access of that custom image to whatever project service account you use for the Dataproc cluster), images currently always need a full URI, not just a short name.
When you use gcloud, there's syntactic sugar where gcloud will resolve the full URI automatically; you can see this in action if you use --log-http with your gcloud command:
gcloud beta dataproc clusters create foo --image=custom-image-name --log-http
If you created a cluster with gcloud, you can also run gcloud dataproc clusters describe <your-cluster> to see the fully-resolved custom image URI.
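For reference, a rough sketch of what the equivalent v1beta2 REST call might look like; it assumes the clusters.create endpoint and the imageUri field of the master/worker InstanceGroupConfig, with the fully-resolved image URI and all names as placeholders:
# Create a cluster via the v1beta2 REST API, passing the fully-resolved custom image URI
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dataproc.googleapis.com/v1beta2/projects/your-project/regions/your-region/clusters" \
  -d '{
    "projectId": "your-project",
    "clusterName": "cluster-name",
    "config": {
      "masterConfig": {"imageUri": "https://www.googleapis.com/compute/v1/projects/your-project/global/images/custom-image-name"},
      "workerConfig": {"imageUri": "https://www.googleapis.com/compute/v1/projects/your-project/global/images/custom-image-name"}
    }
  }'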
I have two Kubernetes clusters on Google Container Engine but on separate Google accounts (one using my company's email and another using my personal email). I attempted to switch from one cluster to another. I did this by:
Logging in with my other email address
$ gcloud init
Getting new kubectl credentials
$ gcloud container clusters get-credentials
Testing to see if I was connected to the new cluster
$ kubectl get po
However, I was still not able to get the kubernetes resources in the cluster. The error I received was:
the server doesn't have a resource type "pods"
This occurs because, although I logged in with the new credentials, kubectl wasn't using them. In order to change the login/access credentials that kubectl will use to access your cluster, you need to run the following command:
gcloud auth application-default login
You will then get the following response:
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=...&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform&access_type=offline
Credentials saved to file: [/Users/.../.config/gcloud/application_default_credentials.json]
These credentials will be used by any library that requests Application Default Credentials.
Then get the cluster credentials:
gcloud container clusters get-credentials [cluster name/id]
You should now be able to access the cluster using kubectl.
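To double-check which context and credentials kubectl is now using, you can inspect its configuration, for example:
# Show the active context, all known contexts, and the config relevant to the active context
kubectl config current-context
kubectl config get-contexts
kubectl config view --minify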
I'm trying to get kubectl running on a VM. I followed the steps given here and can go through with the installation. I copied my local Kubernetes config (from /Users/me/.kube/config) to the .kube directory on the VM. However, when I run any command such as kubectl get nodes it returns error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information
Is there a way I can run kubectl on a VM ?
To use kubectl to talk to a Google Container Engine cluster from a non-Google VM, you can create a user-managed IAM service account and use it to authenticate to your cluster:
# Set these variables for your project
PROJECT_ID=my-project
SA_NAME=my-new-serviceaccount
SA_EMAIL=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
KEY_FILE=~/serviceaccount_key.json
# Create a new GCP IAM service account.
gcloud iam service-accounts create $SA_NAME
# Download a json key for that service account.
gcloud iam service-accounts keys create $KEY_FILE --iam-account $SA_EMAIL
# Give that service account the "Container Engine Developer" IAM role for your project.
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/container.developer
# Configure Application Default Credentials (what kubectl uses) to use that service account.
export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE
And then go ahead and use kubectl as you normally would.
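For example, after copying the key file to the VM (the path here is hypothetical), you can persist the variable so new shells pick it up, and then verify access:
# Persist the Application Default Credentials location and test cluster access
echo 'export GOOGLE_APPLICATION_CREDENTIALS=$HOME/serviceaccount_key.json' >> ~/.bashrc
source ~/.bashrc
kubectl get nodes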