When I run gcloud container clusters get-credentials, I get the response "kubeconfig entry generated for ***.", so it looks like the entry was generated, but when I run kubectl config view, there is nothing there.
Reference of gcloud container clusters get-credentials says,
gcloud container clusters get-credentials updates a kubeconfig file with appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine.
So I thought the problem was that ~/.kube/config did not exist, but creating an empty file changed nothing.
The reason was that WSL includes the Windows PATH by default, so the command was invoking the gcloud installed on the Windows side (via scoop) rather than the Linux one.
I solved the problem by excluding the Windows directories from the PATH, referring to this gist.
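If you hit the same symptom, a quick way to confirm which binary is being resolved is to check the command path inside WSL. A minimal sketch (the wsl.conf setting below is the documented way to stop WSL from appending the Windows PATH):

$ which gcloud   # a /mnt/c/... path means the Windows install is being picked up

# In /etc/wsl.conf, then run `wsl --shutdown` from Windows and reopen the shell:
[interop]
appendWindowsPath = false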
Related
I have two emails associated with two separate gcloud projects.
I can easily switch the projects with:
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
first#project1
* second#project2
$ gcloud config set account first#project1
I can then see that gcloud did change the active account. I can also do this with:
$ gcloud config configurations list
...
$ gcloud config configurations activate project1
And I can see the active configuration changes.
However, this does not seem to have any effect on kubectl and terraform commands; they still use the previous configuration.
What am I doing wrong? How should I switch between the projects? It seems to have something to do with the application-default account, but it looks like that cannot be easily switched without logging in again?
Edit: to make the question more precise:
What would be the correct sequence of commands to switch between gcloud auths (e.g. first#project1, second#project2) so that it is usable in Kubernetes, Terraform and others?
kubectl and terraform each keep their own configuration, or what you might call a context.
For kubectl you need to switch the cluster context using:
kubectl config get-contexts
kubectl config use-context <cluster-name>
Alternatively, fetch the cluster credentials through gcloud each time, and the context will be switched for kubectl automatically:
gcloud container clusters get-credentials <cluster-name>, which also accepts the --project flag.
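For instance, a full account-and-cluster switch could look like this (a minimal sketch; the cluster name and zone are placeholders, the account and project are taken from the question):

$ gcloud config set account first#project1
$ gcloud config set project project1
$ gcloud container clusters get-credentials my-cluster --project project1 --zone us-central1-c
$ kubectl config current-context   # should now point at the project1 cluster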
Read more at : https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
For changing the project in terraform there are different ways:
Using different service account key JSON files
Changing the project config inside the terraform provider
Setting the environment variable GOOGLE_APPLICATION_CREDENTIALS (see the sketch after the links below)
Setting the project inside the provider:
provider "google" {
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference
Best approach to use : https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#credentials-1
Since you are writing IaC, this keeps all configuration in code.
A list of all possible authentication methods for terraform:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#authentication
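As a concrete sketch of the environment-variable method from the list above (the key file path is a placeholder; the google provider reads this variable when no credentials are set in code):

$ export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/project1-terraform.json"
$ terraform plan   # runs with project1's service account credentials

Switching projects is then just a matter of pointing the variable at a different key file.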
The SDK provides the following command, which applies credentials to all API calls made through the Application Default Credentials client libraries.
Terraform is one of the classic applications with this dependency.
gcloud auth application-default login
The documentation for the above command is at https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login.
I was successfully able to connect to the Kubernetes cluster and work with the services and pods. At one point this changed, and every time I try to connect to the cluster I get the following error:
PS C:\Users\xxx> kubectl get pods
Unable to connect to the server: error parsing output for access token command "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": yaml: line 4: could not find expected ':'
I am unsure of what the issue is. Google unfortunately doesn't yield any results for me either.
I have not changed any config files or anything. It was a matter of it working one second and not working the next.
Thanks.
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run; run it yourself to see if/why it fails.
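In this case the failing helper command is visible in the error message itself, so you can reproduce it directly (the comment notes one plausible, unconfirmed cause):

$ gcloud config config-helper --format=json
# If anything (an update notice, a warning) is printed before the JSON,
# kubectl's parsing fails with errors like "could not find expected ':'".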
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built-in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
Try running this, or set the environment variable %CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
Referenced from here
The workaround for this issue:
gcloud container clusters get-credentials <cluster-name>
If you don't know your cluster name, find it with:
gcloud container clusters list
Finally, if those run without issues, do:
gcloud auth application-default login
and log in with the relevant account.
Working in GCP with several kubernetes clusters, I would like to automatically get cluster credentials when switching gcloud configurations.
I have created several configurations for gcloud with gcloud config configurations create [config-name] and I have set what I need, specifically gcloud config set container/cluster [cluster-name].
When I switch configurations with gcloud config configurations activate [config-name], everything goes ok, except I do not get the credentials for the cluster I have configured for that configuration. Instead I need to run gcloud container clusters get-credentials [cluster-name].
Is there any way to automatically get credentials for a cluster when activating a gcloud configuration?
I think not.
gcloud and kubectl are distinct tools and each maintains its own configuration.
gcloud container clusters get-credentials is a bridging helper that writes kubectl's configuration (conventionally located in the ~/.kube/config file) with a gcloud auth helper to facilitate accessing Kubernetes Engine clusters. But otherwise the two tools are unrelated.
Have a look at this post I wrote that covers using different configurations with kubectl. It's not exactly what you want but I hope it will be useful:
https://medium.com/google-cloud/context-light-gcloud-and-kubectl-89185d38ce82
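That said, you can script the missing step yourself. A minimal sketch, assuming each configuration sets the container/cluster and compute/zone properties (the function name is my own):

activate-config() {
  # Activate the named gcloud configuration, then fetch credentials
  # for the cluster that configuration points at.
  gcloud config configurations activate "$1" &&
  gcloud container clusters get-credentials \
    "$(gcloud config get-value container/cluster)" \
    --zone "$(gcloud config get-value compute/zone)"
}

# usage: activate-config my-config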
I already found a way using the gcloud CLI:
gcloud auth activate-service-account --key-file=serviceaccount.json
gcloud container clusters get-credentials $clusterName \
--zone=$zone --project=$project
kubectl config view --minify --flatten
However, to eliminate the dependency on the gcloud CLI, is there any programmatic way to achieve a similar result? Preferably using the APIs exposed in Google's Python client library.
My expected result is a portable config file that can be passed to any kubectl --kubeconfig=... command.
update: I have found that the commands shown above result in a kubeconfig file that still depends on the gcloud CLI as an auth helper, probably to handle token expiration automatically. So, any workarounds are welcome.
I wrote a shell script which basically does exactly what you are expecting.
https://gitlab.com/workshop21/open-source/rbac
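Scripts like this typically create a Kubernetes service account, grant it RBAC permissions, and assemble a self-contained kubeconfig around that account's token. A rough sketch of the last step (I have not verified the linked script's exact approach; the endpoint, file names, and token are placeholders):

# Build a portable kubeconfig that authenticates with a bearer token
# and never shells out to gcloud.
kubectl config --kubeconfig=portable.conf set-cluster my-cluster \
  --server=https://<cluster-endpoint> \
  --certificate-authority=ca.crt --embed-certs=true
kubectl config --kubeconfig=portable.conf set-credentials my-sa --token=<sa-token>
kubectl config --kubeconfig=portable.conf set-context my-context --cluster=my-cluster --user=my-sa
kubectl config --kubeconfig=portable.conf use-context my-context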
I was performing a practical where I was deploying a containerised sample application using Kubernetes.
I was trying to run the container on Google Cloud Platform using Kubernetes Engine. But while deploying the container with the "kubectl run" command in Google Cloud Shell,
it shows the error "error: failed to discover supported resources: Get https://35.240.145.231/apis/extensions/v1beta1: x509: certificate signed by unknown authority".
From the error, I gather that it is because the SSL certificate is not authorised.
I even exported the config file residing at "$HOME/.kube/config", but I am still getting the same error.
Please, can anyone help me understand the real issue behind this?
Best,
Swapnil Pawar
You may try the following steps.
List all the available clusters:
$ gcloud container clusters list
Depending upon how you have configured the cluster: if the cluster location is configured for a specific zone, then
$ gcloud container clusters get-credentials <cluster_name> --zone <location>
or, if the location is configured for a region, then
$ gcloud container clusters get-credentials <cluster_name> --region <location>
The above command will update your kubectl config file, $HOME/.kube/config.
Now, the tricky part.
If you have more than one cluster that you have configured, then your $HOME/.kube/config will have two or more entries. You can verify it by doing a cat command on the config file.
To select a particular context/cluster, run the following commands:
$ kubectl config get-contexts -o=name   # lists the available context names
$ kubectl config use-context <CONTEXT_NAME>
(kubectl config set-context is for modifying a context's settings; use-context is what actually switches to it.)
Now you may run your kubectl run command again.
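For example, a quick end-to-end check after switching contexts (the image is the sample app used in the GKE tutorials; substitute your own):

$ kubectl config use-context <CONTEXT_NAME>
$ kubectl run hello-app --image=gcr.io/google-samples/hello-app:1.0 --port=8080
$ kubectl get pods   # should now list hello-app without certificate errors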