I have created a set of topics, subscriptions and service accounts in gcloud. Since this is automated and I am in the dev environment, I would like to tear down this entire setup.
I can do this by individually deleting these resources like below:
gcloud pubsub topics delete projects/<project name>/topics/<topic name>
But I was wondering if it's possible to associate something like a label when creating these resources, so that the whole group can be deleted using that label.
Resources have to be deleted individually in gcloud, and by their respective commands (i.e. you have to delete topics with gcloud pubsub topics delete), but you can streamline the process with --format, --filter, piping, and shell utilities. For example, to delete all Pub/Sub topics whose label 'foo' has the value 'bar', you can use:
gcloud pubsub topics list --format="value(name)" --filter="labels.foo:bar" | xargs -L1 gcloud pubsub topics delete
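For completeness, here is a hedged sketch of the full round trip, assuming a label of env=dev and placeholder resource names. Topics and subscriptions both accept --labels at creation time, so the same list/filter/delete pattern works for subscriptions as well (service accounts, as far as I know, do not support labels, so they still need their own cleanup):
# Create resources with a shared label (names are placeholders)
gcloud pubsub topics create my-topic --labels=env=dev
gcloud pubsub subscriptions create my-sub --topic=my-topic --labels=env=dev

# Tear down everything carrying that label (subscriptions first, then topics)
gcloud pubsub subscriptions list --filter="labels.env:dev" --format="value(name)" \
  | xargs -L1 gcloud pubsub subscriptions delete
gcloud pubsub topics list --filter="labels.env:dev" --format="value(name)" \
  | xargs -L1 gcloud pubsub topics delete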
I'm trying to connect to a DigitalOcean Kubernetes cluster using doctl but when I run
doctl kubernetes cluster kubeconfig save <> I get an error saying .kube/config: not a directory. I've authenticated using doctl and when I run doctl account get I see my account info. I'm confused as to what the problem is. Is this some sort of permission issue or did I miss a config step somewhere?
kubectl (by default) stores its configuration in ${HOME}/.kube/config. It appears you don't have that file, and the command doesn't create it if the directory doesn't exist; I recommend you try creating ${HOME}/.kube first, as doctl really ought to create the config file itself if it doesn't exist.
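A minimal sketch of that workaround, with the cluster name as a placeholder:
# Create the directory doctl/kubectl expect, then retry the save
mkdir -p "${HOME}/.kube"
doctl kubernetes cluster kubeconfig save <your-cluster-name>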
kubectl facilitates interacting with multiple clusters as multiple users in multiple namespaces through the use of a tuple called a 'context', which combines a cluster with a user and a(n optional) namespace. The kubectl config use-context command lets you switch between these easily.
After you're done with a cluster, generally (!) you must tidy up its entries in ${HOME}/.kube/config too, as these configs tend to grow over time.
You can change the location of the kubectl config file using an environment variable (KUBECONFIG).
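For example, a hedged sketch assuming ${HOME}/.kube/do-cluster.yaml is a kubeconfig file you have already saved elsewhere:
# Point kubectl at a dedicated kubeconfig instead of the default ${HOME}/.kube/config
export KUBECONFIG="${HOME}/.kube/do-cluster.yaml"
kubectl config get-contexts
kubectl get nodes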
See Organizing Cluster Access Using kubeconfig Files
I have two emails associated with two separate gcloud projects.
I can easily switch the projects with:
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
first#project1
* second#project2
$ gcloud config set account first#project1
I can then see that gcloud did change the active account. I can also do this with:
$ gcloud config configurations list
...
$ gcloud config configurations activate project1
And I can see the active configuration changes.
However, this does not seem to have any effect on kubectl and terraform commands, as they still use the previous configuration.
What am I doing wrong? How should I switch between the projects? It seems to have something to do with the application-default account, but it looks like that cannot easily be switched without logging in again?
Edit: to make the question more precise:
What would be a correct sequence of commands to switch between gcloud auths (e.g. first#project1, second#project2) so that it is usable in Kubernetes, Terraform and others?
kubectl and Terraform each have their own config, or we could say their own context.
For kubectl, you need to change the cluster config using:
kubectl config get-contexts
kubectl config use-context <cluster-name>
Alternatively, you can set the Kubernetes cluster context each time using gcloud, and it will be changed automatically for kubectl:
gcloud container clusters get-credentials cluster-name, which also accepts --project.
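For example (cluster name, zone, and project ID below are placeholders):
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project-id
kubectl config current-context   # should now show gke_my-project-id_us-central1-a_my-cluster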
Read more at: https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl
For changing the project in Terraform, there are different ways:
Using different service account key JSON files
Changing the project config inside the Terraform provider
Setting the GOOGLE_APPLICATION_CREDENTIALS environment variable (see the sketch at the end of this section)
Setting the project inside the provider:
provider "google" {
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference
Best approach to use: https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#credentials-1
As you are writing IaC, this keeps all config in code.
List of all possible authentication methods for Terraform:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference#authentication
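As a hedged sketch of the environment-variable option from the list above (the key file paths are placeholders for service account keys you have downloaded yourself):
# Point the Google provider (via Application Default Credentials) at project1's key
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/project1-terraform-sa.json"
terraform plan

# Switch to project2's key before working on that project
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/project2-terraform-sa.json"
terraform plan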
The gcloud SDK provides the following command, which applies your credentials to all API calls that make use of the Application Default Credentials client libraries.
Terraform is one of the classic applications that has this dependency.
gcloud auth application-default login
Here is the documentation for the above command.
Such as system:masters, system:anonymous, system:unauthenticated.
Is there a way to get all of the built-in system groups (that is, only the ones Kubernetes itself creates, not externally created ones), either with a kubectl command or from a documented list?
I searched the Kubernetes documentation but didn't find a list or a way to get it.
There is no built-in command to list all the default user groups in a Kubernetes cluster.
However, you can try to work around this in a couple of ways:
You can create a custom script (e.g. in Bash) based on the kubectl get clusterrole command; see the sketch after this list.
You can try installing some plugins. The rakkess plugin could help you:
Have you ever wondered what access rights you have on a provided kubernetes cluster? For single resources you can use kubectl auth can-i list deployments, but maybe you are looking for a complete overview? This is what rakkess is for. It lists access rights for the current user and all server resources, similar to kubectl auth can-i --list.
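For the custom-script option above, a hedged sketch (it inspects cluster role bindings rather than cluster roles, assumes jq is installed, and only surfaces groups that are actually bound to a role, not every group the API server recognizes):
#!/usr/bin/env bash
# List the distinct Group subjects referenced by cluster role bindings
kubectl get clusterrolebindings -o json \
  | jq -r '.items[].subjects[]? | select(.kind == "Group") | .name' \
  | sort -u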
See also more information about:
kubelet authentication / authorization
anonymous requests
At work we use Kubernetes hosted in GCP. I also have a side project hosted in my personal GCP account using Google App Engine (deploy using gcloud app deploy).
Often when I try to run a command such as kubectl logs -f service-name, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email#gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various kubectl and gcloud documentation but it doesn't make much sense or talks in circles.
Edit: this is on Linux.
Had the same problem and doing all of the:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
didn't help. I knew gcloud switched fine as I was able to list other resources with it directly.
But it seems kubectl can't pick those changes up automatically, as the kubectl/gcloud integration relies on a pre-generated key, which has a 1h expiration (not sure if that's the default, but it's what it is on my machine right now).
So, on top of setting the right user/project/account with gcloud, you should regenerate the credentials:
gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone>
I'm in the same boat as you - apps deployed in GKE for work and personal projects deployed in my personal GCP account.
gcloud stores a list of logged in accounts that you can switch between to communicate with associated projects. Take a look at these commands:
gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
To work with a specific project under one of your accounts you need to set that configuration via gcloud config set project PROJECT_ID
kubectl has a list of "contexts" on your local machine in ~/.kube/config. Your current context is the cluster you want to run commands against - similar to the active account/project in gcloud.
Unlike gcloud these are cluster specific and store info on cluster endpoint, default namespaces, the current context, etc. You can have contexts from GCP, AWS, on-prem...anywhere you have a cluster. We have different clusters for dev, qa, and prod (thus different contexts) and switch between them a ton. Take a look at the kubectx project (https://github.com/ahmetb/kubectx) for an easier way to switch between contexts and namespaces.
kubectl will use the keys from whatever GCP account you are logged in with against the cluster that is set as your current context. I.e., from your error above, if your active account for gcloud is your personal one but you try to list pods from a cluster at work, you will get an error. You either need to set the active account/project for gcloud to your work email, or change the kubectl context to a cluster that is hosted in your personal GCP account/project.
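A hedged sketch of that switch (emails, project IDs, and the context name are placeholders):
# Work: use the work account/project and the matching GKE context
gcloud config set account me@work.example.com
gcloud config set project work-project-id
kubectl config use-context gke_work-project-id_us-central1-a_work-cluster

# Personal: switch gcloud back before touching personal resources
gcloud config set account me@personal.example.com
gcloud config set project personal-project-id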
For me, updating ~/.kube/config and setting the expiry to a date in the past fixes it.
TL;DR
Use gcloud config configurations to manage your separate profiles with Google Cloud Platform.
Add an explicit configuration argument to the cmd-args of your kubeconfig's user to prevent gcloud from producing an access token for an unrelated profile.
users:
- user:
    auth-provider:
      config:
        cmd-args: config --configuration=work config-helper --format=json
Can somebody please break it down for a slow person like me, how gcloud and kubectl work together, and how I can easily switch accounts so I can use gcloud commands for my personal projects and kubectl commands for my work projects?
Sure! By following Google's suggested instructions that lead to running gcloud container clusters get-credentials ... when configuring a kubernetes cluster, you will end up with a section of your kubeconfig that contains information on what kubectl should do to acquire an access token when communicating with a cluster that is configured with a given user. That will look something like this:
users:
- name: gke_project-name_cluster-zone_cluster-name
  user:
    auth-provider:
      config:
        access-token: &Redacted
        cmd-args: config config-helper --format=json
        cmd-path: /path/to/google-cloud-sdk/bin/gcloud
        expiry: "2022-12-25T01:02:03Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Basically, this tells kubectl to run gcloud config config-helper --format=json when it needs a new token, and to parse the access_token using the json-path .credential.access_token in the response from that command. This is the crux in understanding how kubectl communicates with gcloud.
Like you, I use google cloud both personally and at work. The issue is that this user configuration block does not take into account the fact that it shouldn't use the currently active gcloud account when generating a credential. Even if you don't use kubernetes in either one of your two projects, extensions in vscode for example might try to run a kubectl command when you're working on something in a different project. If this were to happen after your current token is expired, gcloud config config-helper might get invoked to generate a token using a personal account.
To prevent this from happening, I suggest using gcloud config configurations. Configurations are global configuration profiles that you can quickly switch between. For example, I have two configurations that look like:
> gcloud config configurations list
NAME      IS_ACTIVE  ACCOUNT             PROJECT           COMPUTE_DEFAULT_ZONE       COMPUTE_DEFAULT_REGION
work      False      zev#work.email      work-project      us-west1-a                 us-west1
personal  True       zev#personal.email  personal-project  northamerica-northeast1-a  northamerica-northeast1
With configurations, you can alter your kubeconfig to specify which configuration should always be used when creating an access token for a given Kubernetes user, by changing the kubeconfig user's auth-provider.config.cmd-args to include one of your gcloud configurations. With a value like config --configuration=work config-helper --format=json, whenever kubectl needs a new access token it will use the account from your work configuration, regardless of which account is currently active in the gcloud tool.
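For reference, a hedged sketch of how two such configurations could be created in the first place (accounts and project IDs are placeholders matching the example above; gcloud config configurations create normally activates the new configuration, so the subsequent set commands apply to it):
gcloud config configurations create work
gcloud config set account zev@work.email
gcloud config set project work-project

gcloud config configurations create personal
gcloud config set account zev@personal.email
gcloud config set project personal-project

# Switch the active profile for ad-hoc gcloud commands later on
gcloud config configurations activate work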
When creating various Kubernetes objects in GKE, associated GCP resources are automatically created. I'm specifically referring to:
forwarding-rules
target-http-proxies
url-maps
backend-services
health-checks
These have names such as k8s-fw-service-name-tls-ingress--8473ea5ff858586b.
After deleting a cluster, these resources remain. How can I identify which of these are still in use (by other Kubernetes objects, or another cluster) and which are not?
There is no easy way to identify which added GCP resources (LB, backend, etc.) are linked to which cluster. You need to manually go into these resources to see what they are linked to.
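As a small aid for that manual inspection, a hedged sketch: GKE-created load balancer resources usually carry a description field mentioning the originating Kubernetes service or ingress, though the exact format is not guaranteed (the k8s- name prefix below is just the convention from your example):
# List candidate GKE-created resources together with their descriptions
gcloud compute forwarding-rules list --filter="name~^k8s-" --format="table(name, description)"
gcloud compute backend-services list --filter="name~^k8s-" --format="table(name, description)"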
If you delete a cluster that has additional resources attached, you also have to delete these resources manually. For now, I would suggest taking note of which added GCP resources are related to which cluster, so that you will know which resources to delete when the time comes to delete the GKE cluster.
I would also suggest creating a feature request here, asking either for a more clearly defined naming convention for the additional GCP resources created for a specific cluster, and/or for the ability to automatically delete all additional resources linked to a cluster when deleting that cluster.
I would recommend you to look at https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/14-cleanup.md
You can easily delete all the objects by using the Google Cloud SDK in the following manner:
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
{
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way
}
This assumes you named your resources 'kubernetes-the-hard-way'. If you do not know the names, you can also use various filter mechanisms (for example gcloud's --filter flag) to find and remove these resources, as sketched below.
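For instance, a hedged sketch of the filter-and-delete pattern (the k8s- prefix is a placeholder; regional resources such as forwarding rules would additionally need a --region flag):
# Delete all HTTP health checks whose names match a given prefix
gcloud compute http-health-checks list --filter="name~^k8s-" --format="value(name)" \
  | xargs -r -n1 gcloud -q compute http-health-checks delete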