Kubernetes: How do I delete clusters and contexts from kubectl config? - kubernetes

kubectl config view shows contexts and clusters corresponding to clusters that I have deleted.
How can I remove those entries?
The command
kubectl config unset clusters
appears to delete all clusters. Is there a way to selectively delete cluster entries? What about contexts?

kubectl config unset takes a dot-delimited path. You can delete cluster/context/user entries by name. E.g.
kubectl config unset users.gke_project_zone_name
kubectl config unset contexts.aws_cluster1-kubernetes
kubectl config unset clusters.foobar-baz
Side note: if you tear down your cluster using cluster/kube-down.sh (or gcloud if you use Container Engine), the associated kubeconfig entries are deleted as well. There is also a planned rework of kubectl config for a future release to make the commands more intuitive, usable, and consistent.
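These dot-delimited paths map onto the top-level sections of the kubeconfig file (usually ~/.kube/config). An abridged sketch, using the names from the examples above:

```yaml
# ~/.kube/config (abridged)
clusters:
- name: foobar-baz                # removed by: kubectl config unset clusters.foobar-baz
contexts:
- name: aws_cluster1-kubernetes   # removed by: kubectl config unset contexts.aws_cluster1-kubernetes
users:
- name: gke_project_zone_name     # removed by: kubectl config unset users.gke_project_zone_name
```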

For clusters and contexts you can also do
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
There's nothing specific for users though, so you still have to do
kubectl config unset users.my-cluster-admin

Run the command below to list all the contexts you have:
$ kubectl config get-contexts
CURRENT   NAME             CLUSTER     AUTHINFO                               NAMESPACE
*         Cluster_Name_1   Cluster_1   clusterUser_resource-group_Cluster_1
Delete context:
$ kubectl config delete-context Cluster_Name_1

Unrelated to the question, but maybe a useful resource:
Have a look at kubectx + kubens: Power tools for kubectl.
They make it easy to switch contexts and namespaces, and also offer an option to delete contexts.
Change context:
kubectx dev-cluster-01
Change namespace:
kubens dev-ns-01
Delete context:
kubectx -d my-context

Related

What does "kubectl config set-cluster" actually do?

While using kubectl config set-cluster test --server=https://127.0.0.1:52807 for RBAC, the IP here is from the kind cluster that I am running. After that I use kubectl config set-context test --cluster=test, add the required credentials, and switch with kubectl config use-context test, and I am in the test context. I understand that the first command configures the config file, but am I creating a cluster within a cluster? Please help me clear up my doubt about what it is actually doing.
kubectl config set-cluster sets a cluster entry in your kubeconfig file (usually found in $HOME/.kube/config). The kubeconfig file defines how your kubectl is configured.
The cluster entry defines where kubectl can find the kubernetes cluster to talk to. You can have multiple clusters defined in your kubeconfig file.
kubectl config set-context sets a context element, which is used to combine a cluster, namespace and user into a single element so that kubectl has everything it needs to communicate with the cluster. You can have multiple contexts, for example one per kubernetes cluster that you're managing.
kubectl config use-context sets your current context to be used in kubectl.
So to walk through your commands:
kubectl config set-cluster test --server=https://127.0.0.1:52807 creates a new entry in kubeconfig under the clusters section with a cluster called test pointing towards https://127.0.0.1:52807
kubectl config set-context test --cluster=test creates a new context in kubeconfig called test and tells that context to point to a cluster called test
kubectl config use-context test changes the current context in kubeconfig to the context called test (which you just created).
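For reference, after those three commands the kubeconfig would contain entries along these lines (a sketch; credentials omitted):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: test
  cluster:
    server: https://127.0.0.1:52807
contexts:
- name: test
  context:
    cluster: test
current-context: test
```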
More docs on kubectl config and kubeconfig:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration

kubectl delete all resources except the kubernetes service

Is there a variant of the kubectl delete all --all command, or some other command, to delete all resources except the kubernetes service?
I don't think there's a built-in command for it, which means you'll have to script your way out of it, something like this (add an if for the namespace you want to spare):
$ for ns in $(kubectl get ns --output=jsonpath='{.items[*].metadata.name}'); do kubectl delete ns "$ns"; done
Note: deleting a namespace deletes all its resources.
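A minimal sketch of the "add an if" idea: a helper that decides which namespaces to spare before the loop deletes them (the KEEP list here is an assumption; adjust it to your cluster):

```shell
# Namespaces to spare -- an assumption; adjust to your cluster.
KEEP="default kube-system kube-public kube-node-lease"

# Returns success (0) if the namespace is safe to delete.
should_delete() {
  ns="$1"
  for k in $KEEP; do
    [ "$ns" = "$k" ] && return 1
  done
  return 0
}

# Against a real cluster you would then run:
#   for ns in $(kubectl get ns --output=jsonpath='{.items[*].metadata.name}'); do
#     should_delete "$ns" && kubectl delete ns "$ns"
#   done
```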

Ensure kubectl is running in the correct context

Consider a simple script:
kubectl create -f foo.yaml
kubectl expose deployment foo
There seems to be a race condition, and no way to guarantee that the context of the second command runs in the same context as the first. (Consider the user going to another shell and invoking kubectl config set-context while the script is running.) How do you resolve that? How can I ensure consistency?
I suggest always using the --context flag:
$ kubectl options | grep context
--context='': The name of the kubeconfig context to use
for each kubectl command, in order to pin the context and prevent the issue described in the question:
ENV=<env_name>
kubectl create --context=$ENV -f foo.yaml
kubectl expose --context=$ENV deployment foo

How to delete all resources from Kubernetes one time?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a replicationcontroller, the resources it manages will regenerate when deleted. Is there a way to bring Kubernetes back to its initial state?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it using its name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks #Marcus): all in Kubernetes does not refer to every Kubernetes object; admin-level resources (limits, quota, policy, authorization rules) are not included. If you really want to make sure you delete everything, it's better to delete the namespace and re-create it. Another way to do that is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
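The tr "\n" "," | sed pipeline above just joins the resource-type names into a single comma-separated argument; here it is run on sample input (in the real command the list comes from kubectl api-resources):

```shell
# Sample of the name-joining step used above.
printf 'pods\ndeployments.apps\nservices\n' | tr '\n' ',' | sed -e 's/,$//'
# prints: pods,deployments.apps,services
```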
A Kubernetes Namespace would be the perfect option for you. You can easily create a namespace resource. Create a file custom-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Then apply it:
kubectl create -f custom-namespace.yaml
Now you can deploy all of the other resources (Deployment, ReplicaSet, Service, etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace; deleting it deletes everything inside. Without this, the ReplicaSet might create new pods when existing pods are deleted.
To work with a namespace, you need to add the --namespace flag to k8s commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
you can list all the pods in custom-namespace.
kubectl get pods --namespace=custom-namespace
You can also delete Kubernetes resources with the help of labels attached to them. For example, suppose the labels below are attached to all your resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
Now just execute the commands below.
Deleting resources using the app label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
Deleting resources using the environment label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all resources of those types, including uninitialized ones
--all-namespaces applies this in all namespaces
First backup your namespace resources and then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources may still exist in your cluster.
Check with
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
If you want to delete all the K8s resources, the easiest way would be to delete their entire namespace:
kubectl delete ns <name-space>
To delete selected resource types by label in a namespace:
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
kubectl delete all --all
deletes all the resources in the current namespace. After deleting the resources, Kubernetes will relaunch the default services for the cluster.

Is there a way to specific the google cloud platform project for kubectl in the command?

Is there something like:
kubectl get pods --project=PROJECT_ID
I would like not to modify my default gcloud configuration to switch between my staging and production environment.
kubectl saves clusters/contexts in its configuration. If you use the default scripts to bring up the cluster, these entries should've been set for your cluster.
A brief overview of kubectl config:
kubectl config view lets you view the clusters/contexts in your configuration.
kubectl config set-cluster and kubectl config set-context modifies/adds new entries.
You can use kubectl config use-context to change the default context, and kubectl --context=CONTEXT get pods to switch to a different context for the current command.
You can download credentials for each of your clusters using gcloud container clusters get-credentials which takes the --project flag. Once the credentials are cached locally, you can use the --context flag (as Yu-Ju explains in her answer) to switch between clusters for each command.
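get-credentials writes a context whose name follows the pattern gke_&lt;project&gt;_&lt;zone&gt;_&lt;cluster&gt;, so a small helper can build the value for the --context flag (the project and cluster names below are hypothetical):

```shell
# Builds the context name that `gcloud container clusters get-credentials` creates.
gke_context() {
  printf 'gke_%s_%s_%s' "$1" "$2" "$3"
}

# Hypothetical usage, after caching credentials for both environments:
#   gcloud container clusters get-credentials staging --zone us-central1-a --project my-staging-proj
#   kubectl --context="$(gke_context my-staging-proj us-central1-a staging)" get pods
```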