How to set the active namespace for all kubectl commands?

I am working on a Kubernetes cluster that has three namespaces:
Default
Staging
Production
At any given time I only want to work in one of them, for example staging, but I have to pass the namespace to every kubectl command:
kubectl get pods -n staging
kubectl get deployment -n staging
Is there a way to set the active namespace once?

kubectl config set-context --current --namespace=<insert-namespace-name-here>
# Validate it
kubectl config view --minify | grep namespace:
Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference
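For example, using the staging namespace from the question, a minimal sketch:
kubectl config set-context --current --namespace=staging
kubectl config view --minify | grep namespace:
kubectl get pods   # now lists pods in staging without -n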

kubectl config set-context --current --namespace=<insert-namespace-name-here>
Refer to the Kubernetes documentation on setting the namespace preference (linked in the answer above).
You can also use the kubens tool from the kubectx project, which switches the active namespace for you.
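A short kubens sketch (assuming the plugin is installed; staging is the namespace from the question):
kubens            # list namespaces, current one highlighted
kubens staging    # switch the active namespace to staging
kubens -          # switch back to the previous namespace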


How to delete all resources from Kubernetes at once?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a ReplicationController, the resources regenerate when I delete some Deployments. Is there a way to get Kubernetes back to its initial state?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it using its name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks @Marcus): all in Kubernetes does not refer to every Kubernetes object; admin-level resources (limits, quotas, policies, authorization rules) are not included. If you really want to make sure everything is deleted, it's better to delete the namespace and re-create it. Another way is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
A Kubernetes Namespace would be the perfect option for you. You can easily create a Namespace resource:
kubectl create -f custom-namespace.yaml
# custom-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Now you can deploy all of the other resources (Deployment, ReplicaSet, Service, etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace: deleting it deletes everything that belongs to it. Without it, a ReplicaSet might create new Pods when existing Pods are deleted.
To work with the namespace, add the --namespace flag to your kubectl commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
You can list all the pods in custom-namespace:
kubectl get pods --namespace=custom-namespace
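Putting this answer together, a minimal end-to-end sketch (reusing the custom-namespace and deployment.yaml names from above):
kubectl create namespace custom-namespace
kubectl create -f deployment.yaml --namespace=custom-namespace
# ...later, wipe everything in one go by removing the namespace
kubectl delete namespace custom-namespace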
You can also delete Kubernetes resources with the help of the labels attached to them. For example, suppose the labels below are attached to all resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
Now just execute the commands below.
Deleting resources using the app label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
Deleting resources using the environment label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all resources of those types, including uninitialized ones
--all-namespaces applies this across all namespaces
First back up your namespace resources, then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources will still exist in your namespace.
Check with:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
If you want to delete all the K8s resources in a namespace, the easiest way is to delete the entire namespace:
kubectl delete ns <name-space>
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
kubectl delete all --all
deletes all the resources in the current namespace.
After deleting them, Kubernetes will re-create its default services for the cluster (for example the kubernetes Service in the default namespace).
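To see what is left (or re-created) after such a wipe, a quick check — the output will vary by cluster:
kubectl get all -n default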

Get YAML for deployed Kubernetes services?

I am trying to deploy my app to Kubernetes running in Google Container Engine.
The app can be found at: https://github.com/Industrial/docker-znc.
The Dockerfile is built into an image on Google Container Registry.
I have deployed the app in Kubernetes via the + button. I don't have the YAML for this.
I have inserted a Secret in Kubernetes for the PEM file required by the app.
How do I get the YAML for the Deployment, Service and Pod created by Kubernetes by filling in the form?
How do I get the Secret into my Pod for usage?
To get the yaml for a deployment (service, pod, secret, etc):
kubectl get deploy deploymentname -o yaml
How do I get the YAML for the Deployment, Service and Pod created by Kubernetes by filling in the form?
kubectl get deployment,service,pod yourapp -o yaml --export
Answering @Sinaesthetic's question:
any idea how to do it for the full cluster (all deployments)?
kubectl get deploy --all-namespaces -o yaml --export
The problem with this method is that export doesn't include the namespace. So if you want to export many resources at the same time, I recommend doing it per namespace:
kubectl get deploy,sts,svc,configmap,secret -n default -o yaml --export > default.yaml
Unfortunately Kubernetes still doesn't support a true get all command, so you need to manually list the resource types you want to export. You can get a list of resource types with
kubectl api-resources
The same issue is discussed on the Kubernetes GitHub issues page, and the user "alahijani" made a bash script that exports all YAML and writes it to individual files and folders.
Since this question ranks well on Google and since I found that solution very good, I reproduce it here.
Bash script exporting yaml to sub-folders:
for n in $(kubectl get -o=name pvc,configmap,serviceaccount,secret,ingress,service,deployment,statefulset,hpa,job,cronjob)
do
mkdir -p $(dirname $n)
kubectl get -o=yaml --export $n > $n.yaml
done
Another user, "acondrat", made a script that does not use directories, which makes it easy to do a kubectl apply -f later.
Bash script exporting yaml to current folder:
for n in $(kubectl get -o=name pvc,configmap,ingress,service,secret,deployment,statefulset,hpa,job,cronjob | grep -v 'secret/default-token')
do
kubectl get -o=yaml --export $n > $(dirname $n)_$(basename $n).yaml
done
The last script does not include service account.
Now that --export is deprecated, to get the output from your resources in the 'original' format (cleaned up, without the current object state and other unnecessary metadata) you can do the following using yq v4.x:
kubectl get <resource> -n <namespace> <resource-name> -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' -
Syntax for downloading YAML from Kubernetes:
kubectl get [resource type] -n [namespace] [resource name] -o yaml > [new file name]
Create a YAML file from a running Pod:
kubectl get po -n nginx nginx-deployment-755cfc7dcf-5s7j8 -o yaml > podDetail.yaml
Create a ReplicaSet YAML file from running Pods:
kubectl get rs -n nginx -o yaml > latestReplicaSet.yaml
Create a Deployment YAML file from running Pods:
kubectl get deploy -n nginx -o yaml > latestDeployment.yaml
It is also possible to use the view-last-applied command, e.g.
kubectl apply view-last-applied services --all > services.yaml
which will return all the manifests applied to create the Services. You can also target a single resource using the type/name form, as shown in the sketch below.
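For a single Service, a minimal sketch (the Service name my-service is a placeholder):
kubectl apply view-last-applied service/my-service > my-service.yaml
Note that this only works for objects that were created with kubectl apply, since it reads the last-applied-configuration annotation.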
If you need a 'clean' export, removing the annotations added by Kubernetes, there's an open-source project that does that by piping the output of kubectl get: https://github.com/itaysk/kubectl-neat.
It removes the timestamp metadata, etc.
kubectl get pod mypod -o yaml | kubectl neat
kubectl get pod mypod -oyaml | kubectl neat -o json
Use this command to get the YAML of your Service:
kubectl get service servicename -n <namespace> -o yaml
You can also redirect it into a file:
kubectl get service servicename -n <namespace> -o yaml > service.yaml
The following code will extract all your K8s definitions at once and place them in individual folders below the current folder.
for OBJ in $(kubectl api-resources --verbs=list --namespaced -o name)
do
for DEF in $(kubectl get --show-kind --ignore-not-found $OBJ -o name)
do
mkdir -p $(dirname $DEF)
kubectl get $DEF -o yaml \
| yq eval 'del(.metadata.resourceVersion, .metadata.uid, .metadata.annotations, .metadata.creationTimestamp, .metadata.selfLink, .metadata.managedFields)' - > $DEF.yaml
done
done
You can store the output of a deployed Kubernetes Service by using the command below:
kubectl get svc <service-name> -n <your-namespace> -o yaml > svc-output.yaml
For Deployments:
kubectl get deploy <deployment-name> -n <your-namespace> -o yaml > deploy-output.yaml
For a Pod:
kubectl get pod <pod-name> -n <your-namespace> -o yaml > pod-output.yaml
You can get your Secret details using the command below:
kubectl get secret <secret-name> -n <your-namespace> -o yaml
In order to use it, update your Deployment using the command below:
kubectl edit deploy <deployment-name> -n <your-namespace>
Then add the following under your Pod template.
This goes under the containers section of the Pod template, to mount the secret volume into the container:
volumeMounts:
- name: foo
  mountPath: "/etc/foo"
  readOnly: true
This goes under the volumes section of the Pod template in the Deployment:
volumes:
- name: foo
  secret:
    secretName: mysecret
For the second question regarding the Secret, this is from the K8s documentation; see https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets for more info.
Create a secret or use an existing one. Multiple pods can reference the same secret.
Modify your Pod definition to add a volume under spec.volumes[]. Name the volume anything, and have a spec.volumes[].secret.secretName field equal to the name of the secret object.
Add a spec.containers[].volumeMounts[] to each container that needs the secret. Specify spec.containers[].volumeMounts[].readOnly = true and spec.containers[].volumeMounts[].mountPath to an unused directory name where you would like the secrets to appear.
Modify your image and/or command line so that the program looks for files in that directory. Each key in the secret data map becomes the filename under mountPath.
I have used this and it works fine.
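A minimal Pod sketch following those steps (the name secret-demo, the busybox image, the mount path /etc/foo, and the Secret name mysecret are placeholders; the Secret must already exist):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
EOF
Each key in mysecret then appears as a file under /etc/foo inside the container.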
As mentioned above, --export is one option to get the manifests corresponding to the Kubernetes objects.
But --export is considered buggy, and there is a proposal to deprecate it.
Currently the better option is to use -o yaml or -o json and remove the unnecessary fields.
The main difference is that --export is expected to remove cluster-specific settings (e.g. the cluster service IP of a K8s Service), but it has been found to be inconsistent in this regard.
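As a sketch of the -o yaml route, stripping a few of the noisier fields with yq v4 (the Service name my-service and the exact field list are judgment calls):
kubectl get svc my-service -o yaml \
| yq eval 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.managedFields, .status)' -
The spec.clusterIP field is cluster-specific, so you may want to drop it too before re-applying the manifest elsewhere.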
All services
kubectl get service --all-namespaces -o yaml > all-service.yaml
All deployments
kubectl get deploy --all-namespaces -o yaml > all-deployment.yaml
We can get the contents associated with any kind from a Kubernetes cluster through the command line, provided you have read access.
kubectl get <kind> <kindname> -n <namespace> -o <yaml or json>
For example, if you want to export a Deployment from a namespace, use the commands below:
kubectl get deploy mydeploy -n mynamespace -o yaml > mydeploy.yaml
kubectl get deploy mydeploy -n mynamespace -o json > mydeploy.json
To back up all Deployments in a namespace (not a specific Deployment) as YAML:
kubectl get deployments -n <namespace> -o yaml > deployments.yaml
To back up all Services in a namespace as YAML:
kubectl get services -n <namespace> -o yaml > services.yaml
Enjoy it.
To get the YAML for a Deployment currently running on Kubernetes, you can run this command:
kubectl get deployment <deployment_name> -o yaml
To generate YAML for a Deployment you can run the imperative command:
kubectl create deployment <deployment_name> --image=<image_name> -o yaml
To generate and export the Deployment you can run the imperative command:
kubectl create deployment <deployment_name> --image=<image_name> --dry-run=client -o yaml > example.yaml
kubectl -n <namespace> get <resource type> <resource Name> -o yaml
With the command above, any resource defined in Kubernetes can be exported in YAML format.
You can try the kube-dump bash script.
With this utility, you can save Kubernetes cluster resources as pure YAML manifests without unnecessary metadata.
See the GitHub repository and the review of the utility on its blog page.
We can get the YAML for deployed resources using the commands below:
kubectl get <resource type> -o yaml
OR
kubectl get <resource type> <resource name> -o yaml
Example:
kubectl get deploy nginx -o yaml
The commands above will print YAML output.
If you want to store the output in a file you can use the command below:
kubectl get pod nginx -o yaml > nginx-pod.yaml
The command above will redirect the output to nginx-pod.yaml in your current directory.
If you need to view and edit the live resource, use:
kubectl edit service servicename
You can get the YAML of the resources using this command:
kubectl -n <namespace> get <resource type> <resource Name> -o yaml
To get the secret into your pod, use something like this (SOME_ENV_NAME is a placeholder for the environment variable name):
env:
- name: SOME_ENV_NAME
  valueFrom:
    secretKeyRef:
      name: secret_name
      key: key_name
or
envFrom:
- secretRef:
    name: secret_name
This is only a minor difference from @Janos Lenart's answer!
kubectl get deploy deploymentname -o yaml > outputFile.yaml will do.
I know this is an old question, but hopefully someone will find it helpful.
We can try the command below to fetch a kind exported from all namespaces (note that --export is deprecated in newer kubectl versions):
kubectl get <kind> --all-namespaces --export -o yaml

How to get Kubernetes cluster name from K8s API

As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.
kubectl config current-context does the trick (it outputs a little bit more, like the project name, region, etc., but it should give you the answer you need).
Unfortunately a cluster doesn't know its own name, or anything else that would uniquely identify it (K8s issue #44954). I wanted to know for helm issue #2055.
Update:
A common workaround is to create a ConfigMap containing the cluster name and read that when required (#2055 comment 1244537799).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  cluster-name: foo
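A sketch of reading it back, assuming the cluster-info ConfigMap above has been created:
kubectl -n kube-system get configmap cluster-info -o jsonpath="{.data['cluster-name']}"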
There is no way to get the name via K8s API. But here is a one-liner in case the name you have in your .kube/config file is enough for you (if you download it from your cloud provider the names should match):
kubectl config view --minify -o jsonpath='{.clusters[].name}'
Note 1: The --minify is key here so it will output the name of your current context only. There are other similar answers posted here but without the "minify" you will be listing other contexts in your config that might confuse you.
Note 2: The name in your .kube/config might not reflect the name in your cloud provider. If the file was autogenerated by the cloud provider the names should match; if you configured it manually you could have typed any name just for the local config.
Note 3: Do not rely on kubectl config current-context; this returns just the name of the context, not the name of the cluster.
I don't believe there is a K8s cluster name, but this command can provide some useful information:
kubectl cluster-info
The question is not really well described. However, if it is related to Google Container Engine then, as coreypobrien mentioned, the name of the cluster is stored in the custom metadata of the nodes. From inside a node, run the following command and the output will be the name of the cluster:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
If you specify your use case, I might be able to extend my answer to cover it.
The Kubernetes API doesn't know much about the GKE cluster name, but you can easily get the cluster name from the Google metadata server like this:
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
It is the same as getting the current config, but the below command gives clear output:
kubectl config view
This command will check all the clusters in your kubeconfig; as you know, KUBECONFIG may contain multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
And you will get output like
Cluster name Server
kubernetes https://localhost:6443
At least for Kubespray clusters, the following works for me:
kubectl config current-context | cut -d '#' -f2
For clusters that were installed using kubeadm, the configuration stored in the kubeadm-config configmap has the cluster name used when installing the cluster.
$ kubectl -n kube-system get configmap kubeadm-config -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    clusterName: NAME_OF_CLUSTER
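If you only want the name itself, a one-liner sketch against the same ConfigMap (assumes a kubeadm-installed cluster):
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep clusterName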
For clusters that are using CoreDNS for their DNS, the "cluster name" from kubeadm is also used as the domain suffix.
$ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes NAME_OF_CLUSTER.local in-addr.arpa ip6.arpa {
Well, this returns precisely one thing: the cluster name.
K8s:
kubectl config view -o jsonpath='{.clusters[].name}{"\n"}'
Openshift:
oc config view -o jsonpath='{.clusters[].name}{"\n"}'
$ kubectl config get-clusters   # gives you the list of existing clusters
Using the Python K8s client (note this won't work with an in-cluster kubeconfig):
from kubernetes import config
cluster_context = config.kube_config.list_kube_config_contexts()
print (cluster_context)
([{'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'}], {'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'})
cluster_name = cluster_context[1]['context']['cluster']
print (cluster_name)
k01.test.use1.aws.platform.gov
Using kubectl command:
$ kubectl config get-clusters
NAME
kubernetes
kubectl config get-clusters
kubectl config get-contexts
There is a great tool called kubectx https://github.com/ahmetb/kubectx.
kubectx - lists all previously added clusters and highlights the currently used one. This is only one word to type instead of kubectl config current-context.
kubectx <cluster> - switches to a chosen cluster.
Moreover, this tool also comes with kubens, which does exactly the same for namespaces:
kubens - lists all namespaces and shows the current one,
kubens <namespace> - switches to a chosen namespace.
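A short usage sketch (the context and namespace names here are just examples):
kubectx                 # list contexts, current one highlighted
kubectx my-prod-cluster # switch to that context
kubens                  # list namespaces, current one highlighted
kubens kube-system      # switch the active namespace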

Kubernetes: How do I delete clusters and contexts from kubectl config?

kubectl config view shows contexts and clusters corresponding to clusters that I have deleted.
How can I remove those entries?
The command
kubectl config unset clusters
appears to delete all clusters. Is there a way to selectively delete cluster entries? What about contexts?
kubectl config unset takes a dot-delimited path. You can delete cluster/context/user entries by name. E.g.
kubectl config unset users.gke_project_zone_name
kubectl config unset contexts.aws_cluster1-kubernetes
kubectl config unset clusters.foobar-baz
Side note: if you tear down your cluster using cluster/kube-down.sh (or gcloud if you use Container Engine), it will delete the associated kubeconfig entries. There is also a planned kubectl config rework for a future release to make the commands more intuitive/usable/consistent.
For clusters and contexts you can also do
kubectl config delete-cluster my-cluster
kubectl config delete-context my-cluster-context
There's nothing specific for users though, so you still have to do
kubectl config unset users.my-cluster-admin
Run the command below to get all the contexts you have:
$ kubectl config get-contexts
CURRENT   NAME             CLUSTER     AUTHINFO                                NAMESPACE
*         Cluster_Name_1   Cluster_1   clusterUser_resource-group_Cluster_1
Delete context:
$ kubectl config delete-context Cluster_Name_1
Unrelated to the question, but maybe a useful resource: have a look at kubectx + kubens, power tools for kubectl.
They make it easy to switch contexts and namespaces, and they also have the option to delete them.
Change context:
kubectx dev-cluster-01
Change namespace:
kubens dev-ns-01
Delete context:
kubectx -d my-context