kubectl create doesn't seem to do anything - kubernetes

I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods and I can't access it by name as if it does not exist. However, if I try to create it again, it says that the pod is already created.
What may cause this, and how would I diagnose the problem?

By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
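The effect is easy to simulate (pod and namespace names below are made up): plain kubectl get pods filters to the current namespace, so a pod created elsewhere is simply not shown.

```shell
# Simulated `kubectl get pods --all-namespaces` output (sample data, not a real cluster)
all_pods='NAMESPACE     NAME    READY
default       web-1   1/1
mynamespace   mypod   1/1'

# Plain `kubectl get pods` behaves like filtering this to the current namespace:
printf '%s\n' "$all_pods" | awk 'NR==1 || $1 == "default"'
```

If you always work in mynamespace, kubectl config set-context --current --namespace=mynamespace changes the default namespace for the current context, so the plain commands work without -n.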

Related

Kubernetes NetworkPolicy - Is there a way to identify which NetworkPolicies are applied to Pods

We have 3-4 different NetworkPolicies in our namespace, applied based on pod selectors. Is there any way, from the pod side, to know which NetworkPolicies apply to it?
If a pod selector is used, there is a simple way:
kubectl get pod -l "$( \
  kubectl get networkpolicy <netpolicy-name> \
    -o jsonpath='{.spec.podSelector.matchLabels}' | \
  jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")' \
)"
This will get the policy's pod selector and use it as input to list the matching pods.
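The jq step just turns the matchLabels map into a label selector. If jq is unavailable, the same transformation can be sketched with sed; the sample JSON below is made up, on the assumption that recent kubectl versions print map-valued jsonpath results as a JSON object:

```shell
# Sample of what `-o jsonpath='{.spec.podSelector.matchLabels}'` might print (assumed)
labels='{"app":"nginx","tier":"db"}'

# Strip braces and quotes, then turn `:` into `=` to get a kubectl label selector
selector=$(printf '%s' "$labels" | sed -e 's/[{}"]//g' -e 's/:/=/g')
echo "$selector"
# Then: kubectl get pod -l "$selector"
```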
As for checking from the pod side: there is nothing on the pod object itself you can check. I have read that kubectl describe <pod-name> might show NetworkPolicies, but in my testing (at least on minikube) it does not.
So you can use the command above, or describe the NetworkPolicy itself to get its pod selector and go from there.
kubectl describe networkpolicies <name of policy>
The output should display the pod selector.
After that you can use kubectl get pod -l key=value to list the pods affected.
You can automate this with a bash script or function.
I would also recommend checking kubectl np-viewer, a kubectl plugin (can be found here). This plugin has what you are asking for out of the box.
kubectl np-viewer -p pod-name prints the network policy rules affecting a specific pod in the current namespace.

Pod is not found when trying to delete, however, can be patched

I have a pod that I can see on GKE. But if I try to delete them, I got the error:
kubectl delete pod my-pod --namespace=kube-system --context=cluster-1
Error from server (NotFound): pods "my-pod" not found
However, if I try to patch it, the operation completes successfully:
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
deployment.apps/my-pod patched
Same namespace, same context, same pod. Why does kubectl fail to delete the pod?
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
You are patching the deployment here, not the pod.
Additionally, your pod will not be called "my-pod"; it will be named after your deployment plus a hash (a random string of letters and numbers), something like "my-pod-ace3g".
To see the pods in the namespace use
kubectl get pods -n {namespace}
Since you've put the deployment in the "kube-system" namespace, you would use
kubectl get pods -n kube-system
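To go from the deployment name to the generated pod name, a prefix match is usually enough. The pod list below is simulated (made-up names); with a real cluster you would pipe kubectl get pods -n kube-system -o name through the same grep:

```shell
# Simulated pod names as `kubectl get pods -o name` would print them (made up)
pods='pod/my-pod-5c4bf48d6f-7c8c6
pod/other-app-86cbb8457f-dkpqm'

# Pods created by the "my-pod" deployment carry its name as a prefix
printf '%s\n' "$pods" | grep '^pod/my-pod-'
```

Against a real cluster, kubectl delete "$(kubectl get pods -n kube-system -o name | grep '^pod/my-pod-')" would then delete the actual pod.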
Side note: generally, don't use the kube-system namespace unless your deployment is related to cluster functionality. There's a namespace called default you can use to test things.

How to list applied Custom Resource Definitions in kubernetes with kubectl

I recently applied this CRD file
https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
With kubectl apply to install this: https://hub.helm.sh/charts/jetstack/cert-manager
I think I managed to apply it successfully:
[xetra11@x11-work configuration]$ kubectl apply -f ./helm-charts/certificates/00-crds.yaml --validate=false
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
But now I would like to "see" what I just applied here. I have no idea how to list those definitions or for example remove them if I think they will screw up my cluster somehow.
I was not able to find any information to that here: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#preparing-to-install-a-custom-resource
kubectl get customresourcedefinitions, or kubectl get crd.
You can then use kubectl describe crd <crd_name> to get a description of the CRD. And of course kubectl get crd <crd_name> -o yaml to get the complete definition of the CRD.
To remove you can use kubectl delete crd <crd_name>.
Custom Resources are like any other native Kubernetes resource.
All the basic kubectl CRUD operations work fine for CRDs. So just use any of the below commands.
kubectl get crd <name of crd>
kubectl describe crd <name of crd>
kubectl get crd <name of crd> -o yaml
First, you can list all your CRDs with kubectl get crd, for example:
$ kubectl get crd
NAME                                                        CREATED AT
secretproviderclasses.secrets-store.csi.x-k8s.io            2022-07-06
secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io   2022-07-06
This is the list of available CRD definitions. Then take the name of one and run kubectl get <crd_name> to get the list of applied resources of that CRD. For example:
$ kubectl get secretproviderclasses.secrets-store.csi.x-k8s.io
NAME       AGE
azure-kv   5d
Note: Use -A to target all namespaces or -n <namespace>
You may arrive here confused about why you see your CRDs in kubectl get api-resources, e.g. this Istio Telemetry resource:
kubectl api-resources --api-group=telemetry.istio.io
NAME          SHORTNAMES   APIVERSION                    NAMESPACED   KIND
telemetries   telemetry    telemetry.istio.io/v1alpha1   true         Telemetry
but then attempting to kubectl describe them yields an error like
kubectl describe crd Telemetry.telemetry.istio.io
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "Telemetry.telemetry.istio.io" not found
or
kubectl describe crd telemetry.istio.io/v1alpha1
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'kubectl get resource/<resource_name>' instead of 'kubectl get resource resource/<resource_name>')
That's because you must use the plural form of the full name of the CRD. See kubectl get crd for the names, e.g.:
$ kubectl get crd |grep -i telemetry
telemetries.telemetry.istio.io 2022-03-21T08:49:29Z
So kubectl describe crd telemetries.telemetry.istio.io will work for this CRD.
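In other words, the name kubectl describe crd expects is always <plural>.<group>, never Kind.<group> or group/version. Assembling it looks like this (the values are just the Istio example from above):

```shell
# CRD full names are <plural>.<group> (example values from the Telemetry CRD above)
plural='telemetries'
group='telemetry.istio.io'
echo "${plural}.${group}"
```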
List the CRDs (no namespace flag needed, as CRDs are cluster-scoped):
kubectl get crds
Describe the crd:
kubectl describe crd challenges.acme.cert-manager.io

How to delete all resources from Kubernetes one time?

Include:
Daemon Sets
Deployments
Jobs
Pods
Replica Sets
Replication Controllers
Stateful Sets
Services
...
If there is a ReplicationController, pods will be regenerated when I delete some deployments. Is there a way to reset Kubernetes back to its initial state?
Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:
kubectl delete all --all
all refers to all resource types such as pods, deployments, services, etc. --all is used to delete every object of that resource type instead of specifying it using its name or label.
To delete everything from a certain namespace you use the -n flag:
kubectl delete all --all -n {namespace}
Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:
kubectl delete namespace {namespace}
kubectl create namespace {namespace}
Note (thanks @Marcus): all in Kubernetes does not refer to every Kubernetes object; admin-level resources (limits, quotas, policies, authorization rules) are not included. If you really want to make sure everything is deleted, it's better to delete the namespace and re-create it. Another way to do that is to use kubectl api-resources to get all resource types, as seen here:
kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
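The command substitution above just joins the discovered resource types into kubectl's comma-separated form. The joining step in isolation (sample resource names below, not a live api-resources call):

```shell
# Sample output of `kubectl api-resources --namespaced=true --verbs=delete -o name`
resources='pods
deployments.apps
services'

# Join the lines with commas and drop the trailing comma, as the one-liner does
joined=$(printf '%s\n' "$resources" | tr '\n' ',' | sed -e 's/,$//')
echo "$joined"
```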
A Kubernetes Namespace would be the perfect option for you. You can easily create a namespace resource:
kubectl create -f custom-namespace.yaml
where custom-namespace.yaml contains:
apiVersion: v1
kind: Namespace
metadata:
  name: custom-namespace
Now you can deploy all of the other resources (Deployment, ReplicaSet, Service, etc.) in that custom namespace.
If you want to delete all of these resources, you just need to delete the custom namespace; deleting it deletes everything inside it. Without this, a ReplicaSet might create new pods when existing pods are deleted.
To work with a namespace, you need to add the --namespace flag to k8s commands.
For example:
kubectl create -f deployment.yaml --namespace=custom-namespace
You can list all the pods in custom-namespace:
kubectl get pods --namespace=custom-namespace
You can also delete Kubernetes resources with the help of labels attached to them. For example, suppose the label below is attached to all the resources:
metadata:
  name: label-demo
  labels:
    env: dev
    app: nginx
Now just execute the commands below.
Deleting resources using the app label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l app=nginx
Deleting resources using the environment label:
$ kubectl delete pods,rs,deploy,svc,cm,ing -l env=dev
You can also try kubectl delete all --all --all-namespaces
all refers to all resource types
--all refers to all resources, including uninitialized ones
--all-namespaces applies this across all namespaces
First backup your namespace resources and then delete all resources found with the get all command:
kubectl get all --namespace={your-namespace} -o yaml > {your-namespace}.yaml
kubectl delete -f {your-namespace}.yaml
Nevertheless, some resources will still exist in your namespace. Check with:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found --namespace {your-namespace}
If you really want to COMPLETELY delete your namespace, go ahead with:
kubectl delete namespace {your-namespace}
(tested with Client v1.23.1 and Server v1.22.3)
If you want to delete all K8s resources in the cluster, the easiest way is to delete the entire namespace:
kubectl delete ns <name-space>
To delete selected resource types by label:
kubectl delete deploy,service,job,statefulset,pdb,networkpolicy,prometheusrule,cm,secret,ds -n namespace -l label
kubectl delete all --all
deletes all the resources in the current namespace.
After deleting all resources, Kubernetes will relaunch the default services for the cluster.

How do you cleanly list all the containers in a kubernetes pod?

I am looking to list all the containers in a pod in a script that gathers logs after running a test. kubectl describe pods -l k8s-app=kube-dns returns a lot of info, but I am just looking for a return like:
etcd
kube2sky
skydns
I don't see a simple way to format the describe output. Is there another command? (and I guess worst case there is always parsing the output of describe).
Answer
kubectl get pods POD_NAME_HERE -o jsonpath='{.spec.containers[*].name}'
Explanation
This gets the JSON object representing the pod. It then uses kubectl's JSONpath to extract the name of each container from the pod.
You can use get and choose one of the supported output template with the --output (-o) flag.
Take jsonpath for example,
kubectl get pods -l k8s-app=kube-dns -o jsonpath={.items[*].spec.containers[*].name} gives you etcd kube2sky skydns.
Other supported output templates are go-template, go-template-file, and jsonpath-file. See http://kubernetes.io/docs/user-guide/jsonpath/ for how to use the jsonpath template. See https://golang.org/pkg/text/template/#pkg-overview for how to use go templates.
Update: Check this doc for other example commands to list container images: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
Quick hack to avoid constructing the JSONpath query for a single pod:
$ kubectl logs mypod-123
a container name must be specified for pod mypod-123, choose one of: [etcd kubesky skydns]
I put some ideas together into the following:
Simple line:
kubectl get po -o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .spec.containers[*]}{"\tname: "}{.name}{"\n\timage: "}{.image}{"\n"}{end}'
Split (for readability):
kubectl get po -o jsonpath='
{range .items[*]}
{"pod: "}
{.metadata.name}
{"\n"}{range .spec.containers[*]}
{"\tname: "}
{.name}
{"\n\timage: "}
{.image}
{"\n"}
{end}'
How to list BOTH init and non-init containers for all pods
kubectl get pod -o="custom-columns=NAME:.metadata.name,INIT-CONTAINERS:.spec.initContainers[*].name,CONTAINERS:.spec.containers[*].name"
Output looks like this:
NAME                                      INIT-CONTAINERS   CONTAINERS
helm-install-traefik-sjts9                <none>            helm
metrics-server-86cbb8457f-dkpqm           <none>            metrics-server
local-path-provisioner-5ff76fc89d-vjs6l   <none>            local-path-provisioner
coredns-6488c6fcc6-zp9gv                  <none>            coredns
svclb-traefik-f5wwh                       <none>            lb-port-80,lb-port-443
traefik-6f9cbd9bd4-pcbmz                  <none>            traefik
dc-postgresql-0                           init-chmod-data   dc-postgresql
backend-5c4bf48d6f-7c8c6                  wait-for-db       backend
If you want a clear output of which containers are in each Pod:
kubectl get po -l k8s-app=kube-dns \
-o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
To get the output on separate lines:
kubectl get pods POD_NAME_HERE -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
Output:
base-container
sidecar-0
sidecar-1
sidecar-2
If you use json as the output format of kubectl get, you get plenty of details about a pod. With JSON processors like jq it is easy to select or filter for the parts you are interested in.
To list the containers of a pod, the jq query looks like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq --raw-output '.items[].spec.containers[].name'
If you want to see all details regarding one specific container try something like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq '.items[].spec.containers[] | select(.name=="etcd")'
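If jq is not installed, container names can still be scraped from the JSON with grep and sed. This is a rough sketch on a hand-written sample document (real pod JSON also has "name" fields outside .spec.containers, so it is only an approximation, not a replacement for proper JSON parsing):

```shell
# Hand-written sample of `kubectl get ... -o json`, trimmed to container names only
json='{"items":[{"spec":{"containers":[{"name":"etcd"},{"name":"kube2sky"},{"name":"skydns"}]}}]}'

# Pull out every "name" value (fine for this trimmed sample)
printf '%s\n' "$json" | grep -o '"name":"[^"]*"' | sed 's/"name":"\(.*\)"/\1/'
```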
Use below command:
kubectl get pods -o=custom-columns=PodName:.metadata.name,Containers:.spec.containers[*].name,Image:.spec.containers[*].image
To see verbose information along with configmaps of all containers in a particular pod, use this command:
kubectl describe pod/<pod name> -n <namespace name>
Use below command to see all the information of a particular pod
kubectl get pod <pod name> -n <namespace name> -o yaml
For overall details about the pod, try the following command, which includes container details as well:
kubectl describe pod <podname>
I use this to display image versions on the pods.
kubectl get pods -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{end}{end}' && printf '\n'
It's just a small modification of the script from here: it adds a newline so the next console command starts on a new line, removes the commas at the end of each line, and lists only my pods, without service pods (i.e. the --all-namespaces option is removed).
There are enough answers here, but sometimes you want to see the containers and initContainers of a deployment object's pods. To do that:
1- Retrieve the deployment name
kubectl get deployment
2- Retrieve containers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.containers[*].name}'
3- Retrieve initContainers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.initContainers[*].name}'
Easiest way to know the containers in a pod:
kubectl logs <pod-name> -c <container-name> -n <namespace>
If the container name is wrong or omitted on a multi-container pod, kubectl errors out and lists all the container names, as shown in the quick hack above.