How to list really all objects of a nonexistent namespace? - kubernetes

Okay, the title is quite a mouthful. But it actually describes the situation.
I deployed a service on GKE in namespace argo-events. Something was wrong with it so I tore it down:
kubectl delete namespace argo-events
Actually, that's already where the problems started (I suspect a connection to the problem described below) and I had to resort to a hack because argo-events got stuck in a Terminating state forever. But the result was as desired - the namespace seemed to be gone, together with all the objects in it.
Because of problems with redeployment I inspected the GKE Object Browser (just looking around - I cannot filter for the argo-events namespace anymore as it is officially gone), where I stumbled upon two lingering objects in ns argo-events:
argo-events is not listed by kubectl get namespaces. Just confirming that.
And I can find those two objects if I look them up specifically:
$ kubectl get eventbus -n argo-events
NAME      AGE
default   17h
$ kubectl get eventsource -n argo-events
NAME                  AGE
pubsub-event-source   14h
But - I cannot find anything by asking for all objects:
$ kubectl get all -n argo-events
No resources found in argo-events namespace.
So my question is: how can I generically list all lingering objects in argo-events?
I'm asking because otherwise I'd have to inspect the entire Object Browser Tree to maybe find more objects (as I cannot select the namespace anymore).

The command $ kubectl get all will only print a few resource types, such as:
pod
service
daemonset
deployment
replicaset
statefulset
job
cronjobs
It won't print all the resource types that you can see with $ kubectl api-resources.
Example
When you create a PV following the PersistentVolume documentation, it won't be listed in the $ kubectl get all output, but it will be listed if you ask for that resource type explicitly.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml
persistentvolume/task-pv-volume created
$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Available           manual                  3m12s
$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.3.240.1   <none>        443/TCP   86m
$
If you would like to list all resources from a specific namespace, use the command below:
kubectl -n argo-events api-resources --namespaced=true -o name | xargs --verbose -I {} kubectl -n argo-events get {} --show-kind --ignore-not-found
The above solution was presented in the GitHub thread kubectl get all does not list all resources in a namespace. In that thread you can find some additional variations of the command.
In addition, you can also check the How to List all Resources in a Kubernetes Namespace article. It describes a method to list resources using a shell function.
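For example, a minimal sketch of such a function (the name kubectl_get_really_all and the --verbs=list filter are illustrative additions, not taken from the article):
# List every listable namespaced resource type in the given namespace.
kubectl_get_really_all() {
  local ns="$1"
  kubectl api-resources --verbs=list --namespaced=true -o name \
    | xargs -I {} kubectl -n "$ns" get {} --show-kind --ignore-not-found
}
# Usage:
# kubectl_get_really_all argo-events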

Related

pod getting terminated because of ownerReferences pointing to resource in different namespace in kubernetes

Starting with Kubernetes 1.20 there has been a change regarding ownerReferences and how K8s performs GC.
Basically, if a resource in namespace x spins up a pod/job in namespace y, with the child having an ownerReference pointing to the parent resource in x, K8s terminates the child pod/job.
Reference:
Resolves non-deterministic behavior of the garbage collection controller when ownerReferences with incorrect data are encountered. Events with a reason of OwnerRefInvalidNamespace are recorded when namespace mismatches between child and owner objects are detected. The kubectl-check-ownerreferences tool can be run prior to upgrading to locate existing objects with invalid ownerReferences.
A namespaced object with an ownerReference referencing a uid of a namespaced kind which does not exist in the same namespace is now consistently treated as though that owner does not exist, and the child object is deleted.
A cluster-scoped object with an ownerReference referencing a uid of a namespaced kind is now consistently treated as though that owner is not resolvable, and the child object is ignored by the garbage collector. (#92743, #liggitt) [SIG API Machinery, Apps and Testing]
If we remove the ownerReferences, the resource won't be garbage collected. Is there a way to deal with this situation, i.e. how to make ownerReferences work across multiple namespaces, OR to let the job/pod clean itself up once completed? Thanks.
As per Fix GC uid races and handling of conflicting ownerReferences #92743
namespaces are intended to be independent of each other, so cross-namespace references have not been permitted in things like ownerReferences, secret/configmap volume references, etc.
additionally, granting permissions to namespace a is not generally intended to provide visibility or ability to interact with objects from namespace b (or cause system controllers to interact with objects from namespace b).
and
Update GC cross-namespace note #25091
Cross-namespace owner references are disallowed by design.
So, using ownerReferences for garbage collection across namespaces is not possible by design.
However, you can emulate multi-namespace GC using labels. You just need to set those labels whenever an object creates a sub-object.
Alternatively, you can delete a namespace to GC all objects in that namespace, but that's probably a suboptimal solution.
EDIT
$ kubectl label pods owner=my -l region=europe
$ kubectl label pods owner=my -l region=pacific
$ kubectl label svc owner=my -l svc=europe
$ kubectl label svc owner=my -l svc=pacific
$ kubectl label pod kube-proxy-2wpz2 owner=my -n kube-system
$ kubectl label pod kube-proxy-cpqxt owner=my -n kube-system
$ kubectl get pods,svc -l owner=my --show-labels --all-namespaces
NAMESPACE     NAME                   READY   STATUS        RESTARTS   AGE    LABELS
default       pod/aloha-pod          1/1     Running       0          54d    app=aloha,owner=my,region=pacific
default       pod/ciao-pod           1/1     Running       0          54d    app=ciao,owner=my,region=europe
default       pod/hello-pod          1/1     Terminating   0          54d    app=hello,owner=my,region=europe
default       pod/ohayo-pod          1/1     Running       0          54d    app=ohayo,owner=my,region=pacific
kube-system   pod/kube-proxy-2wpz2   1/1     Running       2          299d   controller-revision-hash=5cf956ffcf,k8s-app=kube-proxy,owner=my,pod-template-generation=1
kube-system   pod/kube-proxy-cpqxt   1/1     Running       3          299d   controller-revision-hash=5cf956ffcf,k8s-app=kube-proxy,owner=my,pod-template-generation=1

NAMESPACE   NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   LABELS
default     service/europe    ClusterIP   10.109.5.102    <none>        80/TCP    54d   owner=my,svc=europe
default     service/pacific   ClusterIP   10.99.255.196   <none>        80/TCP    54d   owner=my,svc=pacific
$ kubectl delete pod,svc -l owner=my --dry-run --all-namespaces
pod "aloha-pod" deleted (dry run)
pod "ciao-pod" deleted (dry run)
pod "hello-pod" deleted (dry run)
pod "ohayo-pod" deleted (dry run)
pod "kube-proxy-2wpz2" deleted (dry run)
pod "kube-proxy-cpqxt" deleted (dry run)
service "europe" deleted (dry run)
service "pacific" deleted (dry run)
Alternatively, there could be a bash script that deletes all objects whose controller object doesn't exist, based on labels. It could also run inside the cluster with a proper service account configured.
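A minimal sketch of such a script, assuming each child pod carries the owner=my label plus a hypothetical owner-name=<parent-deployment> label (both the label scheme and the parent kind Deployment are illustrative assumptions, not something Kubernetes sets for you):
#!/usr/bin/env bash
# Sketch: delete labelled pods whose (assumed) parent Deployment no longer exists.
set -euo pipefail
NAMESPACE="${1:-default}"

kubectl -n "$NAMESPACE" get pods -l owner=my \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.labels.owner-name}{"\n"}{end}' |
while read -r pod parent; do
  [ -z "$parent" ] && continue   # skip pods without an owner-name label
  # If the labelled parent Deployment is gone, the child is an orphan - delete it.
  if ! kubectl -n "$NAMESPACE" get deployment "$parent" >/dev/null 2>&1; then
    echo "parent $parent is gone, deleting orphan pod $pod"
    kubectl -n "$NAMESPACE" delete pod "$pod"
  fi
done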
There is no straightforward, built-in option to achieve what you want. You should keep owner referenced objects in the same namespace.

List network policy rules kube-proxy on minikube

Could you please help me? Where can I find the list of network policy rules for kube-proxy on minikube?
Thanks.
Could you specify what exactly you want to list? If it comes to listing network policies in all your namespaces, you can use the following command:
kubectl get networkpolicies --all-namespaces
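To see the actual ingress/egress rules defined in one of those policies, kubectl describe works as well:
kubectl describe networkpolicy <policy-name> -n <namespace>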
If you want to list all your kube-proxy related resources, you can do it by running:
kubectl get all --all-namespaces | grep kube-proxy
As you can see, there are only Pods and they are located in the kube-system namespace, so kubectl get pods --namespace kube-system will also list them.
If you want to see their details, run:
kubectl describe pod <kube-proxy-pod-name> -n kube-system
Please let me know if this is what you're looking for.

kubectl create doesn't seem to do anything

I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods, and I can't access it by name - as if it does not exist. However, if I try to create it again, it says that the pod already exists.
What may cause this, and how would I diagnose the problem?
By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
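If you expect to work in mynamespace most of the time, you can also make it the default for your current context so that a plain kubectl get pods targets it:
kubectl config set-context --current --namespace=mynamespace
kubectl config view --minify | grep namespace:
The second command just verifies which namespace is now active in the current context.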

Kubernetes pod gets recreated when deleted

I have started pods with command
$ kubectl run busybox \
--image=busybox \
--restart=Never \
--tty \
-i \
--generator=run-pod/v1
Something went wrong, and now I can't delete this Pod.
I tried using the methods described below but the Pod keeps being recreated.
$ kubectl delete pods busybox-na3tm
pod "busybox-na3tm" deleted
$ kubectl get pods
NAME            READY   STATUS              RESTARTS   AGE
busybox-vlzh3   0/1     ContainerCreating   0          14s
$ kubectl delete pod busybox-vlzh3 --grace-period=0
$ kubectl delete pods --all
pod "busybox-131cq" deleted
pod "busybox-136x9" deleted
pod "busybox-13f8a" deleted
pod "busybox-13svg" deleted
pod "busybox-1465m" deleted
pod "busybox-14uz1" deleted
pod "busybox-15raj" deleted
pod "busybox-160to" deleted
pod "busybox-16191" deleted
$ kubectl get pods --all-namespaces
NAMESPACE   NAME            READY   STATUS              RESTARTS   AGE
default     busybox-c9rnx   0/1     RunContainerError   0          23s
You need to delete the deployment, which should in turn delete the pods and the replica sets. See https://github.com/kubernetes/kubernetes/issues/24137
To list all deployments:
kubectl get deployments --all-namespaces
Then to delete the deployment:
kubectl delete -n NAMESPACE deployment DEPLOYMENT
Where NAMESPACE is the namespace it's in, and DEPLOYMENT is the name of the deployment. If NAMESPACE is default, leave off the -n option altogether.
In some cases it could also be running due to a job or daemonset.
Check the following and run their appropriate delete command.
kubectl get jobs
kubectl get daemonsets.apps --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
Instead of trying to figure out whether it is a deployment, daemonset, statefulset... or something else (in my case it was a replication controller that kept spawning new pods :)
In order to determine what it was that kept spawning the pods, I got all the resources with this command:
kubectl get all
Of course you could also get all resources from all namespaces:
kubectl get all --all-namespaces
or define the namespace you would like to inspect:
kubectl get all -n NAMESPACE_NAME
Once I saw that the replication controller was responsible for my trouble I deleted it:
kubectl delete replicationcontroller/CONTROLLER_NAME
If your pod has a name like name-xxx-yyy, it could be controlled by a ReplicaSet (replicasets.apps) named name-xxx; you should delete that ReplicaSet before deleting the pod:
kubectl delete replicasets.apps name-xxx
Obviously something is respawning the pod. While a lot of the other answers have you looking at everything (replica sets, jobs, deployments, stateful sets, ...) to find what may be respawning it, you can instead just look at the pod itself to see what spawned it. For example:
$ kubectl describe pod $mypod | grep 'Controlled By:'
Controlled By: ReplicaSet/foobar
This tells you exactly what created the pod. You can then go and delete that.
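The same information is also available directly from the pod's ownerReferences, if you prefer jsonpath over grep:
$ kubectl get pod $mypod -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
ReplicaSet/foobar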
Look out for stateful sets as well
kubectl get sts --all-namespaces
to delete all the stateful sets in a namespace
kubectl --namespace <yournamespace> delete sts --all
to delete them one by one
kubectl --namespace ag1 delete sts mssql1
kubectl --namespace ag1 delete sts mssql2
kubectl --namespace ag1 delete sts mssql3
This will provide information about all the pods, deployments, services and jobs in the namespace.
kubectl get pods,services,deployments,jobs
Pods can be created by either deployments or jobs:
kubectl delete job [job_name]
kubectl delete deployment [deployment_name]
If you delete the deployment or the job, the pods will stop being recreated.
Many answers here tell you to delete a specific k8s object, but you can delete multiple objects at once instead of one by one:
kubectl delete deployments,jobs,services,pods --all -n <namespace>
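If you only want to remove a single application rather than everything in the namespace, the same multi-type delete can be scoped with a label selector instead of --all (the app label key is just an example; use whatever labels your objects actually carry):
kubectl delete deployments,jobs,services,pods -l app=<your-app> -n <namespace>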
In my case, I'm running an OpenShift cluster with OLM - Operator Lifecycle Manager. OLM is what controls the deployment, so when I deleted the deployment, it was not sufficient to stop the pods from restarting.
Only when I deleted the OLM objects and their subscription were the deployment, services and pods gone.
First list all k8s objects in your namespace:
$ kubectl get all -n openshift-submariner
NAME                                       READY   STATUS    RESTARTS   AGE
pod/submariner-operator-847f545595-jwv27   1/1     Running   0          8d

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/submariner-operator-metrics   ClusterIP   101.34.190.249   <none>        8383/TCP   8d

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/submariner-operator   1/1     1            1           8d

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/submariner-operator-847f545595   1         1         1       8d
OLM is not listed by get all, so I searched for it specifically:
$ kubectl get olm -n openshift-submariner
NAME                                                       AGE
operatorgroup.operators.coreos.com/openshift-submariner    8d

NAME                                                              DISPLAY      VERSION
clusterserviceversion.operators.coreos.com/submariner-operator   Submariner   0.0.1
Now delete all objects, including OLMs, subscriptions, deployments, replica-sets, etc:
$ kubectl delete olm,svc,rs,rc,subs,deploy,jobs,pods --all -n openshift-submariner
operatorgroup.operators.coreos.com "openshift-submariner" deleted
clusterserviceversion.operators.coreos.com "submariner-operator" deleted
deployment.extensions "submariner-operator" deleted
subscription.operators.coreos.com "submariner" deleted
service "submariner-operator-metrics" deleted
replicaset.extensions "submariner-operator-847f545595" deleted
pod "submariner-operator-847f545595-jwv27" deleted
List objects again - all gone:
$ kubectl get all -n openshift-submariner
No resources found.
$ kubectl get olm -n openshift-submariner
No resources found.
After taking an interactive tutorial I ended up with a bunch of pods, services, deployments:
me#pooh ~ > kubectl get pods,services
NAME                                       READY   STATUS    RESTARTS   AGE
pod/kubernetes-bootcamp-5c69669756-lzft5   1/1     Running   0          43s
pod/kubernetes-bootcamp-5c69669756-n947m   1/1     Running   0          43s
pod/kubernetes-bootcamp-5c69669756-s2jhl   1/1     Running   0          43s
pod/kubernetes-bootcamp-5c69669756-v8vd4   1/1     Running   0          43s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   37s
me#pooh ~ > kubectl get deployments --all-namespaces
NAMESPACE     NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default       kubernetes-bootcamp   4         4         4            4           1h
docker        compose               1         1         1            1           1d
docker        compose-api           1         1         1            1           1d
kube-system   kube-dns              1         1         1            1           1d
To clean up everything, delete --all worked fine:
me#pooh ~ > kubectl delete pods,services,deployments --all
pod "kubernetes-bootcamp-5c69669756-lzft5" deleted
pod "kubernetes-bootcamp-5c69669756-n947m" deleted
pod "kubernetes-bootcamp-5c69669756-s2jhl" deleted
pod "kubernetes-bootcamp-5c69669756-v8vd4" deleted
service "kubernetes" deleted
deployment.extensions "kubernetes-bootcamp" deleted
That left me with (what I think is) an empty Kubernetes cluster:
me#pooh ~ > kubectl get pods,services,deployments
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8m
In some cases the pods will still not go away even after deleting the deployment. In that case, to force delete them you can run the command below.
kubectl delete pods podname --grace-period=0 --force
If a pod keeps being recreated automatically even after you delete it manually, those pods were created by a Deployment.
When you create a Deployment, it automatically creates a ReplicaSet and Pods. Depending on how many replicas you specified in the deployment manifest, it will create that number of pods initially.
Whenever you try to delete a pod manually, the Deployment will automatically create it again.
Yes, sometimes you need to delete pods with force, but in this case the force command doesn't work.
Instead of removing the namespace, you can try removing the ReplicaSet:
kubectl get rs --all-namespaces
Then delete the replicaSet
kubectl delete rs your_app_name
The root cause in my case was the deployment/job/replicaset spec attribute strategy -> type, which defines what should happen when the pod is destroyed (either implicitly or explicitly). In my case, it was Recreate.
As per #nomad's answer, deleting the deployment/job/replicaset is the simple fix, and avoids experimenting with deadly combinations before messing up the cluster as a novice user.
Try the following commands to understand the behind-the-scenes actions before jumping into debugging:
kubectl get all -A -o name
kubectl get events -A | grep <pod-name>
In my case I deployed via a YAML file like kubectl apply -f deployment.yaml and the solution appears to be to delete via kubectl delete -f deployment.yaml
First, list the deployments:
kubectl get deployments
Then delete the deployment:
kubectl delete deployment <deployment_name>
If you have a job that keeps running, you need to find the job and delete it:
kubectl get job --all-namespaces | grep <name>
and
kubectl delete job <job-name>
You can run kubectl get replicasets and check for the old deployment based on age or creation time.
Delete the old ReplicaSet if you want to get rid of the currently running pods of that application:
kubectl delete replicasets <Name of replicaset>
I also faced this issue. I used the command below to delete the deployment:
kubectl delete deployments DEPLOYMENT_NAME
but the pods were still being recreated, so I cross-checked the ReplicaSets using:
kubectl get rs
and then edited the ReplicaSet to change the replica count from 1 to 0:
kubectl edit rs REPLICASET_NAME
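If you prefer not to open an interactive editor, kubectl scale achieves the same result:
kubectl scale rs REPLICASET_NAME --replicas=0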
With deployments that have stateful sets (or services, jobs, etc.) you can use this command:
This command terminates anything that runs in the specified <NAMESPACE>
kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all
And the forceful variant:
kubectl -n <NAMESPACE> delete replicasets,deployments,jobs,service,pods,statefulsets --all --cascade=true --grace-period=0 --force
There are basically two ways to remove the pods:
kubectl scale --replicas=0 deploy name_of_deployment
This will set the number of replicas to 0, so the pods will not be restarted.
Alternatively, use helm to uninstall the chart which you have implemented in your pipeline.
Do not delete the deployment directly; instead use helm to uninstall the chart, which will remove all the objects it created.
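For the Helm route, a minimal sketch (assuming Helm 3; my-release is just an illustrative release name):
helm list -n <namespace>
helm uninstall my-release -n <namespace>
The first command shows which release owns the objects; uninstalling it removes the deployment, replica sets and pods the chart created.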
The fastest solution for me was installing the Lens IDE and removing the service under the DEPLOYMENTS tab. Just delete it from this tab and the replicas will be deleted too.
Kubernetes always works in the hierarchy:
deployments >>> replicasets >>> pods
First scale the deployment down to 0 replicas and then scale it back up to the desired number of replicas (run the command below). You will see that a new replicaset has been created and the pods will run with the desired count.
IN-Linux:~ anuragmanikkame$ kubectl scale deploy tomcat -n dev-namespace --replicas=2
deployment.extensions/tomcat scaled
I experienced a similar problem: after deleting the deployment (kubectl delete deploy <name>), the pods kept "Running" and were automatically re-created after deletion (kubectl delete po <name>).
It turned out that the associated replica set was not deleted automatically for some reason, and after deleting that (kubectl delete rs <name>), it was possible to delete the pods.
This has happened to me with some broken 'helm' installs. You might have a bit of a messed up deployment. If none of the previous suggestions work, look for a daemonset and delete that.
e.g.
kubectl get daemonset --namespace <NAMESPACE>
then delete daemonset
kubectl delete daemonset --namespace <NAMESPACE> --all --force
then try to delete the pods.
kubectl delete pod --namespace <NAMESPACE> --all --force
Check if pods are gone.
kubectl get pods --all-namespaces
In my case I used the commands below:
kubectl get all --all-namespaces
kubectl delete deployment statefulset-deployment (choose your deployment name)
kubectl delete sts --all -n default (choose your namespace)
kubectl get pods --all-namespaces
That resolved the problem.

How do you cleanly list all the containers in a kubernetes pod?

I am looking to list all the containers in a pod in a script that gathers logs after running a test. kubectl describe pods -l k8s-app=kube-dns returns a lot of info, but I am just looking for a return like:
etcd
kube2sky
skydns
I don't see a simple way to format the describe output. Is there another command? (and I guess worst case there is always parsing the output of describe).
Answer
kubectl get pods POD_NAME_HERE -o jsonpath='{.spec.containers[*].name}'
Explanation
This gets the JSON object representing the pod. It then uses kubectl's JSONpath to extract the name of each container from the pod.
You can use get and choose one of the supported output templates with the --output (-o) flag.
Take jsonpath for example,
kubectl get pods -l k8s-app=kube-dns -o jsonpath={.items[*].spec.containers[*].name} gives you etcd kube2sky skydns.
Other supported output templates are go-template, go-template-file, jsonpath-file. See http://kubernetes.io/docs/user-guide/jsonpath/ for how to use the jsonpath template, and https://golang.org/pkg/text/template/#pkg-overview for how to use go templates.
Update: Check this doc for other example commands to list container images: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
Quick hack to avoid constructing the JSONpath query for a single pod:
$ kubectl logs mypod-123
a container name must be specified for pod mypod-123, choose one of: [etcd kubesky skydns]
I put some ideas together into the following:
Simple line:
kubectl get po -o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .spec.containers[*]}{"\tname: "}{.name}{"\n\timage: "}{.image}{"\n"}{end}'
Split (for readability):
kubectl get po -o jsonpath='
{range .items[*]}
{"pod: "}
{.metadata.name}
{"\n"}{range .spec.containers[*]}
{"\tname: "}
{.name}
{"\n\timage: "}
{.image}
{"\n"}
{end}'
How to list BOTH init and non-init containers for all pods
kubectl get pod -o="custom-columns=NAME:.metadata.name,INIT-CONTAINERS:.spec.initContainers[*].name,CONTAINERS:.spec.containers[*].name"
Output looks like this:
NAME                                      INIT-CONTAINERS   CONTAINERS
helm-install-traefik-sjts9                <none>            helm
metrics-server-86cbb8457f-dkpqm           <none>            metrics-server
local-path-provisioner-5ff76fc89d-vjs6l   <none>            local-path-provisioner
coredns-6488c6fcc6-zp9gv                  <none>            coredns
svclb-traefik-f5wwh                       <none>            lb-port-80,lb-port-443
traefik-6f9cbd9bd4-pcbmz                  <none>            traefik
dc-postgresql-0                           init-chmod-data   dc-postgresql
backend-5c4bf48d6f-7c8c6                  wait-for-db       backend
If you want a clear output showing which containers belong to each Pod:
kubectl get po -l k8s-app=kube-dns \
-o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
To get the output in the separate lines:
kubectl get pods POD_NAME_HERE -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
Output:
base-container
sidecar-0
sidecar-1
sidecar-2
If you use JSON as the output format of kubectl get, you get plenty of details about a pod. With JSON processors like jq it is easy to select or filter for the parts you are interested in.
To list the containers of a pod the jq query looks like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq --raw-output '.items[].spec.containers[].name'
If you want to see all details regarding one specific container try something like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq '.items[].spec.containers[] | select(.name=="etcd")'
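The same jq approach can also print each pod together with its containers on a single line:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq -r '.items[] | "\(.metadata.name): \([.spec.containers[].name] | join(", "))"'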
Use below command:
kubectl get pods -o=custom-columns=PodName:.metadata.name,Containers:.spec.containers[*].name,Image:.spec.containers[*].image
To see verbose information along with configmaps of all containers in a particular pod, use this command:
kubectl describe pod/<pod name> -n <namespace name>
Use below command to see all the information of a particular pod
kubectl get pod <pod name> -n <namespace name> -o yaml
For overall details about the pod, try the following command to get the container details as well:
kubectl describe pod <podname>
I use this to display image versions on the pods.
kubectl get pods -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{end}{end}' && printf '\n'
It's just a small modification of the script from here: it adds a newline so the next console command starts on a new line, removes the commas at the end of each line, and lists only my pods, without service pods (i.e. the --all-namespaces option is removed).
There are enough answers here, but sometimes you want to see a deployment object's containers and initContainers. To do that:
1- Retrieve the deployment name
kubectl get deployment
2- Retrieve containers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.containers[*].name}'
3- Retrieve initContainers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.initContainers[*].name}'
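Alternatively, both lists can be retrieved in one command with custom-columns (building on the custom-columns example earlier in this thread):
kubectl get deployment <deployment-name> -o custom-columns=NAME:.metadata.name,INIT-CONTAINERS:.spec.template.spec.initContainers[*].name,CONTAINERS:.spec.template.spec.containers[*].name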
Easiest way to know the containers in a pod:
kubectl logs <pod-name> -c <container-name> -n <namespace>