Can I use kubectl to find deployments that have affinities set?

I would like to validate that deployments which have pod and node affinities (and anti-affinities) are configured according to internal guidelines.
Is there a way to get deployments (or pods) using kubectl and limit the result to objects that have such an affinity configured?
I have played around with the jsonpath output, but have been unsuccessful so far.

Hope you are enjoying your Kubernetes journey!
If you need to use affinities (especially preferredDuringSchedulingIgnoredDuringExecution; explanation below) and just want to find deployments that actually have affinities, you can use this:
❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity"
NAME AFFINITIES
nginx-deployment <none>
nginx-deployment-vanilla <none>
nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]]
Every <none> entry indicates that the deployment has no affinity configured.
However, if you only want the deployments that have affinities, without those that don't, filter out the <none> entries:
❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity" | grep -v "<none>"
NAME AFFINITIES
nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]]
And if you just want the names of the deployments that have affinities, consider using this little script:
❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity" --no-headers | grep -v "<none>" | awk '{print $1}'
nginx-deployment-with-affinities
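If you have jq installed, the same filtering can be done on the JSON output instead of grepping the rendered columns; this sketch prints only the names of deployments whose pod template sets any affinity:
❯ kubectl get deploy -o json | jq -r '.items[] | select(.spec.template.spec.affinity != null) | .metadata.name'
nginx-deployment-with-affinities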
But do not forget that nodeSelector is the simplest way to constrain Pods to nodes with specific labels (more info here: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). Also remember that, according to the same link, the requiredDuringSchedulingIgnoredDuringExecution type of node affinity functions like nodeSelector, but with a more expressive syntax!
So if you don't need preferredDuringSchedulingIgnoredDuringExecution when dealing with affinities, consider using nodeSelector, as illustrated below.
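To illustrate the equivalence, here is roughly what the two approaches look like in a pod template, using a hypothetical disktype=ssd node label (adapted from the documentation page linked above). With nodeSelector:
spec:
  nodeSelector:
    disktype: ssd
And the equivalent required node affinity:
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd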
After reading the above link, if you want to check for nodeSelector instead, you can use the same mechanism as before:
❯ k get deploy -o custom-columns=NAME:".metadata.name",NODE_SELECTOR:".spec.template.spec.nodeSelector"
NAME NODE_SELECTOR
nginx-deployment map[test-affinities:test]
nginx-deployment-vanilla <none>

Related

Kubernetes NetworkPolicy - Is there a way to identify which NetworkPolicies are applied to Pods

We have 3-4 different NetworkPolicies in our namespace, and they are applied based on pod selectors. Is there any way, from the pod side, to know which NetworkPolicies are applied to it?
If a pod selector is used, there is a simple way:
kubectl get pod -l "$( \
  kubectl get networkpolicy <netpolicy-name> \
    -o jsonpath='{.spec.podSelector.matchLabels}' | \
  jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")' \
)"
This will get the policy's pod selector and use it as a label selector to list the matching pods.
Any way from the pod side?
There is nothing on the pod side you can check. I read somewhere that kubectl describe on a pod could show network policies, but when I tested it (at least in minikube), they were not shown.
So you can use the above command, or describe the NetworkPolicy itself to get the pod selector and get an idea.
kubectl describe networkpolicies <name of policy>
The output of kubectl get networkpolicy should display the pod selector.
After that you can use kubectl get pod -l key=value to list the pods affected.
You can automate this with a bash script or function, for example the sketch below.
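As one such sketch (the function name list-np-pods is made up here; it assumes each policy's podSelector uses matchLabels only, and that jq is installed):
list-np-pods() {
  # For each NetworkPolicy in the current namespace, turn its
  # podSelector.matchLabels into a label selector and list matching pods.
  for np in $(kubectl get networkpolicy -o name); do
    selector=$(kubectl get "$np" -o jsonpath='{.spec.podSelector.matchLabels}' \
      | jq -r 'to_entries | map("\(.key)=\(.value)") | join(",")')
    echo "== $np (selector: ${selector:-<all pods>}) =="
    kubectl get pods -l "$selector"
  done
}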
I would also recommend checking out kubectl np-viewer, which is a kubectl plugin. This plugin has what you are asking for out of the box.
kubectl np-viewer -p pod-name prints the network policy rules affecting a specific pod in the current namespace.

Kubernetes API - gets list of Nodes hosting specific deployment/pods

Say I have a 5 node cluster of kafka and a kubernetes cluster of 100 nodes.
Now, I want to find the 5 nodes (of 100) that are hosting Kafka pods. So, something like:
kubectl get nodes --selector="deployment.kafka"
I don't think that's possible. What you can do is select your pods based on labels and get the node name, as Krishna said in his comment, so the command will be:
kubectl get pods -n NAMESPACE_NAME -l app=kafka -o wide | awk '{print $7}'
app=kafka is the label on the pods; it might be different in your case.
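If you'd rather avoid counting awk columns (and the NODE header leaking into the output), a jsonpath variant of the same idea prints just the node names; sort -u collapses duplicates when several Kafka pods share a node:
kubectl get pods -n NAMESPACE_NAME -l app=kafka \
  -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u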

How to list really all objects of a nonexistent namespace?

Okay, the title is quite a mouthful. But it's actually describing the situation.
I deployed a service on GKE in namespace argo-events. Something was wrong with it so I tore it down:
kubectl delete namespace argo-events
Actually, that's already where the problems started (I suspect a connection to the problem described below) and I had to resort to a hack because argo-events got stuck in a Terminating state forever. But the result was as desired - namespace seemed to be gone together with all objects in it.
Because of problems with redeployment, I inspected the GKE Object Browser (just looking around; I cannot filter for the argo-events namespace anymore, as it is officially gone), where I stumbled upon two lingering objects in ns argo-events:
argo-events is not listed by kubectl get namespaces. Just confirming that.
And I can find those two objects if I look them up specifically:
$ kubectl get eventbus -n argo-events
NAME AGE
default 17h
$ kubectl get eventsource -n argo-events
NAME AGE
pubsub-event-source 14h
But - I cannot find anything by asking for all objects:
$ kubectl get all -n argo-events
No resources found in argo-events namespace.
So my question is. How can I generically list all lingering objects in argo-events?
I'm asking because otherwise I'd have to inspect the entire Object Browser Tree to maybe find more objects (as I cannot select the namespace anymore).
Using the command $ kubectl get all will only print a few resource types, such as:
pod
service
daemonset
deployment
replicaset
statefulset
job
cronjob
It won't print all the resource types that can be found with $ kubectl api-resources.
Example
When you create a PV following the PersistentVolume documentation, it won't be listed in the $ kubectl get all output, but it will be listed if you query that resource type explicitly.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml
persistentvolume/task-pv-volume created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Available manual 3m12s
$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.3.240.1 <none> 443/TCP 86m
If you would like to list all resources in a specific namespace, use the command below:
kubectl -n argo-events api-resources --namespaced=true -o name | xargs --verbose -I {} kubectl -n argo-events get {} --show-kind --ignore-not-found
The above solution was presented in the GitHub thread kubectl get all does not list all resources in a namespace. In this thread you can find some additional variations of the above command.
In addition, you can check the article How to List all Resources in a Kubernetes Namespace, where you can find a method for listing resources using a shell function, similar to the sketch below.
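A minimal function along those lines (the name get-all-ns is made up for this sketch) simply wraps the xargs command above:
get-all-ns() {
  # List every namespaced resource instance in the given namespace,
  # including types that `kubectl get all` skips.
  local ns="$1"
  kubectl -n "$ns" api-resources --namespaced=true -o name \
    | xargs -I {} kubectl -n "$ns" get {} --show-kind --ignore-not-found
}
Usage: get-all-ns argo-events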

Kubernetes: list all pods and their nodes

I have 3 nodes, running all kinds of pods. I would like to have a list of nodes and pods, for example:
NODE1 POD1
NODE1 POD2
NODE2 POD3
NODE3 POD4
How can this be achieved?
Thanks.
You can do that with custom columns:
kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces
or just:
kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name --all-namespaces
kubectl has a simple yet useful extended output format that you can use like this:
kubectl get pod -o wide
So while the custom formats provided in other answers are good, this might be a handy shortcut.
You can use kubectl get pods --all-namespaces to list all the pods from all namespaces and kubectl get nodes for listing all nodes.
The following command does more or less what you wanted. However, it's more of a jq trick than a kubectl trick:
kubectl get pod --all-namespaces -o json | jq '.items[] | .spec.nodeName + " " + .status.podIP'
Not exactly what you wanted, since it describes much more, but you can use
kubectl describe nodes
It will list each pod per node in the cluster with the following info:
Namespace | Name | CPU Requests | CPU Limits | Memory Requests | Memory Limits
This gets you: "nodeName namespace pod" across the cluster:
kubectl get pods --all-namespaces --output 'jsonpath={range .items[*]}{.spec.nodeName}{" "}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}'
The answers above are a little old; nowadays you can simply run:
kubectl get pods -o wide
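If you also want the list grouped by node, --sort-by accepts a jsonpath expression, so you can order the wide output by node name:
kubectl get pods --all-namespaces -o wide --sort-by=.spec.nodeName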

How do you cleanly list all the containers in a kubernetes pod?

I am looking to list all the containers in a pod in a script that gathers logs after running a test. kubectl describe pods -l k8s-app=kube-dns returns a lot of info, but I am just looking for a return like:
etcd
kube2sky
skydns
I don't see a simple way to format the describe output. Is there another command? (And I guess, worst case, there is always parsing the output of describe.)
Answer
kubectl get pods POD_NAME_HERE -o jsonpath='{.spec.containers[*].name}'
Explanation
This gets the JSON object representing the pod. It then uses kubectl's JSONpath to extract the name of each container from the pod.
You can use get and choose one of the supported output templates with the --output (-o) flag.
Take jsonpath for example:
kubectl get pods -l k8s-app=kube-dns -o jsonpath={.items[*].spec.containers[*].name} gives you etcd kube2sky skydns.
Other supported output templates are go-template, go-template-file, and jsonpath-file. See http://kubernetes.io/docs/user-guide/jsonpath/ for how to use jsonpath templates, and https://golang.org/pkg/text/template/#pkg-overview for how to use go templates.
Update: Check this doc for other example commands to list container images: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
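For reference, one of the commands on that page lists every container image running in the cluster, with a count of how many pods use each:
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' | sort | uniq -c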
Quick hack to avoid constructing the JSONpath query for a single pod:
$ kubectl logs mypod-123
a container name must be specified for pod mypod-123, choose one of: [etcd kubesky skydns]
I put some ideas together into the following:
Simple line:
kubectl get po -o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .spec.containers[*]}{"\tname: "}{.name}{"\n\timage: "}{.image}{"\n"}{end}'
Split (for readability):
kubectl get po -o jsonpath='
{range .items[*]}
{"pod: "}
{.metadata.name}
{"\n"}{range .spec.containers[*]}
{"\tname: "}
{.name}
{"\n\timage: "}
{.image}
{"\n"}
{end}'
How to list BOTH init and non-init containers for all pods
kubectl get pod -o="custom-columns=NAME:.metadata.name,INIT-CONTAINERS:.spec.initContainers[*].name,CONTAINERS:.spec.containers[*].name"
Output looks like this:
NAME INIT-CONTAINERS CONTAINERS
helm-install-traefik-sjts9 <none> helm
metrics-server-86cbb8457f-dkpqm <none> metrics-server
local-path-provisioner-5ff76fc89d-vjs6l <none> local-path-provisioner
coredns-6488c6fcc6-zp9gv <none> coredns
svclb-traefik-f5wwh <none> lb-port-80,lb-port-443
traefik-6f9cbd9bd4-pcbmz <none> traefik
dc-postgresql-0 init-chmod-data dc-postgresql
backend-5c4bf48d6f-7c8c6 wait-for-db backend
If you want a clear output of which containers are in each Pod:
kubectl get po -l k8s-app=kube-dns \
-o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
To get the output in the separate lines:
kubectl get pods POD_NAME_HERE -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
Output:
base-container
sidecar-0
sidecar-1
sidecar-2
If you use JSON as the output format of kubectl get, you get plenty of detail about a pod. With a JSON processor like jq, it is easy to select or filter for the parts you are interested in.
To list the containers of a pod, the jq query looks like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq --raw-output '.items[].spec.containers[].name'
If you want to see all details regarding one specific container try something like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq '.items[].spec.containers[] | select(.name=="etcd")'
Use the command below:
kubectl get pods -o=custom-columns=PodName:.metadata.name,Containers:.spec.containers[*].name,Image:.spec.containers[*].image
To see verbose information, along with the configmaps, of all containers in a particular pod, use this command:
kubectl describe pod/<pod name> -n <namespace name>
Use the command below to see all the information about a particular pod:
kubectl get pod <pod name> -n <namespace name> -o yaml
For overall details about the pod, try the following command to get the container details as well:
kubectl describe pod <podname>
I use this to display image versions on the pods.
kubectl get pods -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{end}{end}' && printf '\n'
It's just a small modification of the script from here: it adds a newline so the next console command starts on a new line, removes the commas at the end of each line, and lists only my pods, without service pods (i.e. the --all-namespaces option is removed).
There are enough answers here, but sometimes you want to see a deployment's pod containers and initContainers. To do that:
1. Retrieve the deployment name
kubectl get deployment
2. Retrieve the containers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.containers[*].name}'
3. Retrieve the initContainers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.initContainers[*].name}'
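Steps 2 and 3 can also be combined into a single jsonpath expression that labels each name; note this sketch assumes the deployment actually defines initContainers, since kubectl's jsonpath errors on a range over a missing field:
kubectl get deployment <deployment-name> -o jsonpath='{range .spec.template.spec.initContainers[*]}{"init: "}{.name}{"\n"}{end}{range .spec.template.spec.containers[*]}{"container: "}{.name}{"\n"}{end}'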
Easiest way to know the containers in a pod:
kubectl logs <pod-name> -n <namespace>
If the pod has more than one container, the error message will list their names (the same trick as the quick hack above), and you can then pass one with -c <container-name>.