Say I have a 5-node Kafka cluster and a Kubernetes cluster of 100 nodes.
Now I want to find all 5 nodes (of the 100) which are hosting Kafka pods. So something like:
kubectl get nodes --selector="deployment.kafka"
I don't think that's possible. What you can do is select your pods based on labels and get the node name, as #Krishna said in his comment, so the command will be:
kubectl get pods -n NAMESPACE_NAME -l app=kafka -o wide | awk '{print $7}'
app=kafka is the label on the pods; it might be different in your case.
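If you only want the node names themselves (deduplicated), a jsonpath variant avoids depending on the column position in the wide output; a minimal sketch, assuming the same app=kafka label and namespace placeholder:
kubectl get pods -n NAMESPACE_NAME -l app=kafka -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort -u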
I would like to validate that deployments which have pod and node affinities (and anti-affinities) are configured according to internal guidelines.
Is there a possibility to get deployments (or Pods) using kubectl and limit the result to objects that have such an affinity configured?
I have played around with the jsonpath output, but was unsuccessful so far.
Hope you are enjoying your Kubernetes journey!
If you need to use affinities (especially with preferredDuringSchedulingIgnoredDuringExecution, explained below) and just want to find the deployments that actually have affinities, you can use this:
❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity"
NAME AFFINITIES
nginx-deployment <none>
nginx-deployment-vanilla <none>
nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]]
Every <none> pattern indicates that there is no affinity in the deployment.
However, if you want to get only the deployments that have affinities, excluding those that don't, use this:
❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity" | grep -v "<none>"
NAME AFFINITIES
nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]]
And if you just want the names of the deployments that have affinities, consider using this little script:
❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity" --no-headers | grep -v "<none>" | awk '{print $1}'
nginx-deployment-with-affinities
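If your internal guidelines target a specific affinity type rather than affinities in general, you can filter on that exact field with jq; a sketch, assuming you only care about preferredDuringSchedulingIgnoredDuringExecution under nodeAffinity:
❯ k get deploy -o json | jq -r '.items[] | select(.spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution != null) | .metadata.name'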
But do not forget that nodeSelector is the simplest way to constrain Pods to nodes with specific labels (more info here: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity). Also remember that (according to the same link) the requiredDuringSchedulingIgnoredDuringExecution type of node affinity functions like nodeSelector, but with a more expressive syntax!
So if you don't need preferredDuringSchedulingIgnoredDuringExecution when dealing with affinities, consider using nodeSelector!
After reading the above link, if you want to deal with nodeSelector, you can use the same mechanic I used before:
❯ k get deploy -o custom-columns=NAME:".metadata.name",NODE_SELECTOR:".spec.template.spec.nodeSelector"
NAME NODE_SELECTOR
nginx-deployment map[test-affinities:test]
nginx-deployment-vanilla <none>
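The same filtering trick as with the affinities works here; to get only the names of the deployments that actually set a nodeSelector:
❯ k get deploy -o custom-columns=NAME:".metadata.name",NODE_SELECTOR:".spec.template.spec.nodeSelector" --no-headers | grep -v "<none>" | awk '{print $1}'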
We have a k8s cluster with 10 workers. We run hundreds of pods in the cluster, and we want to avoid running pods with the default service account.
I need to find the pods that are running with the default service account. I am able to find the number of pods using the default service account with a grep command, but I also need the pod name and the image it is using. Let us know your thoughts.
In case you want to use just kubectl without jq (I needed to print both the namespace and the pod name):
kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.serviceAccountName == "default")]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' 2>/dev/null
I added 2>/dev/null to avoid printing the whole JSON template error in case no field was found.
I used the below command to identify the pods from each namespace that are using the default service account:
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.serviceAccountName?=="default") | "\(.metadata.namespace) \(.metadata.name)"' | cut -d'"' -f2 | sort
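Since the question also asks for the image, a slightly extended jq sketch that prints the namespace, pod name, and container images (comma-separated when a pod has more than one container):
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(.spec.serviceAccountName? == "default") | "\(.metadata.namespace) \(.metadata.name) \(.spec.containers | map(.image) | join(","))"' | sort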
If you are using k9s, you can also run :pod and then press e on the pod to see which service account it is associated with.
Is there a --sort-by option that allows you to sort by nodes which have the greatest number of pods running?
How about this way:
kubectl get nodes -o=custom-columns=PODS:.status.capacity.pods,NAME:.metadata.name | sort -nr
or kubectl get nodes --sort-by=.status.capacity.pods
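Note that .status.capacity.pods is the node's pod capacity, not the number of pods currently running on it. If you actually want to count running pods per node, a rough sketch:
kubectl get pods --all-namespaces --field-selector=status.phase=Running -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort | uniq -c | sort -nr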
I want to know how I can check how many containers are currently running in my cluster. Is there any command which shows me all the running containers in the cluster, not just in a specific namespace? And how can I get info about how many containers per day get run in my whole cluster?
You need to sum up all running containers in all pods. Try the following command.
kubectl get pod --all-namespaces | awk '{print $3}' | awk -F/ '{s+=$1} END {print s}'
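The READY column counts ready containers, which is usually a good approximation. If you want to count containers that are actually in a running state, a jq-based sketch:
kubectl get pods --all-namespaces -o json | jq '[.items[].status.containerStatuses[]? | select(.state.running != null)] | length'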
Get all pods from all namespaces:
kubectl get po --all-namespaces
Then you can see the number of containers per pod in the READY column.
You can find some more info in the official docs.
You can get pods by nodes and phase:
kubectl get po --all-namespaces=true --no-headers -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name,STATUS:.status.phase --sort-by='.metadata.name'
Hope this helps!
I have 3 nodes, running all kinds of pods. I would like to have a list of nodes and pods, for example:
NODE1 POD1
NODE1 POD2
NODE2 POD3
NODE3 POD4
How can this be achieved, please?
Thanks.
You can do that with custom columns:
kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces
or just:
kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name --all-namespaces
kubectl has a simple yet useful extended output format that you can use like this:
kubectl get pod -o wide
So while the custom formats provided in other answers are good, this might be a handy shortcut.
You can use kubectl get pods --all-namespaces to list all the pods from all namespaces and kubectl get nodes for listing all nodes.
The following command does more or less what you wanted. However, it's more of a jq trick than a kubectl trick:
kubectl get pod --all-namespaces -o json | jq '.items[] | .spec.nodeName + " " + .status.podIP'
Not exactly what you wanted because it describes much more, but you can use
kubectl describe nodes
It will list each pod per node in the cluster with the following info:
Namespace | Name | CPU Requests | CPU Limits | Memory Requests | Memory Limits
This gets you: "nodeName namespace pod" across the cluster:
kubectl get pods --all-namespaces --output 'jsonpath={range .items[*]}{.spec.nodeName}{" "}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}'
Maybe the answers are a little bit old; now you can simply run:
kubectl get pods -o wide