Hide Completed and other finished pods by default - kubernetes

In my professional environment it is common for "completed" pods to outnumber active ones and they often clutter the output of kubectl get pods like so:
$ kubectl get pods
finished-pod-38163   0/1   Completed   2m
errored-pod-83023    0/1   Error       2m
running-pod-20899    1/1   Running     2m
I can filter them out using --show-all=false:
$ kubectl get pods --show-all=false
running-pod-20899 1/1 Running 2m
However I would prefer not to have to type out --show-all=false every time I want to see my running pods. Is it possible to configure kubectl to disable --show-all by default rather than having it enabled by default?
From kubectl get pods --help:
-a, --show-all=true: When printing, show all resources (default show all pods
including terminated one.)
I know I could create some shell alias kgetpo, but this would remove support for tab-completion so I'd prefer native solutions if they exist.

You can try something like this:
kubectl get pods --field-selector=status.phase==Running
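Note that filtering on status.phase==Running also hides pods that are still Pending. If you only want to drop pods in a terminal phase (the Completed status corresponds to the Succeeded phase, Error to Failed), a hedged variation using negated field selectors:
kubectl get pods --field-selector=status.phase!=Succeeded,status.phase!=Failed
Field selectors accept the != operator and comma-separated requirements, so both terminal phases can be excluded in one call.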

Related

How to list the containers in a pod and their info using a kubectl command

When we run kubectl get pod, it lists the count of containers running inside a pod and the total restart count, so I am not sure which container was restarted. To find out, I have to either log in to the UI or use kubectl describe pods.
NAME       READY   STATUS    RESTARTS   AGE
test-pod   2/2     Running   5          14h
But I need to see each container's name and its restart count using a kubectl command, something like below:
NAME          STATUS    RESTARTS   AGE
container-1   Running   2          14h
container-2   Running   3          14h
It would be great if someone could help me with this. Thanks in advance!
You can try something like this:
kubectl get pods <pod-name> -o jsonpath='{.spec.containers[*].name} {.status.containerStatuses[*].restartCount} {.status.containerStatuses[*].state}'
The result will contain the container name, restartCount, and state.
You can then format it in whatever way you need.
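If you prefer one line per container, a hedged sketch using a jsonpath range expression (the pod name is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'
This prints each container name followed by its individual restart count, which is roughly the per-container view asked for above.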

Kubernetes replicate pod modification to other pods

I have a k8s cluster with 3 nodes.
With a kubectl command I enter a pod's shell and make some file edits:
kubectl exec --stdin --tty <pod-name> -- /bin/bash
At this point I have one pod with the correct edits and the other 2 replicas with the old file.
My question is:
Is there a kubectl command that, starting from a specific pod, overwrites the other replicas in the cluster so that all n pods are identical?
I hope this is clear.
Many thanks in advance,
Manuel
You can use a kubectl plugin called: kubectl-tmux-exec.
All information on how to install and use this plugin can be found on GitHub: predatorray/kubectl-tmux-exec.
As described in the How to Install Dependencies documentation, the plugin needs the following programs:
gnu-getopt(1)
tmux(1)
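On macOS, for example, both dependencies can typically be installed with Homebrew (a hedged sketch; package names may differ on your platform):
brew install gnu-getopt tmux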
I've created a simple example to illustrate how it works.
Suppose I have a web Deployment and want to create a sample-file file inside all (3) replicas.
$ kubectl get deployment,pods --show-labels
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
deployment.apps/web   3/3     3            3           19m   app=web

NAME                      READY   STATUS    RESTARTS   AGE   LABELS
pod/web-96d5df5c8-5gn8x   1/1     Running   0          19m   app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-95r4c   1/1     Running   0          19m   app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-wc9k5   1/1     Running   0          19m   app=web,pod-template-hash=96d5df5c8
I have the kubectl-tmux_exec plugin installed, so I can use it:
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-tmux_exec
$ kubectl tmux-exec -l app=web bash
After running the above command, Tmux opens and we can modify multiple Pods simultaneously.
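If the modification is a single non-interactive command rather than an interactive session, a plain shell loop over the matching pods gives a similar result without the plugin; a minimal sketch, assuming bash and the app=web label from the example above (the file path is illustrative):
# run the same command in every pod carrying the label
for pod in $(kubectl get pods -l app=web -o name); do
  kubectl exec "$pod" -- touch /tmp/sample-file
done
Keep in mind that changes made through exec only live as long as the pod does; a replica created later will not have them.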

How to get cron job information through k8s selector

I'm trying to get information for a cron job so I can grab the current release of the service.
So when I run kubectl get pods I get:
NAME                             READY   STATUS      RESTARTS   AGE
cron-backfill-1573451940-jlwwj   0/1     Completed   0          33h
test-pod-66df8ccd5f-jvmkp        1/1     Running     0          16h
When I run kubectl get pods --selector=job-name=cron-backfill I get:
No resources found in test namespace.
But when I run kubectl get pods --selector=app=test-pod I get:
NAME                        READY   STATUS    RESTARTS   AGE
test-pod-66df8ccd5f-jvmkp   1/1     Running   0          16h
which is what I want. I figured that since the first pod comes from a cron job, there must be some other command to check for those, but no luck.
I tried looking through the k8s docs here https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ but can't find something that seems to work.
You need to run:
kubectl describe pods cron-backfill-1573451940-jlwwj
and then look at the Labels: part.
EX:
Labels: app=<app-name>
controller-uid=<xxxxxxxxxx>
job-name=cron-backfill-1573451940-jlwwj
release=<release-name>
Finally, you can use the following command to get your pods:
kubectl get pods --selector=job-name=cron-backfill-1573451940-jlwwj
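If you do not want to hard-code the generated Job name (it changes on every CronJob run), a hedged sketch, assuming bash, that looks up the newest Job whose name starts with cron-backfill and then lists its pods:
job=$(kubectl get jobs -o name --sort-by=.metadata.creationTimestamp | grep cron-backfill | tail -1)
kubectl get pods --selector=job-name="${job#job.batch/}"
The parameter expansion strips the job.batch/ prefix that -o name adds, leaving the bare Job name that the job-name label carries.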
Hope this may help you, Guy!

Don't delete pods when rolling back a deployment

I would like to perform a rollback of a deployment in my environment.
Command:
kubectl rollout undo deployment/foo
Steps which are performed:
create pods with old configurations
delete old pods
Is there a way to not perform the last step - for example, a developer would like to check why the init command failed and debug it.
I didn't find information about that in the documentation.
Yes, it is possible. Before doing the rollout, you first need to remove the labels (corresponding to the ReplicaSet controlling that pod) from the unhealthy pod. That way the pod no longer belongs to the deployment, and even if you do the rollout, it will still be there. Example:
$ kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
sleeper   1/1     1            1           47h
$ kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
sleeper-d75b55fc9-87k5k   1/1     Running   0          5m46s   pod-template-hash=d75b55fc9,run=sleeper
$ kubectl label pod sleeper-d75b55fc9-87k5k pod-template-hash- run-
pod/sleeper-d75b55fc9-87k5k labeled
$ kubectl get pod --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
sleeper-d75b55fc9-87k5k   1/1     Running   0          6m34s   <none>
sleeper-d75b55fc9-swkj9   1/1     Running   0          3s      pod-template-hash=d75b55fc9,run=sleeper
So what happens here: we have a pod sleeper-d75b55fc9-87k5k which belongs to the sleeper deployment; we remove all of its labels; the deployment detects that the pod "has gone" and creates a new one, sleeper-d75b55fc9-swkj9, but the old one is still there, ready for debugging. Only pod sleeper-d75b55fc9-swkj9 will be affected by the rollout.
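Putting the pieces together, a hedged sketch of the whole sequence (the pod name is a placeholder, and the exact label keys depend on how the Deployment was created; run= comes from kubectl run, while a Deployment created from a manifest usually carries an app= label instead):
# detach the pod you want to keep from its ReplicaSet
kubectl label pod <failing-pod> pod-template-hash- run-
# roll back the deployment; the detached pod is left untouched
kubectl rollout undo deployment/foo
# debug the detached pod, then delete it manually when finished
kubectl exec -it <failing-pod> -- /bin/bash
kubectl delete pod <failing-pod>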

How to get the list of terminated and running pods using a kubectl command

I want to see the details of terminated and running pods in Kubernetes.
The command below only shows the running pods; I also want to see the history of all pods terminated so far.
$ ./kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE
POD1   1/1     Running   0          3d    10.333.33.333   node123
POD2   1/1     Running   0          4d    10.333.33.333   node121
POD3   1/1     Running   0          1m    10.333.33.333   node124
I expect to get the list of terminated pods using a kubectl command.
Since v1.10 kubectl prints terminated pods by default:
--show-all (which only affected pods and only for human
readable/non-API printers) is now defaulted to true and deprecated.
The flag determines whether pods in a terminal state are displayed. It
will be inert in 1.11 and removed in a future release.
If you are running a version older than v1.10, you still need to use the --show-all flag.
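Also note that this only covers pod objects that still exist in the API; once a terminated pod has been deleted (for example by garbage collection), it no longer shows up. On versions where --show-all has since been removed, a hedged alternative for listing only the pods in a terminal phase is a field selector:
kubectl get pods --field-selector=status.phase=Succeeded -o wide
kubectl get pods --field-selector=status.phase=Failed -o wide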