How to list containers and their info for a running pod using the kubectl command - kubernetes

When we run kubectl get pod, it lists the count of containers running inside a pod and the total restart count, so I am not sure which container was restarted. To find out, I either need to log in to the UI or use kubectl describe pods.
NAME READY STATUS RESTARTS AGE
test-pod 2/2 Running 5 14h
But I need to see each container's name and its restart count using a kubectl command, something like below.
NAME STATUS RESTARTS AGE
container-1 Running 2 14h
container-2 Running 3 14h
It would be helpful if someone helps me on this. Thanks in advance!

You can try something like this:
kubectl get pods <pod-name> -o jsonpath='{.spec.containers[*].name} {.status.containerStatuses[*].restartCount} {.status.containerStatuses[*].state}'
The result will contain the container name, restartCount and state.
You can then format it in whatever way you need.
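For example, a jsonpath range expression can print one line per container with its name and restart count (a minimal sketch, assuming the pod from the question is named test-pod):
kubectl get pod test-pod -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'
This gets close to the desired per-container output; if you also want an age-like column, the container start time is available under .state.running.startedAt.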

Related

Kubectl: No resources found even though there are pods running in the namespace

I have 2 pods running in the default namespace, as shown below:
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpaca-prod 1/1 Running 0 36m
default alpaca-test 1/1 Running 0 4m26s
kube-system coredns-78fcd69978-xd7jw 1/1 Running 0 23h
But when I try to get deployments I do not see any
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior?
I am running k8s on Minikube.
I think these are pods which were spawned without a Deployment, StatefulSet or DaemonSet.
You can run such a standalone pod with a command like this, e.g.:
kubectl run nginx-test --image=nginx -n default
pods created via DaemonSet usually end with -xxxxx
pods created via Deployment usually end with -xxxxxxxxxx-xxxxx
pods created via StatefulSet usually end with -0, -1 etc.
pods created without an owning resource usually have exactly the name you specified, e.g. nginx-test, nginx, etc.
So my guess is that these are standalone Pod resources (the last option).
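One way to confirm this (a quick check, assuming the alpaca-prod pod from the question) is to look at the pod's owner references; a standalone pod prints nothing here, while a Deployment-managed pod shows its ReplicaSet:
kubectl get pod alpaca-prod -n default -o jsonpath='{.metadata.ownerReferences}'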

Kubernetes: replicate pod modifications to other pods

I have a k8s cluster with 3 nodes.
With the kubectl command I enter a pod's shell and edit some files:
kubectl exec --stdin --tty <pod-name> -- /bin/bash
At this point I have one pod with the correct edits and the other 2 replicas with the old file.
My question is:
Is there a kubectl command that, starting from a specific pod, overwrites the current replicas in the cluster to create n identical pods?
I hope this is clear.
Many thanks in advance,
Manuel
You can use a kubectl plugin called: kubectl-tmux-exec.
All information on how to install and use this plugin can be found on GitHub: predatorray/kubectl-tmux-exec.
As described in the How to Install Dependencies documentation, the plugin needs the following programs:
gnu-getopt(1)
tmux(1)
I've created a simple example to illustrate how it works.
Suppose I have a web Deployment and want to create a file named sample-file inside all (3) replicas.
$ kubectl get deployment,pods --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/web 3/3 3 3 19m app=web
NAME READY STATUS RESTARTS AGE LABELS
pod/web-96d5df5c8-5gn8x 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-95r4c 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
pod/web-96d5df5c8-wc9k5 1/1 Running 0 19m app=web,pod-template-hash=96d5df5c8
I have the kubectl-tmux_exec plugin installed, so I can use it:
$ kubectl plugin list
The following compatible plugins are available:
/usr/local/bin/kubectl-tmux_exec
$ kubectl tmux-exec -l app=web bash
After running the above command, tmux will open and we can modify multiple Pods simultaneously.
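If you would rather not install the plugin, a plain shell loop over kubectl exec can make the same change in every replica, just sequentially instead of interactively (a minimal sketch, assuming the same app=web label and a hypothetical target path /tmp/sample-file):
for pod in $(kubectl get pods -l app=web -o name); do
  # write the file in each replica matched by the label selector
  kubectl exec "$pod" -- sh -c 'echo hello > /tmp/sample-file'
done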

How to get cron job information through k8s selector

I'm trying to get information for a cron job so I can grab the current release of the service.
So when I run kubectl get pods I get:
NAME READY STATUS RESTARTS AGE
cron-backfill-1573451940-jlwwj 0/1 Completed 0 33h
test-pod-66df8ccd5f-jvmkp 1/1 Running 0 16h
When I run kubectl get pods --selector=job-name=cron-backfill I get:
No resources found in test namespace.
But when I run kubectl get pods --selector=app=test-pod I get:
NAME READY STATUS RESTARTS AGE
test-pod-66df8ccd5f-jvmkp 1/1 Running 0 16h
which is what I want. I figured since the first pod is a cron job there must be some other command used to check for those, but no luck.
I tried looking through the k8s docs here https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ but can't find something that seems to work.
You need to run:
kubectl describe pods cron-backfill-1573451940-jlwwj
Then you can see the Labels: section, for example:
Labels: app=<app-name>
controller-uid=<xxxxxxxxxx>
job-name=cron-backfill-1573451940-jlwwj
release=<release-name>
Finally, you can use the following command to get your pods:
kubectl get pods --selector=job-name=cron-backfill-1573451940-jlwwj
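As a shortcut, the same labels can also be printed directly without a full describe:
kubectl get pod cron-backfill-1573451940-jlwwj --show-labels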
Hope this may help you!

Don't delete pods when rolling back a deployment

I would like to perform rolling back a deployment in my environment.
Command:
kubectl rollout undo deployment/foo
Steps which are performed:
create pods with old configurations
delete old pods
Is there a way to not perform the last step? For example, a developer would like to check why an init command failed and debug it.
I didn't find information about that in documentation.
Yes, it is possible. Before doing the rollout, you first need to remove the labels (corresponding to the ReplicaSet controlling that pod) from the unhealthy pod. This way the pod no longer belongs to the deployment, and even if you do the rollout, it will still be there. Example:
$kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
sleeper 1/1 1 1 47h
$kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
sleeper-d75b55fc9-87k5k 1/1 Running 0 5m46s pod-template-hash=d75b55fc9,run=sleeper
$kubectl label pod sleeper-d75b55fc9-87k5k pod-template-hash- run-
pod/sleeper-d75b55fc9-87k5k labeled
$kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
sleeper-d75b55fc9-87k5k 1/1 Running 0 6m34s <none>
sleeper-d75b55fc9-swkj9 1/1 Running 0 3s pod-template-hash=d75b55fc9,run=sleeper
So what happens here: we have a pod sleeper-d75b55fc9-87k5k which belongs to the sleeper deployment; we remove all labels from it, the deployment detects that the pod "has gone" and creates a new one, sleeper-d75b55fc9-swkj9, but the old one is still there and ready for debugging. Only the pod sleeper-d75b55fc9-swkj9 will be affected by the rollout.
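Keep in mind that the de-labelled pod is no longer managed by anything, so once you finish debugging you need to remove it yourself (using the pod name from the example above):
$kubectl delete pod sleeper-d75b55fc9-87k5k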

Hide Completed and other finished pods by default

In my professional environment it is common for "completed" pods to outnumber active ones and they often clutter the output of kubectl get pods like so:
$ kubectl get pods
finished-pod-38163 0/1 Completed 2m
errored-pod-83023 0/1 Error 2m
running-pod-20899 1/1 Running 2m
I can filter them out using --show-all=false:
$ kubectl get pods --show-all=false
running-pod-20899 1/1 Running 2m
However I would prefer not to have to type out --show-all=false every time I want to see my running pods. Is it possible to configure kubectl to disable --show-all by default rather than having it enabled by default?
From kubectl get pods --help:
-a, --show-all=true: When printing, show all resources (default show all pods
including terminated one.)
I know I could create some shell alias kgetpo, but this would remove support for tab-completion so I'd prefer native solutions if they exist.
You can try something like this:
kubectl get pods --field-selector=status.phase==Running
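If you also want to hide pods that ended in Error, a variant of the same field selector excludes the terminal phases instead (note this only filters on status.phase; it does not change any kubectl default):
kubectl get pods --field-selector=status.phase!=Succeeded,status.phase!=Failed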