Get a list of pods restarted multiple times in Kubernetes - kubernetes

From the pods below, how can we get a list of pods that have been restarted more than 2 times, in a single-line query?
xx-5f6df977d7-4gtxj 3/3 Running 0 6d21h
xx-5f6df977d7-4rvtg 3/3 Running 0 6d21h
pkz-ms-profile-df9fdc4f-2nqvw 1/1 Running 0 76d
push-green-95455c5c-fmkr7 3/3 Running 3 15d
spice-blue-77b7869847-6md6w 2/2 Running 0 19d
bang-blue-55845b9c68-ht5s5 1/3 Running 2 8m50s
mum-blue-6f544cd567-m6lws 2/2 Running 3 76d

Use:
kubectl get pods --no-headers | awk '$4 > 2 {print $1}'
(--no-headers matters: without it, the header row's RESTARTS string compares greater than 2 and NAME is printed too.)
Use -n "NameSpace" if required to fetch pods on the basis of a namespace.
For example:
kubectl get pods -n kube-system --no-headers | awk '$4 > 2 {print $1}'
where $1 is the column holding the pod name and $4 is the column to filter on (RESTARTS).
Note: awk works on Linux and macOS, whereas on Windows you would need WSL or an equivalent.
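As a sketch, the awk filter can be exercised against canned output before pointing it at a live cluster; the `printf` block below stands in for `kubectl get pods --no-headers`, using sample rows from the question:

```shell
# Filter pods restarted more than twice; printf simulates
# `kubectl get pods --no-headers` with rows from the question.
restarted=$(printf '%s\n' \
  'push-green-95455c5c-fmkr7 3/3 Running 3 15d' \
  'bang-blue-55845b9c68-ht5s5 1/3 Running 2 8m50s' \
  'mum-blue-6f544cd567-m6lws 2/2 Running 3 76d' \
  | awk '$4 > 2 {print $1}')
echo "$restarted"
```

Only the two pods with 3 restarts survive the filter; the pod with 2 restarts is excluded, since "more than 2" means strictly greater.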

It is actually not possible to use a field selector to get this result, as mentioned in this open GitHub issue.
You can use kubectl with the option -o jsonpath to get the names of containers that were restarted 2 or more times. Example:
kubectl get pods -o jsonpath='{.items[*].status.containerStatuses[?(@.restartCount>=2)].name}'

Related

Listing Pods with name and field selector

I need to list all the pods with status Completed and with a given name.
user#host:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
App1-something 1/1 Running 570 2d
App2-something 1/1 completed 597 2d
App3-something 1/1 completed 570 2d
App4-something 1/1 Running 597 2d
Using a field selector I can list completed pods, but I am not able to find the correct command to also filter by a specific name.
I want something that produces the output below:
App3-something 1/1 completed 570 2d
kubectl get pod --field-selector=status.phase==Succeeded and pod name is App3-something
You can use a comma to combine multiple conditions, like --field-selector=metadata.name=app3-something,status.phase=Succeeded.
kubectl get pod --field-selector=metadata.name=App3-something,status.phase=Succeeded
Note that the valid phase here is Succeeded: the STATUS column prints Completed, but the underlying status.phase value is Succeeded.
Reference: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
--
C:\>kubectl get pods
NAME READY STATUS RESTARTS AGE
app1-something 1/1 Running 0 62s
app2-something 1/1 Running 0 56s
app3-something 1/1 Running 0 52s
C:\>kubectl get pod --field-selector=metadata.name=app3-something,status.phase=Running
NAME READY STATUS RESTARTS AGE
app3-something 1/1 Running 0 57s
Use grep on the output of the kubectl command:
kubectl get pod --field-selector=status.phase==Succeeded | grep 'App3-something'
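Field selectors only support a fixed set of fields and exact matches, so anything beyond equality on the name has to go through grep. A sketch, with canned rows standing in for the kubectl output:

```shell
# Select the App3 row from completed pods; printf stands in for
# `kubectl get pod --field-selector=status.phase==Succeeded --no-headers`.
match=$(printf '%s\n' \
  'App2-something 1/1 Completed 597 2d' \
  'App3-something 1/1 Completed 570 2d' \
  | grep 'App3-something')
echo "$match"
```

The grep pattern can be any regular expression, which is useful when you only know a name prefix.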

Shorthand to select the first pod using kubectl

I'm on a Windows 10 and using WSL.
I have 8 pods inside my namespace:
NAME READY STATUS RESTARTS AGE
app-85b6fd4dc9-4chnq 1/1 Running 0 17m
app-85b6fd4dc9-9c5dc 1/1 Running 0 19m
app-85b6fd4dc9-cth6d 1/1 Running 0 19m
app-85b6fd4dc9-m8pc8 1/1 Running 0 19m
app-85b6fd4dc9-mrsnv 1/1 Running 0 18m
app-85b6fd4dc9-qtdtl 1/1 Running 0 17m
app-85b6fd4dc9-xzmdx 1/1 Running 0 17m
app-85b6fd4dc9-zbft7 1/1 Running 0 19m
And I really need to see the logs with haste. My current pattern is:
kubectl get pods -n my_namespace
[copy NAME of the pod]
kubectl logs --follow pod_name -n my_namespace
# live tail logs here
I want to skip the display of all pods and instead go straight to the first available one (or the first one in the list, whichever is applicable). Thanks for answering my noob question.
You can try the following in PowerShell; tested in PowerShell on Windows 10, based on the output provided in the question.
$var = (kubectl get pods -n my_namespace | Select -First 2 | Select -Last 1 | %{ (-split $_)[0] }) ; kubectl logs --follow $var -n my_namespace
I did not verify it against kubectl, but it should work. You can check further details at microsoft.powershell.utility.
Equivalent command using awk:
pod=$(kubectl get pods -n my_namespace | awk 'FNR==2{print $1}'); kubectl logs -f "$pod" -n my_namespace
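The name-extraction step can be sketched in isolation against canned output (rows from the question) before wiring it to a live cluster:

```shell
# Grab the first pod name; printf stands in for
# `kubectl get pods -n my_namespace --no-headers`.
first_pod=$(printf '%s\n' \
  'app-85b6fd4dc9-4chnq 1/1 Running 0 17m' \
  'app-85b6fd4dc9-9c5dc 1/1 Running 0 19m' \
  | awk 'NR==1 {print $1}')
echo "$first_pod"
# With a cluster: kubectl logs --follow "$first_pod" -n my_namespace
```

With `--no-headers` there is no header line to skip, so `NR==1` picks the first pod directly (the FNR==2 variant above skips the header instead).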
Use kubectl auto-completion tools.
There are many tools available; you need to find the one that works best for you.
https://github.com/bonnefoa/kubectl-fzf
https://github.com/evanlucas/fish-kubectl-completions
Perform TAB to show available options:
$ kubectl logs -f -n kube-system [TAB]
kubedb-enterprise-5cc89b87c5-k…
coredns-f9fd979d6-zcrmp (Pod)
coredns-f9fd979d6-zdr9c (Pod)
etcd-kind-control-plane (Pod)
kindnet-gvwcn (Pod)
…and 4 more rows

What is the 'AVAILABLE' column in Kubernetes DaemonSets

I may have a stupid question, but could someone explain what "Available" really represents in DaemonSets? I checked the answer to What is the difference between current and available pod replicas in kubernetes deployment? but there are no readiness errors here.
In the cluster I see the status below:
$ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
kube-proxy 6 6 5 6 5 beta.kubernetes.io/os=linux
Why is it showing 5 instead of 6?
All pods are running perfectly fine, without any "readiness" errors or restarts:
$ kubectl get pods -n kube-system | grep kube-proxy
kube-proxy-cv7vv 1/1 Running 0 20d
kube-proxy-kcd67 1/1 Running 0 20d
kube-proxy-l4nfk 1/1 Running 0 20d
kube-proxy-mkvjd 1/1 Running 0 87d
kube-proxy-qb7nz 1/1 Running 0 36d
kube-proxy-x8l87 1/1 Running 0 87d
Could someone tell what can be checked further?
The Available field shows the number of replicas (pods) that are ready to accept traffic: pods that have passed their readiness and liveness probes and any other condition verifying that the application is ready to serve user requests.
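Since the grep above shows every pod as 1/1, the gap may come from a pod that was briefly unready when the DaemonSet status was sampled; `kubectl describe ds kube-proxy -n kube-system` would show the related events. A not-fully-ready filter (READY column x/y with x != y) can be sketched against canned rows; the unready row below is hypothetical, added for illustration:

```shell
# Flag pods whose READY column (x/y) is not full; the second row is a
# hypothetical unready pod, the first mirrors the listing above.
not_ready=$(printf '%s\n' \
  'kube-proxy-cv7vv 1/1 Running 0 20d' \
  'kube-proxy-kcd67 0/1 Running 0 20d' \
  | awk '{split($2, r, "/"); if (r[1] != r[2]) print $1}')
echo "$not_ready"
```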

How to get the number of replicas of a StatefulSet in Kubernetes

I am trying to autoscale my StatefulSet on Kubernetes. In order to do so, I need to get the current number of pods.
When dealing with deployments:
kubectl describe deployments [deployment-name] | grep desired | awk '{print $2}' | head -n1
This outputs a number: the desired replica count of the deployment.
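As a sketch, that pipeline can be exercised on a canned `Replicas:` line of the kind `kubectl describe deployments` prints (the deployment name and counts here are examples):

```shell
# Parse the desired replica count from a describe-style line; printf
# stands in for `kubectl describe deployments my-app`.
desired=$(printf '%s\n' \
  'Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable' \
  | grep desired | awk '{print $2}' | head -n1)
echo "$desired"
```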
However, when you run
kubectl describe statefulsets
We don't get back as much information. Any idea how I can get the current number of replicas of a stateful set?
kubectl get sts web -n default -o=jsonpath='{.status.replicas}'
This also works with .status.readyReplicas and .status.currentReplicas
From github.com/kubernetes:
// replicas is the number of Pods created by the StatefulSet controller.
Replicas int32
// readyReplicas is the number of Pods created by the StatefulSet controller that have a Ready Condition.
ReadyReplicas int32
// currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by currentRevision.
CurrentReplicas int32
You can run one of the commands below:
master $ kubectl get statefulsets
NAME DESIRED CURRENT AGE
web 4 4 2m
master $
master $ kubectl get sts
NAME DESIRED CURRENT AGE
web 4 4 2m
number of running pods
---------------------
master $ kubectl describe sts web|grep Status
Pods Status: 4 Running / 0 Waiting / 0 Succeeded / 0 Failed
another way
------------
master $ kubectl get sts --show-labels
NAME DESIRED CURRENT AGE LABELS
web 4 4 33s app=nginx
master $ kubectl get po -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 56s
web-1 1/1 Running 0 55s
web-2 1/1 Running 0 30s
web-3 1/1 Running 0 29s
master $ kubectl get po -l app=nginx --no-headers
web-0 1/1 Running 0 2m
web-1 1/1 Running 0 2m
web-2 1/1 Running 0 1m
web-3 1/1 Running 0 1m
master $
master $
master $ kubectl get po -l app=nginx --no-headers | wc -l
4
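One caveat with the plain `wc -l` count: it includes pods in any state. A sketch that counts only Running pods, using canned rows in place of the kubectl output (the Pending row is hypothetical, added for illustration):

```shell
# Count Running pods only; printf stands in for
# `kubectl get po -l app=nginx --no-headers`.
running=$(printf '%s\n' \
  'web-0 1/1 Running 0 2m' \
  'web-1 1/1 Running 0 2m' \
  'web-2 0/1 Pending 0 10s' \
  | awk '$3 == "Running" {n++} END {print n+0}')
echo "$running"
```

The `n+0` in the END block makes awk print 0 rather than an empty string when nothing matches.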

Kubernetes - master node does not become Ready

I'm starting a Kubernetes cluster of 3 nodes (1 master, 2 workers),
trying to follow the steps described in an Ansible playbook - https://gitlab.com/LinarNadyrov/gcp/tree/master
Applying playbook steps 1, 2, 3 consecutively.
After that, I connect to the master to check the status:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
NAME STATUS ROLES AGE VERSION
master NotReady master 17m v1.13.0
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-7jc4b 0/1 Pending 0 3h45m
coredns-86c58d9df4-929xf 0/1 Pending 0 3h45m
etcd-officemasterkub 1/1 Running 2 7h26m
kube-apiserver-officemasterkub 1/1 Running 2 7h26m
kube-controller-manager-officemasterkub 1/1 Running 2 7h26m
kube-flannel-ds-5jhbx 0/1 Pending 0 7h20m
kube-flannel-ds-wqfvs 0/1 Pending 0 7h20m
kube-proxy-gmngj 1/1 Running 2 7h27m
kube-proxy-ppbqp 1/1 Running 1 7h20m
kube-proxy-r2rn6 1/1 Running 1 7h20m
kube-scheduler-officemasterkub 1/1 Running 2 7h26m
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
The status is NotReady.
Could anyone help me with it?
What's the problem? What should be done to fix it? Maybe I missed something?
Thanks in advance!
Linar Nadyrov, the problem here is with your flannel yaml file: you did not specify any resources in the DaemonSet, so no flannel pods are spawning.
I did not check any further, as that is reason enough for this issue to occur. You can use the upstream flannel yaml if this is for testing purposes, or edit yours according to the provided example.
In your file change the line 43 to:
shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
You can find more about DaemonSets here.