How to get the number of replicas of a StatefulSet in Kubernetes

I am trying to autoscale my StatefulSet on Kubernetes. In order to do so, I need to get the current number of pods.
When dealing with deployments:
kubectl describe deployments [deployment-name] | grep desired | awk '{print $2}' | head -n1
This outputs a number: the desired replica count for that Deployment.
However, when you run
kubectl describe statefulsets
you don't get back as much information. Any idea how I can get the current number of replicas of a StatefulSet?

kubectl get sts web -n default -o=jsonpath='{.status.replicas}'
This also works with .status.readyReplicas and .status.currentReplicas
From github.com/kubernetes:
// replicas is the number of Pods created by the StatefulSet controller.
Replicas int32
// readyReplicas is the number of Pods created by the StatefulSet controller that have a Ready Condition.
ReadyReplicas int32
// currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version
// indicated by currentRevision.
CurrentReplicas int32
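If you also need the desired replica count (the equivalent of the "desired" value you were grepping from the Deployment), the spec field can be read the same way, assuming the same StatefulSet name and namespace:
kubectl get sts web -n default -o=jsonpath='{.spec.replicas}'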

You should run one of the commands below:
master $ kubectl get statefulsets
NAME DESIRED CURRENT AGE
web 4 4 2m
master $
master $ kubectl get sts
NAME DESIRED CURRENT AGE
web 4 4 2m
number of running pods
---------------------
master $ kubectl describe sts web|grep Status
Pods Status: 4 Running / 0 Waiting / 0 Succeeded / 0 Failed
another way
------------
master $ kubectl get sts --show-labels
NAME DESIRED CURRENT AGE LABELS
web 4 4 33s app=nginx
master $ kubectl get po -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 56s
web-1 1/1 Running 0 55s
web-2 1/1 Running 0 30s
web-3 1/1 Running 0 29s
master $ kubectl get po -l app=nginx --no-headers
web-0 1/1 Running 0 2m
web-1 1/1 Running 0 2m
web-2 1/1 Running 0 1m
web-3 1/1 Running 0 1m
master $
master $
master $ kubectl get po -l app=nginx --no-headers | wc -l
4

kubectl status.phase=Running returns wrong results

When I run:
kubectl get pods --field-selector=status.phase=Running
I see:
NAME READY STATUS RESTARTS AGE
k8s-fbd7b 2/2 Running 0 5m5s
testm-45gfg 1/2 Error 0 22h
I don't understand why this command gives me pods that are in Error status.
According to the K8s API, there is no such phase as Error.
How can I get only the pods that are in this Error status?
When I run:
kubectl get pods --field-selector=status.phase=Failed
It tells me that there are no pods in that status.
Using the kubectl get pods --field-selector=status.phase=Failed command you can display all Pods in the Failed phase.
Failed means that all containers in the Pod have terminated, and at least one container has terminated in failure (see: Pod phase):
Failed - All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.
In your example, both Pods are in the Running phase because at least one container is still running in each of these Pods:
Running - The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
You can check the current phase of Pods using the following command:
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
Let's check how this command works:
$ kubectl get pods
NAME READY STATUS
app-1 1/2 Error
app-2 0/1 Error
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
app-1 Running
app-2 Failed
As you can see, only the app-2 Pod is in the Failed phase. There is still one container running in the app-1 Pod, so this Pod is in the Running phase.
To list all pods with the Error status, you can simply use:
$ kubectl get pods -A | grep Error
default app-1 1/2 Error
default app-2 0/1 Error
Additionally, it's worth mentioning that you can check the state of all containers in Pods:
$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state}{"\n"}{end}'
app-1 {"terminated":{"containerID":"containerd://f208e2a1ff08c5ce2acf3a33da05603c1947107e398d2f5fbf6f35d8b273ac71","exitCode":2,"finishedAt":"2021-08-11T14:07:21Z","reason":"Error","startedAt":"2021-08-11T14:07:21Z"}} {"running":{"startedAt":"2021-08-11T14:07:21Z"}}
app-2 {"terminated":{"containerID":"containerd://7a66cbbf73985efaaf348ec2f7a14d8e5bf22f891bd655c4b64692005eb0439b","exitCode":2,"finishedAt":"2021-08-11T14:08:50Z","reason":"Error","startedAt":"2021-08-11T14:08:50Z"}}
You can simply grep the Error pods using:
kubectl get pods --all-namespaces | grep Error
Remove all error pods from the cluster
kubectl delete pod `kubectl get pods --namespace <yournamespace> | awk '$3 == "Error" {print $1}'` --namespace <yournamespace>
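If the broken Pods have actually reached the Failed phase (all containers terminated), a sketch of an alternative that avoids parsing the table output is:
kubectl delete pods --field-selector=status.phase=Failed -n <yournamespace>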
Most Pod failures return explicit error states that can be observed in the status field.
Error:
Your Pod crashed; it was scheduled on a node successfully but crashed after that. To debug it further you can use different methods or commands:
kubectl describe pod <pod-name> -n <namespace>
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#my-pod-is-crashing-or-otherwise-unhealthy
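For a crashed container it also helps to look at the logs of the previous (terminated) instance, substituting your own pod, container and namespace names:
kubectl logs <pod-name> -c <container-name> --previous -n <namespace>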
Here is an overkill go-template based attempt:
kubectl get pods -o go-template='{{range $index, $element := .items}}{{range .status.containerStatuses}}{{range .state }}{{if .reason }}{{if (eq .reason "Error") }}{{$element.metadata.name}} {{$element.metadata.namespace}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}{{end}}'
job1-stn45 default
My pod status:
k get pod
NAME READY STATUS RESTARTS AGE
foo 1/1 Running 1 2d11h
nginx-0 1/1 Running 3 5d10h
nginx-2 1/1 Running 3 5d10h
nginx-1 1/1 Running 3 5d10h
job1-stn45 0/1 Error 0 113m
update-test-27145740-82z7s 0/1 ImagePullBackOff 0 96m
update-test-27145500-7f2l9 0/1 ImagePullBackOff 0 5h36m

How are pods in kube-system namespace managed?

I'm trying to understand how kubernetes works, so I tried to do this operation for my minikube:
~ kubectl delete pod --all -n kube-system
pod "coredns-f9fd979d6-5n4b6" deleted
pod "etcd-minikube" deleted
pod "kube-apiserver-minikube" deleted
pod "kube-controller-manager-minikube" deleted
pod "kube-proxy-879lg" deleted
pod "kube-scheduler-minikube" deleted
It's okay; the pods were deleted as expected. But if I run kubectl get pods -n kube-system I will see:
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-5d25r 1/1 Running 0 50s
etcd-minikube 1/1 Running 0 50s
kube-apiserver-minikube 1/1 Running 0 50s
kube-controller-manager-minikube 1/1 Running 0 50s
kube-proxy-nlw69 1/1 Running 0 43s
kube-scheduler-minikube 1/1 Running 0 49s
Okay. I thought they were managed by a ReplicaSet or a DaemonSet:
➜ ~ kubectl get ds -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 18m
➜ ~ kubectl get rs -n kube-system
NAME DESIRED CURRENT READY AGE
coredns-f9fd979d6 1 1 1 18m
That is true for coredns and kube-proxy. But what about the others (apiserver, etcd, controller-manager and scheduler)? Why are they still alive?
The control plane pods run as static Pods. Static Pods are not managed by control plane controllers such as DaemonSets or ReplicaSets; they are managed directly by the kubelet on the local node, which watches a manifest directory and recreates them whenever they are deleted.
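A quick way to verify this, assuming a kubeadm-style or minikube setup (the manifest path depends on the kubelet's staticPodPath setting): the static Pod manifests live on the node's filesystem, and the mirror Pods you see in the API are owned by the Node object rather than by a workload controller.
# on the node (e.g. via minikube ssh)
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# from any machine with kubectl access: the mirror Pod's owner is the Node
kubectl get pod kube-apiserver-minikube -n kube-system -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Node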

Short hand to select the first pod using kubectl

I'm on a Windows 10 and using WSL.
I have 8 pods inside my namespace:
NAME READY STATUS RESTARTS AGE
app-85b6fd4dc9-4chnq 1/1 Running 0 17m
app-85b6fd4dc9-9c5dc 1/1 Running 0 19m
app-85b6fd4dc9-cth6d 1/1 Running 0 19m
app-85b6fd4dc9-m8pc8 1/1 Running 0 19m
app-85b6fd4dc9-mrsnv 1/1 Running 0 18m
app-85b6fd4dc9-qtdtl 1/1 Running 0 17m
app-85b6fd4dc9-xzmdx 1/1 Running 0 17m
app-85b6fd4dc9-zbft7 1/1 Running 0 19m
And I really need to see the logs with haste. My current pattern is:
kubectl get pods -n my_namespace
[copy NAME of the pod]
kubectl logs --follow pod_name -n my_namespace
# live tail logs here
I want to skip displaying all pods and instead go straight to the first one available, or the first one on the list, whichever is applicable. Thanks for answering my noob question.
You can try the following in PowerShell; tested on Windows 10 based on the output provided in the question.
$var = (kubectl get pods -n my_namespace | Select -First 2 | Select -Last 1 | %{ (-split $_)[0] }); kubectl logs --follow $var -n my_namespace
I did not verify it against kubectl, but it should work. You can check further details at microsoft.powershell.utility.
Equivalent command using awk:
pod=$(kubectl get pods -n my_namespace | awk 'FNR==2{print $1}'); kubectl logs -f "$pod" -n my_namespace
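If you prefer not to parse the table output at all, a jsonpath one-liner gives the first pod directly (assuming the same namespace name as above):
kubectl logs --follow -n my_namespace "$(kubectl get pods -n my_namespace -o jsonpath='{.items[0].metadata.name}')"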
Use kubectl auto-completion tools.
There are many tools available; you need to find the one that works best for you.
https://github.com/bonnefoa/kubectl-fzf
https://github.com/evanlucas/fish-kubectl-completions
Press TAB to show the available options:
$ kubectl logs -f -n kube-system [TAB]
kubedb-enterprise-5cc89b87c5-k…
coredns-f9fd979d6-zcrmp (Pod)
coredns-f9fd979d6-zdr9c (Pod)
etcd-kind-control-plane (Pod)
kindnet-gvwcn (Pod)
…and 4 more rows

Get a list of pods restarted multiple times in Kubernetes

From the pods below, how can we get a list of pods which have been restarted more than 2 times? How can we get it in a single-line query?
xx-5f6df977d7-4gtxj 3/3 Running 0 6d21h
xx-5f6df977d7-4rvtg 3/3 Running 0 6d21h
pkz-ms-profile-df9fdc4f-2nqvw 1/1 Running 0 76d
push-green-95455c5c-fmkr7 3/3 Running 3 15d
spice-blue-77b7869847-6md6w 2/2 Running 0 19d
bang-blue-55845b9c68-ht5s5 1/3 Running 2 8m50s
mum-blue-6f544cd567-m6lws 2/2 Running 3 76d
Use:
kubectl get pods | awk '{if($4>2)print$1}'
Use -n "NameSpace" if required to fetch pods on the basis of a namespace.
For example:
kubectl get pods -n kube-system | awk '{if($4>2)print$1}'
where $1 and $4 are the columns containing the pod name and the restart count (the value to filter on), respectively.
Note: awk will work in Linux shells; on Windows you would need an equivalent such as PowerShell.
Actually it is not possible to use a field selector to get this result, as mentioned in this open GitHub issue.
You can use kubectl with the -o jsonpath option to get the names of the containers that have restarted 2 or more times. Example:
kubectl get pods -o jsonpath='{.items[*].status.containerStatuses[?(@.restartCount>=2)].name}'
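If you want the Pod names rather than the container names, a sketch that prints each Pod's name together with its restart counts and then filters with awk:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].restartCount}{"\n"}{end}' | awk '{for(i=2;i<=NF;i++) if($i>2){print $1; break}}'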

kubernetes pending pod priority

I have the following pods on my kubernetes (1.18.3) cluster:
NAME READY STATUS RESTARTS AGE
pod1 1/1 Running 0 14m
pod2 1/1 Running 0 14m
pod3 0/1 Pending 0 14m
pod4 0/1 Pending 0 14m
pod3 and pod4 cannot start because the node has capacity for 2 pods only. When pod1 finishes and quits, then the scheduler picks either pod3 or pod4 and starts it. So far so good.
However, I also have a high priority pod (hpod) that I'd like to start before pod3 or pod4 when either of the running pods finishes and quits.
So I created a PriorityClass, as can be found in the Kubernetes docs:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-no-preemption
value: 1000000
preemptionPolicy: Never
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
I've created the following pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: hpod
  labels:
    app: hpod
spec:
  containers:
  - name: hpod
    image: ...
    resources:
      requests:
        cpu: "500m"
        memory: "500Mi"
      limits:
        cpu: "500m"
        memory: "500Mi"
  priorityClassName: high-priority-no-preemption
Now the problem is that when I start the high-priority pod with kubectl apply -f hpod.yaml, the scheduler terminates a running pod to allow the high-priority pod to start, even though I've set 'preemptionPolicy: Never'.
The expected behaviour would be to postpone starting hpod until a currently running pod finishes. And when it does, then let hpod start before pod3 or pod4.
What am I doing wrong?
Prerequisites:
This solution was tested on Kubernetes v1.18.3, docker 19.03 and Ubuntu 18.
A text editor is also required (e.g. sudo apt-get install vim).
In Kubernetes documentation under How to disable preemption you can find Note:
Note: In Kubernetes 1.15 and later, if the feature NonPreemptingPriority is enabled, PriorityClasses have the option to set preemptionPolicy: Never. This will prevent pods of that PriorityClass from preempting other pods.
Also under Non-preempting PriorityClass you have information:
The use of the PreemptionPolicy field requires the NonPreemptingPriority feature gate to be enabled.
If you then check the Feature Gates table, you will find that NonPreemptingPriority defaults to false, so it is disabled by default.
Output with your current configuration:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 32s
nginx-normal-2 1/1 Running 0 32s
$ kubectl apply -f prio.yaml
pod/nginx-priority created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-normal-2 1/1 Running 0 48s
nginx-priority 1/1 Running 0 8s
To enable preemptionPolicy: Never you need to apply --feature-gates=NonPreemptingPriority=true to 3 files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
To check whether this feature gate is enabled, you can use these commands:
ps aux | grep apiserver | grep feature-gates
ps aux | grep scheduler | grep feature-gates
ps aux | grep controller-manager | grep feature-gates
For quite detailed information on why you have to edit those files, please check this GitHub thread.
$ sudo su
# cd /etc/kubernetes/manifests/
# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
Use your text editor to add the feature gate to those files:
# vi kube-apiserver.yaml
and add - --feature-gates=NonPreemptingPriority=true under spec.containers.command, like in the example below:
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=NonPreemptingPriority=true
    - --advertise-address=10.154.0.31
Do the same with the other 2 files. After that you can check whether the flag was applied.
$ ps aux | grep apiserver | grep feature-gates
root 26713 10.4 5.2 565416 402252 ? Ssl 14:50 0:17 kube-apiserver --feature-gates=NonPreemptingPriority=true --advertise-address=10.154.0.31
Now you have to redeploy your PriorityClass.
$ kubectl get priorityclass
NAME VALUE GLOBAL-DEFAULT AGE
high-priority-no-preemption 1000000 false 12m
system-cluster-critical 2000000000 false 23m
system-node-critical 2000001000 false 23m
$ kubectl delete priorityclass high-priority-no-preemption
priorityclass.scheduling.k8s.io "high-priority-no-preemption" deleted
$ kubectl apply -f class.yaml
priorityclass.scheduling.k8s.io/high-priority-no-preemption created
Last step is to deploy pod with this PriorityClass.
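For reference, the prio.yaml used below is not shown in the answer; a minimal sketch of what it might contain (the nginx image is an assumption, the essential part is the priorityClassName field) is:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-priority
spec:
  priorityClassName: high-priority-no-preemption
  containers:
  - name: nginx
    image: nginx   # assumed image; any container works here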
TEST
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 4m4s
nginx-normal-2 1/1 Running 0 18m
$ kubectl apply -f prio.yaml
pod/nginx-priority created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 5m17s
nginx-normal-2 1/1 Running 0 20m
nginx-priority 0/1 Pending 0 67s
$ kubectl delete po nginx-normal-2
pod "nginx-normal-2" deleted
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-normal 1/1 Running 0 5m55s
nginx-priority 1/1 Running 0 105s