Configure starting index of StatefulSet's pods in Kubernetes

As we all know from the documentation:
For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, that is unique over the Set.
Example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper-np
  labels:
    app: zookeeper-np
spec:
  serviceName: zoo-svc-np
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper-np
  template:
    metadata:
      labels:
        app: zookeeper-np
    spec:
      containers:
        - name: zookeeper-np
          image: zookeeper:3.5.6
          ports:
            - containerPort: 30005
            - containerPort: 30006
            - containerPort: 30007
          env:
            - name: ZOO_MY_ID
              value: "1"
The above manifest will create three pods like below:
NAME READY STATUS RESTARTS AGE
pod/zookeeper-np-0 1/1 Running 0 203s
pod/zookeeper-np-1 1/1 Running 0 137s
pod/zookeeper-np-2 1/1 Running 0 73s
Question:
Is there a way to configure this starting ordinal (0) so that it starts from some other integer (e.g. 1, 2, 3, etc.)?
For example, could the above pod ordinals start from 3, so that the pods created look something like this:
NAME READY STATUS RESTARTS AGE
pod/zookeeper-np-3 1/1 Running 0 203s
pod/zookeeper-np-4 1/1 Running 0 137s
pod/zookeeper-np-5 1/1 Running 0 73s
Or, if there is only one replica of the StatefulSet, could its ordinal be some integer other than zero (e.g. 12), so that the only pod created is named pod/zookeeper-np-12?

Unfortunately, this is not possible: according to the source code, the ordinal 0...N-1 is simply the index into the slice of replicas. See the last argument to newVersionedStatefulSetPod.
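If the goal is only to give each replica a distinct ZOO_MY_ID (rather than to actually shift the ordinal), a common workaround is to derive the ID from the pod's ordinal at startup. A rough sketch, assuming the official image's /docker-entrypoint.sh entrypoint and that bash is available in the container:

    spec:
      containers:
        - name: zookeeper-np
          image: zookeeper:3.5.6
          command: ["bash", "-c"]
          args:
            - |
              # The ordinal is the numeric suffix of the pod name, e.g. zookeeper-np-2 -> 2
              ORD=${HOSTNAME##*-}
              # Shift it by whatever offset you need; here the IDs become 1, 2, 3, ...
              export ZOO_MY_ID=$((ORD + 1))
              exec /docker-entrypoint.sh zkServer.sh start-foreground

The pod names themselves still end in 0, 1, 2; only the ID handed to ZooKeeper is shifted.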

Related

Two Kubernetes Deployments with exactly the same pod labels

Let's say I have two deployments which are exactly the same apart from deployment name:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-d
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-d2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
        - name: nginx
          image: nginx
Since these two deployments have the same selectors and the same pod template, I would expect to see three pods. However, six pods are created:
# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-d-5b686ccd46-dkpk7 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-nz7wf 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-vdtfr 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nqmq7 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nzrlc 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-qgjkn 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
Why is that?
Consider this: The pods are not directly managed by a deployment, but a deployment manages a ReplicaSet.
This can be validated using
kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-d-5b686ccd46 3 3 3 74s
nginx-d2-7c76fbbbcb 3 3 0 74s
You choose which pods a ReplicaSet or Deployment considers by specifying the selector. In addition to that, each Deployment adds its own pod-template-hash label so it can tell which pods are managed by its own ReplicaSet and which are managed by other ReplicaSets.
You can inspect this as well:
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-d-5b686ccd46-7j4md 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-9j7tx 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-zt4ls 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-ddcr2 1/1 Running 0 75s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-fhvm7 1/1 Running 0 79s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-q99ww 1/1 Running 0 83s app=mynginx,pod-template-hash=5b686ccd46
These are added to the replicaset as match labels:
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
      pod-template-hash: 5b686ccd46
Since even these are identical in this case, you can inspect the pods and see that there is an owner reference as well:
kubectl get pod nginx-d-5b686ccd46-7j4md -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-10-28T14:53:17Z"
  generateName: nginx-d-5b686ccd46-
  labels:
    app: mynginx
    pod-template-hash: 5b686ccd46
  name: nginx-d-5b686ccd46-7j4md
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: nginx-d-5b686ccd46
      uid: 7eb8fdaf-bfe7-4647-9180-43148a036184
  resourceVersion: "556"
More information on this can be found here: https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/
So each Deployment (and its ReplicaSet) can disambiguate which pods it manages, and each ensures its own desired number of replicas.
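If you want to see at a glance which ReplicaSet owns each pod, a custom-columns query over the owner references should do it (just a convenience; the pod YAML above shows the same information):

kubectl get pods -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name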

How to identify unhealthy pods in a statefulset

I have a StatefulSet with 6 replicas.
All of a sudden the StatefulSet thinks there are 5 ready replicas out of 6. When I look at the pod status, all 6 pods are ready with all the readiness checks passed (1/1).
Now I am trying to find logs or status that shows which pod is unhealthy as per the StatefulSet, so I could debug further.
Where can I find information or logs for the StatefulSet that could tell me which pod is unhealthy? I have already checked the output of describe pods and describe statefulset but none of them show which pod is unhealthy.
So let's say you created the following StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    user: anurag
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      user: anurag # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        user: anurag # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "standard"
        resources:
          requests:
            storage: 1Gi
Result is:
kubectl get StatefulSet web -o wide
NAME READY AGE CONTAINERS IMAGES
web 6/6 8m31s nginx k8s.gcr.io/nginx-slim:0.8
We can also check the StatefulSet's status with:
kubectl get statefulset web -o yaml
status:
  collisionCount: 0
  currentReplicas: 6
  currentRevision: web-599978b754
  observedGeneration: 1
  readyReplicas: 6
  replicas: 6
  updateRevision: web-599978b754
  updatedReplicas: 6
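If you only care about the ready vs. desired counts from that status, a jsonpath query such as the following should print them directly (a small convenience, not required for the debugging below):

kubectl get statefulset web -o jsonpath='{.status.readyReplicas}/{.status.replicas}'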
As per Debugging a StatefulSet, you can list all the pods which belong to the StatefulSet using labels.
$ kubectl get pods -l user=anurag
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 13m
web-1 1/1 Running 0 12m
web-2 1/1 Running 0 12m
web-3 1/1 Running 0 12m
web-4 1/1 Running 0 12m
web-5 1/1 Running 0 11m
Here, at this point, if any of your pods isn't available you will definitely see it. The next step is Debug Pods and ReplicationControllers, which includes checking whether you have sufficient resources to start all of these pods, and so on.
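If all pods report 1/1 but the StatefulSet still counts fewer ready replicas, it can also help to print each pod's Ready condition directly; a jsonpath query along these lines (reusing the user=anurag label from above) should work:

kubectl get pods -l user=anurag -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'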
Describing the problematic pod (kubectl describe pod web-0) should give you the answer to why that happened, at the very end, in the Events section.
For example, if you use the original YAML from StatefulSet components as-is for this example, you will get an error and none of your pods will be up and running (the reason is storageClassName: "my-storage-class").
The exact error, and an understanding of what is happening, comes from describing the problematic pod... that's how it works.
kubectl describe pod web-0
Events:
  Type     Reason            Age                From               Message
  ----     ------            ---                ----               -------
  Warning  FailedScheduling  31s (x2 over 31s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
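Since the event points at an unbound PersistentVolumeClaim, a natural next step is to inspect the claim itself. Claims created from volumeClaimTemplates are named <template-name>-<pod-name>, so for web-0 that would be:

kubectl get pvc www-web-0
kubectl describe pvc www-web-0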

How to set fixed pods names in kubernetes

I want to maintain a different configuration for each pod, so I am planning to fetch properties from Spring Cloud Config based on the pod name.
Ex:
Properties in cloud
PodName1.property1 = "xxx"
PodName2.property1 ="yyy";
The property value will be different for each pod. I am planning to fetch properties from the cloud config based on the container name: Environment.get("current pod name" + "propertyName").
So I want to set a fixed hostname/pod name.
If the above is not possible, is there any alternative?
You can use statefulsets if you want fixed pod names for your application.
e.g.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web # this will be used as prefix in pod name
spec:
  serviceName: "nginx"
  replicas: 2 # specify number of pods that should be running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
This template will create 2 nginx pods in the default namespace with the following names:
kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 1m
web-1 1/1 Running 0 1m
A basic example can be found here.
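If the application then needs its own pod name at runtime (for example to build the PodName1.property1 key from the question), one common approach is to expose it through the Downward API. A minimal sketch of the container's env section; the variable name POD_NAME is just an example:

          env:
            - name: POD_NAME          # example name, pick whatever your app reads
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name

Inside the container the application can then read POD_NAME (web-0, web-1, ...) and use it as the property prefix.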

Does updating a Kubernetes deployment create new pods?

I have an existing kubernetes deployment which is running fine. Now I want to edit it with some new environment variables which I will use in the pod.
Will editing the deployment delete and recreate the pods, or will it update the existing pods in place?
My requirement is that a new pod is created whenever I edit/update the deployment.
Kubernetes will always recreate your pods if you change or add environment variables.
Let's check this together by creating a deployment without any env var on it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Let's check and note these pod names so we can compare later:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-56db997f77-9mpjx 1/1 Running 0 8s
nginx-deployment-56db997f77-mgdv9 1/1 Running 0 8s
nginx-deployment-56db997f77-zg96f 1/1 Running 0 8s
Now let's edit this deployment and include one env var making the manifest look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          env:
            - name: STACK_GREETING
              value: "Hello from the MARS"
          ports:
            - containerPort: 80
After we finish editing, let's check our pod names and see if they changed:
$ kubectl get pod
nginx-deployment-5b4b68cb55-9ll7p 1/1 Running 0 25s
nginx-deployment-5b4b68cb55-ds9kb 1/1 Running 0 23s
nginx-deployment-5b4b68cb55-wlqgz 1/1 Running 0 21s
As we can see, all pod names changed. Let's check if our env var got applied:
$ kubectl exec -ti nginx-deployment-5b4b68cb55-9ll7p -- sh -c 'echo $STACK_GREETING'
Hello from the MARS
The same behavior will occur if you change the var or even remove it. All pods need to be removed and created again for the changes to take place.
If you would like to create an additional pod, you need to create a new deployment for it. By design, a deployment manages the replicas of the pods that belong to it.
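For what it's worth, the same rollout can be triggered and watched from the command line instead of editing the manifest; kubectl set env and kubectl rollout status should show the same pod replacement happening:

kubectl set env deployment/nginx-deployment STACK_GREETING="Hello from the MARS"
kubectl rollout status deployment/nginx-deployment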

Ignite web-agent pod does not spin up even after a successful create command

I am deploying the web agent via web-agent-deployment.yaml, so I ran the command below:
root#ip-10-11.x.x.:~/ignite# kubectl create -f web-agent-deployment.yaml
deployment web-agent created
But still no web-agent pod spins up at all. Please check the command output below:
root#ip-10-10-11.x.x:~/ignite# kubectl get pods -n ignite
NAME READY STATUS RESTARTS AGE
ignite-cluster-6qhmf 1/1 Running 0 2h
ignite-cluster-lpgrt 1/1 Running 0 2h
As per the official docs, it should look like below:
$ kubectl get pods -n ignite
NAME READY STATUS RESTARTS AGE
web-agent-5596bd78c-h4272 1/1 Running 0 1h
This is my web-agent file:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web-agent
  namespace: ignite
spec:
  selector:
    matchLabels:
      app: web-agent
  replicas: 1
  template:
    metadata:
      labels:
        app: web-agent
    spec:
      serviceAccountName: ignite-cluster
      containers:
        - name: web-agent
          image: apacheignite/web-agent
          resources:
            limits:
              cpu: 500m
              memory: 500Mi
          env:
            - name: DRIVER_FOLDER
              value: "./jdbc-drivers"
            - name: NODE_URI
              value: ""https://10.11.X.Y:8080"" # one of my worker node IPs
            - name: SERVER_URI
              value: "http://frontend.web-console.svc.cluster.local"
            - name: TOKENS
              value: ""
            - name: NODE_LOGIN
              value: web-agent
            - name: NODE_PASSWORD
              value: password