Pod naming in google cluster - kubernetes

I created a deployment like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: scs-db-sink
spec:
  selector:
    matchLabels:
      app: scs-db-sink
  replicas: 1
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068
kubectl get pods shows me that the pod is running:
scs-db-sink-74c4b6cd6b-tchm9 1/1 Running 0 16m
Question:
How can I set up the pod name to be scs-db-sink-0 and have it increase to scs-db-sink-1 when I scale up?
Thanks

A Deployment's pods are named <replicaset-name>-<random-suffix>, where the ReplicaSet name is itself <deployment-name>-<random-suffix>. The ReplicaSet is created automatically by the Deployment, so you can't get the names you expect with a Deployment.
However, you can use a StatefulSet in this case. A StatefulSet's pods are named with a stable ordinal index, exactly as you specified. Check the StatefulSet documentation here.
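For reference, a minimal StatefulSet sketch derived from your Deployment (apps/v1 API; it assumes a headless Service named scs-db-sink exists for the serviceName field) that would produce pods named scs-db-sink-0, scs-db-sink-1, and so on as you scale:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scs-db-sink
spec:
  serviceName: scs-db-sink   # assumes a headless Service with this name exists
  replicas: 1
  selector:
    matchLabels:
      app: scs-db-sink
  template:
    metadata:
      labels:
        app: scs-db-sink
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: service-pool
      containers:
      - name: scs-db-sink
        image: 'IMAGE_NAME'
        imagePullPolicy: Always
        ports:
        - containerPort: 1068
Scaling it with kubectl scale statefulset scs-db-sink --replicas=2 would then add scs-db-sink-1.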

Related

Kubernetes non specific spec.selector does not prevent Kubernetes from working correctly

I've experienced a surprising behavior when playing around with Kubernetes and I wanted to know if there is any good explanation behind it.
I've noticed that when two Kubernetes deployments are created with the same labels and the same spec.selector, the deployments still function correctly, even though using the same selector "should" cause them to be confused about which pods are related to each one.
Example configurations which present this -
example_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    extra_label: one
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
example_deployment_2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
  labels:
    app: nginx
    extra_label: two
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I expected the deployments not to work correctly, since they will select pods from each other and assume it is theirs.
The actual result is that the deployments seem to be created correctly, but entering the deployment from k9s returns all of the pods. This is true for both deployments.
Can anyone please shed light on why this is happening? Is there additional internal filtering in Kubernetes to prevent pods that were not actually created by the deployment from being associated with it?
I'll note that I've seen this behavior in AWS and have reproduced it in Minikube.
When you create a K8S Deployment, K8S creates a ReplicaSet to manage the pods; this ReplicaSet then creates the pods based on the number of replicas you provided (or as patched by the HPA). In addition to the labels and annotations you provide, the ReplicaSet adds an ownerReferences entry containing its name and uid. So even if you have several pods with the same labels, pods created by different Deployments carry different ownerReferences, which each ReplicaSet uses to identify the pods it manages:
apiVersion: v1
kind: Pod
metadata:
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: <replicaset name>
    uid: <replicaset uid>
  ...
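If you want to see this on your own cluster, you can list each pod alongside the ReplicaSet that owns it; the label selector below matches the example deployments above:
# Print every nginx pod together with its owning ReplicaSet
kubectl get pods -l app=nginx \
  -o custom-columns='POD:.metadata.name,OWNER:.metadata.ownerReferences[0].name'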

StatefulSet: Longer rolling update leads to version mismatching

The application is deployed on K8s using a StatefulSet because it is stateful in nature. Around 250+ pods are running, and HPA has been implemented on it too, which can scale up to 400 pods.
When a new deployment occurs, it takes a long time (~10-15 m) to update all pods in a rolling-update fashion.
Problem: end users get responses from two versions of pods until all pods are replaced with the new revision.
I have been searching for an architecture that reduces the overall deployment time; the best suggestion so far is a BLUE/GREEN strategy, but it has a lot of impact on integrated services like monitoring, logging, telemetry etc. because of the two naming conventions.
Ideally I am looking for a solution like maxSurge for Deployments, in which new pods are created first and traffic is then shifted to them, but a StatefulSet doesn't support maxSurge with the RollingUpdate strategy, and the controller will delete and recreate each Pod in the StatefulSet based on ordinal index, from largest to smallest.
The solution is to do a partitioning rolling update along with a canary deployment.
Let’s suppose we have the statefulset workload defined by the following yaml file:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.20"
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.20"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.20"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
You could patch the StatefulSet to create a partition, and then change the image and version label in the pod template; only pods with an ordinal greater than or equal to the partition are updated. (In this case, since there are only 3 pods and the partition is 2, only the last pod, web-2, will change its image.)
$ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.21"}]'
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/metadata/labels/version", "value":"1.21"}]'
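You can confirm that only the last pod picked up the new image and version label, for example with custom columns (the column names here are just for readability):
$ kubectl get pods -l app=nginx \
    -o custom-columns='POD:.metadata.name,IMAGE:.spec.containers[0].image,VERSION:.metadata.labels.version'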
At this point, you have a pod with the new image and version label ready to use, but since the version label is different, the traffic is still going to the other two pods. If you change the version in the yaml file and apply the new configuration, the rollout will be transparent, since there is already a pod ready to migrate the traffic:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
    version: "1.21"
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
    version: "1.21"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # Label selector that determines which Pods belong to the StatefulSet
                 # Must match spec: template: metadata: labels
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx # Pod template's label selector
        version: "1.21"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
$ kubectl apply -f file-name.yaml
Once traffic has migrated to the pod containing the new image and version label, you should patch the StatefulSet again and remove the partition with the command kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'
Note: you will need to be very careful with the size of the partition, since the remaining pods will handle all of the traffic for some time.
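To follow the remaining pods as they roll over to the new revision after the partition is removed, you can watch the rollout; the StatefulSet name web matches the example above:
$ kubectl rollout status statefulset/web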

Can a Deployment controller control Pods that weren't created by it?

Say I have a pod YAML such as:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.19.1
And a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
Now I first create the Pod:
$ kubectl apply -f pod.yaml
And only then the Deployment:
$ kubectl apply -f deployment.yaml
I thought that, since the pod.yaml metadata includes an app: nginx label that matches the Deployment's selector, the Deployment controller would only create 2 nginx:1.17.1 pods, but I see that all 3 are created. Why is that?
In addition to the app: nginx label, the Deployment controller also adds a pod-template-hash label to each pod it creates.
If we check the labels of the running pods, we can see the pod-template-hash=5d5dd5dd49 label on the my-deployment pods:
$ kubectl get pods --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
my-deployment-5d5dd5dd49-9tbcx   1/1     Running   0          55s   app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-b88f4   1/1     Running   0          55s   app=nginx,pod-template-hash=5d5dd5dd49
my-deployment-5d5dd5dd49-x7n8q   1/1     Running   0          55s   app=nginx,pod-template-hash=5d5dd5dd49
nginx                            1/1     Running   0          62s   app=nginx
According to the official documentation:
The pod-template-hash label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
This is why the Deployment's changes did not apply to the standalone pod, which has only the app: nginx label.
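You can also see the combined selector on the ReplicaSet that the Deployment created; the ReplicaSet name matches the hash from the output above, and the output below is trimmed to the relevant columns:
$ kubectl get rs my-deployment-5d5dd5dd49 -o wide
NAME                       DESIRED   CURRENT   READY   ...   SELECTOR
my-deployment-5d5dd5dd49   3         3         3       ...   app=nginx,pod-template-hash=5d5dd5dd49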

How to set fixed pod names in Kubernetes

I want to maintain a different configuration for each pod, so I am planning to fetch properties from Spring Cloud Config based on the pod name.
Ex:
Properties in cloud
PodName1.property1 = "xxx"
PodName2.property1 = "yyy"
The property value will be different for each pod. I am planning to fetch properties from the cloud based on the container name, e.g. Environment.get("current pod name" + "propertyName").
So I want to set a fixed hostname/pod name.
If the above is not possible, is there any alternative?
You can use statefulsets if you want fixed pod names for your application.
e.g.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web # this will be used as prefix in pod name
spec:
  serviceName: "nginx"
  replicas: 2 # specify number of pods that should be running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
This template will create 2 nginx pods in the default namespace with the following names:
kubectl get pods -l app=nginx
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          1m
web-1   1/1     Running   0          1m
A basic example can be found here.
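If what you ultimately need is the pod name inside the container (for example to build the Spring Cloud Config property key), you can also inject it through the Downward API. This is a sketch of the container section from the StatefulSet above; the env var name POD_NAME is just an illustration:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        env:
        - name: POD_NAME               # illustrative name; read it from your app
          valueFrom:
            fieldRef:
              fieldPath: metadata.name # resolves to web-0, web-1, ...
        ports:
        - containerPort: 80
          name: web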

Updating kubernetes deployment will create new pod

I have an existing kubernetes deployment which is running fine. Now I want to edit it with some new environment variables which I will use in the pod.
Will editing the deployment delete and create a new pod, or will it update the existing pod?
My requirement is that a new pod be created whenever I edit/update the deployment.
Kubernetes will always recreate your pods if you change or add env vars.
Let's check this together by creating a deployment without any env vars on it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Let's check and note these pod names so we can compare later:
$ kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-56db997f77-9mpjx   1/1     Running   0          8s
nginx-deployment-56db997f77-mgdv9   1/1     Running   0          8s
nginx-deployment-56db997f77-zg96f   1/1     Running   0          8s
Now let's edit this deployment and include one env var making the manifest look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: STACK_GREETING
          value: "Hello from the MARS"
        ports:
        - containerPort: 80
After we finish editing, let's check our pod names and see if they changed:
$ kubectl get pod
nginx-deployment-5b4b68cb55-9ll7p   1/1     Running   0          25s
nginx-deployment-5b4b68cb55-ds9kb   1/1     Running   0          23s
nginx-deployment-5b4b68cb55-wlqgz   1/1     Running   0          21s
As we can see, all pod names changed. Let's check if our env var got applied:
$ kubectl exec -ti nginx-deployment-5b4b68cb55-9ll7p -- sh -c 'echo $STACK_GREETING'
Hello from the MARS
The same behavior will occur if you change the var or even remove it. All pods need to be removed and created again for the changes to take place.
If you would like to create a new pod, you need to create a new deployment for it. By design, deployments manage the replicas of the pods that belong to them.
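As a side note, instead of editing the manifest by hand you can also set or remove the variable directly with kubectl and let the Deployment roll out new pods; this uses the same example deployment and variable as above:
$ kubectl set env deployment/nginx-deployment STACK_GREETING="Hello from the MARS"   # add or update the var
$ kubectl set env deployment/nginx-deployment STACK_GREETING-                        # remove it again
Either command changes the pod template, so the Deployment creates a new ReplicaSet and replaces all pods, just like the manual edit above.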