Why labels are mentioned three times in a single deployment - kubernetes

I've gone over the following documentation page: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
The example deployment yaml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
We can see that the label app: nginx appears in three different places here.
Why do we need each of them? I had a hard time understanding it from the official documentation.

The first label is on the Deployment itself; it labels that particular Deployment object. Let's say you want to delete that Deployment, then you run the following command:
kubectl delete deployment -l app=nginx
This will delete the entire deployment.
The second occurrence is selector: matchLabels, which tells the Deployment (and its ReplicaSet) which Pods it manages by label. Services use the same label-matching idea: if you want to create a Service that targets all the Pods labelled app=nginx, you provide the following definition:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
The above Service will select Pods that have the label app: nginx assigned to them and route traffic to them.
The third occurrence is the Pod template labels: template is actually a Pod template, and it describes the Pods that will be launched. So if you have a two-replica Deployment, Kubernetes will launch 2 Pods carrying the labels specified under template: metadata: labels. This is a subtle but important difference: you can have different labels on the Deployment and on the Pods generated by that Deployment.
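For example, a minimal sketch where the Deployment object carries a label its Pods do not (the team: platform label is just illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    team: platform        # label on the Deployment object only (illustrative)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx          # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: nginx        # labels the Pods get; they do not inherit team: platform
    spec:
      containers:
      - name: nginx
        image: nginx
With this manifest, kubectl get deployments -l team=platform finds the Deployment, while kubectl get pods -l team=platform returns nothing, because the Pods only carry app: nginx.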

First label:
It is the Deployment label, used to select the Deployment. You can use the command below with the first label:
kubectl get deployment -l app=nginx
Second label:
It is not a label; it is a label selector that selects Pods labelled app=nginx. It is used by the ReplicaSet.
Third label:
It is the Pod label, used to identify Pods. The ReplicaSet uses it, via the label selector, to maintain the desired number of replicas.
It is also used to select Pods with the command below:
kubectl get pods -l app=nginx

As we know, labels are there to identify resources.
The first label identifies the Deployment itself.
The third one falls under the Pod template section, so it is specific to the Pods.
The second one, i.e. matchLabels, is used to tell the ReplicaSet (and, in their own specs, Services and other resources) to act on resources matching the specified label conditions.
While the first and third are label assignments to the Deployment and the Pods respectively, the second one is a matching condition rather than an assignment.
Although all three carry the same label in real-world examples, the first one can differ from the second and third. The second and third, however, usually have to be identical, because the second is the condition that acts upon the third.

.metadata.labels is for labelling the Deployment object itself. You don't strictly need it, but as the other answers said, it helps you organize objects.
.spec.selector tells the Deployment (under the hood, the ReplicaSet object) how to find the Pods to manage. In your example, it will manage Pods with the label app: nginx.
But how do you tell the ReplicaSet controller to create pods with that label in the first place? You define that in the pod template, .spec.template.metadata.labels.
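As a quick sanity check on a running cluster, a sketch assuming the example Deployment above has been applied:
# show the label selector the Deployment (and its ReplicaSet) uses
kubectl get deployment nginx-deployment -o jsonpath='{.spec.selector.matchLabels}'
# show the Pods that selector matches, together with their labels
kubectl get pods -l app=nginx --show-labels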

Related

Deployment matchLabels and template labels and the DRY principle

In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels? If they didn't match, any pod created wouldn't match the selector, and I'd imagine K8s would go on creating new pods until every node is full. If that's true, why does K8s want us to specify the same labels twice? Either I'm missing something or this violates the DRY principle.
The only thing I can think of would be creating a Deployment with matchLabels "key: A" & "key: B" that simultaneously puts existing/un-owned pods that have label "key: A" into the Deployment while at the same time any new pods get label "key: B". But even then, it feels like any label in the template metadata should automatically be in the selector matchLabels.
K8s docs give the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
...In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels?
An example is when doing a canary deployment.
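A rough sketch of that pattern (the track and version labels, the names, and the image tags are illustrative): the canary Deployment's selector is only a subset of its template labels, and a Service shared with the stable Deployment selects on app alone, so it spreads traffic across both tracks.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary            # selector is a subset of the template labels
  template:
    metadata:
      labels:
        app: nginx
        track: canary
        version: "1.16"        # extra template label, not present in matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.0
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx                 # no track key, so it matches stable and canary Pods
  ports:
  - port: 80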
...If they didn't match, any pod created wouldn't match the selector, and I'd imagine K8s would go on creating new pods until every node is full.
Your Deployment will not proceed; it will fail with the error message "selector" does not match template "labels", and no Pod will be created.
...it feels like any label in the template metadata should automatically be in the selector matchLabels.
Labels under template.metadata are used for many purposes and not only by the Deployment; for example, other components (such as a CNI) may add labels to Pods on the fly. Labels meant for the selector should be minimal and specific.
...In a Deployment, under what circumstances would the matchLabels in the selector not precisely match the template metadata labels?
Labels under spec.selector.matchLabels should match ones under spec.template.metadata.labels. You can have labels under spec.template.metadata.labels that are not present under spec.selector.matchLabels.

Pod is not getting selected by Deployment selector

I have this Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-webserver-nginx
  annotations:
    description: This is a demo deployment for nginx webserver
  labels:
    app: deployment-webserver-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deployment-webserver-pods
  template:
    metadata:
      labels:
        app: deployment-webserver-pods
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
My understanding of this Deployment object is that any Pod with the label app: deployment-webserver-pods will be selected. Of course, this Deployment object is creating 3 replicas, but I wanted to add one more Pod explicitly, so I created a Pod object with the label app: deployment-webserver-pods; below is its Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: deployment-webserver-nginx-extra-pod
  labels:
    app: deployment-webserver-pods
spec:
  containers:
  - name: nginx-alpine-container-1
    image: nginx:alpine
    ports:
    - containerPort: 81
My expectation was that continuously running Deployment Controller will pick this new Pod, and when I do kubectl get deploy then I will see 4 pods running. But that didn't happen.
I even tried to first create this pod with this label, and then created my Deployment and thought that maybe now this explicit Pod will be picked but still that didn't happen.
Don't Labels and Selectors work like this?
I know I can scale by deployment to 4 Replicas, but I am trying to understand how Pods / other Kubernetes objects are selected using Labels and Selectors.
From the official docs:
Note: You should not create other Pods whose labels match this
selector, either directly, by creating another Deployment, or by
creating another controller such as a ReplicaSet or a
ReplicationController. If you do so, the first Deployment thinks that
it created these other Pods. Kubernetes does not stop you from doing
this.
As described further in docs, it is not recommended to scale replicas of the deployments using the above approach.
Another important point to note from same section of docs:
If you have multiple controllers that have overlapping selectors, the
controllers will fight with each other and won't behave correctly.
My expectation was that continuously running Deployment Controller will pick this new Pod, and when I do kubectl get deploy then I will see 4 pods running. But that didn't happen.
The Deployment controller does not work like that; it listens for Deployment resources and "drives" them to the desired state. That typically means that if anything changes in the template: part, a new ReplicaSet is created with the requested number of replicas. You cannot add a Pod to a Deployment in any way other than changing replicas: - each instance is created from the same Pod template and is identical.
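If the goal is simply a fourth identical Pod, the usual approach is to change replicas:, either by editing the manifest or imperatively, e.g. (using the Deployment name from the question):
kubectl scale deployment deployment-webserver-nginx --replicas=4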
Doesn't Labels and Selectors work like this?
... but I am trying to understand how Pods / other Kubernetes objects are selected using Labels and Selectors.
Yes, Labels and Selectors are used for many things in Kubernetes, but not for everything. When you create a Deployment with a label, a Pod with the same label, and finally a Service with a matching selector, then traffic addressed to that Service will be distributed across the instances of your Deployment as well as your extra Pod.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: deployment-webserver-pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
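You can check which Pods the Service picked up (including the extra one) by listing its endpoints, which show one address per matching, ready Pod:
kubectl get endpoints my-service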
Labels and Selectors are also useful for management when using e.g. kubectl. You can add labels for teams or apps and then select all Deployments or Pods belonging to that team or app (e.g. if the app consists of an app Deployment and a cache Deployment), e.g.:
kubectl get pods -l team=myteam,app=customerservice
My expectation was that continuously running Deployment Controller
will pick this new Pod, and when I do kubectl get deploy then I will
see 4 pods running. But that didn't happen.
Kubernetes is a system that operates declaratively rather than imperatively, which means you write down the desired state of the application in the cluster, typically in a YAML file, and these declared desired states define all of the pieces of your application.
If a cluster were configured imperatively, the way you are expecting it to be, it would be very difficult to understand and replicate how the cluster came to be in that state.
Just to add to the above explanations: if we were to create and manage Pods manually, what would be the purpose of having controllers in K8s?
My expectation was that continuously running Deployment Controller
will pick this new Pod, and when I do kubectl get deploy then I will
see 4 pods running. But that didn't happen.
As per your YAML, replicas: 3 was already set, so the Deployment would not take on a new Pod as a fourth replica.

service selector vs deployment selector matchlabels

I understand that services use a selector to identify which pods to route traffic to by their labels.
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  ports:
  - name: tcp
    protocol: TCP
    port: 443
    targetPort: 443
  selector:
    app: nginx
That's all well and good.
Now, what is the difference between this selector and the spec.selector of the Deployment? I understand that it is used so that the Deployment can match and manage its Pods.
However, I don't understand why I need the extra matchLabels declaration and can't just do it like in the Service. What's the use of this, semantically?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Thanks in advance
In the Service's spec.selector, you can identify which pods to route traffic to only by their labels.
On the other hand, in the Deployment's spec.selector you have two ways to express which Pods the Deployment manages: matchExpressions and matchLabels.
How Deployment uses spec.selector
When a Deployment's Pod template is changed, a new ReplicaSet is created. The ReplicaSet is responsible for managing the Pods, and it uses spec.selector to know which Pods it should manage.
Example:
If replicas: 1 is changed in the Deployment to e.g. replicas: 2, the ReplicaSet observes the Pods using spec.selector to find Pods with matching labels. It initially sees only 1 matching replica, but its desired state is now replicas: 2, so it creates one additional Pod from the template in the Deployment.
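You can inspect this on a live cluster; a sketch, assuming the app: nginx labels from the question (replace <replicaset-name> with the name printed by the first command):
kubectl get replicaset -l app=nginx
kubectl describe replicaset <replicaset-name>   # the Selector: field shows which Pods it manages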
Selector syntax
There are two ways to declare the labels under spec.selector in a Deployment:
matchLabels - you declare the labels
matchExpressions - you write an expression for labels
See kubectl explain deployment.spec.selector for full explanation of spec.selector alternatives.
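For example, the selector from the question could equally be written with matchExpressions; this sketch is equivalent to the matchLabels form above, since In with a single value behaves like equality:
selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx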
Labels and Selectors
Labels and Selectors are a generic concept in Kubernetes and are used in multiple places. Another example is how you can filter which resources you want to see or use with kubectl. E.g. you can select the Pods for an app with:
kubectl get pod -l=app=myappname
(if your Pods are labelled with app: myappname).
why i need the extra matchLabels declaration and cant just do it like in the service. Whats the use of this semantically?
Because the Service spec only supports equality-based selectors, while the Deployment is a newer resource that supports two syntaxes (equality-based and set-based).
The API currently supports two types of selectors: equality-based and set-based. A label selector can be made of multiple requirements which are comma-separated. In the case of multiple requirements, all must be satisfied so the comma separator acts as a logical AND (&&) operator.
Reference
The Service spec uses just the "equality-based" label selector syntax.
Newer resources, such as Job, Deployment, ReplicaSet, and DaemonSet, support set-based requirements...
Reference
My understanding is that earlier the only supported syntax was the equality-based one, like we have in the Service spec, and that now, when the resource you are using supports the newer syntax, you are required to use matchLabels or matchExpressions.
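On the command line, kubectl's -l flag accepts both syntaxes; a small sketch (label keys and values are illustrative):
kubectl get pods -l app=nginx                  # equality-based
kubectl get pods -l 'app in (nginx, httpd)'    # set-based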

how to ignore random kubernetes pod name in deployment file

Below is my kubernetes deployment file -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: boxfusenew
  labels:
    app: boxfusenew
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: boxfusenew
    spec:
      containers:
      - image: sk1997/boxfuse:latest
        name: boxfusenew
        ports:
        - containerPort: 8080
In this deployment file, the name boxfusenew is specified under the container tag. I want the Pod generated by this deployment file to have the name boxfusenew, but the Deployment is attaching a random suffix to it, as in boxfusenew-5f6f67fc5-kmb7z.
Is it possible to avoid the random values in the Pod name through the deployment file?
Not really, unless you create the Pod itself and not a deployment.
According to Kubernetes documentation:
Each object in your cluster has a Name that is unique for that type of resource. Every Kubernetes object also has a UID that is unique across your whole cluster.
For example, you can only have one Pod named myapp-1234 within the same namespace, but you can have one Pod and one Deployment that are each named myapp-1234.
For non-unique user-provided attributes, Kubernetes provides labels and annotations.
If you create a Pod with a specific unique label, you can use this label to query the Pod, so there is no need to know the exact name.
You can use a jsonpath to query the values that you want from your Pod under that specific deployment. I've created an example that may give you an idea:
kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.app=="boxfusenew")].metadata.name}'
This would return the name of the Pod which contains the label app=boxfusenew. You can take a look into some other examples of jsonpath here and here.
First, what kind of use case do you want to achieve? If you simply want to get the Pods belonging to a certain Deployment, you can use labels and a selector. For example:
kubectl -n <namespace> get po -l <key>=<value>

how to set different environment variables of Deployment replicas in kubernetes

I currently have 4 k8s Pods, having set the replicas of a Deployment to 4.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
Each Pod will fetch items from a database and consume them; the items in the database have a column class_name.
Now I want each Pod to get only one class_name's items.
For example, pod1 should only get items whose class_name equals class_name_1, and pod2 only items whose class_name equals class_name_2...
So I want to pass a different class_name as an environment variable to each of the Deployment's Pods. Can I define that in the YAML file of the Deployment?
Or is there any other way to achieve my goal? (Like something other than a Deployment in k8s.)
For distributed job processing, Deployments are not very good because they don't have any kind of ordering or consistent Pod hostnames. You'd be better off using a StatefulSet, because StatefulSets have consistent naming, like pod-0, pod-1, pod-2, and you can rely on that hostname index.
For example, if class_name_idx is the index of the class name in the list of class names, num_replicas is the number of replicas in the StatefulSet, and pod_idx is the index of the Pod in the StatefulSet, then a Pod should process an item only if: class_name_idx % num_replicas == pod_idx.
Unfortunately the number of StatefulSet replicas cannot be obtained within the Pod dynamically using the Downward API, so you can either hardcode it or use the Kubernetes API to obtain it from the cluster.
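A minimal sketch of that check in a shell entrypoint, assuming the StatefulSet is named worker, the replica count is hardcoded, and CLASS_NAME_IDX would normally be computed per item by your application (all names here are illustrative):
# worker-2 -> ordinal 2
POD_IDX="${HOSTNAME##*-}"
NUM_REPLICAS=4                       # hardcoded; see the note above
CLASS_NAME_IDX=1                     # normally derived per item by the application
if [ $(( CLASS_NAME_IDX % NUM_REPLICAS )) -eq "$POD_IDX" ]; then
  echo "this pod handles the item"
fi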
Neither a Deployment nor anything else will help you achieve your goal directly. Your goal is a kind of logic, and it should be implemented in your application code.
Since a Deployment is just a set of instances of the same application, the only thing that might be useful for you is using multiple Deployments, each for its own task. The first could get class_name_1 items, while the others get class_name_2, class_name_3, etc. But it is not a good idea.
I would not recommend this approach, but the closest thing to what you want is to use a StatefulSet and use the Pod name as the index.
When you deploy a StatefulSet, the Pods will be named after the StatefulSet name, as in the following sample:
apiVersion: v1
kind: Service
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  type: NodePort
  ports:
  - port: 8080
    name: web
  selector:
    app: kuard
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kuard
spec:
  serviceName: "kuard"
  replicas: 3
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:1
        ports:
        - containerPort: 8080
          name: web
The pods created by the statefulset will be named as:
kuard-0
kuard-1
kuard-2
This way you could either name the StatefulSet according to the class, i.e. class-name, so the Pods created will be class-name-0 and so on (replacing the _ with -), or just strip the name to get the index at the end.
To get the name, just read the environment variable HOSTNAME.
This naming is consistent, so you can be sure you always have 0, 1, 2, 3 after the name. And if number 2 goes down, it will be recreated with the same name.
As I said, I would not recommend this approach, because you tie the infrastructure to your code, and you also can't scale (if needed), because each instance is unique and new instances would get new ids.
A better approach would be to use one Deployment for each class and pass the proper values as environment variables.
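A minimal sketch of that idea, one such Deployment per class (the consumer-class-1 name, the CLASS_NAME variable, and the my-consumer:latest image are all illustrative assumptions; the application is expected to read CLASS_NAME):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer-class-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer
      class: class-name-1
  template:
    metadata:
      labels:
        app: consumer
        class: class-name-1
    spec:
      containers:
      - name: consumer
        image: my-consumer:latest    # illustrative image
        env:
        - name: CLASS_NAME           # read by the application to filter items
          value: class_name_1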