How to set different environment variables for Deployment replicas in Kubernetes

I currently have 4 k8s pods, created by setting the replicas of a Deployment to 4:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
Each pod fetches items from a database and consumes them; the items in the database have a column class_name.
Now I want each pod to handle only one class_name's items.
For example, pod1 should only get items whose class_name equals class_name_1, pod2 only items whose class_name equals class_name_2, and so on.
So I want to pass a different class_name as an environment variable to each pod of the Deployment. Can I define this in the Deployment's YAML file?
Or is there any other way to achieve my goal (something other than a Deployment in k8s)?

For distributed job processing, Deployments are not very good, because they don't have any kind of ordering or consistent pod hostnames. You'd better use a StatefulSet, because its pods have consistent names like pod-0, pod-1, pod-2, and you can rely on that hostname index.
For example, if class_name_idx is the index of a class name in the list of class names, num_replicas is the number of replicas in the StatefulSet and pod_idx is the index of the pod in the StatefulSet, then a pod should run the job only if: class_name_idx % num_replicas == pod_idx.
Unfortunately, the number of StatefulSet replicas cannot be obtained dynamically inside the pod using the Downward API, so you can either hardcode it or obtain it from the cluster via the Kubernetes API.
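A rough sketch of that check (the class name list and the hardcoded replica count are assumptions for illustration, not part of the question):
# Derive the pod ordinal from the StatefulSet hostname, e.g. worker-2 -> 2,
# and only handle class names whose index maps to this pod.
POD_IDX="${HOSTNAME##*-}"      # StatefulSet pods are named <statefulset-name>-<ordinal>
NUM_REPLICAS=4                 # hardcoded here; could be fetched via the Kubernetes API instead

idx=0
for class_name in class_name_1 class_name_2 class_name_3 class_name_4; do
  if [ $((idx % NUM_REPLICAS)) -eq "$POD_IDX" ]; then
    echo "this pod consumes items with class_name=$class_name"
  fi
  idx=$((idx + 1))
done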

Neither a Deployment nor anything else will achieve your goal by itself. Your goal is a piece of application logic and should be implemented in your application's code.
Since a Deployment is just several instances of the same application, the only thing that might be useful for you is using multiple Deployments, each for its own task: the first gets class_name_1 items, while the others get class_name_2, class_name_3, etc. But it is not a good idea.

I would not recommend this approach, but the closest thing to doing what you want is using a StatefulSet and using the pod name as the index.
When you deploy a StatefulSet, the pods are named after the StatefulSet name, as in the following sample:
apiVersion: v1
kind: Service
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  type: NodePort
  ports:
  - port: 8080
    name: web
  selector:
    app: kuard
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kuard
spec:
  serviceName: "kuard"
  replicas: 3
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:1
        ports:
        - containerPort: 8080
          name: web
The pods created by the statefulset will be named as:
kuard-0
kuard-1
kuard-2
This way you could either name the StatefulSet after the classes (i.e. class-name, so the pods created will be class-name-0, class-name-1, and so on, replacing the _ with -), or just strip the name to get the index at the end.
To get the pod name, just read the environment variable HOSTNAME.
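If you would rather have it as an explicit variable, a small sketch using the Downward API (the variable name POD_NAME is chosen only for illustration) would be to add this to the container spec:
        env:
        - name: POD_NAME                 # arbitrary name, chosen for illustration
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # resolves to e.g. kuard-0, kuard-1, ...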
This naming is consistent, so you can be sure you always have 0, 1, 2, 3 after the name, and if pod 2 goes down it will be recreated with the same name.
Like I said, I would not recommend this approach, because it ties the infrastructure to your code, and it also doesn't scale well (if needed), since each instance is unique and adding new instances would introduce new indices.
A better approach would be using one Deployment for each class and passing the proper values as environment variables.
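A minimal sketch of that approach (the Deployment name, image and CLASS_NAME variable are placeholders, not something from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer-class-name-1        # one Deployment per class
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer-class-name-1
  template:
    metadata:
      labels:
        app: consumer-class-name-1
    spec:
      containers:
      - name: consumer
        image: my-consumer:latest    # placeholder image
        env:
        - name: CLASS_NAME           # the application reads this and filters items by it
          value: class_name_1
The other Deployments would be identical except for their names and the value of CLASS_NAME.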


Pod is not getting selected by Deployment selector

I have this Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-webserver-nginx
  annotations:
    description: This is a demo deployment for nginx webserver
  labels:
    app: deployment-webserver-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deployment-webserver-pods
  template:
    metadata:
      labels:
        app: deployment-webserver-pods
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
My understanding of this Deployment object is that any Pod with the label app: deployment-webserver-pods will be selected. Of course, this Deployment object creates 3 replicas, but I wanted to add one more Pod explicitly, so I created a Pod object and gave it the label app: deployment-webserver-pods; below is its Pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: deployment-webserver-nginx-extra-pod
  labels:
    app: deployment-webserver-pods
spec:
  containers:
  - name: nginx-alpine-container-1
    image: nginx:alpine
    ports:
    - containerPort: 81
My expectation was that the continuously running Deployment controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
I even tried creating this pod with this label first and then creating my Deployment, thinking that maybe now this explicit Pod would be picked up, but still that didn't happen.
Don't Labels and Selectors work like this?
I know I can scale my Deployment to 4 replicas, but I am trying to understand how Pods / other Kubernetes objects are selected using Labels and Selectors.
From the official docs:
Note: You should not create other Pods whose labels match this
selector, either directly, by creating another Deployment, or by
creating another controller such as a ReplicaSet or a
ReplicationController. If you do so, the first Deployment thinks that
it created these other Pods. Kubernetes does not stop you from doing
this.
As described further in the docs, it is not recommended to scale a Deployment's replicas using the above approach.
Another important point to note from the same section of the docs:
If you have multiple controllers that have overlapping selectors, the
controllers will fight with each other and won't behave correctly.
My expectation was that the continuously running Deployment controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
The Deployment controller does not work like that; it listens for Deployment resources and "drives" them to the desired state. That typically means that if anything changes in the template: part, a new ReplicaSet is created with the requested number of replicas. You cannot add a Pod to a Deployment in any other way than changing replicas: - every instance is created from the same Pod template and is identical.
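So, to get a fourth replica, you either edit replicas: in the manifest and re-apply it, or scale the existing Deployment (using the name from the question):
kubectl scale deployment deployment-webserver-nginx --replicas=4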
Don't Labels and Selectors work like this?
... but I am trying to understand how Pods / other Kubernetes objects are selected using Labels and Selectors.
Yes, Labels and Selectors are used for many things in Kubernetes, but not for everything. When you create a Deployment with a label, a Pod with the same label, and finally a Service with a matching selector, then traffic addressed to that Service will be distributed to the instances of your Deployment as well as to your extra Pod.
Example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: deployment-webserver-pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Labels and Selectors are also useful for management, e.g. when using kubectl. You can add labels for teams or apps, and then select all Deployments or Pods belonging to that team or app (e.g. if the app consists of an app-deployment and a cache-deployment), e.g.:
kubectl get pods -l team=myteam,app=customerservice
My expectation was that the continuously running Deployment controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
Kubernetes is a system that operates declaratively rather than imperatively, which means you write down the desired state of the application in the cluster, typically in a YAML file, and these declared desired states define all of the pieces of your application.
If a cluster were configured imperatively, the way you are expecting it to be, it would be very difficult to understand and reproduce how the cluster came to be in that state.
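To illustrate the difference (the file and deployment names here are just examples):
# declarative: describe the desired state in YAML and apply it; re-apply on every change
kubectl apply -f deployment-webserver-nginx.yaml

# imperative: tell the cluster exactly what to do, step by step
kubectl create deployment webserver --image=nginx:alpine
kubectl scale deployment webserver --replicas=3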
Just to add to the above explanations: if we are trying to create and manage pods manually, then what is the purpose of having controllers in K8s?
My expectation was that the continuously running Deployment controller would pick up this new Pod, and that when I do kubectl get deploy I would see 4 pods running. But that didn't happen.
As per your YAML, replicas: 3 was already set, so the Deployment will not adopt a new pod as a 4th replica.

Can I have a K8s pod per user/firm?

Is there a way we can have a K8s pod per user/per firm? I realise that per-user/per-firm grouping mixes business-level semantics with infrastructure, but say I had this need for regulatory reasons, etc., to keep things separate. Then, is there a way to create a pod on the fly when a user logs in for the first time, hold a reference to this pod, and route any further requests to the relevant pod, which will host a set of containers, each running an instance of one of the modules?
Is this even possible?
If possible, what are those identifiers that can be injected into the pod on the fly that I could use to identify that this is USER_A_POD vs USER_B_POD or FIRM_A_POD vs FIRM_B_POD?
Effectively, I need a pod template that helps me create identical single-replica pods whose only difference is that each serves traffic related to one user/one firm only.
Generally, if you want to send traffic to a specific pod, say from a Kubernetes Service, you would use Labels and Selectors. For example, using the selector app: usera-app in the Service:
apiVersion: v1
kind: Service
metadata:
  name: usera-service
spec:
  selector:
    app: usera-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Then the Deployment for your pods would use the label app: usera-app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usera-deployment
spec:
  selector:
    matchLabels:
      app: usera-app
  replicas: 2
  template:
    metadata:
      labels:
        app: usera-app
    spec:
      containers:
      - name: myservice
        image: nginx
        ports:
        - containerPort: 80
How you assign your pods and deployments is up to you and whatever configuration you may use. If you'd like to force-create some of the labels in deployments/pods, you can take a look at MutatingAdmissionWebhooks.
If you are looking for projects to facilitate all this, you can take a look at:
Gatekeeper, which is an implementation of the Open Policy Agent for Kubernetes admission (still in alpha as of this writing).
Other tools that can help you with attestation and admission mechanisms (they would have to be adapted for labels):
Kritis
Portieris
Yes, you can create multiple virtual clusters with namespaces, one for each user.
https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
Namespaces are the way to divide a cluster between multiple users.
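A minimal sketch of a per-firm namespace (the name firm-a is purely illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: firm-a          # one namespace per firm/user, created on demand
  labels:
    firm: firm-a        # label used to find this firm's resources later
All of a firm's pods, services, etc. would then be created inside that namespace, which also gives you a natural boundary for quotas and network policies.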

Why labels are mentioned three times in a single deployment

I've gone over the following documentation page: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
The example deployment yaml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
We can see the label app: nginx mentioned here in three different places.
Why do we need each of them? I had a hard time understanding it from the official documentation.
The first label is for the Deployment itself; it labels that particular Deployment. Let's say you want to delete that Deployment; then you run the following command:
kubectl delete deployment -l app=nginx
This will delete the entire deployment.
The second label is the selector: matchLabels, which tells resources (a Service, etc.) which pods to match by their labels. So let's say you want to create a Service that covers all the pods having the label app=nginx; then you provide the following definition:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
The above Service will use its selector and bind to the pods which have the label app: nginx assigned to them.
The third label is the pod template's label; the template is actually a podTemplate and describes the pods that are launched. So let's say you have a two-replica Deployment; k8s will launch 2 pods with the labels specified in template: metadata: labels. This is a subtle but important difference: you can have different labels for the Deployment and for the pods generated by that Deployment.
First label:
It is the Deployment's label, which is used to select the Deployment. You can use the following command with the first label:
kubectl get deployment -l app=nginx
Second label:
It is not a label. It is a label selector, used to select pods with the label app=nginx. It is used by the ReplicaSet.
Third label:
It is the pod label, used to identify pods. The ReplicaSet uses it (through the label selector) to maintain the desired number of replicas.
It is also used to select pods with the following command:
kubectl get pods -l app=nginx
As we know, labels are there to identify resources.
The first label identifies the Deployment itself.
The third one falls under the Pod template section, so it is specific to the Pods.
The second one, i.e. the matchLabels, is used to tell Services, the ReplicaSet and other resources to act on resources matching the specified label conditions.
While the first and third ones are label assignments to the Deployment and the Pods respectively, the second one is a matching condition rather than an assignment.
Though all 3 have the same labels in real-world examples, the first one can differ from the second and third ones. But the second and third ones usually have to be identical, since the second is the condition that acts upon the third.
.metadata.labels is for labeling the Deployment object itself; you don't necessarily need it, but like the other answers said, it helps you organize objects.
.spec.selector tells the Deployment (under the hood, the ReplicaSet object) how to find the pods to manage. In your example, it will manage pods with the label app: nginx.
But how do you tell the ReplicaSet controller to create pods with that label in the first place? You define that in the pod template, .spec.template.metadata.labels.
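Putting it together, the same manifest annotated with the three places (comments added only for illustration):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx            # 1) label on the Deployment object itself
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx          # 2) selector: which pods this Deployment/ReplicaSet manages
  template:
    metadata:
      labels:
        app: nginx        # 3) label stamped onto every pod created from this template
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80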

Why should I specify service before deployment in a single Kubernetes configuration file?

I'm trying to understand why the Kubernetes docs recommend specifying the service before the deployment in one configuration file:
The resources will be created in the order they appear in the file. Therefore, it’s best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
Does it mean spreading the pods between Kubernetes cluster nodes?
I tested with the following configuration, where a deployment is located before a service, and the pods are distributed between nodes without any issues.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: incorrect-order
  namespace: test
spec:
  selector:
    matchLabels:
      app: incorrect-order
  replicas: 2
  template:
    metadata:
      labels:
        app: incorrect-order
    spec:
      containers:
      - name: incorrect-order
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: incorrect-order
  namespace: test
  labels:
    app: incorrect-order
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: incorrect-order
Another explanation is that some environment variables with the service URL will not be set for pods in this case. However, that also works fine when the configuration is in one file, like the example above.
Could you please explain why it is better to specify the service before the deployment in the case of a single configuration file? Or maybe it is an outdated recommendation.
If you use DNS for service discovery, the order of creation doesn't matter.
In the case of environment variables (the second way K8s offers service discovery), the order matters, because once those vars are passed to the starting pod, they cannot be modified later if the service definition changes.
So if your service is deployed before your pod starts, the service env vars are injected into the linked pod.
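For illustration, a pod started after a Service named my-service exposing port 80 (the name and cluster IP here are hypothetical) would see env vars such as:
MY_SERVICE_SERVICE_HOST=10.0.0.11   # hypothetical cluster IP
MY_SERVICE_SERVICE_PORT=80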
If you create a Pod/Deployment resource with labels, this resource will be exposed through a Service once the latter is created (with the proper selector indicating which resources to expose).
You are correct in that it affects the spread among the worker nodes.
Deployments without a Service will simply be scheduled onto the nodes with the least CPU/memory allocation. For instance, a brand new, empty node will get all the new pods from a new deployment.
With a Deployment that also has a Service, the scheduler tries to spread the pods between nodes, disregarding the CPU/memory load (within limits), to help the Service survive better.
It puzzles me that a Deployment on its own doesn't cause an optimal spread, but it doesn't, not yet at least.
This is the answer from the official documentation:
The resources will be created in the order they appear in the file.
Therefore, it's best to specify the service first, since that will
ensure the scheduler can spread the pods associated with the service
as they are created by the controller(s), such as Deployment.
Kubernetes Documentation/Concepts/Cluster/Administration/Managing Resources

How to configure a Kubernetes Multi-Pod Deployment

I would like to deploy an application cluster by managing my deployment via the k8s Deployment object. The documentation has me extremely confused. My basic layout has the following components that scale independently:
API server
UI server
Redis cache
Timer/Scheduled task server
Technically, all 4 above belong in separate pods that are scaled independently.
My questions are:
1. Do I need to create pod.yml files and then somehow reference them in the deployment.yml file, or can a deployment file also embed pod definitions?
2. The K8s documentation seems to imply that the spec portion of a Deployment is equivalent to defining one pod. Is that correct? What if I want to declaratively describe multi-pod deployments? Do I need multiple deployment.yml files?
Pagid's answer has most of the basics. You should create 4 Deployments for your scenario. Each Deployment will create a ReplicaSet that schedules and supervises the collection of PODs for that Deployment.
Each Deployment will most likely also require a Service in front of it for access. I usually create a single yaml file that has a Deployment and the corresponding Service in it. Here is an example of an nginx.yaml that I use:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    name: nginx
    targetPort: 80
    nodePort: 32756
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginxcontainer
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
Here is some additional information for clarification:
A POD is not a scalable unit. A Deployment that schedules PODs is.
A Deployment is meant to represent a single group of PODs fulfilling a single purpose together.
You can have many Deployments work together in the virtual network of the cluster.
For accessing a Deployment that may consist of many PODs running on different nodes you have to create a Service.
Deployments are meant to contain stateless services. If you need to store state, you need to create a StatefulSet instead (e.g. for a database service).
You can use the Kubernetes API reference for the Deployment and you'll find that the spec->template field is of type PodTemplateSpec, along with the related comment (Template describes the pods that will be created.), which answers your questions. A longer description can of course be found in the Deployment user guide.
To answer your questions...
1) The Pods are managed by the Deployment and defining them separately doesn't make sense as they are created on demand by the Deployment. Keep in mind that there might be more replicas of the same pod type.
2) For each of the applications in your list, you'd have to define one Deployment - which also makes sense when it comes to different replica counts and application rollouts.
3) You haven't asked this, but it's related - along with separate Deployments, each of your applications will also need a dedicated Service so the others can access it.
Additional information:
API server: use a Deployment
UI server: use a Deployment
Redis cache: use a StatefulSet
Timer/Scheduled task server: maybe use a StatefulSet (if your service keeps some state)
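As a rough sketch of the Redis case (names and image are illustrative, not taken from the answers above), a StatefulSet paired with a headless Service could look like:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  clusterIP: None          # headless Service, required by the StatefulSet
  selector:
    app: redis
  ports:
  - port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6
        ports:
        - containerPort: 6379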