Kubernetes: set deployment number of replicas based on namespace

I've split our Kubernetes cluster into two namespaces, staging and production, aiming to have production deployments run two replicas (for rolling deployments; autoscaling comes later) and staging run a single replica.
Other than having one deployment configuration per namespace, I was wondering whether or not we could set the default number of replicas per deployment, per namespace?
When creating the deployment config, if you don't specify the number of replicas, it will default to one. Is there a way of defaulting it to two on the production namespace?
If not, is there a recommended approach for this which will prevent the need to have a deployment config per namespace?
One way of doing this would be to scale the deployment up to two replicas, manually, in the production namespace, once it has been created for the first time, but I would prefer to skip any manual steps.
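For reference, that manual step would be something like the following (the deployment name is a placeholder):
kubectl scale deployment <your-app> --replicas=2 -n production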

It is not possible to set a different number of replicas per namespace in a single deployment.
But you can have two different deployment files, one per namespace, e.g. <your-app>-production.yaml and <your-app>-staging.yaml.
In these files you can set any custom values and settings that you need.
For example:
<your-app>-production.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-app>
  namespace: production   # here is the namespace
  ...
spec:
  replicas: 2              # here is the count of replicas of your application
  template:
    spec:
      containers:
      - name: <your-app-pod-name>
        image: <your-app-image>
        ...
<your-app>-staging.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <your-app>
  namespace: staging   # here is the namespace
  ...
spec:
  replicas: 1          # here is the count of replicas of your application
  template:
    spec:
      containers:
      - name: <your-app-pod-name>
        image: <your-app-image>
        ...

I don't think you can avoid having two deployments, but you can get rid of the duplicated code by using Helm templates (https://docs.helm.sh/chart_template_guide). Then you can define a single deployment YAML and substitute different values when you deploy, with an if statement.
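As a rough sketch of that approach (the environment value name and chart layout here are assumptions, not taken from the answer above):

# templates/deployment.yaml (excerpt)
spec:
  {{- if eq .Values.environment "production" }}
  replicas: 2
  {{- else }}
  replicas: 1
  {{- end }}

# deploy the same chart into each namespace with different values:
helm install my-app ./chart -n production --set environment=production
helm install my-app ./chart -n staging --set environment=staging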

When creating the deployment config, if you don't specify the number of replicas, it will default to one. Is there a way of defaulting it to two on the production namespace?
Actually, there are two ways to do it, but both of them involve coding.
Admission Controllers:
This is the recommended way of assigning default values to fields.
When an object is created in Kubernetes, it passes through a chain of admission controllers, one of which is the MutatingWebhook.
MutatingWebhook has been in beta since v1.9. This admission controller modifies (mutates) the object before it is actually created (or modified/deleted), for example by assigning default values to some fields. You can set the minimum number of replicas here.
You have to implement an admission server that receives requests from Kubernetes and returns the modified object in its response.
Here is a sample admission server implemented by OpenShift: kubernetes-namespace-reservation.
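For a rough idea of the registration side, a MutatingWebhookConfiguration scoped to Deployments in the production namespace could look roughly like this (this is a sketch using the current v1 API; the names, service and path are placeholders, and the admission server behind it still has to be implemented and served over TLS):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: default-replicas-webhook
webhooks:
- name: default-replicas.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: default-replicas-webhook    # your admission server's Service (placeholder)
      namespace: kube-system
      path: /mutate
    caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["deployments"]
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: production   # only mutate objects created in production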
Deployment Controller:
This is comparatively easier, but it is somewhat of a hack on the deployment procedure.
You can write a controller which watches Deployments and, whenever a Deployment is created, performs some task. Here, you could update the Deployment with whatever minimum values you wish.
You can see the official Sample Pod Controller.
If both of these seem like a lot of work, it is better to just set the fields carefully each time for each deployment.

Related

In a Kubernetes deployment yaml, why do we have to match the template labels to the deployment labels in the selector?

I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?
Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      #...etc
I might be missing some key understanding of k8s or yamls.
I tried having the template with no labels, and it seems to work, but I don't understand why. Kubernetes could be auto-magically inserting the labels.
Technically, the parameter matchLabels decides which Pods belong to the given Deployment (and the underlying ReplicaSet). In practice, I have never seen a Deployment with labels different from its matchLabels. So, the reason might be uniformity with other Kubernetes resources (like Service, where the matchLabels makes more sense).
I recommend reading the blog post matchLabels, labels, and selectors explained in detail, for beginners.
Let's simplify labels, selectors and template labels first.
The Labels in the metadata section are assigned to the deployment itself.
The Labels in the .spec.template section are assigned to the pods created by the deployment. These are actually called PodTemplate labels.
The selectors provide uniqueness to your resource. They are used to identify the resources that match the labels in the .spec.selector.matchLabels section.
Now, it is not mandatory to have all the PodTemplate labels in the matchLabels section. A pod can have many labels, but just one of the matchLabels is enough to identify the pods. Here's a use case to understand why it has to be used:
"Let's say you have deployment X which creates two pods with label nginx-pods and image nginx, and another deployment Y which applies to pods with the same label nginx-pods but uses the image nginx:alpine. If deployment X is running and you run deployment Y afterwards, it will not create new pods; instead, it will replace the existing pods with the nginx:alpine image. Both deployments will identify the pods, as the labels on the pods match the labels in both deployments' .spec.selector.matchLabels."
Because the Deployment.metadata.labels belong to the Deployment resource, and the Deployment.spec.template.metadata.labels to the Pods which are handled by the Deployment controller. The Deployment controller knows which Pods belong to which Deployment based on the labels on the Pod resources.
This is why you have to specify the labels this way.
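As a quick way to see this relationship on a live cluster, you can list the pods a selector matches and compare it to the Deployment's own matchLabels (using the api-backend labels from the question):

kubectl get pods -l app=api-backend
kubectl get deployment api-backend -o jsonpath='{.spec.selector.matchLabels}'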

Kubernetes: Set environment variable in all pods

Is it possible to provide environment variables which will be set in all pods, instead of configuring them in each pod's spec?
If not natively possible in Kubernetes, what would be an efficient method to accomplish it? We have Helm, but that still requires a lot of duplication.
This old answer suggested "PodPreset" which is no longer part of Kubernetes: Kubernetes - Shared environment variables for all Pods
You could do this using a mutating admission webhook to inject the environment variable into the pod manifest.
There are more details on implementing webhooks here.
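To give a feel for what such a webhook does: its AdmissionReview response carries a (base64-encoded) JSONPatch that the API server applies to the Pod. Conceptually, the patch could look like this (the variable name is just an example, and it assumes the first container already has an env list):

# JSONPatch returned by the webhook (shown here unencoded, in YAML form)
- op: add
  path: /spec/containers/0/env/-
  value:
    name: VAR_NAME
    value: "var_value"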
I am not sure if you can do that for EVERY single pod in the cluster (if that is what you meant), but you CAN do it for every single pod within an application or service.
For example, via a Deployment, you can set a variable within the pod template, and all replicas will carry that value.
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  replicas: 5
  template:
    metadata:
      ...
    spec:
      containers:
      - image: nginx
        name: nginx
        ...
        env:
        - name: VAR_NAME     # <---
          value: "var_value" # <---
        ...
In this (edited) example, all 5 replicas of the nginx container will have the environment variable VAR_NAME set to the value var_value.
You could also use a ConfigMap (https://kubernetes.io/docs/concepts/configuration/configmap/) or Secrets (https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables) to set environment variables from a shared location, depending on your requirements.
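For instance, a shared ConfigMap can be pulled into each container with envFrom, so the variables are defined in one place (the names here are made up for the example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-env
data:
  VAR_NAME: "var_value"
---
# inside each Deployment's pod template:
    spec:
      containers:
      - name: nginx
        image: nginx
        envFrom:
        - configMapRef:
            name: shared-env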

Kubernetes: having a different env for one of the replicas in a service

A use case where one of the services must be scaled to 10 pods.
BUT, one of the pods must have different env variables (it does certain actions like DB operations and trigger handling; I don't want 10 triggers to be handled instead of 1 per DB change). For example, 9 pods have the env variable CHANGE=0 but one pod has the env variable CHANGE=1.
Also, I am resolving by service name, so changing the service name is not what I am looking for.
It sounds like you're trying to solve an issue with your app using Kubernetes.
The reason I say that is because the whole concept of "replicas" is to have identical instances. What you're actually saying is: "I have 10 identical pods but I want 1 of them to be different", and that's not how Kubernetes works.
So, you need to re-think the reason for which you need this environment variable to be different, what do you use it for. If you want to share the details maybe I can help you find an idiomatic way of doing this using Kubernetes.
The easiest way to do what you describe is to have two separate Services. One attaches to any "web" pod:
apiVersion: v1
kind: Service
metadata:
  name: myapp-web
spec:
  selector:
    app: myapp
    tier: web
The second attaches to only the master pod(s):
apiVersion: v1
kind: Service
metadata:
  name: myapp-master
spec:
  selector:
    app: myapp
    tier: web
    role: master
Then have two separate Deployments. One runs the single master pod (one replica); the other runs the nine web pods. Your administrative requests go to myapp-master, but general requests go to myapp-web.
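For illustration, the master Deployment could look something like this, tying it back to the CHANGE variable from the question (names, labels and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      tier: web
      role: master
  template:
    metadata:
      labels:
        app: myapp
        tier: web
        role: master
    spec:
      containers:
      - name: myapp
        image: <your-app-image>
        env:
        - name: CHANGE
          value: "1"    # the nine-replica web Deployment would set "0"

The nine-replica Deployment would be identical except for its name, replicas: 9, the absence of the role: master label, and CHANGE=0.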
As @omricoco suggests, you can come up with a couple of ways to restructure this. A job queue like RabbitMQ has the property that each job is done once (with retries if a job fails), so one setup is to run a queue like this, allow any server to accept administrative requests, but have them simply write a job into the queue. Then you can run a worker process (or several) to service these jobs.

Kubectl get deployments, no resources

I've just started learning Kubernetes. In every tutorial the writer generally uses "kubectl ... deployments" to control the newly created deployments. Now, with those commands (e.g. kubectl get deployments) I always get the response No resources found in default namespace., and I have to use "pods" instead of "deployments" to make things work (which works fine).
Now my question is: what is causing this to happen, and what is the difference between using a deployment and a pod? I've set the docker driver in the first minikube; does that have something to do with this?
First let's brush up some terminologies.
Pod - It's the basic building block for Kubernetes. It groups one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.
Deployment - It is a controller which wraps Pod(s) and manages their life cycle, reconciling actual state to desired state. There is one more layer in between Deployment and Pod, which is the ReplicaSet: a ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
The hierarchy looks like this: Deployment → ReplicaSet → Pod(s).
In your case, what might have happened:
Either you created a Pod, not a Deployment. Therefore, when you run kubectl get deployment you don't see any resources. Note that when you create a Deployment it in turn creates a ReplicaSet for you, which in turn creates the defined pods.
Or maybe you created your deployment in a different namespace; if that's the case, use this command to find your deployment in that namespace: kubectl get deploy NAME_OF_DEPLOYMENT -n NAME_OF_NAMESPACE
More information to clarify your concepts:
In the manifest below, the section inside spec.template is essentially your Pod manifest, the one you would write if you created the Pod manually instead of taking the Deployment route. As I said earlier, in simple terms Deployments are a wrapper around your Pods; anything outside the spec.template path is the configuration that defines how you want to manage (scaling, affinity, etc.) your Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Deployment is a controller providing a higher-level abstraction on top of Pods and ReplicaSets. A Deployment provides declarative updates for Pods and ReplicaSets. Deployments internally create ReplicaSets, within which the pods are created.
Use cases of Deployments are documented here.
One reason for No resources found in default namespace could be that you created the deployment in a specific namespace and not in default namespace.
You can see deployments in a specific namespace or in all namespaces via
kubectl get deploy -n namespacename
kubectl get deploy -A

What is the recommended way to get the pods of a Kubernetes deployment?

Especially considering all the asynchronous procedures involved with creating and updating a deployment, I find it difficult to reliably find the current pods associated with the current version of a given deployment.
Currently, I do:
1. Add unique labels to the deployment's template.
2. Get the revision number of the deployment.
3. Get all replica sets with the labels.
4. Filter them further to find the one with the correct revision number.
5. Extract the pod template hash from the replica set.
6. Get all pods with the labels plus the pod template hash.
This is awkward and complex. Besides, I am not sure that (4) and (6) are guaranteed to yield only the wanted objects. But I cannot filter by ownerReferences, can I?
Is there a more robust and simpler way?
When you create a Deployment, it creates a ReplicaSet, which creates the Pods.
The ReplicaSet contains an ownerReferences field which includes the name and the UID of the parent Deployment.
Pods contain the same field, linking to the parent ReplicaSet.
Here is an example of ReplicaSet info:
# kubectl get rs nginx-deployment-569477d6d8 -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
...
  name: nginx-deployment-569477d6d8
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deployment
    uid: acf5fe8a-5d0e-11e8-b14f-42010a8000fc
...
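As a rough sketch of following that chain with kubectl (this assumes the deployment's pods carry an app=nginx label, as in the usual nginx-deployment example; the pod-template-hash is the suffix of the ReplicaSet name):

# ReplicaSets belonging to the deployment, with their pod-template-hash suffix
kubectl get rs -l app=nginx

# Pods created by that ReplicaSet carry the matching pod-template-hash label
kubectl get pods -l app=nginx,pod-template-hash=569477d6d8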