Kubernetes: what's the difference between a Deployment and a ReplicaSet?

Both the ReplicaSet and the Deployment have the attribute replicas: 3. What's the difference between a Deployment and a ReplicaSet? Does a Deployment work via a ReplicaSet under the hood?
Configuration of the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    my-label: my-value
spec:
  replicas: 3
  selector:
    matchLabels:
      my-label: my-value
  template:
    metadata:
      labels:
        my-label: my-value
    spec:
      containers:
      - name: app-container
        image: my-image:latest
Configuration of the ReplicaSet:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
  labels:
    my-label: my-value
spec:
  replicas: 3
  selector:
    matchLabels:
      my-label: my-value
  template:
    metadata:
      labels:
        my-label: my-value
    spec:
      containers:
      - name: app-container
        image: my-image:latest
From the Kubernetes documentation, "When to use a ReplicaSet":
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.

A Deployment resource makes it easier to update your pods to a newer version.
Let's say you use ReplicaSet-A to control your pods and you want to update them to a newer version. You would have to create ReplicaSet-B, then scale down ReplicaSet-A and scale up ReplicaSet-B one step at a time (this process is known as a rolling update). Although this does the job, it's not good practice, and it's better to let Kubernetes do it.
A Deployment resource does this automatically, without any human interaction, and raises the abstraction by one level.
Note: a Deployment doesn't interact with pods directly; it performs rolling updates through ReplicaSets.
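As a minimal sketch of what that looks like in practice (assuming the my-deployment and app-container names from the example above, and a hypothetical my-image:v2 tag):
kubectl set image deployment/my-deployment app-container=my-image:v2
# A new ReplicaSet is scaled up while the old one is scaled down
kubectl get replicasets --watch
# Block until the rollout has finished (or failed)
kubectl rollout status deployment/my-deployment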

A ReplicaSet ensures that a specified number of Pods is running in the cluster. The Pods are called replicas and are the mechanism of availability in Kubernetes.
But changing the ReplicaSet's pod template does not affect existing Pods, so it is not possible to easily change, for example, the image version.
A Deployment is a higher abstraction that manages one or more ReplicaSets to provide a controlled rollout of a new version. When the image version is changed in the Deployment, a new ReplicaSet for this version is created with initially zero replicas. It is then scaled to one replica; once that one is running, the old ReplicaSet is scaled down. (The number of newly created pods, the step size so to speak, can be tuned.)
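How big those steps are is controlled by the Deployment's update strategy; a minimal sketch (the fields are standard Deployment spec fields, the values here are just illustrative):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod above the desired replica count
      maxUnavailable: 0  # never drop below the desired replica count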
As long as you don't have a rollout in progress, a Deployment results in a single ReplicaSet, with the replica count managed by the Deployment.
I would recommend always using a Deployment and not a bare ReplicaSet.

One of the differences between a Deployment and a ReplicaSet is that changes made to the container template are not reflected in existing pods once the ReplicaSet has been created.
For example:
This is my replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5 # tells the ReplicaSet to run 5 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.2
        ports:
        - containerPort: 80
I will apply this ReplicaSet using these commands:
kubectl apply -f replicaset.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
So from the pod description, we can observe that nginx is using image 1.13.2. Now let's change the image version to 1.14.2 in replicaset.yaml.
Apply the changes again:
kubectl apply -f replicaset.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
Now we see no changes in the pods; they are still using the old image.
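If you really do need a bare ReplicaSet to pick up the new template, one (disruptive) workaround is to delete its pods so it recreates them from the updated template; a sketch, assuming the app: nginx label from the manifest above:
kubectl delete pods -l app=nginx
# The ReplicaSet recreates the pods, now from the new template
kubectl get pods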
Now let us repeat the same with a deployment (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5 # tells the Deployment to run 5 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.2
        ports:
        - containerPort: 80
I will apply this Deployment using these commands:
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
Now change deployment.yaml to use another version of the nginx image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 5 # tells the Deployment to run 5 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I will apply this Deployment again using these commands:
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <<name_of_pod>>
Now we can see the pods being replaced, and the updated image appears in the pod description.
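To see the mechanism at work, you can also list the ReplicaSets the Deployment manages (the hash suffixes and output shape below are illustrative):
kubectl get rs
# NAME                          DESIRED   CURRENT   READY
# nginx-deployment-5b4f9cf6c9   5         5         5      <- new template
# nginx-deployment-7c6d8f9d4b   0         0         0      <- old, scaled to zero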

TL;DR:
Deployment -> manages -> ReplicaSet -> manages -> Pod(s) -> abstraction of -> container (e.g. a Docker container)
A Deployment in Kubernetes is a higher-level abstraction that represents a set of replicas of your application. It ensures that your desired number of replicas of your application are running and available.
A ReplicaSet, on the other hand, is a lower-level resource that is responsible for maintaining a stable set of replicas of your application. ReplicaSet ensures that a specified number of replicas are running and available.
Example:
Suppose you have a web application that you want to run on 3 replicas for high availability.
You would create a Deployment resource in Kubernetes, specifying the desired number of replicas as 3 pods. The deployment would then create and manage the ReplicaSet, which would in turn create and manage 3 replicas of your web application pod.
In summary, Deployment provides higher-level abstractions for scaling, rolling updates and rolling back, while ReplicaSet provides a lower-level mechanism for ensuring that a specified number of replicas of your application are running.
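As an illustration of the rollback part, assuming a hypothetical Deployment named my-web-app:
# Show the revision history the Deployment keeps via its ReplicaSets
kubectl rollout history deployment/my-web-app
# Revert to the previous revision (or pick one with --to-revision=<n>)
kubectl rollout undo deployment/my-web-app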

Related

Kubernetes: pod does not move to another node after a node failure

I use an on-premise Kubernetes cluster, created with RKE.
I created a basic Pod on the cluster; it is running on worker2. I shut down worker2, but the Pod does not move to another worker; I just see it in a failed state.
How can I solve this problem?
I tried a PriorityClass, but that did not solve the problem.
I created a basic pod... I shut down worker2...
When a Pod is terminated, it won't restart itself. Use a workload controller such as a Deployment or StatefulSet to start a new Pod when an existing one is terminated. The workload controller will keep the number of Pods in the cluster in line with the replicas count that you set.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
Assuming the Deployment created a Pod on worker2 and you terminated worker2, the Deployment will try to spin up a new Pod on the next available (healthy) node.
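A quick way to watch this happen (pod and node names will differ in your cluster):
# -o wide shows the node each pod runs on; --watch streams the changes
kubectl get pods -o wide --watch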

Updating replicas in a Kubernetes deployment file

I have a deployment file with replicas set to 1, so when I do kubectl get ... I get 1 record each for the deployment, the replicaset and the pod.
Now I set replicas to 2 in deployment.yaml, apply it, and when I run the kubectl get .. command I get 2 records each for deployments, replicasets and pods.
Shouldn't the previous deployment be overwritten, resulting in a single deployment, and similarly for the replicaset (2 pods are fine, since replicas is now set to 2)?
This is deployment file content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
...Now I set replicas to 2 in deployment.yaml, apply it, and when I run the kubectl get .. command I get 2 records each for deployments, replicasets and pods.
Try kubectl get deploy --field-selector metadata.name=nginx-deployment. You should get just one Deployment. The number of pods should follow that Deployment's replicas.
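To see the whole chain at once, a label-based query also works (assuming the app: nginx label from the manifest above):
# Expect one deployment, one replicaset, and two pods
kubectl get deploy,rs,pods -l app=nginx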

In Kubernetes how can I have a hard minimum number of pods for Deployment?

On my deployment I can set replicas: 3, but then it only spins up one pod. It's my understanding that Kubernetes will then fire up more pods as needed, up to three.
But for the sake of uptime I want a minimum of three pods at all times, and perhaps for it to create more as needed, but never to scale down below 3.
Is this possible with a Deployment?
It is exactly as you did: you define 3 replicas in your Deployment.
Have you verified that you have enough resources for your Deployments?
Replicas example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3 # <------- The number of your desired replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Kubernetes will spin up a new pod if it can, meaning you have enough resources and the resources are valid (nodes, selectors, image, volumes, configuration, and so on).
If you have defined 3 replicas and are still getting only 1, examine your deployment and its events.
How to view events:
# To view all events (don't specify a pod name or namespace)
kubectl get events
# Events for a specific pod appear at the end of its description
kubectl describe pod <pod name> --namespace <namespace>

Update Node Selector field for PODs on the fly

I have been trying different things with Kubernetes these days, and I am wondering about the nodeSelector field in the Pod specification.
As I understand it, we assign labels to nodes, and those labels can then be used in the nodeSelector field of the Pod specification.
Assigning pods to nodes based on nodeSelector works fine. But after I create the pod, I want to update/overwrite the nodeSelector field, which would move my pod to a new node matching the updated nodeSelector label.
I am thinking of this the same way it is done for normal labels with the kubectl label command.
Are there any hacks to achieve this?
If this is not possible in the current versions of Kubernetes, why shouldn't we consider it?
Thanks.
While manually editing the deployment, as cookiedough suggested, is one of the options, I believe using kubectl patch would be a better solution.
You can patch using either a YAML file or a JSON string, which makes it easier to integrate into scripts. Here is a complete reference.
Example
Here's a simple deployment of nginx I used, which will be created on node-1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        kubernetes.io/hostname: node-1
JSON patch
You can patch the deployment to change the desired node as follows:
kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "node-2"}}}}}'
YAML patch
By running kubectl patch deployment nginx-deployment --patch "$(cat patch.yaml)", where patch.yaml is prepared as follows:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-2
Both will result in the scheduler placing a new pod on the requested node and terminating the old one as soon as the new one is ready.
The manual alternative: kubectl edit the Deployment, change or remove the nodeSelector line, and save it.

Is it possible to move the running pods from ReplicationController to a Deployment?

We are using an RC (ReplicationController) to run our workload and want to migrate to a Deployment. Is there a way to do that without causing any impact to the running workload? I mean, can we move these running pods under a Deployment?
As @matthew-l-daniel answered, the answer is yes. And I am more than 80% certain about it, because I have tested it.
Now, what's the process we need to follow?
Let's say I have a ReplicationController:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Question: can we move these running pods under Deployment?
Let's follow these steps to see if we can.
Step 1:
Delete this RC with --cascade=false. This will leave the Pods behind.
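A sketch of that command for the RC above (newer kubectl versions spell the flag --cascade=orphan):
kubectl delete rc nginx --cascade=false
# The pods keep running, now without an owner
kubectl get pods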
Step 2:
Create a ReplicaSet first, with the same labels as the ReplicationController (apps/v1beta2 was current when this was written; use apps/v1 on modern clusters):
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # ... (container spec elided; same as the ReplicationController's)
So now these Pods are under the ReplicaSet.
Step 3:
Now create a Deployment with the same labels:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # ... (container spec elided, as above)
And the Deployment will find that a matching ReplicaSet already exists, and our job is done.
Now we can increase the replicas to check that it works.
And it works.
Which way it doesn't work:
After deleting the ReplicationController, do not create the Deployment directly. This will not work, because the Deployment will find no ReplicaSet and will create a new one with an additional label (pod-template-hash) that will not match your existing Pods.
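You can see that extra label on any Deployment-managed pod (the hash value below is illustrative):
kubectl get pods --show-labels
# NAME                     ...   LABELS
# nginx-66b6c48dd5-abcde   ...   app=nginx,pod-template-hash=66b6c48dd5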
I'm about 80% certain the answer is yes, since they both use Pod selectors to determine whether new instances should be created. The key trick is to use --cascade=false (the default is true) with kubectl delete, whose help text even speaks to your very question:
--cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
By deleting the ReplicationController but not its subordinate Pods, they will continue to just hang out (although be careful, if a reboot or other hazard kills one or all of them, no one is there to rescue them). Creating the Deployment with the same selector criteria and a replicas count equal to the number of currently running Pods should cause a "no action" situation.
I regret that I don't have my cluster in front of me to test it, but I would think a small nginx RC with replicas=3 should be a simple enough test to prove that it behaves as you wish.