What is the relationship between the HPA and ReplicaSet in Kubernetes?

I can't seem to find an answer to this, but what is the relationship between an HPA and a ReplicaSet? From what I know, we define a Deployment object that specifies replicas; the Deployment creates the RS, and the RS is responsible for supervising our pods and scaling them up and down.
Where does the HPA fit into this picture? Does it wrap around the Deployment object? I'm a bit confused, since you define the number of replicas in the manifest for the Deployment object.
Thank you!

When we create a Deployment, it creates a ReplicaSet and the number of pods we specified in replicas. The Deployment controls the RS, and the RS controls the pods. The HPA is another abstraction that gives scaling instructions to the Deployment, which applies them through the RS so the pods reach the desired count.
As the k8s docs put it: The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
A brief high-level overview: basically, it's all about controllers. Every k8s object has a controller. When a Deployment object is created, its controller creates the RS and the associated pods; the RS controls the pods, and the Deployment controls the RS. The HPA controller, in turn, watches the metrics, and whenever the number of pods should be higher or lower than what is currently running, it talks to the Deployment.
Read more in the k8s docs.
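To make the wiring concrete, here is a minimal HPA manifest sketch; the Deployment name my-app and the thresholds are illustrative assumptions, not taken from the question:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # hypothetical name
spec:
  scaleTargetRef:           # the workload the HPA scales, here a Deployment
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70%
```

Note that the HPA never touches pods directly: it only adjusts spec.replicas on the target Deployment, and the Deployment's controller then resizes the ReplicaSet.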

Related

How do I reset a deployments scale to use HPA after scaling down manually?

I have to turn off my service in production and turn it on again after a short period (to run a DB migration).
I know I can use kubectl scale deployment mydeployment --replicas=0. This service uses a HorizontalPodAutoscaler (HPA), so how would I go about resetting it to scale according to the HPA?
Thanks in advance :)
As suggested by Gari Singh, the HPA will not scale up from 0, so once you are ready to reactivate your deployment, just run kubectl scale deployment mydeployment --replicas=1 and the HPA will take over again.
In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.
Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.
If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.
Refer to this link on Horizontal Pod Autoscaling for more detailed information.
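As a sketch, the full sequence might look like this (assuming the deployment is named mydeployment, as in the question; these commands run against a live cluster):

```shell
# scale to zero for the migration; the HPA will not scale up from 0
kubectl scale deployment mydeployment --replicas=0

# ... perform the DB migration ...

# bring one replica back; the HPA takes over again from here
kubectl scale deployment mydeployment --replicas=1
```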

Azure Kubernetes Service - can the Cluster Autoscaler get triggered even if I don't set autoscaling explicitly?

I am deploying a service to Azure Kubernetes Service.
The Horizontal Pod Autoscaler scales the number of pods, whereas the Cluster Autoscaler scales the number of nodes based on the number of pending pods. If my understanding is correct, if I don't set up autoscaling in my deployment file, the HPA won't get triggered, and only one pod will run; therefore, the CA won't get triggered either.
My question is - is there a scenario in AKS where the CA would get triggered, even without setting autoscaling in my deployment file?
My question is - is there a scenario in AKS where the CA would get triggered, even without setting autoscaling in my deployment file?
Cluster autoscaler is typically used together with the horizontal pod autoscaler. The Horizontal Pod Autoscaler increases or decreases the number of pods based on application demand, and the cluster autoscaler adjusts the number of nodes as needed to run those additional pods accordingly.
If your deployment cannot scale up or down automatically via the HPA, and you don't manually increase the number of pods to the point where no additional pods can be scheduled due to insufficient resources on your nodes, then the CA would not be triggered; therefore the answer is NO.
You might also find this document from the official Azure docs helpful.
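For reference, the cluster autoscaler in AKS is configured on the cluster's node pool, not in the deployment file; a sketch with the Azure CLI (the resource group and cluster names here are hypothetical):

```shell
# enable the cluster autoscaler on an existing AKS cluster (illustrative names)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```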

Kubernetes - Set Pod replication criteria based on memory and cpu usage

I am a newbie to the Kubernetes world, so please excuse me if I get anything wrong.
I understand that pod replication is handled by k8s itself. We can also set CPU and memory limits for pods. But is it possible to set replication criteria based on memory and CPU usage? For example, I might want a pod to replicate when its memory/CPU usage reaches 70%.
Can we do it using metrics collected by Prometheus etc.?
You can use the Horizontal Pod Autoscaler. From the docs:
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by the user.
An example from the docs:
The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods. HPA will increase and decrease the number of replicas to maintain an average CPU utilization across all Pods of 50%.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
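The same autoscaler can also be written declaratively. A sketch of the equivalent manifest, using the autoscaling/v2 API, which additionally allows a memory target as the question asks (the 70% memory threshold is an illustrative assumption):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource           # same CPU target as the kubectl command above
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
    - type: Resource           # memory-based target, as asked in the question
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70
```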

Kubernetes scale down particular pod

I have a Kubernetes deployment which can have multiple replica pods. I wish to horizontally increase and decrease the pods based on some logic in my python application (not custom metrics in hpa).
I have two ways to do this:
Using the Horizontal Pod Autoscaler and changing minReplicas, maxReplicas from my application by using the Kubernetes APIs
Directly updating the "/spec/replicas" field in my deployment using the APIs
Both of the above work for scaling up and down.
But when I scale down, I want to remove a particular pod, not just any pod.
If I update minReplicas/maxReplicas in the HPA, it deletes an arbitrary pod.
The same happens when I update the /spec/replicas field in the deployment.
How can I delete a particular pod while scaling down?
I am not aware of any way to ensure that a particular pod in a ReplicaSet will be deleted during a scale-down. You could achieve this behavior with a StatefulSet, which always deletes the pod with the highest ordinal on scale-down.
For example, if we had a StatefulSet foo that was scaled to 3 we would have pods:
foo-0
foo-1
foo-2
And if we scaled the StatefulSet to 2, the controller would delete foo-2. But note that there are other limitations to be aware of with StatefulSets.
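A sketch of that scale-down, using the StatefulSet name foo from the example above (runs against a live cluster):

```shell
# StatefulSets scale down in reverse ordinal order,
# so this deletes foo-2 and leaves foo-0 and foo-1 running
kubectl scale statefulset foo --replicas=2
```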

How many pods can be configured per deployment in kubernetes?

As per the Kubernetes documentation there is a 1:1 correspondence between a Deployment and its ReplicaSet. Similarly, depending on the replicas attribute, a ReplicaSet can manage n pods of the same kind. Is this a correct understanding?
Logically (assuming a Deployment is a wrapper/controller), I feel a Deployment can have multiple ReplicaSets and each ReplicaSet can have multiple Pods (of the same or different kinds). If this statement is correct, can someone share an example K8s template?
1.) Yes, a Deployment is essentially a managed ReplicaSet, handled at a higher level.
2.) Not in the way you mean. A Deployment does own its ReplicaSets (during a rolling update it briefly has an old one and a new one), but you don't interact with them directly, and you cannot declare multiple ReplicaSets with different Pod templates in one Deployment. Typically you never use a ReplicaSet directly; a Deployment is all you need. The point of replication is to create copies of the same thing.
As to how many pods can be run per Deployment, the limits aren't really per Deployment, unless specified. Typically you'd either set the wanted number of replicas in the Deployment or you use the Horizontal Pod Autoscaler with a minimum and a maximum number of Pods. And unless Node limits are smaller, the following limits apply:
No more than 110 pods per node
No more than 150000 total pods
https://kubernetes.io/docs/setup/best-practices/cluster-large/
As per the Kubernetes documentation there is a 1:1 correspondence between a Deployment and its ReplicaSet. Similarly, depending on the replicas attribute, a ReplicaSet can manage n pods of the same kind. Is this a correct understanding?
Yes. It will create a number of pods equal to the value of the replicas field.
Deployment manages a replica set, you don't/shouldn't interact with the replica set directly.
Logically (assuming a Deployment is a wrapper/controller), I feel a Deployment can have multiple ReplicaSets and each ReplicaSet can have multiple Pods (of the same or different kinds). If this statement is correct, can someone share an example K8s template?
When you do a rolling update, the Deployment creates a new ReplicaSet with the new pods (updated containers) and scales down the pods in the older ReplicaSet.
It does not support running two different ReplicaSets with different pods/containers, other than transiently during deployment updates.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
After the deployment has been updated:
Run:
kubectl describe deployments
Output:
...
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-1564180365 (3/3 replicas created)