Can Horizontal Pod Autoscaling work with one node only? - kubernetes

I'm new to Kubernetes and I have a question about horizontal pod autoscaling. Can I apply HPA with just one node? If so, what are the benefits of using HPA with only one node?
If I use the metrics below, the target is an averageUtilization of 50% CPU. Does that mean I need a new node once that value is reached?
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 50
Any advice?

Here are some notes that might help you to sort things out:
Yes, you can use horizontal pod autoscaling on one node only.
The benefit of running multiple pods is parallelism: More instances of your app can handle more load - in that regard it doesn't matter if you run the pods on one or several nodes.
But if you have more pods of your application, you might end up in a situation where you need additional nodes to handle the load.
To determine how many pods can run on one node, Kubernetes uses the concept of resource requests and limits.
HPA spawns new pods when the observed utilization of your pods exceeds the target utilization, but it does not check whether your node can actually accommodate more pods; you control that with resource requests and limits.
Scaling up the nodes of your cluster is not handled by HPA, you need to use the kubernetes cluster autoscaler for that.
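To make this concrete, here is a minimal sketch of a Deployment with resource requests/limits plus an HPA targeting 50% average CPU utilization; the names (web, web-hpa), the image, and the request/limit values are made up for illustration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                     # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25        # placeholder image
        resources:
          requests:
            cpu: 200m            # HPA utilization is computed against this request
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
Even on a single node, the HPA will keep adding replicas (up to maxReplicas) as long as average CPU usage stays above 50% of the requested 200m; whether those replicas actually fit on the node is decided by the scheduler based on the requests, not by the HPA.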

Related

Kubernetes: using HPA with metrics from other pods

I have:
- deployments of services A and B in k8s
- a Prometheus stack
I want to scale service A when metric m1 of service B changes.
Solutions I have found so far, which are more or less unsuitable:
I can define an HPA for service A with the following part of the spec:
- type: Object
  object:
    metric:
      name: m1
    describedObject:
      apiVersion: v1
      kind: Pod
      name: certain-pod-of-service-B
    target:
      type: Value
      value: 10k
Technically, it will work, but it's not suited to the dynamic nature of k8s: pod names change whenever pods are recreated, so hard-coding one is fragile.
Also, I can't use a Pods metric (metrics: - type: Pods pods:) in the HPA, because it would request the m1 metric from the pods of service A (which obviously don't have it).
I could define a custom metric in prometheus-adapter that queries the m1 metric from the pods of service B. That is more suitable, but it looks like a workaround, because I already have the metric m1.
The same goes for external metrics.
I feel that I am missing something, because this doesn't seem like an unrealistic case :)
So, please advise me: how can I scale one service based on a metric of another in k8s?
I decided to provide a Community Wiki answer that may help other people facing a similar issue.
The Horizontal Pod Autoscaler is a Kubernetes feature that allows you to scale applications based on one or more monitored metrics.
As we can find in the Horizontal Pod Autoscaler documentation:
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).
There are three groups of metrics that we can use with the Horizontal Pod Autoscaler:
- resource metrics: predefined resource usage metrics (CPU and memory) of pods and nodes.
- custom metrics: custom metrics associated with a Kubernetes object.
- external metrics: custom metrics not associated with a Kubernetes object.
Any HPA target can be scaled based on the resource usage of the pods (or containers) in the scaling target. CPU utilization is a resource metric, and you can specify other resource metrics besides CPU (e.g. memory). This is the easiest and most basic method of scaling, but we can use more specific metrics by using custom metrics or external metrics.
There is one major difference between custom metrics and external metrics (see: Custom and external metrics for autoscaling workloads):
Custom metrics and external metrics differ from each other:
A custom metric is reported from your application running in Kubernetes.
An external metric is reported from an application or service not running on your cluster, but whose performance impacts your Kubernetes application.
All in all, in my opinion it is okay to use custom metrics in the case above; I did not find any other suitable way to accomplish this task.
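For reference, here is a rough sketch of what that can look like. It assumes prometheus-adapter is configured to expose m1 as a custom metric on service B's Service object; the names service-a, service-b, service-a-hpa, and the 10k target are placeholders:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-a-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a              # the workload being scaled
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: m1                 # assumes prometheus-adapter exposes m1 for this object
      describedObject:
        apiVersion: v1
        kind: Service
        name: service-b          # metric is read from service B, not from service A's pods
      target:
        type: Value
        value: 10k
Describing the metric on service B's Service (or Deployment) instead of a specific pod avoids hard-coding a pod name, which was the main objection to the first approach.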

GKE node pool with Autoscaling does not scale down

I have a GKE cluster with two nodepools. I turned on autoscaling on one of my nodepools but it does not seem to automatically scale down.
I have enabled HPA and that works fine. It scales the pods down to 1 when I don't see traffic.
The API is currently not getting any traffic so I would expect the nodes to scale down as well.
But it still runs the maximum of 5 nodes, despite some nodes using less than 50% of their allocatable memory/CPU.
What did I miss here? I am planning to move these pods to bigger machines but to do that I need the node autoscaling to work to control the monthly cost.
There are many reasons that can prevent the cluster autoscaler (CA) from scaling down successfully. To summarize, this is how it should normally work:
The cluster autoscaler periodically checks (every 10 seconds) the utilization of the nodes.
If the utilization factor is less than 0.5, the node is considered underutilized.
The node is then marked for removal and monitored for the next 10 minutes to make sure the utilization factor stays below 0.5.
If it stays underutilized even after 10 minutes, the node is removed by the cluster autoscaler.
If the above is not happening, then something else is preventing your nodes from scaling down. In my experience, PDBs need to be applied to kube-system pods, and I would say that could be the reason; however, there are many possible causes. Here are configurations that can cause downscaling issues:
1. PDBs are not applied to your kube-system pods. kube-system pods prevent the Cluster Autoscaler from removing the nodes they run on. You can manually add a Pod Disruption Budget (PDB) for kube-system pods that can safely be rescheduled elsewhere, for example with the following command:
`kubectl create poddisruptionbudget PDB-NAME --namespace=kube-system --selector app=APP-NAME --max-unavailable 1`
2. Containers using local storage (volumes), even empty volumes. Kubernetes prevents scale-down of nodes with pods using local storage. Look for this kind of configuration, as it prevents the Cluster Autoscaler from scaling those nodes down.
3. Pods annotated with cluster-autoscaler.kubernetes.io/safe-to-evict: "false". Look for pods with this annotation, as it prevents the nodes they run on from being scaled down.
4. Nodes annotated with cluster-autoscaler.kubernetes.io/scale-down-disabled: "true". Look for nodes with this annotation, as it prevents the Cluster Autoscaler from removing them.
These are the configurations I suggest you check in order to get your cluster to scale down underutilized nodes.
You can also check this page, which explains the configurations that can prevent scale-down; one of them may be what is happening in your case.
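As an illustration of point 1, here is what an equivalent PDB manifest could look like (the name and the label selector are placeholders; match them to the actual kube-system workload you want to cover, and note that policy/v1 requires Kubernetes 1.21+):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kube-dns-pdb             # hypothetical name
  namespace: kube-system
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns          # placeholder label; adjust to the target pods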

Kubernetes - Set Pod replication criteria based on memory and cpu usage

I am a newbie to the Kubernetes world, so please excuse me if I get anything wrong.
I understand that pod replication is handled by k8s itself. We can also set CPU and memory limits for pods. But is it possible to change the replication criteria based on memory and CPU usage? For example, I want a pod to replicate when its memory/CPU usage reaches 70%.
Can we do it using metrics collected by Prometheus etc.?
You can use the Horizontal Pod Autoscaler. From the docs:
The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). Note that Horizontal Pod Autoscaling does not apply to objects that can't be scaled, for example, DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behavior of the controller. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by the user.
An example from the docs:
The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods. HPA will increase and decrease the number of replicas to maintain an average CPU utilization across all Pods of 50%.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
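The kubectl autoscale command above only covers CPU. To scale on both CPU and memory at 70%, as asked, you can write the HPA declaratively with the autoscaling/v2 API; this is a sketch with placeholder names (my-app, my-app-hpa):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa               # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                 # the deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70% of requests
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70   # same threshold for memory
Note that utilization is measured against the pods' resource requests, so the pods must declare requests for cpu and memory for this to work.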

How to exclude some containers' metrics in Kubernetes Horizontal Pod Autoscaling

I have a pod running with two containers. The actual application is running in one of the containers (container-app) and the other one is the proxy container (container-proxy). I enabled the Horizontal Pod Autoscaler (HPA) for CPU usage percentage, but as the HPA documentation states, the metrics of both containers are included in the calculation.
I want to exclude the CPU metrics of container-proxy from HPA calculation because I want only application container to be the scaling element for the pod.
Is there any way to exclude some containers metrics from HPA calculation for multi-container pods?
The cluster autoscaler works on a per-node-pool basis, whereas the Horizontal Pod Autoscaler monitors the CPU utilization of pods and scales the number of replicas automatically. It provides immediate efficiency and capacity when needed, operates within user-defined minimum/maximum bounds, and allows users to set it and forget it. The Horizontal Pod Autoscaler is designed around pods, not individual containers.
HPA calculates pod CPU utilization as the total CPU usage of all containers in the pod divided by the total request. It does not exclude individual containers' metrics from the calculation when a pod has multiple containers.
Kubernetes 1.20+ supports container resource metrics, which target utilization per container and therefore allow a specific container of a pod to be excluded from consideration.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics
type: ContainerResource
containerResource:
  name: cpu
  container: application
  target:
    type: Utilization
    averageUtilization: 60
It's an alpha feature though, so it is not available without enabling the corresponding feature gate in Kubernetes.
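To show where that fragment fits, here is a sketch of a complete HPA that scales only on the application container's CPU. The deployment name is a placeholder, the container name matches the container-app from the question, and this assumes a cluster with container resource metrics enabled:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                    # deployment running container-app + container-proxy
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: container-app   # only this container's CPU usage is considered
      target:
        type: Utilization
        averageUtilization: 60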

Does HorizontalPodAutoscaler make sense when there is only one Deployment on GKE (Google Container Engine) Kubernetes cluster?

I have a "homogeneous" Kubernetes setup. By this I mean that I am only running instances of a single type of pod (an http server) with a load balancer service distributing traffic to them.
By my reasoning, to get the most out of my cluster (edit: to be concrete -- getting the best average response times to http requests) I should have:
At least one pod running on every node: not having a pod running on a node means that I am paying for the node without having it ready to serve requests.
At most one pod running on every node: the pods are threaded HTTP servers, so a single pod can maximize the utilization of a node; running multiple pods on one node does not gain me anything.
This means that I should have exactly one pod per node. I achieve this using a DaemonSet.
The alternative way is to configure a Deployment and apply a HorizontalPodAutoscaler to it and have Kubernetes handle the number of pods and pod to node mapping. Is there any disadvantage of my approach in comparison to this?
My evaluation is that the HorizontalPodAutoscaler is relevant mainly in heterogeneous situations, where one HorizontalPodAutoscaler can scale up a Deployment at the expense of another Deployment. But since I have only one type of pod, I would have only one Deployment and I would be scaling up that deployment at the expense of itself, which does not make sense.
HorizontalPodAutoscaler is actually a valid solution for your needs. To address your two concerns:
1. At least one pod running on every node
This isn't your real concern. The concern is underutilizing your cluster. However, you can be underutilizing your cluster even if you have a pod running on every node. Consider a three-node cluster:
Scenario A: pod running on each node, 10% CPU usage per node
Scenario B: pod running on only one node, 70% CPU usage
Even though Scenario A has a pod on each node, the cluster is actually less utilized than in Scenario B, where only one node has a pod.
2. At most one pod running on every node
The Kubernetes scheduler tries to spread pods around so that you don't end up with multiple pods of the same type on a single node. Since in your case the other nodes should be empty, the scheduler should have no problems starting the pods on the other nodes. Additionally, if you have the pod request resources equivalent to the node's resources, that will prevent the scheduler from scheduling a new pod on a node that already has one.
Now, you can achieve the same effect whether you go with a DaemonSet or an HPA, but I personally would go with the HPA, since I think it fits your semantics better and would also work much better if you eventually decide to add other types of pods to your cluster.
Using a DaemonSet means that the pod has to run on every node (or some subset). That is a great fit for something like a logger or a metrics collector that is per-node. But you really just want to use available cluster resources to power your pod as needed, which matches up better with the intent of the HPA.
As an aside, I believe GKE supports cluster autoscaling, so you should never be paying for nodes that aren't needed.
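If you do go the Deployment + HPA route and still want roughly one pod per node, one option (point 2 above) is to size the pod's CPU request close to a node's allocatable capacity. A rough sketch, assuming nodes with about 2 allocatable CPUs (the name, image, and request value are placeholders; adjust to your machine type):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-server              # hypothetical name
spec:
  replicas: 1                    # the HPA adjusts this
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
      - name: http-server
        image: my-registry/http-server:latest   # placeholder image
        resources:
          requests:
            cpu: 1800m           # close to the node's allocatable CPU, so only one pod fits per node
With requests sized like this, each new replica the HPA adds forces the scheduler onto a different node (or, with the cluster autoscaler, a new one), giving you at most one pod per node without a DaemonSet.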