Calculating memory requests and limits in Kubernetes

We have a couple of clusters running on GKE, and up until now I've only been maintaining a CPU request/limit for pods. We've recently run into issues where the cluster autoscaler isn't responding when pods start being evicted due to low memory, and we can see in the GKE console that there is memory pressure on at least one of the nodes.
I was hoping someone could tell me: is there some sort of calculation we can use as a starting point for how much memory we should request/limit per pod for each of our services, or is that more a matter of trial and error? Is there some statistics service that can track what's being used in the cluster right now?
Thanks!

There is no magic trick for calculating limits. You need to start with reasonable limits and refine using trial and error.
I can suggest a YouTube video that explains a method for refining your limits quite well: https://youtu.be/-lsJyni7EQA
Basically, it suggests starting with low limits and load testing your application (a single pod instance) until it breaks.
Then, raise the limits and load test again until you find good values.
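To make that concrete, here is a minimal sketch of a starting resources stanza; the name, image, and values are placeholders to be revised after each load-test iteration, not recommendations.

# Hypothetical starting point; refine through load testing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                 # placeholder name
spec:
  replicas: 1                      # a single instance for the load test
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: my-service:latest # placeholder image
          resources:
            requests:
              cpu: 100m            # deliberately low starting values
              memory: 128Mi
            limits:
              memory: 256Mi        # raise after each load-test round

Each time the pod gets OOM-killed under the test load, bump the memory limit (and request) and run the test again until it survives with some headroom.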

Related

kubernetes resource requests and limits adjustments

I have a k8s cluster in GKE with the node autoscaler turned on. I want to maximize resource utilization and have applied all the request/limit changes recommended by GKE. At the moment there are 4 nodes, as shown in the image below. They all use n2-standard-2, i.e. 4 GB of memory per vCPU.
The memory request to allocatable ratio is quite high compared to CPU request/allocatable.
Wondering if there is any other machine type that better suits my case, or any other resource optimization recommendations?
In GKE you can select custom machine types.
We find most workloads work best with a 1:4 vCPU-to-memory ratio (hence the default), but it's possible to support other workload types. For your workload it looks like a 1:2 vCPU-to-memory ratio would be appropriate.
Also, it's hard to know exactly what resource limits to set. You should look into generating some load on your cluster and using the Vertical Pod Autoscaler (VPA) to get recommendations from GKE so you can right-size the requests and limits; a sketch follows.
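Assuming vertical pod autoscaling is enabled on the cluster and using a placeholder deployment name, a recommendation-only VPA object could look like this:

# Recommendation-only VPA; "my-deployment" is a placeholder.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-deployment-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  updatePolicy:
    updateMode: "Off"    # only produce recommendations, never evict or patch pods

Running kubectl describe vpa my-deployment-vpa afterwards shows lower-bound, target, and upper-bound recommendations that you can use to right-size requests.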

Allocatable resources not taken into account on one Kubernetes node

I have a K8s cluster on premise with five nodes running 1.19.8. For some time now, one of those nodes has been under a lot more pressure than the other four.
That node has double the amount of CPU cores available. But the same amount of memory as the others.
When I describe it, I see the same amount of memory and the same pod limit of 110.
Last night I had a look at it and it was running more than 130 pods! Sometimes it has nearly 98 percent of its memory used, while other nodes are at 60 to 70 % with way fewer pods assigned. Yet it still gets assigned new ones.
At some point it gets a SystemOOM and the OS starts killing processes...
Does anyone have an idea what could be going wrong here? Why is this happening? Or where I could start checking?
Thanks in advance!

Set cpu requests in K8s for fluctuating load

I have a service deployed in Kubernetes and I am trying to optimize the requested cpu resources.
For now, I have deployed 10 instances and set spec.containers[].resources.limits.cpu to 0.1, based on the "average" use. However, it became obvious that this average is rather useless in practice, because under constant load the usage increases significantly (to 0.3-0.4, as far as I can tell).
Consequently, when multiple instances are deployed on the same node, that node becomes heavily overloaded: pods are no longer responsive, get killed and restarted, and so on.
What is the best practice for finding a good value? My current best guess is to increase the requested cpu to 0.3 or 0.4; I'm looking at Grafana visualizations and see that the pods on the heavily loaded node(s) converge there under continuous load.
However, how can I know whether they would use even more CPU if they could, given that they become unresponsive once the node is overloaded?
I'm actually trying to understand how to approach this in general. I would expect an "ideal" service (presuming it is CPU-focused) to use close to 0.0 when there is no load, and close to 1.0 when requests are constantly coming in. With that assumption, should I set the cpu.requests to 1.0, taking a perspective where actual constant usage is assumed?
I have read some Kubernetes best practice guides, but none of them seem to address how to set the actual value for cpu requests in practice in more depth than "find an average".
Basically, come up with a number that is your lower acceptable bound for how much the process gets to run. Setting a request of 100m means that you are okay with a lower bound of your process running 0.1 seconds for every 1 second of wall time (roughly). Normally that should be some kind of average utilization, usually something like a P99 or P95 value over several days or weeks. Personally, I usually look at a chart of P99, P80, and P50 (median) over 30 days and use that to decide on a value.
Limits are a different beast; they set your CPU timeslice quota (the CFS quota). That subsystem in Linux has some persistent bugs, so unless you've specifically vetted your kernel as correct, I don't recommend using limits for anything but the most hostile of programs.
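As an illustration of the above, a container-spec fragment might look like this; the image and values are made up, the CPU request reflects an observed utilization percentile, and the CPU limit is deliberately left out:

# Illustrative fragment of a pod spec: CPU request from an observed P95, no CPU limit.
containers:
  - name: app
    image: example/app:1.0     # placeholder image
    resources:
      requests:
        cpu: 100m              # roughly 0.1 CPU-seconds per second of wall time
        memory: 256Mi
      limits:
        memory: 256Mi          # memory limit kept; CPU limit intentionally omitted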
In a nutshell: the main goal is to understand how much traffic a pod can handle and how much resource it consumes to do so.
CPU limits are hard to reason about and can be harmful, so you might want to avoid them; see the static CPU manager policy documentation and the relevant GitHub issue.
To dimension your CPU requests, you first want to understand how much a pod can consume under high load. In order to do this you can (see the sketch after the list):
disable all kind of autoscaling (HPA, vertical pod autoscaler, ...)
set the number of replicas to one
lift the CPU limits
request the highest amount of CPU you can fit on a node (usually around 3.2 on 4-vCPU nodes)
send as much traffic as you can to the application (you can build simple load-test scenarios with Locust, for example)
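A sketch of such a load-test configuration, assuming a 4-vCPU node and placeholder names:

# Load-test setup: one replica, large CPU request, no CPU limit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-loadtest             # placeholder
spec:
  replicas: 1                    # a single pod instance
  selector:
    matchLabels:
      app: app-loadtest
  template:
    metadata:
      labels:
        app: app-loadtest
    spec:
      containers:
        - name: app
          image: example/app:1.0 # placeholder image
          resources:
            requests:
              cpu: "3.2"         # roughly what fits on a 4-vCPU node
            # no CPU limit, so the pod can use everything the node will give it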
You will eventually end up with a ratio of clients-or-requests-per-second to CPU consumed. You can assume the relation is linear (this might not be true if your workload complexity is O(n^2) in the number of connected clients n, but that is not the nominal case).
You can then choose the pod resource requests based on the ratio you measured. For example if you consume 1.2 cpu for 1000 requests per second you know that you can give each pod 1 cpu and it will handle up to 800 requests per second.
Once you know how much a pod can consume under its maximal load, you can start setting up CPU-based autoscaling; 70% is a good first target that can be refined if you encounter issues like latency or pods not autoscaling fast enough. This will prevent your nodes from running out of CPU if the load increases.
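A minimal sketch of that CPU-based autoscaling, using the autoscaling/v2 API (autoscaling/v2beta2 on older clusters) with placeholder names and replica bounds:

# CPU-based HPA targeting 70% of the pods' CPU request.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                    # placeholder deployment name
  minReplicas: 2                 # placeholder bounds
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # 70% of the CPU request; refine as needed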
There are a few gotchas; for example, single-threaded applications are not able to consume more than one CPU. So if you give such an application 1.5 CPU, it will run out of CPU, but you won't be able to see that from the metrics, because you'll believe it can still consume another 0.5 CPU.

Latency penalty for using istio or other sidecar proxy

I am playing with the idea of using Istio for some of its features, but I find it hard to find any reasonable estimates of the latency it adds to every call. 1ms per service call seems like a lot, especially once there are 10 services involved in a chain, each having the request and response pass through Istio.
Has anybody measured latency penalty for having sidecar proxy introduced?
This post does a benchmark of Istio and Linkerd to compare both service meshes in different aspects like CPU, memory or latency.
Istio provides performance numbers, including latency, here: https://istio.io/docs/concepts/performance-and-scalability/
From the page...
The default configuration of Istio 1.1 adds 8ms to the 90th percentile latency of the data plane over the baseline.

Spiky kubernetes HPA with metric number of pubsub unacked messsages

Currently we have a data streaming pipeline: API call -> Google Pub/Sub -> BigQuery. The number of API calls depends on the traffic on the website.
We created a Kubernetes deployment (in GKE) for ingesting data from Pub/Sub into BigQuery. This deployment has a horizontal pod autoscaler (HPA) with metricName: pubsub.googleapis.com|subscription|num_undelivered_messages and targetValue: "5000". This setup is able to autoscale when the traffic has a sudden increase; however, it causes spiky scaling (a rough manifest sketch is shown below).
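For context, the HPA described above corresponds roughly to a manifest like the following, using the autoscaling/v2beta1 external-metric syntax GKE documented at the time; the subscription selector and replica bounds are assumptions, only the metric name and target value come from the setup above:

# Sketch of an HPA on the Pub/Sub backlog metric.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-ingest
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-ingest          # placeholder deployment name
  minReplicas: 1                 # placeholder bounds
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
        metricSelector:
          matchLabels:
            # assumed label key for scoping to one subscription
            resource.labels.subscription_id: my-subscription
        targetValue: "5000"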
What I meant by spiky is as follows:
1. The number of unacked messages will go up above the target value.
2. The autoscaler will increase the number of pods.
3. The number of unacked messages will slowly decrease, but since it is still above the target value the autoscaler will keep increasing the number of pods; this happens until we hit the max number of pods in the autoscaler.
4. The number of unacked messages will decrease until it goes below the target, and then it will stay very low.
5. The autoscaler will reduce the number of pods to the minimum number of pods.
6. The number of unacked messages will increase again, ending up in a similar situation to (1), and it will go into a loop/cycle of spikes.
Here is the chart when it goes spiky (the traffic is going up, but it is stable and non-spiky):
The spiky number of unacknowledged messages in Pub/Sub
We set an alert in Stackdriver for when the number of unacknowledged messages is more than 20k, and in this situation it is triggered frequently.
Is there a way so that the HPA become more stable (non-spiky) in this case?
Any comment, suggestion, or answer is well appreciated.
Thanks!
I've been dealing with the same behavior. What I ended up doing is smoothing num_undelivered_messages using a moving average. I set up a k8s cron that publishes the average of the last 20 minutes of time series data to a custom metric every minute, then configured the HPA to respond to the custom metric.
This worked pretty well, but not perfectly. I observed that as soon as the average converges on the actual value, the HPA scales the service down too low. So I ended up just adding a constant, so the custom metric is just average + constant. I found that for my specific case a value of 25,000 worked well.
With this, and after dialing in the targetAverageValue, the autoscaling has been very stable.
I'm not sure if this is due to a defect or just the nature of the num_undelivered_messages metric at very high loads.
Edit:
I used the stackdriver/monitoring golang packages. There is a straightforward way to aggregate the time series data; see here under 'Aggregating data' https://cloud.google.com/monitoring/custom-metrics/reading-metrics
https://cloud.google.com/monitoring/custom-metrics/creating-metrics
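The scheduling side of that approach is just a standard CronJob; here is a sketch where the image and its flags are hypothetical stand-ins for whatever program reads the last 20 minutes of the metric, averages it, adds the constant, and writes the custom metric:

# Runs every minute; batch/v1 on recent clusters, batch/v1beta1 on older ones.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pubsub-backlog-smoother
spec:
  schedule: "* * * * *"          # every minute
  concurrencyPolicy: Forbid      # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: smoother
              image: example/backlog-smoother:latest  # hypothetical image
              args:
                - "--window=20m"     # hypothetical flags: averaging window
                - "--offset=25000"   # the constant added to the average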