Meaning of ADX Cache utilization more than 100%

We see the Cache utilization chart for an ADX cluster in the Azure portal, but at times I have noticed that this utilization shows up as more than 100%. I am trying to understand how to interpret it. Say, for example, cache utilization shows up as 250%: does that mean 100% of the memory cache is utilized, and beyond that 150% of the disk cache is being utilized?

As explained in the documentation for the Cache Utilization metric:
[this is the] Percentage of allocated cache resources currently in use by the cluster.
Cache is the size of SSD allocated for user activity according to the defined cache policy.
An average cache utilization of 80% or less is a sustainable state for a cluster.
If the average cache utilization is above 80%, the cluster should be scaled up to a storage optimized pricing tier or scaled out to more instances. Alternatively, adapt the cache policy (fewer days in cache).
If cache utilization is over 100%, the size of data to be cached, according to the caching policy, is larger than the total size of cache on the cluster.

Utilization > 100% means that there's not enough room in the (SSD) cache to hold all the data that the policy indicates should be cached. If auto-scale is enabled, the cluster will be scaled out as a result.
The cache applies an LRU eviction policy, so even when utilization exceeds 100% query performance will be as good as possible (though, of course, if queries constantly reference more data than the cache can hold, some performance degradation will be observed).
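To make the arithmetic concrete, here is a minimal sketch with made-up sizes (the function name and numbers are illustrative, not an ADX API): the metric compares the amount of data the caching policy marks as hot against the SSD cache the cluster actually has.

    # Illustrative only: how cache utilization can exceed 100%.
    # The metric compares the data the caching policy says should be hot
    # against the total SSD cache available on the cluster.
    def cache_utilization_pct(hot_data_gb, cluster_cache_gb):
        """Percentage of the cluster's SSD cache needed to hold all 'hot' data."""
        return 100.0 * hot_data_gb / cluster_cache_gb

    # The policy marks 2.5 TB as hot, but the cluster only has 1 TB of cache:
    print(cache_utilization_pct(hot_data_gb=2500, cluster_cache_gb=1000))  # 250.0

So a reading of 250% does not mean "memory plus disk"; it means the hot set defined by the policy is 2.5 times larger than the cache, and LRU eviction decides which part of it actually stays resident.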

Related

How to set up Kubernetes HPA to scale based on maximum available memory in a given pod?

I’d like to autoscale the pods not based on the average memory, but rather based on the largest amount of available memory in a given pod.
Example:
Let’s say the target maximum available memory is 50%.
If we already have 7 pods, and 6 of them have 90% occupied memory but a single pod has only 40% occupied memory, that satisfies my criterion and we won’t need to upscale. But the moment that last pod drops below 50% available memory, we’ll upscale.
I know it’s not a wise scaling criterion in the majority of cases, but in my particular circumstance it fits.
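A minimal sketch of the decision rule described above, with hypothetical per-pod figures (the stock HPA averages utilization across pods, so in practice this would have to be fed in as a custom or external metric):

    # Scale up only when even the pod with the most headroom drops below the
    # target amount of available memory. Values are hypothetical percentages.
    def should_scale_up(occupied_pct_per_pod, target_available_pct=50.0):
        most_free_pod_occupancy = min(occupied_pct_per_pod)   # emptiest pod
        available_pct = 100.0 - most_free_pod_occupancy
        return available_pct < target_available_pct

    print(should_scale_up([90, 90, 90, 90, 90, 90, 40]))  # False: one pod still has 60% available
    print(should_scale_up([90, 90, 90, 90, 90, 90, 55]))  # True: the emptiest pod has only 45% available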

How is CPU usage calculated in Grafana?

Here's an image taken from Grafana which shows the CPU usage, requests and limits, as well as throttling, for a pod running in a K8s cluster.
As you can see, the CPU usage of the pod is very low compared to the requests and limits. However, there is around 25% CPU throttling.
Here are my questions:
How is the CPU usage (yellow) calculated?
How is the ~25% CPU throttling (green) calculated?
How is it possible that the CPU throttles when the allocated resources for the pod are so much higher than the usage?
Extra info:
The yellow is showing container_cpu_usage_seconds_total.
The green is container_cpu_cfs_throttled_periods_total / container_cpu_cfs_periods_total.
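As a rough sketch of how that green series is typically derived (the sample numbers below are invented): both counters are cumulative, so the percentage comes from their increase over the graph's time window.

    # Fraction of CFS periods in which the container was throttled, computed
    # from deltas of the two cumulative cAdvisor counters over a time window.
    # The sample values are made up to show how ~25% could arise.
    def throttled_pct(periods_total_delta, throttled_periods_delta):
        return 100.0 * throttled_periods_delta / periods_total_delta

    # 3000 CFS periods elapsed in the window, 750 of them hit the CPU quota:
    print(throttled_pct(periods_total_delta=3000, throttled_periods_delta=750))  # 25.0

Because throttling is evaluated per CFS period (100 ms by default), a pod whose average usage is well below its limit can still be throttled if it bursts within individual periods, which is one way usage can look far below the limit while throttling still shows up.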

Can I set the pod to use the max requested CPU from the beginning?

I am using OpenShift 4, CPU Request: 0.2, Limit: 0.4.
From the monitoring, I can see the CPU usage started at 0.1 and increased gradually. Is it because there is a mechanism to prevent over-reserving CPU?
Can I set up the pod to use the max requested CPU from the beginning, and ramp up to the limit as fast as possible?
The max limit is already available from the beginning (presuming that the node has the CPU available to give). OCP uses CFS to enforce that limit, and CFS doesn't have anything that gradually kicks in; it only considers one thing: the configured limit.
As for why you are seeing this in your monitoring, I'm not sure. But my first guess would be that the graph is using a moving average (and, being a moving average, it will converge towards the actual usage over time).
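A small sketch of that guess: a moving average computed over a window that also includes samples from before the container started will ramp up gradually and converge on the true usage. The smoothing actually used by the dashboard is unknown; this only illustrates the shape.

    # Illustration of the moving-average guess: the pod really uses 0.4 CPU from
    # the moment it starts, but the smoothed series climbs gradually toward it.
    def moving_average(samples, window):
        out = []
        for i in range(len(samples)):
            chunk = samples[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    usage = [0.0] * 5 + [0.4] * 10        # zeros before the container started
    print(moving_average(usage, window=5))
    # ... 0.0, 0.08, 0.16, 0.24, 0.32, 0.4, 0.4 ...  -> a gradual ramp to the real usage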

Choosing the compute resources of the nodes in the cluster with horizontal scaling

Horizontal scaling means that we scale by adding more machines into the pool of resources. Still, there is a choice of how much power (CPU, RAM) each node in the cluster will have.
When the cluster is managed with Kubernetes, it is extremely easy to set any CPU and memory limit for Pods. How do you choose the optimal CPU and memory size for cluster nodes (or Pods in Kubernetes)?
For example, there are 3 nodes in a cluster with 1 vCPU and 1 GB RAM each. To handle more load there are 2 options:
Add a 4th node with 1 vCPU and 1 GB RAM
Give each of the 3 nodes more power (e.g. 2 vCPU and 2 GB RAM)
A straightforward solution is to calculate the throughput and cost of each option and choose the cheaper one. Are there any more advanced approaches for choosing the compute resources of the nodes in a cluster with horizontal scalability?
For this particular example I would go for 2x vCPU instead of another 1 vCPU node, but that is mainly because I believe running an OS for anything serious on a single vCPU is just wrong. For the system to behave decently it needs 2+ cores available; otherwise it's too easy to overwhelm that one vCPU and grind the node to a halt. There is no ideal algorithm for this, though. It will depend on your budget, on the characteristics of your workloads, etc.
As a rule of thumb, don't stick to instances that are too small, as there is a bunch of stuff that always has to run on them regardless of their size, and the more nodes, the more overhead. 3x 4 vCPU + 16/32 GB RAM sounds like a nice plan for starters, but again... it depends on what you want, need, and can afford.
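A toy version of that "calculate throughput and cost" comparison, where every number is an assumption (instance prices, ~100 req/s per vCPU from a load test, a fixed per-node reservation for system daemons), just to show the mechanics and the per-node overhead mentioned above:

    # Toy cost/throughput comparison; all constants below are assumptions.
    PRICE_PER_VCPU_HOUR = 0.05      # hypothetical cloud price
    SYSTEM_RESERVE_VCPU = 0.25      # kubelet/system daemons per node (assumed)
    REQ_PER_SEC_PER_VCPU = 100      # assumed, from a load test

    def evaluate(nodes, vcpus_per_node):
        usable = nodes * (vcpus_per_node - SYSTEM_RESERVE_VCPU)
        rps = usable * REQ_PER_SEC_PER_VCPU
        cost = nodes * vcpus_per_node * PRICE_PER_VCPU_HOUR
        return rps, cost

    for label, nodes, vcpus in [("4 x 1 vCPU", 4, 1), ("3 x 2 vCPU", 3, 2)]:
        rps, cost = evaluate(nodes, vcpus)
        print(f"{label}: ~{rps:.0f} req/s for ${cost:.2f}/h")

Under these assumed numbers the fewer-but-bigger option wastes less capacity on per-node overhead, which is the point of the rule of thumb above.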
The answer is related to performance metrics such as latency and throughput:
Latency is the time interval between sending a request and receiving the response.
Throughput is the request-processing rate (requests per second).
Latency influences throughput: higher latency = lower throughput.
If a business transaction consists of multiple sequential calls to services that can't be parallelized, then compute resources (CPU and memory) have to be chosen based on the desired latency value. Adding more instances of the services (horizontal scaling) will not have any positive influence on latency in this case.
Adding more instances of a service increases throughput by allowing more requests to be processed in parallel (if there are no bottlenecks).
In other words, allocate CPU and memory resources so that the service has the desired response time, and add more service instances (scale horizontally) to handle more requests in parallel.
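A worked sketch of that trade-off with invented numbers (200 ms of unavoidable sequential latency, 10 in-flight requests per instance); it follows directly from Little's law, throughput = concurrency / latency:

    # Scaling out multiplies throughput but leaves a single request's latency alone.
    def throughput_rps(instances, concurrency_per_instance, latency_s):
        """Requests/second the fleet sustains if each instance keeps
        `concurrency_per_instance` requests in flight at the given latency."""
        return instances * concurrency_per_instance / latency_s

    latency = 0.200  # 200 ms, fixed by the sequential call chain
    print(throughput_rps(3, 10, latency))   # 150 rps with 3 instances
    print(throughput_rps(6, 10, latency))   # 300 rps with 6 instances; latency is still 200 ms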

On AWS RDS Postgres, what could cause disk latency to go up while IOPS / throughput go down?

I'm investigating an approximately 3 hour period of increased query latency on a production Postgres RDS instance (m4.xlarge, 400 GiB of gp2 storage).
The driver seems to be a spike in both read and write disk latencies: I see them going from a baseline of ~0.0005 up to a peak of 0.0136 write latency / 0.0081 read latency.
I also see an increase in disk queue depth from a baseline of around 2, to a peak of 14.
When there's a spike in disk latencies, I generally expect to see an increase in data being written to disk. But read IOPS, write IOPS, read throughput, and write throughput all went down (by approximately 50%) during the time when latency was elevated.
I also have server-side metrics on the total query volume I'm sending (measured in both queries per second and amount of data written: this is a write-heavy workload), and those metrics were flat during this time period.
I'm at a loss for what to investigate next. What are possible reasons that disk latency could increase while IOPS go down?
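One way to sanity-check the numbers in this question (assuming the latency figures are in seconds; the IOPS values below are rough assumptions, not taken from the post) is Little's law applied to the device queue: queue depth ≈ IOPS × average latency. Lower IOPS combined with much higher per-operation latency is internally consistent with the deeper queue observed:

    # Little's law applied to the disk queue: queue_depth ~= IOPS * avg latency.
    # Latencies and queue depths come from the post; the IOPS values are
    # assumptions chosen only to show that the numbers hang together.
    def implied_queue_depth(iops, avg_latency_s):
        return iops * avg_latency_s

    print(implied_queue_depth(iops=4000, avg_latency_s=0.0005))  # ~2, the reported baseline queue depth
    print(implied_queue_depth(iops=2000, avg_latency_s=0.0100))  # ~20, same order as the observed peak of 14

This doesn't identify a root cause, but it suggests the drop in IOPS can be a consequence of each operation taking longer rather than of a drop in demand.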