I'm trying to get the average of a Prometheus metric (kafka_commit_latency) per Kubernetes pod. My approach was to take the sum of kafka_commit_latency and divide it by the number of Kubernetes pods for my application, so here are the expressions I derived and the overall expression:
Sum of desired metric (kafka commit latencies) across my application: sum(kafka_consumer_commit_latency_avg_seconds{application="my_app"})
Number of Kubernetes pods for my application:
sum(node_namespace_pod:kube_pod_info:{pod=~".*my_app.*"})
Overall expression:
sum(kafka_consumer_commit_latency_avg_seconds{application="my_app"})/sum(node_namespace_pod:kube_pod_info:{pod=~".*my_app.*"})
The main issue here is that the two resulting vectors don't have any labels in common, so how can this division be made?
For binary operators, you can control vector matching with modifiers. In your case, it would be on() with an empty label list, since you want to disregard all labels.
sum(kafka...) / ON() sum(node_...)
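Putting the two expressions from the question together with that modifier, the full query would be:
sum(kafka_consumer_commit_latency_avg_seconds{application="my_app"}) / on() sum(node_namespace_pod:kube_pod_info:{pod=~".*my_app.*"})
Both sides collapse to a single, label-less sample after the sum, so the division matches without needing any shared labels.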
I want to monitor the CPU usage of the Kafka container, but the graph is chopped up into different pieces. There seem to be gaps in the graph, and after each gap a different colored line follows. The time range is the last 30 days. For the exporter we use danielqsj/kafka-exporter:v1.4.2.
The PromQL query used to create this graph is:
rate(container_cpu_usage_seconds_total{container="cp-kafka-broker"}[1m])
Can I merge these lines into one continuous line? If so, with what PromQL expression or dashboard configuration?
This happens when at least one of the labels attached to the metric changes. The rate function keeps all the original labels from the underlying time series. In Prometheus, each time series is uniquely identified by the metric name (container_cpu_usage_seconds_total) plus any labels (key-value pairs) attached to it (container, for instance). That is why Grafana uses different colors: they are different time series.
If you want to get a single series in Grafana you can aggregate using the sum operator:
sum(rate(container_cpu_usage_seconds_total{container="cp-kafka-broker"}[1m]))
which by default will not keep any of the original labels.
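If you would rather keep one line per pod instead of a single total, you can keep selected labels in the aggregation. A minimal sketch, assuming the metric carries a pod label (as the cAdvisor metrics scraped from the kubelet usually do):
sum by (pod) (rate(container_cpu_usage_seconds_total{container="cp-kafka-broker"}[1m]))
This still merges series that only differ in labels such as id or image, but keeps a separate line for every pod.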
I'm logging custom metric data into AWS CloudWatch and trying to graph it. I assumed that dimensions in CloudWatch were metadata for enriching my data, but it seems that once you add dimensions you can no longer query across different combinations of dimensions. So for one thing, I don't really see the point of dimensions, as any unique combination is basically just a new metric. But more importantly, is there a way to log one set of data with different labels or dimensions and then slice and dice that data (e.g., in Grafana)?
To make it more concrete, I am logging cache load times in my application. I have one metric called "cache-miss", with several dimensions, for example:
the cached collection
the customer associated with the cached data
I want several different graphs:
Total cache misses (i.e., ignore dimensions, just see a count over time)
Total cache misses per collection (aggregate by first dimension)
Total cache misses per customer (aggregate by second dimension)
Is there some way to achieve this with Cloudwatch metrics and/or Grafana (or alternate tool)?
As you have mentioned - https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html :
CloudWatch treats each unique combination of dimensions as a separate metric, even if the metrics have the same metric name. You can only retrieve statistics using combinations of dimensions that you specifically published. When you retrieve statistics, specify the same values for the namespace, metric name, and dimension parameters that were used when the metrics were created.
So if you have pushed cache misses with 2 dimensions, you can query that metric only with those 2 dimensions. That means you really can't just see a count over time.
Possible workarounds:
CloudWatch metric math - see the example in "CloudWatch does not aggregate across dimensions for your custom metrics" (a sketch follows after this list)
in theory also the Grafana 7+ transformations feature: https://grafana.com/blog/2020/06/11/new-in-grafana-7.0-data-transformations-for-all-visualizations-that-support-queries/
Or you can switch from CloudWatch to a TSDB that is a better fit for your use case.
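As a sketch of the metric-math workaround (the namespace MyApp and the dimension names Collection and Customer are assumptions here, substitute whatever you actually publish): a SEARCH expression pulls in every published dimension combination of the metric, and SUM collapses them into one series, which gives the "total cache misses regardless of dimensions" graph:
SUM(SEARCH('{MyApp,Collection,Customer} MetricName="cache-miss"', 'Sum', 300))
For the per-collection and per-customer views, Grafana's CloudWatch data source also accepts a * wildcard as a dimension value, which plots one series per value of that dimension.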
We have configured the HPA to use 2 metrics:
CPU utilization
App-specific custom metrics
When testing, we observed scaling happening, but the calculation of the number of replicas is not very clear. I am not able to locate any documentation on this.
Questions:
Can someone point to documentation or code on the calculation part?
Is it a good practice to use multiple metrics for scaling?
Thanks in Advance!
From https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work
If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen. If any of those metrics cannot be converted into a desired replica count (e.g. due to an error fetching the metrics from the metrics APIs), scaling is skipped.
Finally, just before the HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window, choosing the highest recommendation from within that window. This value can be configured using the --horizontal-pod-autoscaler-downscale-stabilization flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly fluctuating metric values.
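For reference, the per-metric calculation mentioned above is the standard formula from that same page:
desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
With CPU utilization and a custom metric both configured, the formula is evaluated once per metric and the larger desiredReplicas is the one that gets applied.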
Currently we have a data streaming pipeline: API call -> Google Pub/Sub -> BigQuery. The number of API calls depends on the traffic on the website.
We created a Kubernetes deployment (in GKE) for ingesting data from Pub/Sub into BigQuery. This deployment has a Horizontal Pod Autoscaler (HPA) with metricName: pubsub.googleapis.com|subscription|num_undelivered_messages and targetValue: "5000". This setup is able to autoscale when traffic increases suddenly. However, it causes spiky scaling.
What I meant by spiky is as follows:
1. The number of unacked messages goes up past the target value
2. The autoscaler increases the number of pods
3. The number of unacked messages slowly decreases, but since it is still above the target value, the autoscaler keeps increasing the number of pods --> this happens until we hit the max number of pods in the autoscaler
4. The number of unacked messages decreases until it goes below the target, and then it stays very low
5. The autoscaler reduces the number of pods to the minimum
6. The number of unacked messages increases again, the same situation as (1) repeats, and it goes into a loop/cycle of spikes
Here is the chart when it goes spiky (the traffic is going up, but it is stable and non-spiky):
The spiky number of unacknowledged messages in Pub/Sub
We set an alarm in Stackdriver for when the number of unacknowledged messages is more than 20k, and in this situation it gets triggered frequently.
Is there a way so that the HPA become more stable (non-spiky) in this case?
Any comment, suggestion, or answer is well appreciated.
Thanks!
I've been dealing with the same behavior. What I ended up doing is smoothing num_undelivered_messages using a moving average. I set up a k8s cron job that publishes the average of the last 20 minutes of time series data to a custom metric every minute, and then configured the HPA to respond to the custom metric.
This worked pretty well, but not perfectly. I observed that as soon as the average converges on the actual value, the HPA scales the service down too low. So I ended up just adding a constant, so the custom metric is average + constant. I found that for my specific case a value of 25,000 worked well.
With this, and after dialing in the targetAverageValue, the autoscaling has been very stable.
I'm not sure if this is due to a defect or just the nature of the num_undelivered_messages metric at very high loads.
Edit:
I used the Stackdriver/monitoring Golang packages. There is a straightforward way to aggregate the time series data; see 'Aggregating data' at https://cloud.google.com/monitoring/custom-metrics/reading-metrics
https://cloud.google.com/monitoring/custom-metrics/creating-metrics
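For reference, a minimal sketch of the aggregation step, assuming the current cloud.google.com/go/monitoring client library and a hypothetical project ID (the write-back of average + constant via CreateTimeSeries is left out):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/durationpb"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ctx := context.Background()

	// Client for the Cloud Monitoring (formerly Stackdriver) API.
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	now := time.Now()
	req := &monitoringpb.ListTimeSeriesRequest{
		Name: "projects/my-project", // hypothetical project ID
		// In practice you would also narrow this down to your subscription
		// via a resource label in the filter.
		Filter: `metric.type="pubsub.googleapis.com/subscription/num_undelivered_messages"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-20 * time.Minute)),
			EndTime:   timestamppb.New(now),
		},
		// Collapse the last 20 minutes into a single mean value per series.
		Aggregation: &monitoringpb.Aggregation{
			AlignmentPeriod:  durationpb.New(20 * time.Minute),
			PerSeriesAligner: monitoringpb.Aggregation_ALIGN_MEAN,
		},
	}

	it := client.ListTimeSeries(ctx, req)
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range ts.GetPoints() {
			// ALIGN_MEAN produces double values; the real cron job would add
			// the constant here and publish the result as a custom metric.
			fmt.Println("20-minute average:", p.GetValue().GetDoubleValue())
		}
	}
}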
Use case:
I have 10 Kubernetes nodes (consider them as VMs), each of which has between 7 and 14 allocatable CPU cores that can be requested by Kubernetes pods. Therefore I'd like to show a table which shows:
Allocatable CPU cores
The requested CPU cores
The ratio of requested / allocatable CPU cores
grouped by node.
The problem
Creating the table for the first 2 requirements was easy. I simply created a table in Grafana and added these two metrics:
sum(kube_pod_container_resource_requests_cpu_cores) by (node)
sum(kube_node_status_allocatable_cpu_cores) by (node)
However, I was struggling with the third one. I tried this query, but apparently it didn't return any data:
sum(kube_pod_container_resource_requests_cpu_cores / kube_node_status_allocatable_cpu_cores) by (node)
Question
How can I achieve a calculation of two different metrics in a group by statement in my given example?
The issue here is that the two metrics have different labels, so you need to aggregate away the extra ones:
sum by (node)(kube_pod_container_resource_requests_cpu_cores)
/
sum by (node)(kube_node_status_allocatable_cpu_cores)
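If you also want the third column (the ratio) shown as a percentage, you can scale the same expression:
100 * sum by (node)(kube_pod_container_resource_requests_cpu_cores) / sum by (node)(kube_node_status_allocatable_cpu_cores)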