I have a service metric that returns either some positive value, or 0 in case of failure.
I want to count how many seconds my service was failing during some time period.
E.g. the expression:
service_metric_name == 0
gives me a dashed line in Grafana:
[screenshot: line of downtime]
Is there any way to count how many seconds my service was down for the last 2 hours?
I assume the service is either 0 for being down or 1 for being up.
In this case you can calculate an average over a time range. If the result is 0.9, your service has been up for 90% of the time. Averaged over an hour, that would be 6 minutes of downtime out of 60.
avg_over_time(service_metric_name[1h])
This will be a moving average, that is: when your service is down, the value will slowly decrease; when your service is up, it will slowly increase again.
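If you want the actual number of seconds of downtime rather than an uptime ratio, you can multiply the downtime fraction by the window length. A sketch, assuming the metric is 1 when up and 0 when down (the second variant handles "any positive value" when up; the 1m subquery resolution is an assumption you should match to your scrape interval):

# Downtime in seconds over the last 2 hours, assuming a 0/1 metric:
(1 - avg_over_time(service_metric_name[2h])) * 2 * 3600

# If the metric can be any positive value when up, turn it into a 0/1 series first with a subquery:
(1 - avg_over_time((service_metric_name > bool 0)[2h:1m])) * 2 * 3600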
My problem is as follows: I would like to create a graph of the percentage use of boxes over 24 hours. However, the box.utilization() function is cumulative, so I tried to work around this by creating a dataset that collects the value every hour, plus an event that resets the utilization so that each hour is not affected by the previous hour's utilization.
(I attach a picture of the graph I created).
Is there a more efficient way?
I have faced the same issue before. Here is how I handled it:
Instead of cumulative utilization, I calculate the maximum hourly utilization. That is, I record the number of seized resources every minute and get an array of 60 elements, then divide the maximum number in that array by the total number of resources available. An example:
I have 100 machines
During an hour, a maximum of 60 of them were busy
60 / 100 = 60% maximum utilization during that hour
Then I plot these for each hour.
We have a Kafka Streams application with 3 pods. Scaling the application is a heavy operation for us (because of large state), so I would like to scale up a pod only if it is absolutely necessary; for example, if the application's utilization stays above some number for, let's say, 10 minutes.
Again, I don't need to scale my application up or down for a sudden burst (a few seconds) of messages.
I'm looking for a configuration something like:
window: 15 mins
average cpu: 1000 milli
So I would like to scale the application if the average CPU over a 15-minute window is greater than 1000 milli.
You can take a look at HPA scaling policies.
There is stabilizationWindowSeconds:
StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long).
And the target CPU average utilization can be set in the metric target object under averageUtilization.
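As a rough sketch (the names, replica counts and thresholds below are assumptions, not taken from the question), an autoscaling/v2 HorizontalPodAutoscaler combining both pieces could look like this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-streams-app            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-streams-app          # hypothetical target deployment
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80     # percent of requested CPU; for a raw value like 1000m,
                                     # use type: AverageValue with averageValue: 1000m instead
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 600   # don't scale up for bursts shorter than ~10 minutes
    scaleDown:
      stabilizationWindowSeconds: 900   # wait 15 minutes of lower usage before scaling down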
Below is a chart I have in Grafana:
My problem is that if my chosen time range is, say, 5 minutes, the graph won't show only what happened in the last 5 minutes. So in the picture, nothing happened in the past 5 minutes, but it's still showing the last points it has. How can I change this so that it goes back to zero if nothing has changed? I'm using a Prometheus counter for this, if that is relevant.
As explained in the Prometheus documentation, a counter value in itself is not of much use. It depends on when your job was last restarted and everything that happened since.
What's interesting about a counter is how much it changed over some period of time. I.e. either the average rate of change per second (e.g. 3 queries per second) or the increase over some time range (e.g. 10K queries in the last hour).
So instead of graphing something like e.g. http_requests, you should graph rate(http_requests[1m]) (the average per-second rate of requests over the previous 1 minute) or increase(http_requests[1h]) (the total number of requests over the past hour). You can play with the range size until you get something which makes sense for your data, but make sure to use a range of at least 2x your scrape interval (and ideally more, as Prometheus is somewhat daft in the way it computes rates/increases).
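For example, using http_requests purely as a placeholder name for your counter, the panel query could be either of these:

# Average per-second request rate over the previous minute; drops back to 0 when nothing happens:
rate(http_requests[1m])

# Or: total number of requests over the past hour:
increase(http_requests[1h])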
I visualized Prometheus histogram buckets as a heatmap with Grafana; the picture below shows the query and the resulting graph. How should I interpret this?
According to my attacker, I sent exactly 300 requests in total during that period, but when I sum up the numbers in the graph above I can never get exactly 300.
Also, those numbers seem to fluctuate as time elapses. How should I interpret this graph in a meaningful way?
And if I want those numbers to be the exact request counts that fall into each of those buckets in that time window, what should I do?
Oh, and for the X-Axis Mode I chose Series, and for the Value I chose Current.
There are real reasons why you can't always get a precise rate/increase value out of Prometheus. One of them is failed scrapes, i.e. every now and then a scrape will fail or time out due to a slow service, slow Prometheus or network issue.
The other reason is the fact that collected samples are never exactly scrape_interval apart: there will always be a few milliseconds or seconds of delay here and there. So (to take an extreme example) how can you tell the precise increase over the past 1 minute if you only have 2 samples 63 seconds apart? Is it the difference between the two values? Is it that difference adjusted to 60 seconds (i.e. / 63 * 60)?
That being said, Prometheus further boxes itself into a corner by only looking at samples falling strictly within the requested time range. To explain myself: how would a reasonable person calculate the increase of a counter over the last 30 minutes? They would likely take the value of said counter now and the value 30 minutes ago and subtract them. I.e. in PromQL terms (adjusting for counter resets where necessary):
request_duration_bucket - request_duration_bucket offset 30m
What Prometheus does instead (assuming a scrape_interval of 1m and an ideal timeseries with samples spaced exactly 1m apart) is essentially this:
(request_duration_bucket - request_duration_bucket offset 29m) / 29 * 30
I.e. it takes the increase over 29 minutes and extrapolates it to 30, purely because of self-imposed limitations, nothing to do with the nature of the problem at hand.
Note that this works fine with counters that increase smoothly and continuously. E.g. if you have a counter that increases by 500 every minute, then taking the increase over 29 minutes and extrapolating to 30 is exactly correct. But for anything that increases in fits and starts (which is most real-life counters) it will either slightly overestimate the increase if it occurs during the 29 minutes it actually samples (by exactly 1/29) or seriously underestimate it (if the increase occurs in the 1 minute not included in the sampling). This gets worse if you compute a rate/increase over a range covering fewer samples. E.g. if your range only covers 5 samples on average, the overestimate will be 25%, i.e. 1 / (5 - 1), and each of your increases will totally disappear 1 minute out of every 5.
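To make that concrete, here is the same arithmetic written out for a hypothetical counter my_counter that jumps by 100 once in a while and is scraped every minute (following the simplified model above):

# increase(my_counter[5m]) sees ~5 samples spanning only 4 minutes, then extrapolates by 5/4:
#   - if the jump of 100 falls inside the 4 sampled minutes, the result is 100 / 4 * 5 = 125 (25% too high)
#   - if the jump falls in the minute not covered by the samples, the result is 0 (missed entirely)
increase(my_counter[5m])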
The only way I've found to work around this limitation is (again, assuming a scrape_interval of 1m) to reverse engineer Prometheus' extrapolation:
increase(request_duration_bucket[31m]) / 31 * 30
But this requires you to be aware of your scrape_interval and adjust for it, and it is very brittle (if you ever change your scrape_interval, all your careful tweaking goes to hell).
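The same workaround generalizes to other intervals: extend the range by one scrape interval, then scale the result back down. E.g. with a hypothetical 15s scrape interval and a 5 minute window:

# 315 seconds sampled, scaled back to the 300 seconds actually asked for:
increase(request_duration_bucket[5m15s]) / 315 * 300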
Or, if you are OK with your increase falling to zero every time an instance is restarted:
clamp_min(request_duration_bucket - request_duration_bucket offset 30m, 0)
I do have a proposed patch to Prometheus to add xrate/xincrease functions that behave more as you would expect them to (and as described above), but it doesn't look very likely to be accepted: https://github.com/prometheus/prometheus/issues/3806
Grafana / InfluxDB: I have a boolean value (0 or 1). I want to know how many minutes the value was 1 during a period I wish to select (several days). Can it be done? Thanks.
It will be tricky to calculate this in InfluxDB itself. However, Grafana has a 3rd-party Discrete panel where you can calculate it at the app/panel level. The panel calculation runs in the browser, so it can be slow if you have many datapoints in the selected time period.