How to correctly scrape and query metrics in Prometheus every hour - Grafana

I would like Prometheus to scrape metrics every hour and display these hourly scrape events in a table in a Grafana dashboard. I have the global scrape interval set to 1h in the prometheus.yml file. From the Prometheus visualizer, it seems like Prometheus scrapes around the 43-minute mark of every hour. However, it also seems like this data is only valid for about 3 minutes: Prometheus graph
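(For reference, the relevant part of a prometheus.yml for this setup would look roughly like the sketch below; the job name and target are placeholders, not the actual configuration.)
global:
  scrape_interval: 1h                 # scrape every hour, as described above
scrape_configs:
  - job_name: 'hourly-metrics'        # placeholder job name
    static_configs:
      - targets: ['localhost:9100']   # placeholder target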
My situation, then, is this: In a Grafana table, I set the min step of a query on this metric to 1h, but this causes the table to say that there are no data points. However, if I set the min step to 5 minutes, it displays the hourly scrape events with a timestamp on the 45 minute mark. My guess as to why this happens is that Prometheus starts on the dot of some hour and steps either forward or backward by the min step.
This does achieve what I would like to do, but it also has the potential for incorrect behavior if Prometheus ever does something like what can be seen at the beginning of the earlier graph. I also know that I can add a time shift, but it seems like it is always relative to the current time rather than an absolute time.
Is it possible to increase the amount of time that the scrape data is valid in Prometheus without having to scrape again every 3 minutes? Or maybe tell Prometheus to scrape at the 00 minute mark of every hour? Or if not, then can I add a relative time shift to the table so that it goes from the 45 minute mark instead of the 00 minute mark?
On a side note, in the above Prometheus graph, the irregular data was scraped after Prometheus was started. I had started Prometheus around 18:30 on the 22nd, but Prometheus didn't scrape until 23:30, and then it scraped at different intervals until it stabilized around 2:43 on the 23rd. Does anybody know why?

Your data disappears because of the staleness strategy implemented in Prometheus. Once a sample has been ingested, the metric is considered stale after 5 minutes. I didn't find any configuration to change that value.
Scraping every hour is not really the philosophy of Prometheus. If you really need to scrape at such a low frequency, it could be a better idea to schedule a job that sends the data to a Pushgateway, or writes a .prom file fed to the node exporter's textfile collector (if that makes sense for your setup). You can then scrape this endpoint every 1-2 minutes.
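If you go the Pushgateway route, the scrape side could look roughly like the following sketch (the job name, target, and interval are assumptions, adjust to your setup):
scrape_configs:
  - job_name: 'pushgateway'             # assumed job name
    honor_labels: true                  # keep the labels pushed by the batch job
    scrape_interval: 1m                 # scrape the gateway every minute or two
    static_configs:
      - targets: ['pushgateway:9091']   # 9091 is the default Pushgateway port; host is assumed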
You could also roll your own exporter that memorizes the last scrape and scrapes anew only if the data is more than one hour old. (That's the solution I would prefer.)
Now, as a quick solution, you can request the data over the last hour and average over it. That way, the last (old) scrape is taken into account:
avg_over_time(old_metric[1h])
It should work, though it may show some transient incorrect values if there is jitter in the scheduling of the scrapes.
Regarding the issues you had with late scraping, I suspect the scrapes failed at those times. Prometheus retries only at the next scheduled scrape (1h in your case).

If the metric is scraped at intervals exceeding 5 minutes, then Prometheus returns gaps to Grafana because of the staleness mechanism. These gaps can be filled with the last raw sample value by wrapping the queried time series in the last_over_time function. Just specify a lookbehind window in square brackets that equals or exceeds the interval between samples. For example, the following query would fill gaps for the my_gauge time series with a one-hour interval between samples:
last_over_time(my_gauge[1h])
See these docs for the time duration format that can be used in square brackets.

Related

Prometheus alerting expression for a metric that increases once a day

I have a custom counter metric that should increase once a day during a specific time period, and I have to create an alerting rule that fires if this metric doesn't increase.
Infrastructure: Kubernetes
So for my case, I used the following alerting expression:
increase(my_custom_metric[1d]) == 0
But there seem to be some issues with using it. Suppose the pod restarted and the counter reset; after a couple of hours the counter increased by one, but the value of the expression above still remains 0. What expression should be used to account for such situations?

Uptime of K8s Service over a period of time - Prometheus?

What is the simplest way to find out the availability of a K8s service over a period of time, let's say 24h? Should I target a pod, or find a way to calculate service reachability?
I'd recommend not approaching it from a binary perspective (is it up or down?) but from a "how long does it take to serve requests" perspective. In other words, phrase your availability in terms of SLOs. You can get very nice automatically generated SLO-based alert rules from PromTools. One concrete example rule from there, showing the PromQL part:
1 - (
sum(rate(http_request_duration_seconds_bucket{job="prometheus",le="0.10000000000000001",code!~"5.."}[30m]))
/
sum(rate(http_request_duration_seconds_count{job="prometheus"}[30m]))
)
The above captures the ratio of responses that were non-500 (not server errors, that is, assumed good responses) and served in less than 100ms, to overall responses over the last 30 minutes, with http_request_duration_seconds being the histogram capturing the distribution of your service's requests.
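As a rough sketch only (the alert name, threshold, and duration below are assumptions, not output of PromTools), such an expression can be dropped into an alerting rule file so Prometheus fires when too many responses are slow or erroneous:
groups:
  - name: slo.rules                          # assumed group name
    rules:
      - alert: TooManySlowOrErrorResponses   # hypothetical alert name
        expr: |
          1 - (
              sum(rate(http_request_duration_seconds_bucket{job="prometheus",le="0.10000000000000001",code!~"5.."}[30m]))
            /
              sum(rate(http_request_duration_seconds_count{job="prometheus"}[30m]))
          ) > 0.01   # assumed target: no more than 1% slow or failed responses
        for: 5m
        labels:
          severity: warning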

Storage aggregation is not combining like I would expect

I'm not getting expected results with some metrics I am tracking in Graphite and displaying in Grafana.
For metrics like:
bitbucket.commits-per-user.username1.count
bitbucket.commits-per-user.username2.count
I have a retention policy like:
[default_bitbucket]
pattern = ^bitbucket\.
retentions = 1m:30d,1h:2y
I am pulling the data from an API, summarizing by the minute the commit occurred for the user, and adding it with a timestamp of that minute (rounded down to the whole minute).
The storage-aggregation policy I am using is this:
[count_bitbucket]
pattern = ^bitbucket.*\.count$
xFilesFactor = 0
aggregationMethod = sum
I would expect that, once the timeframe exceeds 30 days and I run the metric through the function
summarize(1d,sum,true)
I would see the hourly commit counts summed into daily totals for whatever time period. However, it seems to be reporting significantly less per day once I move beyond 30 days.
Is there anything I am doing obviously wrong?
Could there be a problem if I don't add metrics for zeros on minutes when there are no commits?
I really appreciate any guidance - I'm fairly new to Graphite.

PromQL Requests per minute

I'm trying to create a graph of total POST requests per minute, but there's this "ramp up" pattern that leads me to believe that I'm not getting the actual total of requests per minute, but an accumulated value instead.
Here is my query:
sum_over_time(django_http_responses_total_by_status_view_method_total{job="django-prod-app", method="POST", view="twitch_webhooks"}[1m])
Here are the "ramp up" patterns over 7 days (drop-offs indicating a reboot):
What leads me to believe my understanding of sum_over_time() is incorrect is that the existing webhooks should always exist. At the time of the most recent reboot we had 72k webhook subscriptions, so it doesn't make sense for the value to climb over time; it would make more sense to see a large spike at the start to catch the webhooks that were not captured during downtime.
Is this query correct for what I'm trying to achieve?
I am using django-prometheus for exporting.
You want increase rather than sum_over_time, as this is a counter.
If the django_http_responses_total_by_status_view_method_total metric is a counter, then the increase() function must be used to return the number of requests during the last minute:
increase(django_http_responses_total_by_status_view_method_total[1m])
Note that the increase() function in Prometheus can return fractional results even if the django_http_responses_total_by_status_view_method_total metric contains only integer values. This is due to implementation details - see this comment and this article for details.
If the django_http_responses_total_by_status_view_method_total metric is a gauge showing the number of requests since the previous sample, then the sum_over_time() function must be used to return the number of requests over the last minute:
sum_over_time(django_http_responses_total_by_status_view_method_total[1m])

Select Prometheus alerts newer than a given time

I am working with Grafana, trying to show a list of pods that are triggering a custom Prometheus alert.
This query does the trick:
sum(ALERTS{alertname="myCustomAlert"}) BY (pod_name)
The problem is, it lists all the alerts and doesn't seem affected if I change the time interval to show only the ones fired in the last 5 minutes or the last hour.
Is there any way to limit the alert list in time? Lots of thanks!!
That expression will produce the number of alerts by pod_name firing at the current time (just as you would expect up{instance="foo"} to tell you whether instance foo is up now, whether you're looking at a dashboard that shows the last 5 minutes or the last hour).
If you want to see the values change over time, you could e.g. graph them; then you'd see when the alert started and stopped firing for each pod.
And if you want the value at some past time, simply set the end time of the Grafana dashboard range to that time. (E.g. if your dashboard was showing the time range between 2 PM and 3 PM on January 1st, then your query would return the alerts firing at 3 PM on January 1st.)
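If instead you want a table of the pods whose alert was firing at any point during a window (rather than only at the instant at the end of the range), you can use a lookbehind window with max_over_time; a sketch, assuming the same alert name and a one-hour window:
sum by (pod_name) (max_over_time(ALERTS{alertname="myCustomAlert", alertstate="firing"}[1h]))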