Grafana data shown by hour

I'm using Grafana and I want to see which hours are best for performing operations. So I want to sum the requests and show the number of requests per hour over, let's say, the last week. That is: how many requests there were from 9:00 to 10:00, regardless of the day of the week (and the same for every other hour).
My backend is Elasticsearch, but I can gather the information from Prometheus too.
Does anyone know a way to display this data?
The Grafana version I'm using is 7.0.3.
EDIT
I found a possible solution by adding the plugin for hourly heatmaps.
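For the Elasticsearch backend, a sketch of a query that buckets requests by hour of day over the last week (the index, the `@timestamp` field name, and the time range are assumptions, so adjust to your mapping):

```json
{
  "size": 0,
  "query": {
    "range": { "@timestamp": { "gte": "now-7d/d" } }
  },
  "aggs": {
    "by_hour_of_day": {
      "terms": {
        "script": {
          "source": "doc['@timestamp'].value.getHour()",
          "lang": "painless"
        },
        "size": 24,
        "order": { "_key": "asc" }
      }
    }
  }
}
```

Each bucket's `doc_count` is the number of requests that occurred during that hour of day, summed across all seven days.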

Related

What is the criteria used by Grafana's alertmanager to start evaluating a rule?

I have a data source that ingests new data within the first 30 minutes of every hour. The rules need to run such that once there is new data, they evaluate and fire if the threshold is exceeded; so, roughly, at the 45th minute of every hour.
We are not able to figure out how to do that. Also, on what basis/database column does Grafana decide when to start evaluating? I went through the Grafana Postgres database; it has the tables alert_rule and alert_instance, among others.
alert_rule has a column called updated. Is that the basis?
alert_instance has a column called last_eval_time. How is this time decided?
Grafana version: 9.2.2
Current configuration: Evaluate every 1h for 0s.
They are all firing around the 31st minute of the hour. I want to understand on what basis this is happening.
Also, if there is a data point that got populated at the 25th minute, and the rule fired at the 31st minute, will this new data point be part of the calculation?
How does Grafana behave when there is no data point available for a defined time window? For example, consider an alert rule configured to compare two different time windows: if the data for one of the windows is not found in the data source, does Grafana look for the last available data point and pick that? We have been observing some inconsistencies around this. Our understanding is that in this case the rule should not fire.
Thanks!

PostgreSQL delete and aggregate data periodically

I'm developing a sensor monitoring application using Thingsboard CE and PostgreSQL.
Context:
We collect data every second, such that we can have a real time view of the sensors measurements.
This, however, is very storage-intensive and serves no requirement other than enabling real-time monitoring. For example, there is no need to check measurements made last week at such granularity (1-second intervals), hence no need to keep such large volumes of data occupying resources. The average value for every 5 minutes would be perfectly fine when consulting the history of values from previous days.
Question:
This poses the question of how to delete existing rows from the database while aggregating the data being deleted, inserting a new row that averages the deleted data over a given interval. For example, I would like to keep raw data (measurements every second) for the present day and aggregated data (an average every 5 minutes) for the present month, etc.
What would be the best course of action to tackle this problem?
I checked whether PostgreSQL had anything resembling this functionality but didn't find anything. My main idea is to use a cron job to periodically perform the aggregations/deletions from raw data to aggregated data. Can anyone think of a better option? I very much welcome any suggestions and input.
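As a sketch of the cron-job approach (the table and column names here are assumptions, not your actual schema), one transaction can insert the 5-minute averages and then delete the raw rows it covered:

```sql
-- Assumed schema: raw readings in sensor_raw(ts timestamptz, sensor_id int, value double precision),
-- aggregates in sensor_5min(bucket timestamptz, sensor_id int, avg_value double precision).
BEGIN;

INSERT INTO sensor_5min (bucket, sensor_id, avg_value)
SELECT date_trunc('hour', ts)
         + floor(extract(minute FROM ts) / 5) * interval '5 minutes' AS bucket,
       sensor_id,
       avg(value)
FROM sensor_raw
WHERE ts < date_trunc('day', now())   -- only roll up fully completed days
GROUP BY 1, 2;

DELETE FROM sensor_raw
WHERE ts < date_trunc('day', now());

COMMIT;
```

Running both statements in one transaction keeps the aggregation and the deletion consistent. As an alternative to an OS cron job, the pg_cron extension can schedule this inside the database itself, and TimescaleDB's continuous aggregates and retention policies automate this exact pattern if adding an extension is an option.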

MongoDB Atlas - Understanding timestamps differences between Atlas GUI and the Logs downloaded file

TL;DR
Why is there a difference between the timestamps shown in the MongoDB Atlas Download Logs GUI and those in the downloaded log file?
DETAIL:
I'd like to understand the difference in time (date, hour, minutes) between alert emails, the Atlas GUI, and the time shown in the downloadable logs, as I think it is important to be able to locate an event in the log precisely and without doubts.
I'll use this example to illustrate:
Using MongoDB Atlas, I have a cluster whose region is AWS / N. Virginia (us-east-1).
I've received an email alert that states an issue occurred at 2020/07/22 12:11 EDT.
I'd like to check the logs to analyze the issue, so I go to MongoDB Atlas > Clusters > ... > Download Logs and select the date and time from the email alert as follows:
When I download the log file, I get a range of dates from 2020-07-22T15:57:55.910+0000 to 2020-07-22T16:27:55.825+0000. I'm trying to understand that difference.
I know I could search the logs for 16:11 records instead of 12:11 ones, but I'd like to understand the difference. Why didn't I get 12:00 to 12:30, and got 16:00 to 16:30 instead? Where does that difference come from?
Thanks in advance.
The 4-hour difference is, surely, your timezone offset. I'm guessing the UI shows times in your local time, while the actual logs are in UTC.
The 00:00 vs. 57:55.910 situation has to do with how Atlas logs are handled internally: they are pulled from the nodes every 5 or so minutes, and the pull times are not snapped to a grid; as far as I am aware, they just happen every 5 minutes from the exact time the cluster or node launched. When you download the logs, you get them in the chunks that the internal tooling uses to retrieve them, which are generally not aligned to the times you specify (in the UI or via the Atlas API).

Prometheus and Grafana Hourly Rollup Query

I'm trying to build a graph in Grafana that aggregates a metric over hourly time periods, so there'll be an aggregation for 11-12, 12-1, etc., plus one from the start of the current hour until now.
I've figured out how to aggregate metrics over the last hour of elapsed time (e.g., if it's 12:11 now, it'll aggregate from 11:11 to 12:11), but not the way I just described. Does anyone have any ideas? Is it even possible to do what I'm describing?
I haven't done much work with either of these packages before, so my own knowledge is minimal at best, and I haven't found much in online resources either.
Thanks in advance.
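One way to get calendar-aligned hourly buckets, as a sketch (the metric name `http_requests_total` is an assumption): Grafana aligns the query range to the step interval, so if you force the panel's "Min step" (or `$__interval`) to 1h, each evaluation point lands on an hour boundary and `increase(...[1h])` covers exactly one wall-clock hour:

```promql
# With the panel's Min step set to 1h, each point is the
# request count for one calendar hour (11-12, 12-1, ...).
sum(increase(http_requests_total[1h]))
```

The last point of the series then reflects the partial, still-running hour. Whether the alignment behaves this way can depend on the Grafana version, so it's worth verifying against the raw query Grafana sends (visible in the query inspector).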

Grafana summarize sum resolution

Grafana 3.1.1 with a whisper backend.
I am trying to use summarize to keep track of how many minutes during the day one of my parameters is set to 1.
Here is the statement that I am using:
summarize(thermostat.living_room.heatstate, '1d', 'sum', false)
It always displays a much smaller total than expected. To troubleshoot, I changed the statement to:
summarize(thermostat.living_room.heatstate, '1h', 'sum', false)
If my time range is 6 hours or less, summarize works as expected. If the time range is 12 hours or more, the total reported by summarize is much smaller than expected.
It appears to me that if I try to sum over a 1-day period, it is not retrieving all of the data points for the day, so my total is much smaller than expected. I have not found any information on the Grafana site about how to change this behavior.
Thanks,
Louis
It's not a Grafana problem; you should set up your retention and aggregation in Graphite properly. Please check the Graphite documentation or that summary.
I think you need to keep your points unaggregated for at least 1 day, so if you're sending your temperature once per minute, you should have retention = 1m:1d, ...
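For example, storage-schemas.conf and storage-aggregation.conf entries along these lines (the patterns and retention periods here are assumptions, so adapt them to your metric tree) keep one-minute points for a full day and make the rollups sum rather than average:

```ini
# storage-schemas.conf: raw 1-minute points for 1 day,
# then 5-minute points for 30 days, then 1-hour points for a year.
[thermostat]
pattern = ^thermostat\.
retentions = 1m:1d,5m:30d,1h:1y

# storage-aggregation.conf: when rolling up, sum the 0/1 heat-state
# points instead of averaging them, so the minute counts survive.
[heatstate]
pattern = heatstate$
xFilesFactor = 0
aggregationMethod = sum
```

Note that changing these files only affects newly created metrics; existing .wsp files keep their old settings until you resize them with whisper-resize.py.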