Grafana does not show graphs for longer time periods

We are using Grafana to visualise some timing measurements from another application. I get a data point every 5 min.
I also get a nice graph if I only visualise the last 24 or 48 h.
For longer time ranges no graph is shown.
I researched a little and found that the database stores a data point for every minute, which means I get one value and four NULLs every 5 minutes. For time ranges longer than 48 h, Grafana starts to consolidate the values and ends up with only NULL values.
Here are two screenshots which show my problem: one with a 24 h time range (the graph renders) and one with a 7-day time range (no graph is shown).
Are there any settings I can change to avoid this behaviour?
Thank you for your help.

Are you using Graphite? If so, please make sure you have configured xFilesFactor correctly.
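For reference, xFilesFactor lives in Graphite's storage-aggregation.conf. A minimal sketch (the section name and pattern here are placeholders, not from the question):

[default]
pattern = .*
xFilesFactor = 0
aggregationMethod = average

With xFilesFactor = 0, an aggregated point is written even if only one of the underlying higher-resolution points is non-null, which addresses the "one value and four NULLs every 5 minutes" case above. Note the setting only applies to newly created whisper files; existing files keep their old factor unless you update them (e.g. with whisper-set-xfilesfactor.py).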

Related

Don't see all points in Grafana on lower scales

At the lower zoom level I am obviously seeing several outliers, the largest of which is 18211.
If I zoom in, I start to see additional outliers.
Is it possible to configure Grafana to show all points all the time, or to aggregate them differently?
The backend is Graphite.
No, this is not possible, due to space limitations.
For example: suppose you have 60 slots and you want to fill them with numbers.
If the time period is one hour, then each of these slots displays the metric stored for one minute.
But if you shrink the interval to one minute, each of these slots displays the metric stored for one second.
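That said, with Graphite you can at least choose which value survives when many points are squeezed into one slot. A sketch using consolidateBy (my.metric is a placeholder path); picking 'max' keeps the largest value in each slot, so outliers stay visible at wider zoom levels instead of being averaged away:

consolidateBy(my.metric, 'max')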

InfluxDB - ignore partial intervals in group by

I feel this is a problem all users of InfluxDB/Grafana would encounter. Any time I create a graph that shows aggregations by a time interval, the most recent and oldest intervals are cut short and the ends of the graph show incorrect values.

For example, I have data coming in every 10 seconds, so I should get 360 values per hour. I wanted to create a graph showing the number of data points that come in per hour. So I have the query below, which does a count by hour, and I run it over a 24-hour period. The problem is that the most recent interval is almost always less than 360 because it's not complete, and the oldest interval is usually cut off, so it too shows too low a value.

This is pretty much always an issue for any graph I create that is grouped by a time interval. Is there a way to just leave out incomplete intervals? I'm happy with a solution in either Influx or Grafana.
SELECT count("wifiStrength") FROM "detailed_data"."water" WHERE $timeFilter GROUP BY time(1h) fill(null)
For anyone who is curious, the data is from a water meter and logs water usage.
Use smarter time ranges in Grafana, so that full hours are selected. See the docs; the /h rounding is important here, e.g.:
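A sketch of what that looks like in Grafana's time picker (the /h suffix rounds the boundary to a whole hour):

From: now-24h/h
To:   now/h

Depending on how Grafana rounds the To boundary, you may want now-1h/h there instead, so the still-incomplete current hour is dropped entirely.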

Compare values from now and 2 hours ago and show the difference

As can be seen from the chart below, we have a sensor that is recording pressure. The pressure has dropped 2 points within 2 hours. If it drops 2 points within 2 hours, this causes us some problems. I would like to create a query that compares the value now with the value 2 hours in the past and displays the difference. How can I achieve this in the Influx query language?
You are best off using a derivative; depending on the exact response you want, use either derivative or non_negative_derivative to see the rate of change:
InfluxDB Functions - Derivative
You'd set the unit to 2h (it defaults to 1s).
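A sketch of such a query (the measurement name "sensors" and field "pressure" are assumptions based on the chart, not given in the question). With GROUP BY time, derivative needs an inner aggregate, and the 2h unit makes each value the change per two hours:

SELECT derivative(mean("pressure"), 2h) FROM "sensors" WHERE $timeFilter GROUP BY time(1h) fill(null)

Use non_negative_derivative instead if negative rates should be discarded; for detecting a pressure drop, though, the plain derivative going to -2 is exactly the signal you want.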
Thanks

How to do a distinct count of a metric using graphite datasource in grafana?

I have a metric that shows the state of a server. The values are integers: if the value is 0 (zero) the server is stable, otherwise it is unstable. The graph we have is at a one-minute level. So, I want to show an aggregated value that tells me how many hours the server was unstable in the selected time range.
Let's say, if I select "Last 7 days" as the time duration... we should get X hours of server instability.
And one more thing: I have a line graph (time series graph) that shows the state of the server... but when I select "Last 24 hours" or "48 hours" I get the graph at a one-minute level, and when I increase the duration to a quarter I get the graph at every 5 min or something like that... I understand it's aggregating the values... but does anybody know how Grafana is doing the aggregation?
I have tried the scaleToSeconds and consolidateBy functions and many more to first get the count of non-zero-value minutes, but with no success.
Any help would be greatly appreciated.
Thanks in advance.
There are a few different ways to tackle this, there are 2 places that aggregation happens in this situation:
When you query for a time range longer than your raw retention interval and whisper returns aggregated data. The aggregation method used here is defined in your carbon aggregation configuration.
When Grafana sends a query to Graphite it passes maxDataPoints=<width of graph in pixels>, and Graphite will perform aggregation to return at most that many points (because you don't have enough pixels to render more points than that). The method used for this consolidation is controlled by the consolidateBy function.
It is possible for both of these to be used in the same query. If, for example, you have a panel that queries 3 days' worth of data, and you store 2 days at 1-minute and 7 days at 5-minute intervals in whisper, then you'd get 72 * 60 / 5 = 864 points from the 5-minute archive; but if your graph is only 500px wide, at runtime that would be consolidated down to 10-minute intervals and return 432 points.
So, if you want to always have access to the count then you can change your carbon configuration to use sum aggregation for those series (and remove the existing whisper files so new ones are created with the new aggregation config), and pass consolidateBy('sum') in your queries, and you'll always get the sum back for each interval.
That said, you can also address this at query time by multiplying the average back out to get a total (assuming that your whisper aggregation config is using average). The simplest way to do that will be to summarize the data with average into buckets that match the longest aggregation interval you'll be querying, then scale those values by that interval to calculate the total number of minutes. Finally, you'll want to use consolidateBy('sum') so that any runtime consolidation will work properly.
consolidateBy(scale(summarize(my.series, '10min', 'avg'), 60), 'sum')
With all of that said, you may want to consider reporting uptime in terms of percentages rather than raw minutes, in which case you can use the raw averages directly.
When you say the value is zero (0), the server is healthy - what other values are reported while the server is unhealthy/unstable? If you're only reporting zero (healthy) or one (unhealthy), for example, then you could use the sumSeries function to get a count across multiple servers.
Some more information is needed here about the types of values the server is reporting in order to give you a better answer.
Grafana does aggregate - or consolidate - data, typically using the average aggregation function. You can override this with the 'sum' aggregation in the consolidateBy function.
To get a running calculation over time, you would most likely have to use the summarize function (also with the sum aggregation) and define the time period, e.g. 1 hour, 1 day, 1 week, and so on. You could take this a step further by combining this with a time template variable so that as the period grows/shrinks, the summarize period will increase/decrease accordingly.
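A sketch of that combination, assuming an interval-type template variable named $interval and a 0/1 unstable-flag series at my.series (both placeholders); summing the buckets counts the unstable minutes per period, and the outer consolidateBy keeps any runtime consolidation honest:

consolidateBy(summarize(my.series, '$interval', 'sum'), 'sum')

As the user picks a larger $interval, each summarized bucket covers more time, so the panel stays readable over long ranges.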

reset chart to 0 in grafana

Below is a chart I have in grafana:
My problem is that if my chosen time range is, say, 5 minutes, the graph won't show only what happened in the last 5 minutes. So in the picture, nothing happened in the past 5 minutes, but it's just showing the last points it has. How can I change this so that it goes back to zero if nothing has changed? I'm using a Prometheus counter for this, if that is relevant.
As explained in the Prometheus documentation, a counter value in itself is not of much use. It depends on when your job was last restarted and everything that happened since.
What's interesting about a counter is how much it changed over some period of time. I.e. either the average rate of change per second (e.g. 3 queries per second) or the increase over some time range (e.g. 10K queries in the last hour).
So instead of graphing something like e.g. http_requests, you should graph rate(http_requests[1m]) (the average number of requests per second over the previous 1 minute) or increase(http_requests[1h]) (the total number of requests over the past hour). You can play with the range size until you get something which makes sense for your data. But make sure to use a range at least 2x your scrape interval (and ideally more, as Prometheus is somewhat daft in the way it computes rates/increases).
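As a concrete sketch, the two panel queries would look like this (http_requests as in the prose above; pick the range to suit your scrape interval):

rate(http_requests[1m])
increase(http_requests[1h])

With rate(), the value drops back to zero as soon as the counter stops increasing, which also gives you the "reset to 0 when nothing happens" behaviour the question asks for.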