Compare values from now and 2 hours ago and show the difference - grafana

As can be seen from the chart below, we have a sensor that is recording pressure. The pressure has dropped 2 points within 2 hours, and a drop of 2 points within 2 hours causes us problems. I would like to create a query that compares the value now with the value 2 hours in the past and displays the difference. How can I achieve this in the Influx query language?

You are best off using a derivative; depending on the exact response you want, use either derivative() or non_negative_derivative() to see the rate of change:
InfluxDB Functions - Derivative
You'd set the unit to 2h (it defaults to 1s).
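As a rough sketch in InfluxQL (the measurement and field names here are assumptions, not taken from your setup), the query could look something like:
SELECT DERIVATIVE(MEAN("pressure"), 2h) FROM "sensor_readings" WHERE time > now() - 12h GROUP BY time(10m)
A value of -2 or lower in the result would roughly correspond to the 2-point-per-2-hours drop you want to catch.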
Thanks

Related

How to do a distinct count of a metric using graphite datasource in grafana?

I have a metric that shows the state of a server. The values are integers: if the value is 0 (zero) the server is stable, otherwise it is unstable. The graph we have is at a one-minute level. So, I want to show an aggregated value that tells me how many hours the server was unstable in the selected time range.
Let's say I select "Last 7 days" as the time duration; I should then get X hours of server instability.
One more thing: I have a line graph (time series graph) that shows the state of the server, but when I select "Last 24 hours" or "Last 48 hours" I get the graph at a one-minute level, and when I increase the duration to a quarter I get a point every 5 minutes or so. I understand it's aggregating the values, but does anybody know how Grafana does that aggregation?
I have tried the scaleToSeconds and consolidateBy functions, among many others, to first get the count of non-zero-value minutes, but with no success.
Any help would be greatly appreciated.
Thanks in advance.
There are a few different ways to tackle this. There are two places where aggregation happens in this situation:
When you query for a time range longer than your raw retention interval and whisper returns aggregated data. The aggregation method used here is defined in your carbon aggregation configuration.
When Grafana sends a query to Graphite it passes maxDataPoints=<width of graph in pixels>, and Graphite will perform aggregation to return at most that many points (because you don't have enough pixels to render more points than that). The method used for this consolidation is controlled by the consolidateBy function.
It is possible for both of these to be used in the same query. If, for example, you have a panel that queries 3 days' worth of data and you store 2 days at 1-minute and 7 days at 5-minute intervals in whisper, then you'd have 72 * 60 / 5 = 864 points from the 5-minute archive in whisper; but if your graph is only 500px wide, then at runtime that would be consolidated down to 10-minute intervals and return 432 points.
So, if you want to always have access to the count then you can change your carbon configuration to use sum aggregation for those series (and remove the existing whisper files so new ones are created with the new aggregation config), and pass consolidateBy('sum') in your queries, and you'll always get the sum back for each interval.
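For reference, a minimal sketch of that carbon change in storage-aggregation.conf (the section name and pattern are placeholders; match them to your actual series):
[server_state_sum]
pattern = \.server_state$
xFilesFactor = 0
aggregationMethod = sum
With sum as the aggregation method, each point in the coarser archive holds the total of the underlying 1-minute points instead of their average.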
That said, you can also address this at query time by multiplying the average back out to get a total (assuming that your whisper aggregation config is using average). The simplest way to do that will be to summarize the data with average into buckets that match the longest aggregation interval you'll be querying, then scale those values by that interval to calculate the total number of minutes. Finally, you'll want to use consolidateBy('sum') so that any runtime consolidation will work properly.
consolidateBy(scale(summarize(my.series, '10min', 'avg'), 60), 'sum')
With all of that said, you may want to consider reporting uptime in terms of percentages rather than raw minutes, in which case you can use the raw averages directly.
When you say the value is zero (0), the server is healthy - what other values are reported while the server is unhealthy/unstable? If you're only reporting zero (healthy) or one (unhealthy), for example, then you could use the sumSeries function to get a count across multiple servers.
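For example, assuming each server publishes a 0/1 state series under a hypothetical naming scheme like servers.<host>.state, summing across hosts would give the number of unstable servers at each point in time:
sumSeries(servers.*.state)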
Some more information is needed here about the types of values the server is reporting in order to give you a better answer.
Grafana does aggregate - or consolidate - data typically by using the average aggregation function. You can override this using the 'sum' aggregation in the consolidateBy function.
To get a running calculation over time, you would most likely have to use the summarize function (also with the sum aggregation) and define the time period, e.g. 1 hour, 1 day, 1 week, and so on. You could take this a step further by combining this with a time template variable so that as the period grows/shrinks, the summarize period will increase/decrease accordingly.
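A rough sketch of that combination (my.series is a placeholder, and $summarize_interval stands for a custom template variable you would define with values like 1h, 1d, 1w):
consolidateBy(summarize(my.series, '$summarize_interval', 'sum'), 'sum')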

reset chart to 0 in grafana

Below is a chart I have in grafana:
My problem is that if my chosen time range is, say, 5 minutes, the graph won't show only what happened in the last 5 minutes. In the picture, nothing happened in the past 5 minutes, so it's just showing the last points it has. How can I change this so that it goes back to zero if nothing has changed? I'm using a Prometheus counter for this, if that is relevant.
As explained in the Prometheus documentation, a counter value in itself is not of much use. It depends on when your job was last restarted and everything that happened since.
What's interesting about a counter is how much it changed over some period of time. I.e. either the average rate of change per second (e.g. 3 queries per second) or the increase over some time range (e.g. 10K queries in the last hour).
So instead of graphing something like e.g. http_requests, you should graph rate(http_requests[1m]) (the average per-second rate of requests over the previous 1 minute) or increase(http_requests[1h]) (the total number of requests over the past hour). You can play with the range size until you get something that makes sense for your data. But make sure to use a range at least 2x your scrape interval (and ideally more, as Prometheus is somewhat daft in the way it computes rates/increases).

How should i interpret this grafana visualized prometheus histogram buckets heatmap?

I visualized Prometheus histogram buckets as a heatmap with Grafana; the picture below shows the query and the resulting graph. How should I interpret this?
According to my attacker, I sent exactly 300 requests in total during that period, but when I sum up the numbers in the graph above I can never get exactly 300,
and the numbers also seem to fluctuate as time elapses. How should I interpret this graph in a meaningful way?
And if I want those numbers to be the exact request counts that land in each bucket in that time window, what should I do?
Oh, for the X-Axis Mode I chose Series and for the Value I chose Current.
There are real reasons why you can't always get a precise rate/increase value out of Prometheus. One of them is failed scrapes, i.e. every now and then a scrape will fail or time out due to a slow service, slow Prometheus or network issue.
The other reason is the fact that collected samples are never exactly scrape_interval apart: there will always be a few milliseconds or seconds of delay here and there. So (to take an extreme example) how can you tell the precise increase over the past 1 minute if you only have 2 samples 63 seconds apart? Is it the difference between the two values? Is it that difference adjusted to 60 seconds (i.e. / 63 * 60)?
That being said, Prometheus further boxes itself into a corner by only looking at samples falling strictly within the requested time range. To explain myself: how would a reasonable person calculate the increase of a counter over the last 30 minutes? They would likely take the value of said counter now and the value 30 minutes ago and subtract them. I.e. in PromQL terms (adjusting for counter resets where necessary):
request_duration_bucket - request_duration_bucket offset 30m
What Prometheus does instead (assuming a scrape_interval of 1m and an ideal timeseries with samples spaced exactly 1m apart) is essentially this:
(request_duration_bucket - request_duration_bucket offset 29m) / 29 * 30
I.e. it takes the increase over 29 minutes and extrapolates it to 30. Because of self-imposed limitations, nothing to do with the nature of the problem at hand.
Note that this works fine with counters that increase smoothly and continuously. E.g. if you have a counter that increases by 500 every minute, then taking the increase over 29 minutes and extrapolating to 30 is exactly correct. But for anything that increases in jumps and fits (which is most real-life counters) it will either slightly overestimate the increase if it occurs during the 29 minutes it actually samples (by exactly 1/29) or seriously underestimate it (if the increase occurs in the 1 minute not included in the sampling). This is even worse if you compute a rate/increase over a range covering fewer samples. E.g. if your range only covers 5 samples on average, the overestimate will be 25%, i.e. 1 / (5 - 1), and (each of) your increases will totally disappear 1 minute out of 5.
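To make the over/underestimate concrete with a hypothetical counter: suppose it jumps by 100 exactly once during a 30-minute window and is otherwise flat. If the jump lands inside the 29 minutes that actually get sampled, increase over that 30m range reports 100 * 30 / 29 ≈ 103.4; if it lands in the one minute outside the sampled span, it reports 0.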
The only way I've found to work around this limitation is (again, assuming a scrape_interval of 1m) to reverse engineer Prometheus' extrapolation:
increase(request_duration_bucket[31m]) / 31 * 30
But this requires you to be aware of your scrape_interval and adjust for it and is very brittle (if you ever change your scrape_interval all your careful tweaking goes to hell).
Or, if you are OK with your increase falling to zero every time an instance is restarted:
clamp_min(request_duration_bucket - request_duration_bucket offset 30m, 0)
I do actually have a proposed patch to Prometheus to add xrate/xincrease functions that actually behave more as you would expect them to (and as described above) but it doesn't look very likely to be accepted: https://github.com/prometheus/prometheus/issues/3806

Is the Rate in the source block a fixed rate?

I have a simple source-to-sink model and I am merely setting the "Rate" to 6 per hour. I would expect a fixed 6 agents to be generated each hour, but it seems that in the first hour, from 0 to 60 min, only 3 agents are generated. Similarly, from 60 to 120 min, only 5 agents were generated.
Is there a warm up period in Anylogic or something like this that explains what is happening?
Another alternative is to just use the interarrival time with a fixed time. This will give you the same results as Felipe's answer, but with one less object, as you will not need the event.
A few important items to note on this approach:
Instead of 6.0, using a parameter would be better. You could call this parameter dArrivalsPerHour. This would make your source block easier to read in the future, and give you some better flexibility. Your interarrival time would be 1.0 / dArrivalsPerHour.
Make sure at least one of the operands is a double. If you write 1/6, Java will actually return 0! This is because in Java dividing two integers returns an integer, so Java just truncates the decimal. If you use a parameter, just set its type to double. To be extra careful against anyone accidentally changing the parameter type to integer in the future, I would still go ahead and use 1.0 (see the small sketch after this list).
AnyLogic does not have an arrival at time zero in this approach. The first arrival would be at 0.166 hours. If you want an arrival at time zero, followed by this pattern (it would still be 6 per hour, just shifting when it starts), then you have a couple of options. First, you can use Felipe's approach and set the first occurrence time to zero. An alternative would be to call inject() on startup, or after you have finished any initialization code your model has.
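A minimal standalone Java sketch of that division pitfall (plain Java, outside AnyLogic):
class DivisionDemo {
    public static void main(String[] args) {
        double wrong = 1 / 6;   // integer division runs first: 1 / 6 == 0, then widened to 0.0
        double right = 1.0 / 6; // one double operand forces floating-point division
        System.out.println(wrong); // 0.0
        System.out.println(right); // 0.16666666666666666
    }
}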
Happy Modeling!
The source block doesn't produce exactly 6 agents per hour; it produces agents using a Poisson process with a mean of 6 per hour (lambda = 6), so the number of agents per hour you get will be random. The reason you always get 3 in the first hour and 5 in the second is that you have a fixed seed:
You can find that option by clicking on your simulation experiment, under the Randomness tab. If you change it to a random seed, it will produce a different number of agents per hour instead of always 3 and 5.
To produce EXACTLY 6 per hour you need to use an event. But first create a source that generates agents through injection:
And the event running 6 times per hour, adding 1 agent to the source:
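A sketch of that event's setup, assuming the Source block is named source: set the event's recurrence to 1.0 / 6.0 hours (i.e. every 10 minutes) and put the following in its Action field:
source.inject(1);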

Grafana does not show graphs for longer time periods

We are using Grafana to visualise some times measured with another application. I get a data point every 5 minutes.
I also get a nice graph if I only visualise the last 24 or 48h.
For longer time ranges, no graph is shown.
I researched a little and found that the database has slots for data points every minute, which means I only get one value and four NULLs every 5 minutes. For time ranges bigger than 48 h, Grafana starts to aggregate the values and ends up with only NULL values.
Here are two pictures which show my problem:
Timerange 24h
Timerange 7 days
Are there some settings I can make to avoid this behaviour?
Thank you for your help
Are you using Graphite? If so, please make sure you have configured xFilesFactor correctly.
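As a sketch, an entry like the following in Graphite's storage-aggregation.conf (the section name and pattern are placeholders) lets whisper aggregate a bucket even when only 1 of the 5 underlying minutes is non-null; with the common default of xFilesFactor = 0.5, a bucket with 4 of 5 points missing is written as NULL, which matches the behaviour you're seeing:
[sparse_timings]
pattern = .*
xFilesFactor = 0
aggregationMethod = average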