Grafana aliases for multiples of $__interval

I want to make a Grafana dashboard with multiple dynamic-interval graphs of the same time series. However, I want the current intervals to be displayed in the graph legends.
Obviously multiplying $__interval doesn't work in the alias field. Is there any way to have the calculation done in the alias?
I've tried naively multiplying it in the alias field and did some googling on dynamic alias calculation, but didn't find anything.

Related

How do I sort this scatter plot?

I would like to sort this scatter plot, which is summarized with a Band that includes Minimum, Average, and Maximum.
I would like to sort it in 2 ways:
by Average
by Widest Range (i.e., the difference between the Minimum and Maximum values)
Tableau Public workbook
If you can't view this, or I'm not allowed to post external resources on Stack Overflow, then perhaps you can show me on this screenshot what I would click to get started on these sorts.
Also, bonus question, is there a way to create a control for the user to toggle between the 2 sort methods in the same chart? Or do I have to duplicate the chart with a different sort type for each?
One note is that I only have the Tableau Public version since I'm evaluating the product. Until I get a paid version, I can't open a workbook file unless you publish it to the Tableau Public cloud. But rather than giving me the workbook as an answer, I would appreciate instructions for doing this, as it's more of a learning exercise.
Thanks!
Somewhat unfortunately, you'll have to replicate the min, avg, and max by creating 3 calculated fields. Tableau cannot operate on the values placed on the view via reference lines.
Those calculations might look something like this:
{Fixed [Cwe]: Min([Cvss Score])}
~
{Fixed [Cwe]: Avg([Cvss Score])}
~
{Fixed [Cwe]: Max([Cvss Score])}
In general, from there, you should pretty easily be able to apply them to the view and sort. Average will be easy. The difference between Min and Max will just need a subtracting calculated field to sort by. Once they're on the view, I'd put them on as dimensions (columns) to verify that the numbers look correct.
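For the range sort, a subtracting field built from the same pattern might look like this (a minimal sketch; the field references simply mirror the LOD calculations above, so adjust them to match your workbook):
// Cvss Range (widest spread per Cwe)
{Fixed [Cwe]: Max([Cvss Score])} - {Fixed [Cwe]: Min([Cvss Score])}
Sorting descending by this field puts the CWEs with the widest bands first.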
Take note that FIXED LOD calculations take place before regular dimension filters, so you'll want to put the Cvss filter you have there on context by right-clicking it and choosing 'Add to Context'.
Here is how I would complete the sorts:
Start with all of the above calculations on 'Rows', ensuring that they are 'Dimensions' (blue).
Right-click [Sub-Category] on 'Rows', choose "Sort...", and select which field to sort by.
From there, the calculated fields can be taken off 'Rows'. (They were only there so you could verify that the sorts took place; they don't actually need to stay.)

Grafana Singlestat Max not matching Graph with same query

I have a Singlestat panel and a Graph panel that use an identical query (Singlestat & Graph query), and the Singlestat is set to Max (Singlestat setting).
Unfortunately, the graph clearly shows a maximum greater than the max singlestat (714 vs ~800): Singlestat vs Graph. Judging from the sparklines on the Singlestat, it seems like the Singlestat's calculations are less granular than the graph's. Can anyone explain why this would be if they're using the same base query? The other singlestat functions (like Min, Avg, etc.) seem to work fine. It's just max that I'm seeing this issue with.
Note: I reviewed the other Grafana Singlestat vs Graph posts, but this appears to be a different issue.
If you take a look at the first image you linked to, you'll notice there is a Min step input with a default value of 5m. That's where your lower resolution comes from. You may set it explicitly to your scrape interval (or less, to make sure you don't lose any samples due to jitter in the scrape interval, although that may end up being costly), but if you increase your dashboard range enough, you'll:
(a) likely end up with a singlestat max value that's higher than anything on the graph (because your graph is now lower resolution than the singlestat source data); and
(b) hit Prometheus' 11K-sample limit if you zoom out to a range longer than 11K times the scrape interval.
Your best bet is to use PromQL to calculate the max value to display in your singlestat panel. You'll still have to deal with (a) above (low resolution graph when the range is long) but it's going to be the actual max (as much as the fact that you're actually sampling values at some fixed interval allows) and it's going to be more efficient.
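For a plain, unaggregated series this can be done in a single query; a minimal sketch, where some_metric is just a placeholder:
max_over_time(some_metric[${__range_s}s])
Run it as an instant query over the dashboard range so the singlestat has exactly one value to display.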
Problem is that given your query -- sum(jvm_thread_count) -- there is no way of putting that into a single PromQL query with max_over_time. You'd have to define a recording rule (something like instance:jvm_thread_count:sum = sum(jvm_thread_count)) and then have your singlestat panel display the result of the max_over_time(instance:jvm_thread_count:sum[${__range_s}s]) instant query (check the Instant checkbox in your singlestat settings).
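A minimal sketch of what that recording rule could look like in a Prometheus rules file (the group name and placement are assumptions; only the record/expr pair comes from the answer above):
groups:
  - name: jvm_aggregation
    rules:
      # Pre-aggregate thread counts across instances so range functions can be applied later
      - record: instance:jvm_thread_count:sum
        expr: sum(jvm_thread_count)
Once Prometheus has been evaluating the rule for long enough to cover your dashboard range, the singlestat can query max_over_time(instance:jvm_thread_count:sum[${__range_s}s]) with the Instant option checked.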

How to calculate the average value in a Prometheus query from Grafana

I was trying to create a Prometheus graph in Grafana, but I can't find the function which calculates the average value.
For example, to create a graph for read_latency, the result contains many tags. If there are 3 machines, there will be 3 separate tags, for machine1, machine2, and machine3. Here is a graph (click to show):
Prometheus
I want to combine these three together, so there will be only one tag, machines, and the value is the average of the three.
It seems that the Prometheus query language doesn't have something like average(), so I am not sure how to do this.
I used to work with InfluxDB, where the graph can look like this (click to show):
InfluxDB
I think you are searching for the avg() operation. See the documentation.
Use the built-in $__interval variable; here node and name are custom labels (depending on your metrics):
sum(avg_over_time(some_metric[$__interval])) by (node, name)
or a fixed value like 1m, 1h, etc.:
sum(avg_over_time(some_metric[1m])) by (node, name)
You can filter using Grafana variables:
sum(avg_over_time(some_metric{cluster=~"$cluster"}[1m])) by (node, name)
Short answer: use the avg() function to return the average value across multiple time series. For example, avg(metric) returns the average value for time series with the metric name.
Long answer: Prometheus provides two functions for calculating the average:
avg_over_time calculates the average over raw samples stored in the database over the lookbehind window specified in square brackets. The average is calculated independently for each matching time series. For example, avg_over_time(metric[1h]) calculates the average over the raw samples from the last hour for each time series with the metric name.
avg calculates the average over multiple time series. The average is calculated independently for each point on the graph.
If you need to calculate the average over raw samples across all the time series that match the given selector, for each time bucket, e.g.:
SELECT
  time_bucket('5 minutes', timestamp) AS t,
  avg(value)
FROM table
GROUP BY t
Then the following PromQL query must be used:
sum(sum_over_time(metric[$__interval])) / sum(count_over_time(metric[$__interval]))
Do not use avg(avg_over_time(metric[$__interval])), since it returns an average of averages, which isn't equal to the real average. See this explanation for details.
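To see why, a quick illustration with made-up numbers: suppose series A has four raw samples of value 1 (average 1) and series B has a single raw sample of value 10 (average 10) in the same window. avg(avg_over_time(...)) reports (1 + 10) / 2 = 5.5, while the real average over all five raw samples is (4 + 10) / 5 = 2.8, which is what the sum/count query above returns.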

Combining several data series to the same value in Grafana

I'm looking for a feature in Grafana that looks like it should be trivial, but until now I haven't been able to find out how, or whether, it is possible.
With the recent templating options, I can easily create my dashboard once, and quickly change the displayed data to look at different subsets of my data, and that's great.
What I'm looking for is a way to combine this functionality to create interactive graphs that show aggregations on different subsets of my data.
E.g., the relevant measurement for me is a "clicks per views" measurement.
For each point in the series, I can calculate this ratio for each state (or node) in code before sending it to the graphite layer, and this is what I've been doing until now.
My problem starts where I want to combine several states together, interactively: I could use the "*" in one of the nodes, and use an aggregate function like "avg" or "sum" to collect the different values covered in the sub-nodes together.
Problem is, I can't just use an average of averages - as the numbers may be calculated on very different sample sizes, the results will be highly inaccurate.
Instead, I'd like to send Graphite the "raw data" - the number of clicks and number of views per state for each point in the series - and have Grafana calculate something like "for the specified states, aggregate the number of clicks AND DIVIDE BY the aggregate number of views".
Is there a way to do this? As far as I can tell, the asPercent function doesn't seem to do the trick.
You can use a query like this in the panel's edit mode:
SELECT (aggregate_function1(number_of_clicks) / aggregate_function2(number_of_views)) AS result
FROM measurement_name
WHERE $timeFilter
GROUP BY time($__interval), state
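For example, if the raw click and view counts are stored as fields, a concrete version might look like this (the field, measurement, and tag names are assumptions about your schema; sum() stands in for the aggregate functions above):
SELECT sum("number_of_clicks") / sum("number_of_views") AS "click_through_rate"
FROM "measurement_name"
WHERE $timeFilter
GROUP BY time($__interval), "state"
Because the clicks and views are summed first and only then divided, states with very different sample sizes no longer skew the ratio the way an average of averages would.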

How can I control how Jasper Reports combines data for a single value in a time series?

I have a time series and I'd like to:
a) Know how Jasper Reports (or JFreeChart) will combine my data for a single point on the chart by default
and
b) Be able to change how that combination is performed
For instance, let's say that I have samples of data once per second, and my time series is configured for "minute". That means that I have 60 pieces of real data for each single value shown on the chart. I'd like to be able to control how that mapping is done (e.g. average, maximum, etc.).
I looked around for documentation on how to see the default behavior or modify how the plot combines values, but I wasn't able to find anything. Perhaps my search terms (chart, time series, etc.) were too generic.