Is there a way to force the output of a Stat panel in Grafana to only show values in days?

I'm building a Grafana dashboard with some Stat panels that show average, minimum, and maximum time values (see below) for specific fields in my database. I store the data in seconds and set the value's units to seconds, after which the panel displays the time in weeks, days, hours, etc. For the sake of consistency, I would like everything to be shown in days, but I haven't been able to find a way to force units for the output value. If this is possible, could someone please point me to some docs, or anything that shows which configurations to make in my panels?
So far, I've tried (without success):
Configure each panel to use units of days
The result of this was that everything showed up in years, etc.
Configure each panel to create a new field by performing a binary calculation that converts seconds to days, then update the units to days
The result was that the values were not changed at all: instead of showing X days, or whatever, the panel just showed the value in seconds without the units. I'm not sure what I messed up there.
I found this link that discusses setting a time range for queries
This didn't end up being useful for what I was trying to do, because it was actually geared toward changing the query to a specific date range rather than formatting the output.
I looked through the transformations documentation, the Stat panel documentation, and a few other panel documentation pages to see if there was any information on how to do this, but I was unable to find anything on forcing the output value to use a specific unit.
Edit:
So I kept messing around with the dashboard and got a solution that works, i.e. a "good enough" solution (see below). But now I'm curious whether it's possible to show the units along with the value without Grafana converting it to some other unit. Does anyone have any ideas about this?
One thing to note is that the data for this image is different from the data for the previous one, so I'm expecting an inexact conversion to days.

You can use a custom unit. It is a bit tricky to enter the unit in the UI because of the automatic selection, but if you enter e.g. "days" (without the quotes) in the Unit field and then, instead of leaving the field with Tab or a mouse click, use the scrollbar of the combobox to select the last entry, "Custom Unit: days", the value is displayed with that unit and no conversion.
Hope it helps. And for the record: I am using Grafana 7.1.4.
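For reference, here is a rough sketch of how this ends up in the panel JSON, combining the seconds-to-days calculation with the custom unit. The calculateField schema and the field names are illustrative and may differ between Grafana versions; some versions also persist a custom suffix unit as "suffix:days" rather than "days":
{
  "transformations": [
    {
      "id": "calculateField",
      "options": {
        "mode": "binary",
        "alias": "duration_days",
        "binary": { "left": "duration_seconds", "operator": "/", "right": "86400" }
      }
    }
  ],
  "fieldConfig": {
    "defaults": { "unit": "days" }
  }
}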

Related

How do we change the "precision:ms" setting in the Grafana Query Inspector?

I have an InfluxDB database with only 11 data points in it. These data points are not displaying correctly in Grafana (or at least not as I would expect) when the time between them is shorter than 1 ms.
If I insert data points 1 ms apart, then everything works as expected and I see all 11 points at the correct times, as shown below:
However, if I delete these points and upload new ones, this time one point per 100 μs, then although the data displays correctly in InfluxDB, in Grafana I see only two points in my graph:
It seems like the data is being rounded/binned to the nearest millisecond, and that this is related to the "precision=ms" setting in the query here:
but I cannot find any way to change this setting. What is the correct way to fix this?
You can't configure Grafana to use a different time precision for InfluxDB. It is hardcoded in the source code: https://github.com/grafana/grafana/blob/36fd746c5df1438f27aa33fc74b24be77debc7ff/public/app/plugins/datasource/influxdb/datasource.ts#L364 (It may need to be fixed in multiple places in the source, not only this one.)
So the correct way to fix it is to code it, which is of course not in the scope of this question.
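For what it's worth, the InfluxDB 1.x HTTP API itself accepts an epoch parameter, so you can at least confirm the sub-millisecond timestamps by querying InfluxDB directly and bypassing Grafana; a minimal sketch in Python (database and measurement names are placeholders):
import requests

# Ask InfluxDB for microsecond timestamps (epoch=u) instead of the
# millisecond precision (epoch=ms) that Grafana hardcodes.
resp = requests.get(
    "http://localhost:8086/query",
    params={
        "db": "mydb",                         # placeholder database
        "q": "SELECT * FROM my_measurement",  # placeholder measurement
        "epoch": "u",                         # microseconds since the epoch
    },
)
print(resp.json())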

AlphaVantage: Random data in downloaded adjusted time series

I download adjusted time series from AlphaVantage using the following call (you need to insert your own API key):
https://www.alphavantage.co/query?function=TIME_SERIES_daily_adjusted&symbol=^GDAXI&outputsize=full&apikey=yourAPIkey
Next, I look at one particular (and faulty) data point at date 2003-04-18:
"5. adjusted close": "766464.0000"
Then I reload the exact same API call and check the same data point again. This time, however, there is a different value for adjusted close! Every time I reload, I get a different value (and always a wrong one, too). Why is this happening, and how do I fix this wrong data?
For those who come across the same problem with AlphaVantage data, I'll try to answer my own question.
The random data problem only occurs on some (not all) non-trading days. For example, the date above is Good Friday in 2003. I have written a function that filters out all non-trading days from the downloaded AlphaVantage data, and that "fixed" the problem of the random-data days.
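For illustration, a minimal sketch of such a filter, assuming the downloaded series has been loaded into a pandas Series indexed by date. The calendar below (US federal holidays plus Good Friday) is only a stand-in; for a symbol like ^GDAXI you would substitute the exchange's real trading calendar:
import pandas as pd
from pandas.tseries.holiday import (
    AbstractHolidayCalendar,
    GoodFriday,
    USFederalHolidayCalendar,
)

class ApproxMarketCalendar(AbstractHolidayCalendar):
    # Stand-in exchange calendar: US federal holidays plus Good Friday
    # (the holiday behind the faulty 2003-04-18 data point above).
    rules = USFederalHolidayCalendar.rules + [GoodFriday]

def drop_non_trading_days(series: pd.Series) -> pd.Series:
    # Keep only weekdays that are not (approximate) market holidays.
    idx = pd.to_datetime(series.index)
    holidays = ApproxMarketCalendar().holidays(idx.min(), idx.max())
    mask = (idx.dayofweek < 5) & ~idx.isin(holidays)
    return series[mask]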

How to add a custom value in a Grafana legend?

There is a graph displaying an Elasticsearch index count; see below.
I want to add a value, diff = max - min, to the legend. How can I implement this?
I'm pretty sure you can't, easily. You can hack your way around it by adding yet another query to your graph, something like
max_over_time(my_metric[[[__range_s]]s]) - min_over_time(my_metric[[[__range_s]]s])
Grafana will replace the [[__range_s]] bit with the length of the time range of the current dashboard, e.g. 3600 for the default 1h, so the query actually sent to Prometheus will be
max_over_time(my_metric[3600s]) - min_over_time(my_metric[3600s])
Meaning Prometheus will compute the difference between the max and min separately from Grafana (which does it on top of the samples returned by Prometheus). (It will also compute this difference for the whole time range, not just the most recent sample, which is what you're interested in.) Then you can tweak the display of said time series in Grafana (e.g. by setting line=0, fill=0) so it will not show up on the graph itself, only in the legend. But the legend will then display the current value of the difference, as well as its min, max, avg, which will be quite the crappy UX.
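For reference, hiding that extra series from the graph while keeping it in the legend is done with a series override; a sketch of the relevant graph panel JSON (the alias "diff" is made up, and option names may vary between Grafana versions):
"seriesOverrides": [
  { "alias": "diff", "lines": false, "fill": 0 }
]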
Edit: Or you can add said query to a separate panel (e.g. a table panel), to the right of your graph. That may let you better control the UX, although it still won't be part of the actual legend.
Edit 2: One final thing you could try, that would give you exactly what you want, is to tweak Grafana's graph panel to add a "range" value next to "min", "max" and the bunch. The source code is here; I'm pretty sure it's mostly a copy-pasta job. You likely wouldn't even have to rebuild all of Grafana; you could just package the modified panel as a "Tweaked Graph Panel" plugin and drop it into your Grafana deployment's plugins folder. Then, in your dashboard, instead of using "Graph Panel", use "Tweaked Graph Panel".

Can you calculate active users using time series

My Atomist client exposes metrics on commands that are run. Each command is a metric with a username element as well as a status element.
I've been scraping this data for months without resetting the counts.
My requirement is to show the number of active users over a time period, i.e. 1h, 1d, 7d, and 30d, in Grafana.
The original query was:
count(count({Username=~".+"}) by (Username))
This is an issue because I don't clear the metrics, so it's always a count since inception.
I then tried this:
count(
  max_over_time(help_command{job="Application Name",Username=~".+"}[1w])
  -
  max_over_time(help_command{job="Application Name",Username=~".+"}[1w] offset 1w)
  > 0
)
This works, but only for one command; I have about 50 other commands that need to be added to that count.
I then tried:
{__name__=~".+_command",job="app name"}[1w] offset 1w
but this is obviously very expensive (it times out in the browser), and it has issues with max_over_time, which doesn't support it.
Any help? Am I using the metrics in the wrong way? Is there a better way to query? My only option at the moment is the count (in the working format above) for each command.
Thanks in advance.
To start, I will point out a number of issues with your approach.
First, the Prometheus documentation recommends against using arbitrarily large sets of values for labels (as your usernames are). As you can see (based on your experience with the query timing out) they're not entirely wrong to advise against it.
Second, Prometheus may not be the right tool for analytics (such as active users). Partly due to the above, partly because it is inherently limited by the fact that it samples the metrics (which does not appear to be an issue in your case, but may turn out to be).
Third, you collect separate metrics per command (e.g. help_command, foo_command) instead of a single metric with the command name as a label (e.g. command_usage{command="help"}, command_usage{command="foo"}).
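For illustration, instrumenting it that way might look like the following sketch, using the Python prometheus_client (the metric and label names are made up, and per the first point above, the username label still carries a cardinality risk):
from prometheus_client import Counter

# One counter with the command name as a label, instead of a separate
# <name>_command metric per command.
command_usage = Counter(
    "command_usage_total",
    "Number of times each command was run",
    ["command", "username", "status"],
)

command_usage.labels(command="help", username="alice", status="success").inc()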
To get back to your question though, you don't need max_over_time; you can simply write your query as:
count by(__name__)(
  (
    {__name__=~".+_command",job="Application Name"}
    -
    {__name__=~".+_command",job="Application Name"} offset 1w
  ) > 0
)
This only works, though, because you say that whatever exports the counts never resets them. If that is simply because the exporter has never restarted, and the counts will drop to zero when it does, then you'd need to use increase instead of subtraction, and you'd run into the exact same performance issues as with max_over_time:
count by(__name__)(
  increase({__name__=~".+_command",job="Application Name"}[1w]) > 0
)

Most performant way to implement a time-dependent status

Central to a project I'm working on is a highlighting mechanic that can be applied to certain items on the website. The idea is that this highlighted status is only active for a certain amount of time.
I'm trying to find the most performant way to achieve this (in querying, setting the status, checking the status, and revoking it).
A first approach would be to simply set a value 'highlighted: true' on the item. This seems to be the most performant way to query for highlighted items. The drawback I see here is that a date for the highlighting action also needs to be stored, and furthermore an interval job needs to run to check the highlighted items and potentially revoke their highlighted status. Also, the exact moment when an item stops being highlighted can't be determined exactly, since it depends on the interval of the check function.
A second approach would be to mainly store the date of the highlighting action and run the query against it. Querying for highlighted objects this way seems much less performant, since every item ever created is checked, and on top of that it's not just a boolean check but a proper function juggling date values to decide whether the highlight is still valid. On the upside, no external cleanup function is necessary and every highlighting period ends exactly on time.
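As a concrete illustration of this second approach, here is a minimal sketch using SQLite and Unix timestamps (table and column names are made up); storing the precomputed expiry moment and indexing it keeps the "currently highlighted" query a cheap range scan rather than a full-table check:
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, highlighted_until REAL)")
conn.execute("CREATE INDEX idx_highlighted_until ON items (highlighted_until)")

# Highlight item 1 for 24 hours: store the expiry moment, not a boolean.
conn.execute("INSERT INTO items VALUES (1, ?)", (time.time() + 24 * 3600,))

# No cleanup job needed: an item stops matching the instant it expires.
rows = conn.execute(
    "SELECT id FROM items WHERE highlighted_until > ?", (time.time(),)
).fetchall()
print(rows)  # [(1,)]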
Would love to have your input on this. Is there maybe a clever pattern for this?