Show the best / MAX unit count over a 15 minute rolling interval - Tableau - tableau-api

I'm trying to show the best unit count over a 15-minute rolling interval at a specific level of detail (PLC & Point) that will act as a KPI. I think I'm on the right path, but I'm currently getting the "an aggregate function is already an aggregation" error, and I can't find either a better way to do the calculation or a workaround for the error.
I have created a calculated field called 'Rolling 15 Mins' to work out the rolling 15-minute sum of the counts, and I display it alongside each minute of the test window (see the screenshot and 'Sheet 1' of the Google Drive doc) using
WINDOW_SUM(SUM([Unit Count]),-15,0)
Rolling 15 min screenshot / sheet 1
Using the 'Rolling 15 Mins' calculation, I've tried to show the best or MAX rolling 15-minute count at the PLC & Point level with an LOD, so that each point's best 15-minute count over a test period is clearly visible. But this is where I'm getting the error, which I now know is due to the hierarchy of Tableau calculations (table calculations like WINDOW_SUM can't be used inside LOD expressions), and I can't figure out another workaround.
{ FIXED [PLC New],[PLC & Point (Test Windows)],DATEPART('hour', [Time]),DATEPART('minute', [Time]) : MAX([Rolling 15 Mins]) }
The screenshot from 'Sheet 2', below the 'Rolling 15 Mins' one, currently displays the sum of the unit counts over the last 15 minutes for each PLC & Point, but this is the level at which I would like to display the best / MAX 15-minute unit count over the test period.
The level I'd like to display the MAX 15 mins at / sheet 2
Any assistance with this would be much appreciated. Thanks in advance.
Link to Example File (.twbx)
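For reference, a common workaround for this kind of error (a hedged sketch of a standard approach, not from the original post) is to nest table calculations instead of wrapping a table calculation in an LOD, which Tableau does not allow:
// Hypothetical nested table calculation: the best (MAX) rolling
// 15-minute sum. Set Compute Using to the minute of [Time] and
// partition by PLC & Point so each point gets its own maximum.
WINDOW_MAX(WINDOW_SUM(SUM([Unit Count]), -15, 0))
Because both window functions are table calculations, they sit at the same level in Tableau's order of operations, so the "already an aggregation" error does not occur.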

Related

Influxdb - ignore partial intervals in group by

I feel this is a problem all users of InfluxDB/Grafana would encounter. Any time I create a graph that shows aggregations by a time interval, the most recent and oldest intervals are cut short and the ends of the graph show incorrect values. For example, I have data coming in every 10 seconds, so I should get 360 values per hour. I wanted to create a graph showing the number of data points that come in per hour, so I have the query below, which does a count by hour, and I run it over a 24-hour period. The problem is that the most recent interval is almost always less than 360 because it's not complete, and the oldest interval is usually cut off, so it too shows too low a value. This is pretty much always an issue for any graph I create that is grouped by a time interval. Is there a way to just leave out incomplete intervals? I'm happy with a solution in either InfluxDB or Grafana.
SELECT count("wifiStrength") FROM "detailed_data"."water" WHERE $timeFilter GROUP BY time(1h) fill(null)
For anyone who is curious, the data is from a water meter and logs water usage.
Use smarter time ranges in Grafana, so that full hours are selected. See the time range documentation; the /h rounding is the important part here, e.g.:
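For example, to snap the dashboard window to whole hours, the time picker can be set like this (a sketch of Grafana's relative time syntax):
From: now-24h/h
To: now/h
The trailing /h rounds each timestamp down to the start of the hour, so the query window only covers complete hourly buckets and the in-progress hour is left out.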

How to get three measures in a single chart?

I want three measures in a single chart.
Time spent by an employee in the office. (Available hours)
Time spent by an employee on productive tools. (Productive Hours)
Percentage of time spent productively. (PH : AH %)
I have used dual axis for 1 and 2 as 2 will always be less than 1.
I want it to look something like this
The problem is that when I use a blended axis it does not show the "tools hours" as a part of "available hours". So, for example, if an employee was available for 5 hours, of which he worked on productive tools for 3 hours, the blended axis will show the total available hours to be 8.
Are your available hours always greater than your tools hours?
If so, you should use a calculated field that holds "available hours" minus tools hours, and stack it with the tools hours; see the sketch below.
It can also be solved by transforming your data source input.
PS: Yes, you can make formulas (calculated fields) across joins and blendings.
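A minimal sketch of that calculated field, assuming the two measures are called [Available Hours] and [Productive Hours] (placeholder names, not from the original post):
// Hypothetical field "Non-Productive Hours": the remainder of the
// available time, so stacking it on top of [Productive Hours]
// totals back to the available hours.
[Available Hours] - [Productive Hours]
With [Productive Hours] and this field stacked on a shared Measure Values axis, the employee in the example shows a 5-hour bar (3 productive + 2 non-productive) instead of an 8-hour one.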

Can I calculate a moving sum on a field in InfluxDB?

I'm trying to understand if it's possible to calculate a 1 month sum of revenue data in one of my measurements. For each day, I would like the sum of the previous 30 days.
Is this possible in InfluxDB or through Grafana's query interface?
A moving average is a moving sum, divided by the number of samples. So if you want a moving sum of the past 30 values:
select 30*moving_average(field_name, 30) from measurement
Edited to add:
As Peter Halicky points out in the comments, this is not the past 30 days. It's the past 30 data points.
If you will always have data for every single day, it's not an issue.
If you're missing a day's data, you'll still get a 30-sample average, but it'll stretch over 31 days instead of 30.
If you don't actually care about the calendar, but want to know the past 30 days of activity, this is not a problem.
If it is a problem, there are a few work-arounds. One that's probably trickier than it sounds: ensure that there is always an entry for each day.
A more robust way is to have the reporting app do this in two steps. Something like this (I haven't worked out all the details, but you get the idea):
Find the number of data points in the past 30 days, using a query like select count(field_name) from measurement where time > now() - 30d.
Use this number (call it n) to form the query: select n*moving_average(field_name, n) from measurement where time > now() - 30d.
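For instance (a sketch with placeholder names and a made-up count; "revenue" and "daily_revenue" are not from the original question), if the first query returns 28 points, the second step becomes:
select 28 * moving_average("revenue", 28) from "daily_revenue" where time > now() - 30d
This multiplies the 28-sample moving average back out into a moving sum covering exactly the points that fall inside the 30-day window.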
Yes, it's definitely possible.
Just set this part of your query like this:
SELECT sum("value") FROM "YOUR_MEASUREMENT_NAME"
WHERE $timeFilter GROUP BY time(30d) fill(null)
Just make sure that your dashboard time range includes the last 30 days (at least).

How to do a distinct count of a metric using graphite datasource in grafana?

I have a metric that shows the state of a server. The values are integers; if the value is 0 (zero) the server is stable, otherwise it is unstable. The graph we have is at a minute level. So, I want to show an aggregated value telling me how many hours the server was unstable in the selected time range.
Let's say I select "Last 7 days" as the time duration; I should then get X hours of server instability.
And one more thing: I have a line graph (time series) that shows the state of the server, but when I select "Last 24 hours" or "48 hours" I get the graph at a minute level, and when I increase the duration to a quarter I get a point every 5 minutes or so. I understand it's aggregating the values, but does anybody know how Grafana does the aggregation?
I have tried the scaleToSeconds and consolidateBy functions, and many more, to first get the count of non-zero-value minutes, but with no success.
Any help would be greatly appreciated.
Thanks in advance.
There are a few different ways to tackle this; aggregation happens in two places in this situation:
When you query for a time range longer than your raw retention interval and whisper returns aggregated data. The aggregation method used here is defined in your carbon aggregation configuration.
When Grafana sends a query to Graphite it passes maxDataPoints=<width of graph in pixels>, and Graphite will perform aggregation to return at most that many points (because you don't have enough pixels to render more points than that). The method used for this consolidation is controlled by the consolidateBy function.
It is possible for both of these to apply to the same query: if, for example, you have a panel that queries 3 days' worth of data and you store 2 days at 1-minute and 7 days at 5-minute intervals in Whisper, then you'd have 72 * 60 / 5 = 864 points from the 5-minute archive; but if your graph is only 500px wide, at runtime that would be consolidated down to 10-minute intervals and return 432 points.
So, if you want to always have access to the count then you can change your carbon configuration to use sum aggregation for those series (and remove the existing whisper files so new ones are created with the new aggregation config), and pass consolidateBy('sum') in your queries, and you'll always get the sum back for each interval.
That said, you can also address this at query time by multiplying the average back out to get a total (assuming that your whisper aggregation config is using average). The simplest way to do that will be to summarize the data with average into buckets that match the longest aggregation interval you'll be querying, then scale those values by that interval to calculate the total number of minutes. Finally, you'll want to use consolidateBy('sum') so that any runtime consolidation will work properly.
consolidateBy(scale(summarize(my.series, '10min', 'avg'), 60), 'sum')
With all of that said, you may want to consider reporting uptime in terms of percentages rather than raw minutes, in which case you can use the raw averages directly.
When you say the value is zero (0), the server is healthy - what other values are reported while the server is unhealthy/unstable? If you're only reporting zero (healthy) or one (unhealthy), for example, then you could use the sumSeries function to get a count across multiple servers.
Some more information is needed here about the types of values the server is reporting in order to give you a better answer.
Grafana does aggregate - or consolidate - data, typically using the average aggregation function. You can override this by passing 'sum' to the consolidateBy function.
To get a running calculation over time, you would most likely have to use the summarize function (also with the sum aggregation) and define the time period, e.g. 1 hour, 1 day, 1 week, and so on. You could take this a step further by combining this with a time template variable so that as the period grows/shrinks, the summarize period will increase/decrease accordingly.
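A sketch of what that could look like as a Graphite target, following the approach described above (the metric path and the $summarize template variable are placeholders, not from the original question):
consolidateBy(summarize(servers.myserver.state, '$summarize', 'sum'), 'sum')
Here summarize adds up the per-minute values inside each $summarize bucket, and consolidateBy('sum') keeps any further runtime consolidation from averaging those totals away.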

Averaging across a fixed calculation

I am building an overview graph that gives insight into session length in connection with churn.
I am looking to build a graph depicting the average time spent in an app on a per week basis relative to the signup date of all users.
I think I am almost there. I was able to create a graph that shows exactly that, except that I have a summed-up version across all users rather than an average.
If I change the row value from SUM to AVG in my graph, the problem I then face is that it takes the average over only the users that were active on that day. Instead, I want inactive users to be counted as 0 as well, thereby decreasing the average value (reflecting the churn aspect of the graph).
days since signup calculation: DATE(([event_timestamp] - [sign_up_date]) /(1000 * 1000 * 60 * 60 * 24))
session length per user: { FIXED [user_pseudo_id], [days since signup] : SUM([engagement_time_msec])}
I expect something in the area of 17k ms as the peak instead of 900M (an average instead of a sum).
my current graph
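One way to get that (a hedged sketch, not from the original post; it assumes every signed-up user should count in the denominator, active that day or not) is to divide the summed session length by the total distinct user count instead of taking AVG, so users with no sessions on a given day effectively contribute 0:
// Hypothetical measure: average engagement across ALL users,
// treating users with no activity on a given day as 0.
SUM([session length per user]) / MIN({ FIXED : COUNTD([user_pseudo_id]) })
The MIN() around the table-scoped LOD simply turns the constant user count into an aggregate so the division between two aggregates is valid.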