Grafana: how to choose which column for the axis in a timechart?

I am using Grafana to run a log-to-metric query against an Azure Data Explorer data source, which gives me a result like this as a table:
This comes from this query:
Log
| where $__timeFilter(TIMESTAMP)
| where eventId == 666
| summarize count() by bin(TIMESTAMP, 15m), Region
| order by TIMESTAMP asc
When rendered as a timechart in AppInsights, it looks perfect:
However, in Grafana this perplexingly renders only the Count_ column, ignoring the obvious Region breakout field:
My goal is to get an AppInsights-like timechart with multiple data series within Grafana.

I found my answer! It turns out I was rendering the data as a Table, using the Grafana query wizard here.
Once I changed that to Time series, it all just worked!
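For reference, the Azure Data Explorer plugin's Time series format expects (as far as I understand it) a datetime column for the x-axis and one or more numeric value columns, while any string columns become separate series. So the original query already has the right shape; Region is what produces the per-region series:

```kusto
Log
| where $__timeFilter(TIMESTAMP)
| where eventId == 666
| summarize count() by bin(TIMESTAMP, 15m), Region
| order by TIMESTAMP asc
// With "Format as: Time series": TIMESTAMP -> x-axis,
// count_ -> y-value, Region -> one series per distinct value.
```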

Related

Grafana / InfluxDB get last creation date and use an associated field to plot a graph

I have what seems like an achievable problem, but I'm struggling with it and running into more complexity than I expected.
I've got data similar to the following
value    creation_date    creation_date_string        environment
175000   1646209830086    2022-03-02T08:30:30.086Z    integration
175000   1644576234409    2022-02-11T10:43:54.409Z    production
175000   1646244958207    2022-03-02T18:15:58.207Z    production
In Influx I want to select the newest (aka latest, aka last) creation_date for each environment and then plot its value.
The best I've come up with is
SELECT value, last(creation_date) FROM measurement WHERE $timeFilter GROUP BY environment
This works, but it also plots creation_date on the graph. I'd like to filter out creation_date and keep only value, since I only need creation_date to select the row.
Is this possible in InfluxDB or via Grafana?
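One untested sketch: InfluxQL supports subqueries (since InfluxDB 1.2), so the last() selector can be consumed inside a subquery and only value projected in the outer query. Measurement and field names are taken from the question; the output field naming may need tweaking:

```sql
SELECT value FROM (
  SELECT last(creation_date) AS creation_date, value
  FROM measurement
  WHERE $timeFilter
  GROUP BY environment
) GROUP BY environment
```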

I can't tell Grafana which time field it should use to build the chart

I have a Postgresql DataSource with the following table:
It's basically logs. All I want is to show on a chart how many successful records (with http_status == 200) I have per hour. Sounds simple, right? I wrote this query:
SELECT
count(http_status) AS "success_total_count_per_hour",
date_trunc('hour', created_at) "log_date"
FROM logs
WHERE
http_status = 200
GROUP BY log_date
ORDER BY log_date
It gives me the following result:
Looks good to me. I'm going ahead and trying to put it into Grafana:
OK, I get it: I have to help Grafana understand which field holds the time.
I go to the Query Builder and see that it breaks my query completely. From that moment on I was completely lost. Here is the Query Builder screen:
How do I explain to Grafana what I want? I just want a simple chart like:
Sorry for the rough picture, but I think you get the idea. Thanks for any help.
Your time column (e.g. created_at) should be of type TIMESTAMP WITH TIME ZONE*
Use a time condition; Grafana has a macro for it, so it's easy, e.g. WHERE $__timeFilter(created_at)
You want hourly grouping, so you need to write the SELECT for that. Again, Grafana has a macro: $__timeGroupAlias(created_at,1h,0)
So final Grafana SQL query (not tested, so it may need some minor tweaks):
SELECT
$__timeGroupAlias(created_at,1h,0),
count(*) AS value,
'success_total_count_per_hour' AS metric
FROM logs
WHERE
$__timeFilter(created_at)
AND http_status = 200
GROUP BY 1
ORDER BY 1
*See Grafana doc: https://grafana.com/docs/grafana/latest/datasources/postgres/
The macros are documented there, including macros for the case where your time column is a UNIX timestamp.
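For illustration only (the real expansions are generated by Grafana and depend on the dashboard's selected time range), the macros above expand to roughly this plain PostgreSQL:

```sql
SELECT
  floor(extract(epoch from created_at)/3600)*3600 AS "time",  -- $__timeGroupAlias(created_at,1h,0)
  count(*) AS value,
  'success_total_count_per_hour' AS metric
FROM logs
WHERE
  created_at BETWEEN '2022-01-01T00:00:00Z' AND '2022-01-02T00:00:00Z'  -- $__timeFilter(created_at)
  AND http_status = 200
GROUP BY 1
ORDER BY 1
```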

Grafana - create table with column values with Prometheus (dynamic) property/label data

When I have a Prometheus query resulting in:
my_metric{instance="instance1",job="job",prop_1="ok",prop_2="cancel"} 1
my_metric{instance="instance2",job="job",prop_1="error",prop_2="ok"} 1
How can I create a Grafana table showing:
timestamp | instance1 | ok | cancel
timestamp | instance2 | error | ok
So each Prometheus metric label is mapped to a Grafana table column.
OPEN QUESTION: Is it possible to change the value of a label dynamically, so that the 3rd and 4th label (or property) values change over time?
QUESTION 1: The first part of the question is simple: formatting the Prometheus labels/properties in a table is easy. You can find the answer in this description.
How? Just select the 'table' format as shown in the second red box.
QUESTION 2: any idea?
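On the open question: if you only need to rewrite label values at query time, PromQL's label_replace() can do that (a sketch; the replacement value "done" is made up). But if the label value itself changes over time, Prometheus simply starts a new time series for each new label combination, so the exporter has to emit the new value:

```promql
# Replace prop_1="ok" with prop_1="done" at query time.
# Signature: label_replace(v, dst_label, replacement, src_label, regex)
label_replace(my_metric, "prop_1", "done", "prop_1", "ok")
```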

Inaccurate COUNT DISTINCT Aggregation with Date dimension in Google Data Studio

When I aggregate values in Google Data Studio with a date dimension on a PostgreSQL Connector, I see buggy behaviour. The symptom is that performing COUNT(DISTINCT) returns the same value as COUNT():
My theory is that it has something to do with the aggregation on the data occurring after the count has already happened. If I attempt the exact same aggregation on the same data in an exported CSV instead of directly from a PostgreSQL Connector Data Source, the issue does not reproduce:
My PostgreSQL Connector is connecting to Amazon Redshift (jdbc:postgresql://*******.eu-west-1.redshift.amazonaws.com) with the following custom query:
SELECT
userid,
submissionid,
date
FROM mytable
Workaround
If I stop using the default date field for the Date Dimension and aggregate my own dates directly within the SQL query (date_byweek), the COUNT(DISTINCT) aggregation works as expected:
SELECT
userid,
submissionid,
to_char(date,'YYYY-IW') as date_byweek
FROM mytable
While this workaround solves my immediate problem, it sucks because I miss out on all the date functionality provided by Data Studio (hierarchy drill-down, date-range filtering, etc.), not to mention it reduces my confidence in what else may be "buggy" within the product 😞
How to Reproduce
If you'd like to re-create the issue, using the following data as a PostgreSQL Data Source should suffice:
> SELECT * FROM mytable
userid   submissionid
------   ------------
1        1
2        2
1        3
1        4
3        5
> COUNT(DISTINCT userid) -- ERROR: Returns 5 when data source is PostgreSQL
> COUNT(DISTINCT userid) -- EXPECTED: Returns 3 when data source is CSV (exported from same PostgreSQL query above)
I'm happy to report that as of Sep 17 2020, there's a workaround.
Data Studio added the DATETIME_TRUNC function (see https://support.google.com/datastudio/answer/9729685), which lets you add a custom field that truncates the original date to whatever granularity you want, without triggering the distinct bug.
Attempting to set the display granularity in the report still causes the bug (i.e., you'll still see Oct 1 2020 12:00:00 instead of Oct 2020).
This can be solved by creating a SECOND custom field, which simply returns the first; add THAT one to the report, change its display granularity, and everything will work.
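A sketch of the two custom fields described above (the field names are made up, and the comment lines are annotations, not part of the formulas; DATETIME_TRUNC's supported parts are listed in the linked doc):

```
-- Custom field 1, e.g. "date_week" (safe to aggregate against):
DATETIME_TRUNC(date, WEEK)

-- Custom field 2, e.g. "date_week_display": just returns the first field.
-- Add THIS one to the report and set its display granularity:
date_week
```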
I have the same issue with the MySQL Connector. My problem was solved when I changed the date field's format in the DB from DATETIME (YYYY-MM-DD HH:MM:SS) to INT (Unix timestamp). After connecting this table to Google Data Studio, I set the field's type to Date (YYYYMMDD) and everything works as expected. Hope this helps :)
In this Google forum there is a curious solution by Damien Choizit that involves combining your data source with itself. It works well for me.
https://support.google.com/datastudio/thread/13600719?hl=en&msgid=39060607
It says:
I figured out a solution in my case: I used a data blend joining the same data source with itself on the corresponding join key(s), then I specified a date range dimension only on the left side and selected the columns I wanted to COUNT DISTINCT-aggregate as "dimensions" (not metrics!) on the right side.

Prometheus/Grafana: combining 2 timeseries

Assume the following two Prometheus time series:
service_deployed{service} timestamp
service_available{service} timestamp
A set of specific metrics with matching labels would be:
service_deployed{service='provision-service'} 12345678.0
service_available{service='provision-service'} 12345900.0
which in effect says that there is a newer 'provision-service' (its available timestamp is greater than the deployed one).
Now imagine I'd like to present these 2 in one table in Grafana. Something like:
| Service           | Deployed   | Available  |
| provision-service | 12345678.0 | 12345900.0 |
Also assume that I cannot use the latest Grafana (>5.0), which seems to be able to combine tables, so I'll have to do this using PromQL. How would you go about combining these metrics?
Thanks
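A single PromQL query may not be able to produce a true three-column table here, since each metric remains its own series. What PromQL can do is join the two metrics on the shared label, e.g. to flag services with a newer version available (a sketch, using the metric and label names from the question):

```promql
# Services whose available timestamp exceeds their deployed timestamp.
service_available > on(service) service_deployed

# Or the gap between the two timestamps, per service:
service_available - on(service) service_deployed
```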