On multiple VMs (15+) I use the TIG stack (Telegraf, InfluxDB, and Grafana) to monitor system stats like CPU, RAM, and disk.
Data is exported via Telegraf and stored in InfluxDB, which is then used as the data source in Grafana.
The problem I am facing is setting up an alert on any system metric.
In the Query section I use raw queries like these:
Disk
SELECT last(used_percent) AS PercentageUsed FROM "disk"
WHERE "host" =~ /$server$/ AND "path" = '/' AND $timeFilter
GROUP BY time($interval), "host", "path"
CPU
SELECT mean("usage_user") AS "user" FROM "cpu" WHERE ("cpu" =
'cpu-total') AND host =~ /$server$/ AND $timeFilter GROUP BY
time($interval), host ORDER BY asc
My requirement is to use a variable so that the same stat for all VMs is shown in one graph.
But I am unable to configure any alert on these queries because of this error:
Template variables are not supported in alert queries
It does not sound like that is possible per this thread.
This means you will either have to create multiple panels (one per template variable value) or use a regex instead of the template variable.
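For example, a minimal sketch of the regex approach for the disk query (the host names are hypothetical): hard-code the hosts the alert should cover in place of $server; $timeFilter and $interval are built-in macros rather than template variables, so they can stay:
SELECT last(used_percent) AS PercentageUsed FROM "disk"
WHERE "host" =~ /^(vm-01|vm-02|vm-03)$/ AND "path" = '/' AND $timeFilter
GROUP BY time($interval), "host", "path"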
I have a TIG stack (Grafana for the dashboard, InfluxDB for the database, and Telegraf as the collector).
I need an output where the CPU and memory columns of each server are shown in one row. I expect the following output:
Expected result
I wrote the following code in Grafana using the InfluxQL language:
SELECT last("used_percent") as lastmem ,last("usage_system") as lastcpu FROM "mem" ,"cpu"
WHERE ("province" =~ /^$SelectProvince$/)
--AND ("cpu" = 'cpu-total')
AND $timeFilter
GROUP BY "host"
which gives the following output:
Result now
Can anyone advise how I should rewrite the query?
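One commonly suggested approach (an assumption, not confirmed in this thread): InfluxQL will not merge fields from two measurements into one row, so query each measurement separately and let Grafana combine the two result sets per host, e.g. with a table panel's merge/join transformation:
-- Query A: last memory usage per host
SELECT last("used_percent") AS lastmem FROM "mem"
WHERE ("province" =~ /^$SelectProvince$/) AND $timeFilter
GROUP BY "host"

-- Query B: last system CPU usage per host
SELECT last("usage_system") AS lastcpu FROM "cpu"
WHERE ("province" =~ /^$SelectProvince$/) AND "cpu" = 'cpu-total' AND $timeFilter
GROUP BY "host"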
I have a Postgres data source in Grafana that is normalized, which restricts my graph visualization's legend to showing only the ID (hash) of each record. I want to make this human-readable, but the id -> name mapping lives in a different data source/Postgres database.
Grafana supports templating variables, which I think could allow me to load my id -> name reference data, but there isn't clear documentation on how to access the label_values as a reference table within the Postgres driver's query editor.
Is there a way to configure a template variable to load reference data (id -> name) and leverage it to translate my metric/legend IDs within the Grafana Postgres driver?
For example (pseudo Grafana Postgres query editor):
SELECT
  $__timeGroupAlias(start,$__interval),
  animal_names.__value AS metric,
  count(dog.chewed_bones) AS "# bones chewed"
FROM animals.dog dog
JOIN $TEMPLATE_VAR_REF_DATA animal_names ON dog.id = animal_names.__text
WHERE $__timeFilter(start_time)
GROUP BY 1, 2
ORDER BY 1, 2
The closest answer I found is johnymachine's comment in https://github.com/grafana/grafana/issues/1032, but it doesn't go into details.
I realized the GitHub comment meant using a jsonb aggregate function as a variable, as in the following solution:
Dashboard Variable (Type Query): SELECT jsonb_object_agg(id, name) FROM animal_names;
Grafana Postgres Pseudo-Query:
SELECT
  $__timeGroupAlias(start,$__interval),
  '$animal_names'::jsonb ->> dog.id::text AS metric,
  count(dog.chewed_bones) AS "# bones chewed"
FROM animals.dog
WHERE $__timeFilter(start_time)
GROUP BY 1, 2
ORDER BY 1, 2
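Assuming the dashboard variable is named animal_names (the post does not name it), Grafana interpolates it into the SQL as a JSON object literal, so the ->> lookup translates each id to its name at query time. With hypothetical data it would render roughly as:
'{"1": "Rex", "2": "Fido"}'::jsonb ->> dog.id::text AS metric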
I am trying to create a filter for a field that contains over 5000 unique values. However, the filter's query automatically sets a limit of 1000 rows, meaning that the majority of the values do not get displayed in the filter dropdown.
I updated the config.py file inside the 'anaconda3/lib/python3.7/site-packages' directory by increasing DEFAULT_SQLLAB_LIMIT and QUERY_SEARCH_LIMIT to 6000; however, this did not work.
Is there any other config that I need to update?
P.S. The code snippet below shows the JSON representation of the filter where the issue seems to come from.
"query": "SELECT casenumber AS casenumber\nFROM pa_permits_2019\nGROUP BY casenumber\nORDER BY COUNT(*) DESC\nLIMIT 1000\nOFFSET 0"
After using the grep command to find all files containing the text '1000', I found out that the filter limit can be configured through filter_row_limit in viz.py.
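With that limit raised, the generated filter query should come out like the JSON snippet above but with the higher limit (6000 here just mirrors the value used earlier):
SELECT casenumber AS casenumber
FROM pa_permits_2019
GROUP BY casenumber
ORDER BY COUNT(*) DESC
LIMIT 6000
OFFSET 0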
We are trying to display the following in Grafana using the Bosun/OpenTSDB data source:
a. Top-n hosts by load, in descending order
b. Top 10 memory-consuming processes
c. Top CPU-consuming processes
However, we could not find suitable metrics for it.
How can this information be displayed?
Secondly, if the metrics are not available in Bosun/OpenTSDB, how should one create or define new metrics for them?
Overview
Install the Bosun Grafana App plugin (Github Repo) and then set up the Bosun datasource.
Add a Table panel and set its datasource to your new Bosun datasource.
Use the limit(), sort(), and filter() functions as documented in Bosun's Expression Documentation.
Table Example
For example, you could have an expression like the following for a table of top CPU:
$avg_cpu = avg(q("avg:$ds-avg:rate{counter,,1}:os.cpu{host=ny-*}{}", "$start", ""))
sort(limit(sort($avg_cpu, "desc"), 10), "desc")
Note: sort is called twice so that the table defaults to sorting by value.
Graph Example
If you wanted a Graph panel instead of a table, you could use filter():
$cpu = q("avg:$ds-avg:rate{counter,,1}:os.cpu{host=ny-*}{}", "$start", "")
$avg_cpu = avg(q("avg:$ds-avg:rate{counter,,1}:os.cpu{host=ny-*}{}", "$start", ""))
filter($cpu, limit(sort($avg_cpu, "desc"), 10))
I am new to Tableau; I went through the site before posting this question but didn't find an answer matching my question.
I have successfully established a connection to Cassandra using the DataStax Cassandra ODBC driver (64-bit, Windows). Everything is fine, and I filled in all the details (keyspace name, table name) as per the documentation on the DataStax site.
But when I drag the available table onto the canvas, it keeps loading for minutes. What the database admin told me about the data: there are millions of rows for one day, we have six months of data, and the data gets updated every 10 minutes. It's for a reputed wind energy company.
My client has given me the CQL used for creating the table:
create table abc_data_test.machine_data (
    machine_id text,
    tag text,
    timestamp timestamp,
    value double,
    PRIMARY KEY ((machine_id, tag), timestamp)
)
WITH CLUSTERING ORDER BY (timestamp DESC)
AND compression = { 'sstable_compression' : 'LZ4Compressor' };
Where should I put this code?
I tried inserting it on the connection page, and it gives an error. I placed the code in "New Custom SQL" and am getting a custom SQL error.
The timer is still running, showing:
processing request: connecting to datasource, Elapsed time 87:09
The error from New Custom SQL is:
An error occured while commuicating with the datasource. [DataStax][CassandraODBC] (10) Error while executing a query in Cassandra:33562624: line 1.11 no viable alternative at input '1' (SELECT [TOP]1...)
I'm using Windows 10 64-bit with the DataStax ODBC driver 64-bit version 2.4.1, and DSE 4.8 and later.
You cannot pass DDL SQL into the custom SQL box. If the Cassandra connection supports the Initial SQL option, you could pass it there; then your custom SQL would be some sort of SELECT statement. Otherwise, create the table in Cassandra and then connect to that table from Tableau.
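A minimal sketch of that split, reusing the table definition from the question (whether this connector actually exposes Initial SQL is an assumption, and the SELECT is just one ordinary example query):
-- Initial SQL (runs once at connect time, if the connector supports it)
CREATE TABLE IF NOT EXISTS abc_data_test.machine_data (
    machine_id text,
    tag text,
    timestamp timestamp,
    value double,
    PRIMARY KEY ((machine_id, tag), timestamp)
);

-- New Custom SQL: a plain SELECT against the existing table
SELECT machine_id, tag, timestamp, value
FROM abc_data_test.machine_data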