I use Grafana with CloudWatch.
Here is my query in Grafana:
SELECT
AVG(CPUUtilization)
FROM
"AWS/EC2"
WHERE
AutoScalingGroupName = 'default'
This query returns a single value, e.g. 20.
But I want multiple values in the query result, like:
SELECT
AVG(CPU) AS cpu,
30 AS lat,
15 AS lon
FROM
"AWS/EC2"
For example: CloudWatch query => result => 20, 30, 15
How can I solve this problem?
You are not using SQL here, only the CloudWatch Metrics Insights query language, and it doesn't support selecting constants alongside a metric. See the docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-metrics-insights-querylanguage.html
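If the goal is simply to show the CPU average next to two fixed numbers in one panel, a possible workaround (a sketch only, assuming your Grafana version supports server-side expressions; the query and the constants below are taken from the question) is to keep the Metrics Insights query and add the constants as Math expressions:

Query A (CloudWatch Metrics Insights):
SELECT AVG(CPUUtilization) FROM "AWS/EC2" WHERE AutoScalingGroupName = 'default'

Expression B (type Math, constant latitude): 30
Expression C (type Math, constant longitude): 15

The panel then receives three values (20, 30, 15) without the query language itself having to support constants.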
I have written the following Flux query in Grafana, and I get two results per value. I would like to deduplicate them by the scenario key, so that I end up with just "main_flow" and "persons_end_user". How can I achieve this? I have tried distinct() and unique(), but neither seems to work.
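For reference, the arrangement that usually makes distinct() behave here (a sketch only, since the original query isn't shown; "my-bucket" and "scenarios" are placeholders, and scenario is assumed to be a tag/column on the rows) is to merge the per-series tables with group() before deduplicating:

from(bucket: "my-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "scenarios")
  |> group()                        // merge per-series tables into one table
  |> distinct(column: "scenario")   // keep one row per scenario value

Without the group() call, distinct() runs per table, so each scenario's table keeps its own row and nothing appears to be deduplicated.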
I have a PromQL query max_over_time(some_metric_max[1h]), but instead of "1h" I would like to use the time range selected in Grafana. I can't find any variable for that (I'm looking for something like $__interval, but for the selected range).
Use:
max_over_time(some_metric_max[$__range])
See the Grafana documentation on global variables for more details.
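In case you ever need the range as a bare number rather than a duration string, Grafana also provides $__range_s (seconds) and $__range_ms (milliseconds) variants of the same variable, e.g.:

max_over_time(some_metric_max[${__range_s}s])

which builds the same duration from the numeric form.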
Unable to get a query working in a Grafana Single Stat panel for InfluxDB.
My queries are on the same field, not different fields.
Example -
field name = fieldA
query1 = SELECT "fieldA" FROM seriesA WHERE $timeFilter AND ("tag1"='yes')
query2 = SELECT "fieldA" FROM seriesA WHERE $timeFilter
I'm unable to combine these into a single query for the "Single Stat" panel in Grafana.
Has anyone tried this before without using a math plugin? Thanks!
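One pattern worth trying (a sketch only, since the question doesn't say how the two values should be combined; the ratio below is an assumption, and InfluxQL support for multiple subqueries in the FROM clause may depend on your InfluxDB version) is to alias the field differently in two subqueries and aggregate over their union:

SELECT sum("tagged") / sum("total") AS "ratio" FROM
  (SELECT "fieldA" AS "tagged" FROM "seriesA" WHERE $timeFilter AND "tag1" = 'yes'),
  (SELECT "fieldA" AS "total" FROM "seriesA" WHERE $timeFilter)

Because each subquery exposes the same field under a different alias, a single outer query can combine them, which is enough for a Single Stat panel.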
In my Druid data source, I have a hyperUnique aggregation (ingestion time) on one of the fields.
I am trying to do the equivalent of COUNT(DISTINCT(<hyperunique_field>)) on this aggregated field.
Is it supported in the Calcite Druid Adapter? If so, what is the correct way to go about it?
In Plywood, I can do COUNT_DISTINCT. Running the SQL below returns 0 counts.
SQL:
SELECT
  floor("__time" TO HOUR) AS time_bucket,
  "field_1",
  count(distinct("ingestion_time_aggregated_field")) AS uniq
FROM "datasource"
WHERE "__time" BETWEEN '2017-01-01 00:00:00' AND '2017-01-02 00:00:00'
  AND "field_1" IN ('value_1')
  AND "field_2" = 'value_2'
  AND "field_3" = 'value_3'
  AND "field_4" = 'value_4'
GROUP BY floor("__time" TO HOUR), "field_1"
ORDER BY floor("__time" TO HOUR);
ingestion_time_aggregated_field:
{"name": "ingestion_time_aggregated_field", "type": "hyperUnique","fieldName": “field” }
Complex aggregators are not supported by the Calcite Druid adapter. The reason is that HyperLogLog is approximate rather than exact, so it does not actually answer a unique-count query.
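If bypassing SQL is an option, the native Druid query API does accept hyperUnique aggregators directly. A minimal sketch (the dataSource, field name, and interval are taken from the question; the filter clauses are omitted for brevity):

{
  "queryType": "timeseries",
  "dataSource": "datasource",
  "granularity": "hour",
  "intervals": ["2017-01-01T00:00:00/2017-01-02T00:00:00"],
  "aggregations": [
    { "type": "hyperUnique", "name": "uniq", "fieldName": "ingestion_time_aggregated_field" }
  ]
}

This returns the approximate distinct count per hour, with the usual HLL error bounds rather than an exact figure.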
I'm trying to retrieve page_fans using the following FQL query (Insights table).
SELECT metric, value FROM insights WHERE object_id=182929845081087 AND metric='page_fans' AND end_time=1334905200 AND period=86400
But I just get blank data when I run the above query:
{
"data": [
]
}
I'm able to retrieve other metrics by just changing the metric value in the above query.
For example, I get page_engaged_users by changing only the metric value:
SELECT metric, value FROM insights WHERE object_id=182929845081087 AND metric='page_engaged_users' AND end_time=1334905200 AND period=86400
{
"data": [
{
"metric": "page_engaged_users",
"value": 35
}
]
}
What is wrong with the first query, in which I'm trying to retrieve page_fans?
(I know that I can retrieve page_fans in other ways as well.)
If you look at the Insights documentation, notice in the last column that the page_fans metric is only available for the lifetime period. Change your query to period=0 or period=period('lifetime') and you'll get data.
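Applied to the query from the question, that would be:

SELECT metric, value FROM insights
WHERE object_id=182929845081087
AND metric='page_fans'
AND end_time=1334905200
AND period=period('lifetime')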
If you want the new fans added on a given day, use the page_fan_adds metric with period('day').
If you request any other period, you will either get an error or no data (which of the two seems to depend on the metric).
The other problem could be that you are requesting data that is too recent. In your query, you're looking at 2012-04-20, so you should be fine. When I tested this, I got no data if I used a date greater than 2012-09-15 (on 2012-09-18).