Grafana CloudWatch query to reference SERVICE_QUOTA

According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html, we can use a CloudWatch math expression such as m1/SERVICE_QUOTA(m1)*100 on the AWS CloudWatch console, where m1 is a CloudWatch metric.
Is there any way to do the same thing in Grafana's CloudWatch query, i.e. reference SERVICE_QUOTA() in the Expression field of the query editor?
Sample screenshot of Grafana CloudWatch query expression

You can use CloudWatch metric math in Grafana. See the documentation: https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/#metric-math-expressions
You can also find a ready-made dashboard for this use case: https://grafana.com/grafana/dashboards/12979
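For context, Grafana's CloudWatch data source evaluates metric math through the GetMetricData API, the same API the console uses, so the expression should work unchanged in the Expression field. Below is a minimal boto3 sketch that exercises the same expression directly against that API; the AWS/Usage metric and its dimensions are illustrative assumptions, not taken from the question:

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            # m1: a usage metric that SERVICE_QUOTA() knows how to resolve
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "CallCount",
                    "Dimensions": [
                        {"Name": "Type", "Value": "API"},
                        {"Name": "Resource", "Value": "GetMetricData"},
                        {"Name": "Service", "Value": "CloudWatch"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,  # only return the expression result
        },
        {
            # The same expression you would put in Grafana's Expression field
            "Id": "e1",
            "Expression": "m1/SERVICE_QUOTA(m1)*100",
            "Label": "Percent of quota used",
        },
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
)
print(response["MetricDataResults"])
```

In the Grafana query editor the equivalent setup is two queries: one with Id m1 selecting the metric, and a second with m1/SERVICE_QUOTA(m1)*100 in its Expression field.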

Related

MongoDb metrics into Grafana via mongodb plugins

I activated the MongoDB plugin for Grafana Cloud, but I can't figure out how to use queries; all the examples fail, e.g. db.serverStatus().connections.
What query can I use to collect any metrics?

Cloud SQL (postgres) - Fetching metrics tagged with BETA

I have been trying to fetch metrics for my Cloud SQL (Postgres) instance to get insights into query performance, but I'm unable to find a way to fetch metrics that are in the BETA and ALPHA stages.
For example, the metric database/postgresql/insights/perquery/execution_time is listed on the Google Cloud metrics page but does not show up in the Metrics Explorer.
I have tried fetching the metrics using the Java SDK, which seems to accept/recognise the request and the metric name but does not return any time-series data.
Curious to know whether BETA/ALPHA metrics need additional configuration to be enabled?
The SQL metrics became available in the Metrics Explorer and the SDK after enabling Query Insights in the Google Cloud console.
Although this looks obvious in hindsight, it would be good to have a note mentioning this on the Google metrics page.
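For anyone hitting the same wall, here is a minimal sketch of reading that metric with the google-cloud-monitoring Python client once Query Insights is enabled. The project ID and the fully qualified metric type (with the cloudsql.googleapis.com/ prefix) are assumptions based on Google's metric naming convention:

```python
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"  # placeholder: substitute your project ID

client = monitoring_v3.MetricServiceClient()
now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now - 3600)},  # last hour
    }
)

results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": (
            'metric.type = '
            '"cloudsql.googleapis.com/database/postgresql/insights/perquery/execution_time"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    print(series.metric.type, len(series.points), "data points")
```

If Query Insights is disabled, the call succeeds but yields no time series, which matches the behaviour described above.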

Sending metrics from kafka to grafana

I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can it be done without a datasource?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can store metric data from Kafka in Elasticsearch via connectors. You can also store them in InfluxDB, Graphite, or Prometheus, and use the data source plugins that Grafana provides; see the sketch below.
Using Kibana is also a good option. Kibana is similar to Grafana; Elasticsearch and Kibana are both part of the Elastic Stack.
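To make the first suggestion concrete, here is a minimal sketch of a bridge that consumes metric messages from a Kafka topic and writes them to InfluxDB, which Grafana can then query through its InfluxDB data source. The topic name, message format, and connection settings are all assumptions for illustration:

```python
import json

from kafka import KafkaConsumer                      # pip install kafka-python
from influxdb_client import InfluxDBClient, Point    # pip install influxdb-client
from influxdb_client.client.write_api import SYNCHRONOUS

# Assumed message format: {"name": "cpu_load", "value": 0.42}
consumer = KafkaConsumer(
    "metrics",                                       # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

influx = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = influx.write_api(write_options=SYNCHRONOUS)

for message in consumer:
    metric = message.value
    point = Point(metric["name"]).field("value", float(metric["value"]))
    write_api.write(bucket="metrics", record=point)
```

The same pattern works with any of the stores mentioned above; only the client library and the write call change.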
I found this open-source project, which is basically a Kafka plugin for Grafana:
https://github.com/zylklab/lorca
You can either use it straightaway or get inspired to write your own Grafana plugin.

Query Stackdriver Uptime Checks

I am trying to query the Stackdriver uptime checks using the Google Monitoring API, but I cannot find anything in the documentation that illustrates how to query for the uptime checks that were set up in Stackdriver. Note that some of the queryable metrics include agent.googleapis.com/agent/uptime, but this does not return the uptime checks seen on the Stackdriver Uptime Checks page. Below I am listing some of the documentation I have been sifting through in case it is helpful.
Does anyone know how/if this can be done?
Google Python Client Docs
Time Series Query
Metrics
I'm a product manager on the Stackdriver team. Unfortunately, Uptime Check metrics are not currently available via the Stackdriver Metrics API. This is a feature we're actively working to provide. I'll follow up on this thread when the feature is released.
Thank you for your question and for using Stackdriver!
It's my understanding that this metric can now be externally queried as:
monitoring.googleapis.com/uptime_check/check_passed
You can see it referenced in the sample alerting policy JSON for creating uptime check alerting policies.
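Here is a minimal sketch of pulling that metric with the google-cloud-monitoring Python client; the project ID is a placeholder you must substitute:

```python
import time
from google.cloud import monitoring_v3

PROJECT_ID = "my-project"  # placeholder: substitute your project ID

client = monitoring_v3.MetricServiceClient()
now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now)}, "start_time": {"seconds": int(now - 600)}}
)

results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "monitoring.googleapis.com/uptime_check/check_passed"',
        "interval": interval,
    }
)
for series in results:
    # check_passed is a boolean series, labelled per uptime check
    check_id = series.metric.labels.get("check_id", "unknown")
    passed = [p.value.bool_value for p in series.points]
    print(check_id, passed)
```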

Datadog: Slow queries from MongoDB

I have a MongoDB using the database profiler to collect the slowest queries. How can I send this information to Datadog and analyze it in my Datadog dashboard?
Once the Datadog Agent is properly installed on your server, you can use the custom metrics feature to have Datadog read your query results into a custom metric, and then use that metric to build a dashboard.
You can find more about custom metrics in the Datadog documentation.
Custom checks are configured with YAML files, so be careful with the formatting of the YAML file that holds your custom metric configuration.
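As a concrete illustration, a custom Agent check can read the profiler's output and report it as a gauge. This is a hedged sketch, not an official integration: the check name, the 100 ms threshold, and the connection settings are assumptions, and it relies on the database profiler writing to system.profile as described in the question:

```python
# checks.d/mongo_slow_queries.py (assumed file name)
from datadog_checks.base import AgentCheck
from pymongo import MongoClient  # assumes pymongo is available to the Agent


class MongoSlowQueriesCheck(AgentCheck):
    def check(self, instance):
        client = MongoClient(instance.get("mongo_uri", "mongodb://localhost:27017"))
        db = client[instance.get("database", "test")]
        # system.profile is populated while the database profiler is enabled;
        # count operations slower than 100 ms (illustrative threshold)
        slow = db["system.profile"].count_documents({"millis": {"$gt": 100}})
        self.gauge("mongodb.custom.slow_queries", slow, tags=["source:profiler"])
```

A matching conf.d/mongo_slow_queries.yaml with an instances: entry (mind the indentation, as noted above) tells the Agent to schedule the check.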