I activated the MongoDB plugin for Grafana Cloud, but I can't figure out how to write queries; every example I try fails, e.g. db.serverStatus().connections.
What query can I use to collect any metrics?
At the moment I am trying to apply the following how-to to my Quarkus app: MongoDB Metrics
So I have added the quarkus-smallrye-metrics and quarkus-mongodb-client dependencies in the pom.xml and added quarkus.mongodb.metrics.enabled=true to the application.properties.
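For reference, the relevant build and configuration entries look roughly like this (the artifact IDs are the ones named above; versions are assumed to be managed by the Quarkus BOM, and the connection string is a placeholder):

    <!-- pom.xml: metrics and MongoDB client extensions -->
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-smallrye-metrics</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-mongodb-client</artifactId>
    </dependency>

    # application.properties
    quarkus.mongodb.connection-string=mongodb://localhost:27017
    quarkus.mongodb.metrics.enabled=true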
When I run the service and go to /q/metrics, unfortunately no MongoDB-specific metrics appear, only generic vendor entries that are unrelated to MongoDB.
Even if I set quarkus.mongodb.metrics.enabled to false, nothing changes; the vendor metrics still appear.
How can I configure metrics in my Quarkus app so that I can read the app's MongoDB metrics?
I have been trying to fetch metrics for my Cloud SQL (Postgres) instance to get insight into query performance, but I'm unable to find a way to fetch metrics that are in the BETA or ALPHA stage.
For example, the metric
database/postgresql/insights/perquery/execution_time is listed on the Google Cloud metrics page but does not show up in Metrics Explorer.
I have tried fetching the metrics using the Java SDK, which seems to accept/recognise the request and the metric name but does not return any time-series data.
I'm curious to know whether BETA/ALPHA metrics need additional configuration to be enabled.
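For context, a minimal sketch of such a fetch with the Cloud Monitoring Java client (the project ID is a placeholder, and the fully qualified metric type is assumed to be the cloudsql.googleapis.com-prefixed form of the name above):

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ListTimeSeriesRequest;
    import com.google.monitoring.v3.ProjectName;
    import com.google.monitoring.v3.TimeInterval;
    import com.google.monitoring.v3.TimeSeries;
    import com.google.protobuf.util.Timestamps;

    public class FetchPerQueryExecutionTime {
        public static void main(String[] args) throws Exception {
            try (MetricServiceClient client = MetricServiceClient.create()) {
                long now = System.currentTimeMillis();
                // Look back over the last hour
                TimeInterval interval = TimeInterval.newBuilder()
                        .setStartTime(Timestamps.fromMillis(now - 3_600_000L))
                        .setEndTime(Timestamps.fromMillis(now))
                        .build();
                ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
                        .setName(ProjectName.of("my-project-id").toString()) // placeholder project
                        .setFilter("metric.type=\"cloudsql.googleapis.com/database/postgresql/insights/perquery/execution_time\"")
                        .setInterval(interval)
                        .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
                        .build();
                // The request is accepted, but no series come back until the metric is actually being written
                for (TimeSeries ts : client.listTimeSeries(request).iterateAll()) {
                    System.out.println(ts.getMetric().getType() + " points=" + ts.getPointsCount());
                }
            }
        }
    }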
The SQL metrics became available in Metrics Explorer and via the SDK after enabling Query Insights in the Google Cloud console.
Although this looks obvious in hindsight, it would be good to have a note mentioning this on the Google Cloud metrics page.
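If you prefer the CLI over the console, Query Insights can also be enabled with gcloud; the flag below is the one I believe applies, and the instance name is a placeholder:

    # Enable Query Insights on an existing Cloud SQL (Postgres) instance
    gcloud sql instances patch my-postgres-instance \
        --insights-config-query-insights-enabled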
I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can this be done without a data source?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can ship the metric data from Kafka to Elasticsearch via connectors. You could also store them in InfluxDB, Graphite, or Prometheus, and then use the data source plugins that Grafana provides.
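As a minimal sketch of the connector route (assuming the Confluent Elasticsearch sink connector, a topic named metrics, and a local Elasticsearch instance; all names are illustrative):

    name=metrics-es-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=1
    topics=metrics
    connection.url=http://localhost:9200
    # Treat records as schemaless JSON documents
    key.ignore=true
    schema.ignore=true

Grafana then reads the indexed data through its Elasticsearch data source, so some data source is still involved.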
Using Kibana is also a good option. Kibana is similar to Grafana; Elasticsearch and Kibana are both part of the Elastic Stack.
I found this open-source project, which is basically a Kafka plugin for Grafana:
https://github.com/zylklab/lorca
You can either use it straightaway or get inspired to write your own Grafana plugin.
I am searching for a tool that provides an overview of collections, the queries run against them, replica set configurations, and instance performance dashboards.
You can look at Zabbix for basic monitoring. However, for querying the database itself you need custom shell scripts or metric-reporting tools to get that information.
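As a rough illustration of the shell-script route (a hypothetical Zabbix agent user parameter; the item key and connection string are made up):

    # zabbix_agentd.conf: expose the current MongoDB connection count as an agent item
    UserParameter=mongodb.connections.current,mongosh "mongodb://localhost:27017" --quiet --eval "db.serverStatus().connections.current"

On the Zabbix side you would then create an item with the key mongodb.connections.current; newer Zabbix releases also ship an official MongoDB template for Zabbix agent 2 that covers many of these metrics out of the box.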
I have a MongoDB instance that uses the database profiler to collect the slowest queries. How can I send this information to Datadog and analyze it on my Datadog dashboard?
Once the Datadog Agent is properly installed on your server, you can use the custom metrics feature to have Datadog read your query result into a custom metric and then use that metric to build a dashboard.
You can find more on custom metrics in Datadog here.
The checks are configured with YAML files, so be careful with the formatting of the YAML file that will hold your custom metric.
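As a rough sketch only (the options below follow the Datadog MongoDB integration's custom_queries feature as I recall it; verify the exact names against the integration's conf.yaml.example), a check that reads slow operations out of system.profile might look like:

    # mongo.d/conf.yaml (hypothetical example)
    instances:
      - hosts:
          - localhost:27017
        username: datadog
        password: <PASSWORD>
        database: mydb
        custom_queries:
          - metric_prefix: mongodb.profiler
            # Pull profiled operations slower than 100 ms out of system.profile
            query: { "find": "system.profile", "filter": { "millis": { "$gte": 100 } } }
            fields:
              - field_name: millis
                name: slow_op.duration_ms
                type: gauge
            tags:
              - source:profiler

Each matching document then produces a mongodb.profiler.slow_op.duration_ms gauge tagged with source:profiler.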