Datadog: Slow queries from MongoDB

I have a MongoDB instance that uses the database profiler to collect the slowest queries. How can I send this information to Datadog and analyze it in my Datadog dashboard?

Once the Datadog Agent is properly installed on your server, you can use the custom metric feature to have the Agent read your query results into a custom metric and then use that metric to build a dashboard.
You can find more on custom metrics in the Datadog documentation.
Custom checks are configured with YAML files, so be careful with the formatting of the YAML file that will hold your custom metric.
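As a concrete illustration, a custom check is a small Python file dropped into the Agent's checks.d directory, paired with a YAML file of the same name in conf.d. Below is a minimal sketch that reads the slowest operations from MongoDB's system.profile collection and reports them as a gauge; the check name, the metric name mongo.slow_query.millis and the instance keys are placeholders rather than official Datadog names, and it assumes the pymongo library is available and profiling is already enabled.

# checks.d/mongo_slow_queries.py -- minimal sketch of a custom Agent check
# (the metric and config names below are illustrative placeholders)
from datadog_checks.base import AgentCheck
from pymongo import MongoClient

class MongoSlowQueryCheck(AgentCheck):
    def check(self, instance):
        client = MongoClient(instance.get("host", "localhost"),
                             int(instance.get("port", 27017)))
        db = client[instance.get("database", "test")]
        # The database profiler writes slow operations to system.profile.
        for op in db["system.profile"].find().sort("ts", -1).limit(20):
            self.gauge("mongo.slow_query.millis",
                       op.get("millis", 0),
                       tags=["op:{}".format(op.get("op", "unknown")),
                             "ns:{}".format(op.get("ns", "unknown"))])

The matching conf.d/mongo_slow_queries.yaml holds the instances (host, port, database), which is where the YAML formatting caution above applies; the Agent then runs check() on every collection interval and the metric becomes available for dashboards.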

Related

Export Trivy vulnerability report to Grafana Dashboard

I have configured Kasten K10 and I am scanning images using the Trivy vulnerability scanner.
I can scan images and get reports.
sample report image
I want to export and display this report in Grafana using Prometheus.
Can someone help me with how to export vulnerability reports to a Grafana dashboard?
It appears that the data generated by Trivy is in its own report format, so you will need a custom Prometheus exporter that ingests the report from Trivy and exposes it as Prometheus metrics, which Prometheus can then serve to Grafana as a data source.
Here is an example of a trivy-exporter for Prometheus. You can configure it in your environment; it will process Trivy scan reports and deliver them to Prometheus, and you can then simply query your Prometheus data source in Grafana and get your report data visualized.
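If you prefer to roll your own, a very small exporter can do the same job. The sketch below assumes Trivy's JSON output (trivy image -f json -o report.json) with its Results[].Vulnerabilities[] layout and uses the prometheus_client library; the metric name, port and report path are placeholders.

# trivy_exporter.py -- minimal sketch; metric name, port and report path are placeholders
import json
import time
from prometheus_client import Gauge, start_http_server

VULNS = Gauge("trivy_image_vulnerabilities",
              "Vulnerabilities found by Trivy, by severity",
              ["image", "severity"])

def load_report(path, image):
    with open(path) as f:
        report = json.load(f)
    counts = {}
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            sev = vuln.get("Severity", "UNKNOWN")
            counts[sev] = counts.get(sev, 0) + 1
    for sev, count in counts.items():
        VULNS.labels(image=image, severity=sev).set(count)

if __name__ == "__main__":
    start_http_server(9898)                     # Prometheus scrapes :9898/metrics
    while True:
        load_report("report.json", "my-image:latest")
        time.sleep(300)                         # re-read the report every 5 minutes

Point a Prometheus scrape job at the exporter, add Prometheus as a Grafana data source, and the severity counts can be graphed like any other metric.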

Grafana CloudWatch query to reference SERVICE_QUOTA

According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Quotas-Visualize-Alarms.html, we can use a CloudWatch math expression like m1/SERVICE_QUOTA(m1)*100 on the AWS CloudWatch console, where m1 is a CloudWatch metric.
Is there any way to do the same thing in Grafana's CloudWatch query, referencing SERVICE_QUOTA() in the Grafana CloudWatch query Expression field?
Sample screenshot of Grafana CloudWatch query expression
You can use CloudWatch metric math in Grafana. See the documentation: https://grafana.com/docs/grafana/latest/datasources/aws-cloudwatch/#metric-math-expressions
You can also find a ready-made dashboard for this use case: https://grafana.com/grafana/dashboards/12979
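In practice, you would typically add one CloudWatch query for the metric itself with its Id set to m1 (and hide it), then add a second query whose Expression field contains the same math as in the console, m1/SERVICE_QUOTA(m1)*100, which gives you usage as a percentage of the quota.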

Setting up telegraf config file to enable data ingress

I want to use Grafana to monitor data, utilizing the mathematical abilities of the Flux query language.
For this purpose, I've set up an InfluxDB and a Grafana server, and I installed Telegraf.
user#Logger-0271:~$ influx
Connected to http://localhost:8086 version 1.8.2
InfluxDB shell version: 1.8.2
> show databases
name: databases
name
----
_internal
localdb
brunosdb
telegraf
> use brunosdb
Using database brunosdb
> show measurements
name: measurements
name
----
flowrate
ruecklauftemp
vorlauftemp
Within Grafana, choosing InfluxDB as the query language (see below), I can plot the measurements just fine. However, if I choose Flux, I'm only able to plot the measurements in the telegraf database.
> use telegraf
Using database telegraf
> show measurements
name: measurements
name
----
cpu
disk
diskio
kernel
mem
processes
swap
system
InfluxDB as Query Language:
FLUX as Query Language:
I tried to manually insert data into the telegraf database of InfluxDB, but it does not appear as a Grafana measurement.
How can I input data into InfluxDB and then use Flux in Grafana to perform calculations and plot the data? I've read that the inputs have to be defined in the config file, but I don't know how.
I was able to enable an MQTT data ingress by changing the [[inputs.mqtt_consumer]] section of the config file.
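For anyone hitting the same problem, here is a minimal sketch of the relevant telegraf.conf sections; the broker URL, topics and database name are placeholders for your own setup, and it assumes the incoming MQTT payloads are already in InfluxDB line protocol:

# telegraf.conf -- minimal sketch, placeholder values
[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]
  topics = ["sensors/#"]
  data_format = "influx"          # payloads are expected in InfluxDB line protocol

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"

After restarting Telegraf, messages published to those topics should show up as measurements in the configured database and be queryable from Grafana.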

Sending metrics from kafka to grafana

I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can it be done without a datasource?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can store metric data from Kafka in Elasticsearch via connectors. You can also store it in InfluxDB, Graphite, or Prometheus, and use the data source plugins that Grafana provides.
Kibana is also a good option. Kibana is similar to Grafana; Elasticsearch and Kibana are part of the Elastic Stack.
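If you'd rather skip connectors, a small consumer process can bridge the two. The sketch below assumes the kafka-python and influxdb client libraries, JSON-encoded metric messages, and placeholder topic, host and measurement names; it writes each message into InfluxDB, which Grafana can then use as a data source.

# kafka_to_influx.py -- minimal sketch; topic, hosts and field names are placeholders
import json
from kafka import KafkaConsumer
from influxdb import InfluxDBClient

consumer = KafkaConsumer(
    "metrics",                                   # placeholder topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
influx = InfluxDBClient(host="localhost", port=8086, database="metrics")

for message in consumer:
    metric = message.value                       # e.g. {"name": "latency_ms", "value": 12.3}
    influx.write_points([{
        "measurement": metric.get("name", "unknown"),
        "fields": {"value": float(metric.get("value", 0))},
    }])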
I found this open-source code that is basically a Kafka plugin for Grafana:
https://github.com/zylklab/lorca
You can either use it straight away or get inspired to write your own Grafana plugin.

Logging and event tracer on Kubernetes

Is there any way of getting merged logs from more than one deployment on Kubernetes? What's the best way of logging events for all deployments?
Look at the Elasticsearch, Logstash and Kibana (ELK) stack with Filebeat or Fluentd to ship log data from individual deployments/pods into your Elasticsearch database. Once the data is in your database, use Kibana to visualize and search your merged logs. Logstash can be used to modify your data in flight. A simple Google search should yield plenty of resources on doing this.