I have configured Kasten K10 and I am scanning images using the Trivy vulnerability scanner.
I can scan images and get reports:
[Screenshot: sample Trivy vulnerability report]
I want to export this report and display it in Grafana using Prometheus.
Can someone help me with how to export vulnerability reports to a Grafana dashboard?
The data generated by Trivy is in its own report format, so you will need a custom Prometheus exporter that ingests the Trivy report and exposes its findings as metrics in a format Prometheus can scrape; Grafana can then query them through a Prometheus data source.
Here is an example of a trivy-exporter for Prometheus. You can configure it in your environment: it processes Trivy scan reports and delivers them to Prometheus, and you can then simply query your Prometheus data source in Grafana and get your report data visualized.
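For illustration, a minimal Prometheus scrape job for such an exporter might look roughly like this (a sketch; the job name and the trivy-exporter:9115 target are assumptions, check the exporter's README for the actual port and metric names):

# prometheus.yml (fragment) - scrape job for a Trivy exporter
# "trivy-exporter:9115" is an assumed host:port, not from the exporter's docs
scrape_configs:
  - job_name: "trivy-exporter"
    scrape_interval: 60s
    static_configs:
      - targets: ["trivy-exporter:9115"]

In Grafana you would then build panels on top of whatever gauges the exporter exposes (typically one series per image/CVE severity).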
I have been trying to fetch metrics for my Cloud SQL (Postgres) instance to get insights into query performance, but I'm unable to find a way to fetch metrics that are in the BETA and ALPHA stages.
For example, the metric
database/postgresql/insights/perquery/execution_time is listed on the Google Cloud metrics page but does not show up in the Metrics Explorer.
I have tried fetching the metrics using the Java SDK, which seems to accept/recognise the request and the metric name but does not return any time-series data.
Curious to know if BETA/ALPHA metrics need additional configuration to be enabled?
The SQL metrics became available in the Metrics Explorer and the SDK after enabling Query Insights in the Google Cloud console.
Although this looks obvious in hindsight, it would be good to have a note mentioning this on the Google metrics page.
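For reference, Query Insights can also be enabled from the command line (a sketch; INSTANCE_NAME is a placeholder, and you should verify the flag against your gcloud version):

# Enable Query Insights on a Cloud SQL instance (INSTANCE_NAME is a placeholder)
gcloud sql instances patch INSTANCE_NAME \
  --insights-config-query-insights-enabled

After enabling it, the per-query metrics such as database/postgresql/insights/perquery/execution_time should start showing up.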
I want to use Grafana to monitor data, utilizing the mathematical abilities of the Flux query language.
For this purpose, I've set up an InfluxDB and a Grafana server, and I installed Telegraf.
user@Logger-0271:~$ influx
Connected to http://localhost:8086 version 1.8.2
InfluxDB shell version: 1.8.2
> show databases
name: databases
name
----
_internal
localdb
brunosdb
telegraf
> use brunosdb
Using database brunosdb
> show measurements
name: measurements
name
----
flowrate
ruecklauftemp
vorlauftemp
Within Grafana, choosing InfluxDB as the query language (see below), I can plot the measurements just fine. However, if I choose Flux, I'm only able to plot the measurements in the telegraf database.
> use telegraf
Using database telegraf
> show measurements
name: measurements
name
----
cpu
disk
diskio
kernel
mem
processes
swap
system
[Screenshot: datasource settings with InfluxDB as the query language]
[Screenshot: datasource settings with Flux as the query language]
I tried to manually insert data into the telegraf database of the InfluxDB, but it does not appear as a Grafana measurement.
How can I input data into the InfluxDB and then use Flux in Grafana to perform calculations and plot the data? I've read that the inputs have to be defined in the config file, but I don't know how.
I was able to enable MQTT data ingress by changing the [[inputs.mqtt_consumer]] section of the Telegraf config file, as sketched below.
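A minimal sketch of the relevant telegraf.conf section (the broker address, topic filter, and data format are assumptions from my setup):

# telegraf.conf - consume MQTT messages and write them to InfluxDB
[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]   # assumed MQTT broker address
  topics = ["sensors/#"]               # assumed topic filter
  data_format = "influx"               # payload in InfluxDB line protocol

After restarting Telegraf, the new measurements show up in the telegraf database and can be queried from Grafana's Flux editor, starting from e.g. from(bucket: "telegraf/autogen") (in InfluxDB 1.8, Flux buckets are named "database/retention-policy").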
I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can it be done without a data source?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can store metric data from Kafka in Elasticsearch via connectors, as sketched below. You can also store them in InfluxDB, Graphite, or Prometheus, and use the data source plugins that Grafana provides.
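As a rough sketch, a Kafka Connect Elasticsearch sink could be configured like this (the connector name, topic, and Elasticsearch URL are assumptions; see the Confluent Elasticsearch connector docs for your versions):

# elasticsearch-sink.properties - hypothetical Kafka Connect sink for a metrics topic
# "metrics" and "localhost:9200" are placeholders for your topic and ES address
name=metrics-es-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=metrics
connection.url=http://localhost:9200
key.ignore=true
schema.ignore=true

Once the records land in an Elasticsearch index, you add that index as an Elasticsearch data source in Grafana and chart it.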
Using Kibana is also a good option. Kibana is similar to Grafana; Elasticsearch and Kibana are part of the Elastic Stack.
I found this open source code, which is basically a Kafka plugin for Grafana:
https://github.com/zylklab/lorca
You can either use it straight away or get inspired by it to write your own Grafana plugin.
I have a MongoDB instance using the database profiler to collect the slowest queries. How can I send this information to Datadog and analyze it in my Datadog dashboard?
Once the Datadog Agent is properly installed on your server, you can use the custom metrics feature to let Datadog read your query results into a custom metric, and then use that metric to create a dashboard.
You can find more on custom metrics in the Datadog documentation here.
The checks work with YAML files, so be careful with the formatting of the YAML file that will hold your custom metric.
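As a sketch, the Agent's MongoDB integration supports custom queries in its conf.yaml, which could read from the profiler collection roughly like this (the database name, the 100 ms threshold, and the metric names are all assumptions; check the mongo integration docs for the exact schema your Agent version supports):

# /etc/datadog-agent/conf.d/mongo.d/conf.yaml (fragment) - hypothetical custom query
instances:
  - hosts:
      - localhost:27017
    database: mydb                 # assumed database with profiling enabled
    custom_queries:
      - metric_prefix: mongodb.profiler
        # count profiler entries slower than an assumed 100 ms threshold
        query: { "count": "system.profile", "query": { "millis": { "$gt": 100 } } }
        fields:
          - field_name: n          # assumed field in the count command result
            name: slow_query_count
            type: gauge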
I want to track Akka actor metrics, and for that I am using Kamon, a JVM monitoring tool which requires a backend service to post its stats data to. For this purpose I've decided to use open source StatsD in combination with Grafana & Graphite. Here is the Grafana image which I ran in Docker (with the help of the Docker tool, since I am on a Mac); everything is working fine. I am able to see the Grafana UI, but it's showing some random data in the graphs; maybe these are example graphs. Now I am struggling with how to configure it with my own data source. If anybody here has had the same experience in the past, can you help me? Any kind of help would be appreciated.
The random graphs you are seeing come from the default Grafana test data source.
You first need to configure a new data source that points at the Graphite metrics. The important thing to realise here is that the Graphite URL, as seen from Grafana, is located within the same Docker container, i.e. localhost.
If you set up a new datasource with the following properties:
Name: graphite
Default: checked
Type: Graphite
URL: http://localhost:8000
Access: proxy
You should then have a datasource that points to the Graphite metric data within the Docker container.
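If you want this configuration to survive container rebuilds, recent Grafana versions can also provision the data source from a YAML file instead of the UI (a sketch; the file path and values simply mirror the settings above):

# /etc/grafana/provisioning/datasources/graphite.yaml
apiVersion: 1
datasources:
  - name: graphite
    type: graphite
    url: http://localhost:8000
    access: proxy
    isDefault: true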
Note - the default username/password for the Grafana UI is admin/admin.
Hope this helps.