How to set up StatsD (along with Grafana & Graphite) as backend for Kamon? - scala

I want to track Akka actor metrics, and for that I am using Kamon, a JVM monitoring tool which requires a backend service to post its stats data to. For this purpose I've decided to use the open source StatsD in combination with Grafana & Graphite. I ran the Grafana image in Docker (with the help of the Docker tooling, since I am on a Mac) and everything is working fine: I can see the Grafana UI screen, but it is showing some random data in the graphs, which may be example graphs. Now I am struggling with how to configure it with my own datasource. If anybody here has had the same experience, can you help me? Any kind of help would be appreciated.

The random graphs you are seeing come from the default Grafana test datasource.
You first need to configure a new datasource that points at the Graphite metrics. The important thing to realise here is that Graphite lives inside the same Docker container as Grafana, so from Grafana's point of view the datasource URL is on localhost.
If you set up a new datasource with the following properties:
Name: graphite
Default: checked
Type: Graphite
URL: http://localhost:8000
Access: proxy
You should then have a datasource that points to the Graphite metric data within the Docker container.
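If you are running Grafana 5 or later, you can also provision the same datasource from a file instead of clicking through the UI. A minimal sketch (the file path is only a convention):

# /etc/grafana/provisioning/datasources/graphite.yml
apiVersion: 1
datasources:
  - name: graphite
    type: graphite
    access: proxy
    url: http://localhost:8000
    isDefault: true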
Note - the default username/password for the Grafana UI is admin/admin.
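On the Kamon side of the original question, the reporting backend is configured in your application.conf. A minimal sketch for the kamon-statsd module (key names follow the module's reference configuration; the hostname, port and interval values are assumptions matching the Docker image's defaults):

# application.conf
kamon {
  statsd {
    hostname = "127.0.0.1"     # StatsD running alongside Graphite in the container
    port = 8125                # default StatsD UDP port
    flush-interval = 10 seconds
  }
}

Depending on your Kamon version you may also need to start the instrumentation explicitly (e.g. Kamon.start() in the 0.x series) before metrics begin flowing.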
Hope this helps.

Related

Grafana (v. 8.4.1) not connecting to InfluxDB (v.2.1.1) database

I have three Docker containers running. The first runs a Python script that writes data from a sensor to the InfluxDB emon_data bucket running in a second container. This works perfectly, and I can run queries and create dashboards within InfluxDB. The third container runs Grafana. The data source setting in Grafana that establishes the connection to InfluxDB seems to be correct, as it confirms having a connection to the data source.
However, when I go to set up a dashboard in Grafana, it keeps throwing an error stating that the database cannot be found.
I have tried to find information on this error but am not finding much and what I am finding seems to be for much older versions of InfluxDB and Grafana. Any suggestions or pointers on how to resolve this would be much appreciated.

Does Jaeger support Grafana Tempo backend?

I'm starting out with tracing tools. I would like to use Grafana Tempo as the backend storage and Jaeger as the UI. Is it possible that this stack will work together? I'm running it in Docker via the official docker-compose files.
I checked the Jaeger documentation and did not find anything about Grafana Tempo support. There is only Elastic, Cassandra, Fluxdb etc., but not Grafana Tempo.
Thanks
You have to remember when you use Tempo (or Loki) that these systems do not index the data. This is why they are so inexpensive to operate; the trade-off is that you cannot do a full-text search across the data in bulk, and that is why Jaeger does not support Tempo as a backend. The way all the Grafana projects work is that when you troubleshoot you start with a metric, isolate down to a small timeframe or a specific component, and then pivot to logs or traces. Unfortunately, when troubleshooting there are lots of good reasons to start with logs or traces instead, and that is not possible with these backends. This is the trade-off between indexing and not indexing, and it is why they are inexpensive to operate in comparison to OpenSearch/Elasticsearch.

Monitor smart power plug with Prometheus / Grafana

I barely managed to set up Prometheus & Grafana on my new Raspberry Pi (running Raspbian). Now I would like to monitor a smart power plug with a REST API. That means I could send a curl command and receive some data back:
$ curl --location --request GET '[Switch IP]/report'
{
  "power": 35.804927825927734,
  "relay": true,
  "temperature": 21.369983673095703
}
However I am at a loss as to how to get this data automagically queried and parsed by Prometheus. My Google Fu is failing me as all the results explain how to query Prometheus. Any hints would be greatly appreciated.
It's non-trivial, unfortunately.
Prometheus "scrapes" HTTP endpoints and expects these to publish metrics using Prometheus' exposition format. This is a simple text format that lists metrics with their values. I was unable to find a good example.
You would need an "exporter" that interacts with your devices, creates metrics in the Prometheus format, and publishes these on an HTTP endpoint (not REST, just a simple text page).
Then you'd point the Prometheus server at this exporter's endpoint, and Prometheus would periodically read the metrics representing your device and enable you to interact with the results; a sketch of such an exporter follows.
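As a rough sketch of such an exporter, assuming the /report endpoint from your question and using the official prometheus_client Python library (the plug address, port and metric names are placeholders):

import time

import requests
from prometheus_client import Gauge, start_http_server

PLUG_URL = "http://192.168.0.50/report"  # placeholder: your switch IP

# One gauge per field in the plug's JSON response
power = Gauge("plug_power_watts", "Current power draw in watts")
relay = Gauge("plug_relay_on", "1 if the relay is on, 0 otherwise")
temperature = Gauge("plug_temperature_celsius", "Plug temperature in Celsius")

if __name__ == "__main__":
    start_http_server(8000)  # serves the metrics page on :8000/metrics
    while True:
        data = requests.get(PLUG_URL, timeout=5).json()
        power.set(data["power"])
        relay.set(1 if data["relay"] else 0)
        temperature.set(data["temperature"])
        time.sleep(15)  # poll a bit more often than Prometheus scrapes

Point a scrape_config at this process (e.g. a target of raspberrypi:8000) and the three metrics become queryable in Prometheus and, from there, in Grafana.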
There are a few possible approaches to make this a bit more straightforward:
https://github.com/ricoberger/script_exporter — an exporter that runs scripts and exposes their output as metrics
https://github.com/grafana/agent/issues/1371 — discussing a possible script_exporter integration
https://github.com/prometheus/pushgateway — Prometheus’ push gateway
https://github.com/prometheus/blackbox_exporter — Prometheus’ blackbox exporter
https://medium.com/avmconsulting-blog/pushing-bash-script-result-to-prometheus-using-pushgateway-a0760cd261e — this post shows something similar

Sending metrics from kafka to grafana

I have a use case in which metrics will be written to Kafka topics, and from there I have to send these metrics to a Grafana collection point.
Can it be done without a datasource?
Any idea how it can be done?
You need to store your metrics somewhere and then visualize them. If you want to use Grafana, you can store metric data from Kafka in Elasticsearch via connectors. You can also store them in InfluxDB, Graphite, or Prometheus, and use the data source plugins that Grafana provides.
Kibana is also a good option; Kibana is similar to Grafana, and Elasticsearch and Kibana are both part of the Elastic Stack.
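If you take the Kafka-to-Elasticsearch route, the connector is defined with a small JSON payload posted to the Kafka Connect REST API. A minimal sketch, assuming Confluent's Elasticsearch sink connector and a topic called metrics (all names here are placeholders):

{
  "name": "metrics-es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "metrics",
    "connection.url": "http://elasticsearch:9200",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}

Once the documents land in an index, add Elasticsearch as a Grafana datasource and build panels against that index.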
I found this open source project, which is basically a Kafka plugin for Grafana:
https://github.com/zylklab/lorca
You can either use it straight away or get inspired to write your own Grafana plugin.

How to enable systemd collector in docker-compose.yml file for node exporter

Hi, I'm new to Prometheus. I have a task to make Prometheus show systemd service metrics (I use Grafana for visualization). I'm using the stefanprodan/dockprom example as my starting point; however, I couldn't find how to enable the systemd collector in the node exporter section of the docker-compose.yml while leaving all of the collectors that are enabled by default. I also need help with getting that info sent into Grafana. I would appreciate example code, or a pointer to an adequate explanation of how to do it for dummies, because I'm not experienced. Thanks in advance.
In order to enable the systemd collector in node_exporter, the command line flag --collector.systemd needs to be passed to the exporter (reference). The default collectors will remain enabled, so you don't need to worry about that.
In order to pass that flag to the application, you need to add it to the command portion of the nodeexporter section of the Docker Compose file (here).
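For example, a sketch of what that section could look like with the flag added (the existing dockprom flags are reproduced only roughly; the D-Bus socket mount is an assumption, since the systemd collector needs to reach the host's systemd from inside the container):

nodeexporter:
  image: prom/node-exporter
  volumes:
    - /proc:/host/proc:ro
    - /sys:/host/sys:ro
    - /:/rootfs:ro
    - /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket  # assumption: required for the systemd collector
  command:
    - '--path.procfs=/host/proc'
    - '--path.sysfs=/host/sys'
    - '--path.rootfs=/rootfs'
    - '--collector.systemd'  # enables the systemd collector; defaults stay on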
In regards to sending the data to Grafana: as long as you have your Prometheus data source configured in Grafana, those metrics will show up automatically. You don't need to update the Prometheus-to-Grafana link when adding or removing metrics (or really ever, after the initial setup).
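If the Prometheus datasource is not set up yet, dockprom provisions one for you; a hand-written equivalent looks roughly like this:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true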