I am starting to use the Grafana environment and I have seen that there are plugins to compare time series. But now I'm starting to use Loki in Grafana, and I need the same functionality as when I write normal queries.
How can I compare the data of one week with the previous week in Grafana Loki?
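One common approach (a hedged sketch, assuming a Loki version recent enough to support the offset modifier in metric queries; the job label and line filter are placeholders) is to add the same metric query to a panel twice, once shifted back by one week:
sum(count_over_time({job="myapp"} |= "error" [1h]))
sum(count_over_time({job="myapp"} |= "error" [1h] offset 1w))
Graphing both series on one panel lets you compare the current week against the previous one.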
I have updated my Influx database and also mapped the databases. But now I get the following problem in Grafana:
InfluxDB Error: default retention policy not set for database
InfluxDB Error: not executed
What could be the reason? I can read the values via Flux without any problems, but I would like to continue using InfluxQL.
In order to continue using InfluxQL, you will need to set up a database/retention-policy (DBRP) mapping for your new 2.x buckets, so that InfluxQL can treat them like 1.x databases. Have you done this already?
Docs to refer:
https://docs.influxdata.com/influxdb/cloud/query-data/influxql/dbrp/#create-dbrp-mappings
Example:
influx v1 dbrp create --default --bucket-id 520047e21111111 --db telegraf --rp default
I think you may need to change default to autogen (the last parameter). I used default because that seems to be what Grafana 9 expects (not confirmed). You can see it in your error message:
InfluxDB Error: default retention policy not set for database
Of course, you need to create such a mapping for each bucket you have.
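To confirm the result, you can list the existing mappings (this subcommand sits next to the create command above in the influx CLI v2):
influx v1 dbrp list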
You may also find this useful as an example of connecting Grafana 9.1 to InfluxDB 2.4.
See "Configure InfluxDB authentication": https://docs.influxdata.com/influxdb/v2.1/tools/grafana/?t=InfluxQL
You need to pass the Authorization header in this format, with a space in it:
Token y0uR5uP3rSecr3tT0k3n
You can generate the token in the InfluxDB web GUI (it will be long, and I think it is Base64-encoded).
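As a quick test outside Grafana, you can query the 1.x-compatibility /query endpoint directly (a sketch; host, token, database, and measurement are placeholders):
curl -G http://localhost:8086/query \
  --header "Authorization: Token y0uR5uP3rSecr3tT0k3n" \
  --data-urlencode "db=telegraf" \
  --data-urlencode "q=SELECT * FROM cpu LIMIT 5"
If the DBRP mapping is in place, this returns rows just like a 1.x server would.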
I'm starting with tracing tools. I would like to use Grafana Tempo as the backend storage and Jaeger as the UI. Is it possible for this stack to work together? I'm running it in Docker via the official docker-compose files.
I checked the Jaeger documentation and did not find anything about Grafana Tempo support. There is only Elasticsearch, Cassandra, InfluxDB, etc., but not Grafana Tempo.
Thanks
You have to remember when you use Tempo (or Loki) that these systems do not index the data itself: Loki indexes only a small set of labels, and Tempo only the trace ID. That is why they are so inexpensive to operate. The tradeoff is that you cannot do a full-text search across the data in bulk, and that is why Jaeger does not support Tempo as a backend. The way all the Grafana projects work is that when you troubleshoot, you start with a metric, isolate a small timeframe or a specific component, and then pivot to logs or traces. Unfortunately, there are lots of good reasons to start troubleshooting with logs or traces, but that is not possible with these backends. This is the tradeoff between indexing and not indexing, and it is why they are cheap to operate in comparison to OpenSearch/Elasticsearch.
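To illustrate, Tempo's lookup API fetches a trace whose ID you already know, for example one taken from a log line or a metric exemplar (a sketch; host, port, and the trace ID are placeholders):
curl http://tempo:3200/api/traces/2f3e0cee77ae5dc9c17ade3689eb2e54
Free-text search across all traces in bulk, which Jaeger expects its storage backends to provide, is exactly what this model leaves out.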
I want to use grafana to monitor data, utilizing the mathematical abilities of the FLUX query language.
For this purpose, I've set up an influxdb and a grafana server, and I installed telegraf.
user#Logger-0271:~$ influx
Connected to http://localhost:8086 version 1.8.2
InfluxDB shell version: 1.8.2
> show databases
name: databases
name
----
_internal
localdb
brunosdb
telegraf
> use brunosdb
Using database brunosdb
> show measurements
name: measurements
name
----
flowrate
ruecklauftemp
vorlauftemp
Within Grafana, choosing InfluxQL as the query language (see below) I can plot the measurements just fine. However, if I choose FLUX, I'm only able to plot the measurements in the telegraf database.
> use telegraf
Using database telegraf
> show measurements
name: measurements
name
----
cpu
disk
diskio
kernel
mem
processes
swap
system
(Screenshot: InfluxQL as query language)
(Screenshot: FLUX as query language)
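For reference, when FLUX queries an InfluxDB 1.8 instance, a bucket is addressed as a database/retention-policy pair. A minimal sketch against the databases shown above (assuming the default autogen retention policy):
from(bucket: "brunosdb/autogen")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "flowrate")
  |> mean()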
I tried to manually insert data into the telegraf database of the InfluxDB, but it does not appear as a Grafana measurement.
How can I input data into the InfluxDB and then use FLUX in Grafana to perform calculations and plot the data? I've read that the inputs have to be defined in the Telegraf config file, but I don't know how.
I was able to enable an MQTT data ingress by changing the [[inputs.mqtt_consumer]] section of the config file.
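For anyone hitting the same question, a minimal sketch of that section (broker address, topic, and data format are assumptions for a typical setup):
[[inputs.mqtt_consumer]]
  servers = ["tcp://localhost:1883"]
  topics = ["sensors/#"]
  data_format = "influx"
Telegraf then forwards whatever arrives on those topics to its InfluxDB output, where FLUX can query it.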
I have a working Kubernetes cluster that I want to monitor with Grafana.
I have been trying out many dashboards from https://grafana.com/dashboards but they all seem to have some problems: it looks like there's a mismatch between the Prometheus metric names and what the dashboard expects.
E.g. if I look at this recently released, quite popular dashboard: https://grafana.com/dashboards/5309/revisions
I end up with many "holes" when running it (panels with no data).
Looking into the panel configuration, I see that the issues come from small metric-name changes, e.g. node_memory_Buffers instead of node_memory_Buffers_bytes.
Similarly, the dashboard expects node_disk_bytes_written when Prometheus provides node_disk_written_bytes_total.
I have tried out a lot of Kubernetes-specific dashboards and I have the same problem with almost all of them.
Am I doing something wrong?
The Prometheus node exporter changed a lot of the metric names in the 0.16.0 version to conform to new naming conventions.
From https://github.com/prometheus/node_exporter/releases/tag/v0.16.0:
Breaking changes
This release contains major breaking changes to metric names. Many metrics have new names, labels, and label values in order to conform to current naming conventions.
- Linux node_cpu metrics now break out guest values into separate metrics.
- Many counter metrics have been renamed to include _total.
- Many metrics have been renamed/modified to include base units; for example, node_cpu is now node_cpu_seconds_total.
See also the upgrade guide. One of its suggestions is to use compatibility recording rules that create duplicate metrics under the old names.
Otherwise, stay on version 0.15.x until the dashboards are updated, or fix them yourself!
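As a hedged sketch, compatibility rules of that kind are ordinary Prometheus recording rules (the group name is arbitrary; only the two renames mentioned above are shown):
groups:
  - name: node_exporter_compat
    rules:
      - record: node_memory_Buffers
        expr: node_memory_Buffers_bytes
      - record: node_disk_bytes_written
        expr: node_disk_written_bytes_total
With these loaded, old dashboards keep working while you migrate them to the new names.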
I have a MongoDB using the database profiler to collect the slowest queries. How can I send this information to Datadog and analyze it in my Datadog dashboard?
Once the Datadog Agent is properly installed on your server, you can use the custom-metrics feature to have the Agent read your query results into a custom metric, and then use that metric to build a dashboard.
You can find more on custom metrics in the Datadog documentation.
Agent checks are configured with YAML files, so be careful with the formatting of the YAML file that will hold your custom metric.
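A minimal sketch of such a custom check (the check name, metric name, 100 ms threshold, and connection details are all assumptions; the profiler writes its documents to the system.profile collection):

conf.d/mongo_slow_queries.yaml:
instances:
  - uri: mongodb://localhost:27017
    database: mydb

checks.d/mongo_slow_queries.py:
from datadog_checks.base import AgentCheck
from pymongo import MongoClient

class MongoSlowQueriesCheck(AgentCheck):
    def check(self, instance):
        # connect to the instance configured in the YAML file
        client = MongoClient(instance.get("uri", "mongodb://localhost:27017"))
        db = client[instance.get("database", "mydb")]
        # count profiler entries slower than 100 ms
        slow = db["system.profile"].count_documents({"millis": {"$gt": 100}})
        self.gauge("mongodb.custom.slow_queries", slow)

The gauge then appears in Datadog as mongodb.custom.slow_queries and can be graphed on a dashboard.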