Grafana dashboard variable from Loki logs

I'm a beginner with Grafana Loki. We have a running instance that works without issue (we can see the logs themselves), and now we want to define some variables and monitor them in a dashboard.
Below is one of our logs, forwarded from promtail -> loki -> grafana and belonging to the job "mqtt_log".
We want to extract the "534654234" and the "1" from the log as two variables and monitor them in the dashboard.
2022-11-02 12:16:23 mqtt_log 2022-11-02 12:16:23,428 - AliyunMqtt - INFO - elevator/534654234/cabin/position/: b'{"Name":"Group.Elevators{EquipmentNumber=534654234}.Cabins:0.Position","Value":"{"Group":"1"}","Representation":"Live","TimeStamp":1667362583365}'
The problem is that we don't know how to define the variables. Can anyone share some pointers? Thanks.

You can't create a dynamic dashboard variable from parsed log content (only a hardcoded one). Dynamic variables can only be populated from existing labels.
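To build on that: if you do want the equipment number available as a dashboard variable, one option is to promote it to a label when promtail ships the log. The snippet below is only a sketch under assumptions not in the question: a file-based scrape config, a made-up log path (/var/log/mqtt/*.log), and a label name I invented (equipment_number).

scrape_configs:
  - job_name: mqtt_log
    static_configs:
      - targets:
          - localhost
        labels:
          job: mqtt_log
          __path__: /var/log/mqtt/*.log   # assumed path to the MQTT log files
    pipeline_stages:
      # Capture the equipment number from each line with a named group
      - regex:
          expression: 'EquipmentNumber=(?P<equipment_number>\d+)'
      # Promote the captured value to a Loki label so Grafana can enumerate it
      - labels:
          equipment_number:

With that label in place, a dashboard variable on the Loki datasource can be populated with a query such as label_values(equipment_number), or label_values({job="mqtt_log"}, equipment_number) to scope it to this job. Be aware that equipment numbers can be high-cardinality and Loki recommends keeping the set of label values small; the per-message value (the "1" in "Group":"1") changes constantly, so it is better extracted at query time in a panel with LogQL's regexp or pattern parser than turned into a label.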

Related

Grafana (v. 8.4.1) not connecting to InfluxDB (v.2.1.1) database

I have three Docker containers running. The first runs a Python script that writes data from a sensor to the InfluxDB emon_data bucket running in a second container. This works perfectly and I can run queries and create dashboards within InfluxDB. The third container runs Grafana. The data source setting in Grafana that establishes the connection to InfluxDB seems to be correct, as it confirms having a connection to the data source - see picture.
However, when I go to set up a dashboard in Grafana, it keeps throwing an error stating that the database cannot be found - see picture.
I have tried to find information on this error but am not finding much, and what I am finding seems to be for much older versions of InfluxDB and Grafana. Any suggestions or pointers on how to resolve this would be much appreciated.

Grafana variable query is not retrieving all endpoints from a Prometheus pod target (but gets some of them)

While maintaining a simple setup of a Grafana pod and a Prometheus pod, plus a few others, within a cluster on Azure Kubernetes Service (AKS), I ran into an issue where an "instance" variable query set up in Grafana only retrieves data for some of the 52 VMs/instances in a Prometheus scrape job.
There are a couple of other Grafana variables that can be nested inside of (or around) the "instance" variable, but switching their order did nothing.
To elaborate, this is bizarre because I am getting perfect data for 11 of the 52 VMs/instances, while the others are not being populated in the "instance" Grafana variable as they should be. Maybe this is a backend issue, but I have not found any oddities while probing around with kubectl.
Thank you!
Not all of the 52 instances were sending the windows_iis_requests_total metric, hence the label_values query gave an incomplete response.
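In other words, the variable query was built on a metric that only some of the targets expose. A common fix (a suggestion following from the answer above, not something stated in the question) is to base the "instance" variable on a metric that every scraped target reports, such as up:

label_values(up{job="windows"}, instance)

where the job="windows" selector is just a placeholder for whatever the scrape job is actually called.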

EFK - Have a preconfigured filter by container that will appear in Kibana

I've got the EFK stack installed on kubernetes following this addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
What I want to achieve is having all the logs of the same pod together, and maybe some other filters as well. But I don't want to configure the filters in Kibana with the GUI; I'd like to have them preconfigured, so that some of my known containers (the containers that I want to monitor) are set up in advance and installed along with Kibana, rather than needing an additional step to import/export them. I'd like to have the predefined filters in a way that, immediately after the installation, I can go to "Discover", select the pod name that I want to see, and then see all the logs in the format:
My understanding, this being the first time that I use this tech, is near to zero, but I think that editing the fluentd-configmap.yml with the correct parameters should do the trick; however, none of my tries has altered what I see in Kibana.
Am I looking in the correct place to do this, or is this filter not meant for this use and I'm completely wasting my time? How could I create this filter in any case?
Any help, even if it is only a hint, would be appreciated.

How to enable systemd collector in docker-compose.yml file for node exporter

Hi, I'm new to Prometheus. I have a task to make Prometheus show systemd service metrics (I use Grafana for visualization). I'm using the stefanprodan/dockprom example as my starting point, but I couldn't find how to enable the systemd collector for node exporter in the node exporter section of the docker-compose.yml while also keeping all the collectors that are enabled by default. I also need help with getting that info sent to Grafana. I would appreciate example code, or a pointer to an adequate for-dummies explanation of how to do it, because I'm not experienced. Thanks in advance.
In order to enable the systemd collector in node_exporter, the command-line flag --collector.systemd needs to be passed to the exporter (reference). The default collectors will remain enabled, so you don't need to worry about that.
To pass that flag to the application, add it to the command portion of the nodeexporter section of the Docker Compose file (here).
As for sending the data to Grafana: as long as you have your Prometheus data source configured in Grafana, those metrics will show up automatically; you don't need to update the Prometheus->Grafana connection when adding or removing metrics (or really ever, after the initial setup).
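For illustration, here is a sketch of what the nodeexporter service could look like with that flag added. The image tag and the existing volumes/flags are copied loosely from dockprom and may differ in your copy, and the extra /run/systemd mount is an assumption: the systemd collector talks to systemd over its socket/D-Bus, and exactly what needs to be exposed to the container depends on your host and the user the container runs as.

services:
  nodeexporter:
    image: prom/node-exporter:v1.3.1        # example tag; use whatever dockprom pins
    container_name: nodeexporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - /run/systemd:/run/systemd:ro        # assumed mount so the collector can reach systemd
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
      - '--collector.systemd'               # the flag that enables the systemd collector
    restart: unless-stopped

Only --collector.systemd is the new part; everything else is the existing dockprom configuration left in place so the default collectors keep working.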

How to set up StatsD (along with Grafana & Graphite) as backend for Kamon?

I want to track Akka actor metrics, and for that I am using Kamon, a JVM monitoring tool, which requires a backend service to post its stats data to; for this purpose I've decided to use open-source StatsD in combination with Grafana & Graphite. Here is the Grafana image which I ran in Docker (with the help of the Docker tool, since I am on a Mac); everything is working fine. I am able to see the Grafana UI screen, but it's showing some random data in the graphs; maybe these are example graphs. Now I am struggling with how to configure it with my own datasource. If anybody here has had the same experience in the past, can you help me? Any kind of help would be appreciated.
The random graphs you are seeing come from the default Grafana test datasource.
You first need to configure a new datasource that points at the Graphite metrics. The important thing to realise here is that, from Grafana's point of view, the Graphite datasource URL is located within the same Docker container, i.e. on localhost.
If you set up a new datasource with the following properties:
Name: graphite
Default: checked
Type: Graphite
URL: http://localhost:8000
Access: proxy
You should then have a datasource that points to the Graphite metric data within the Docker container.
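If you would rather provision this datasource from a file instead of clicking through the UI, a minimal sketch (assuming a Grafana version new enough to support provisioning, i.e. 5.0+, and the standard /etc/grafana/provisioning/datasources/ directory) would be:

apiVersion: 1
datasources:
  - name: graphite
    type: graphite
    access: proxy
    url: http://localhost:8000   # Graphite runs in the same container, hence localhost
    isDefault: true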
Note - the default username/password for the Grafana UI is admin/admin.
Hope this helps.