I need to have multiple values passed in search box in grafana application logs dashboard - kubernetes

I need to have multiple values passed in the search box of a Grafana application-logs dashboard. Currently I am using the query below to search for a keyword in the logs panel.
kubernetes.namespace:$namespace AND kubernetes.pod.name:$pod AND stream:$log_stream AND message:/.*$search.*/
Datasource: Elasticsearch 7.6.1
This takes only a single string and does not accept multiple strings. I also need the regex to match anything (special characters, alphanumerics, etc.) in the logs.

Use the advanced variable format in the query, which is suitable for Lucene (Elasticsearch). E.g.:
${variable:lucene}
Doc: https://grafana.com/docs/grafana/latest/variables/advanced-variable-format-options/#lucene---elasticsearch
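With a multi-value variable, the :lucene format renders the selected values as a quoted Lucene OR group, so the panel query could become (a sketch, with the regex match on message replaced by the formatted variable):

```
kubernetes.namespace:$namespace AND kubernetes.pod.name:$pod AND stream:$log_stream AND message:${search:lucene}
```

With the values error and timeout selected, ${search:lucene} renders as ("error" OR "timeout"), so the query matches documents containing either term.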

Related

Conditional filtering logs in Grafana

I have a Loki instance connected as a data source to Grafana, collecting the log records from a web app. Only some of those logs, the ones related to the web server, contain "request_id=XYZ", and I would like to filter specific log records using a $request_id variable.
I can't parse request_id directly as a label, since not all of the logs contain this key-value pair.
In order to do filtering I make a query like this:
{compose_project="$var1",host="$var2"} | regexp `request=(?P<request_id>\w+)` | request_id="$request_id"
It works nicely; however, when no request_id is passed by the user, i.e. the $request_id variable is empty, I can't see all of the logs. Only the part WITHOUT request_id in the text is listed.
In a perfect scenario, without $request_id being set I would like to see all of the logs. Now I'm wondering: is it possible to somehow conditionally apply the request_id=$request_id filter only when the regex matches an occurrence of the extracted label in the log? Or is there maybe another way to accomplish this?
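One possible workaround (a sketch, not from the thread): drop the label filter and use a plain line filter instead, since |= "" matches every line. When $request_id is empty, the query degrades to showing all logs; when it is set, only lines containing that ID pass:

```
{compose_project="$var1", host="$var2"} |= "$request_id"
```

This loses the anchoring to the request= key, so an ID string appearing elsewhere in a line would also match.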

Grafana setting alerts using wildcard in query

Hi, I am using a JSON template dashboard in Grafana. While setting an alert I got a "Template variables are not supported in alert queries" error (data source: Prometheus).
So I decided to duplicate the query, disable its graph, and replace the variable with an actual value. It worked, but I have multiple servers configured.
Is it possible to set an alert for all servers using wildcards?
How do I pass server names as a wildcard?
Is this the correct approach, or are there better ways?
((node_filesystem_avail_bytes{instance="*mycompany*.com:9100",job="node",device!~'rootfs',mountpoint="/"} * 100) / node_filesystem_size_bytes{instance="*mycompany*.com:9100",job="node",device!~'rootfs',mountpoint="/"})
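Prometheus label matchers don't support shell-style * wildcards, but they do support regular expressions via =~, so the instance pattern above could be written as (a sketch; the host pattern is illustrative):

```
(node_filesystem_avail_bytes{instance=~".*mycompany.*\\.com:9100", job="node", device!~"rootfs", mountpoint="/"} * 100)
/
node_filesystem_size_bytes{instance=~".*mycompany.*\\.com:9100", job="node", device!~"rootfs", mountpoint="/"}
```

Since this form contains no template variables, it can be used in an alert rule directly and will cover every matching server.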

Retrieve Grafana dashboard query using the API

I would like to get the query used in each of my dashboards using the Grafana API.
The expr field in the JSON model menu of the UI seems to contain the query. Is there a way of querying this using the API?
You can't do that. There is no official API that will return all "dashboard queries". It isn't possible, because the frontend in the browser generates them, and the exact query depends on user input (e.g. time range, dashboard variables, macros used, ...) as well as on the datasource used.
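That said, the raw expr templates (before variable interpolation) are present in the dashboard JSON, which is retrievable via GET /api/dashboards/uid/:uid. A minimal sketch of pulling them out, with a hard-coded sample standing in for a real API response (the URL and token in the comment are placeholders):

```python
def extract_exprs(panel_container: dict) -> list[str]:
    """Collect the raw `expr` of every panel target, recursing into row panels."""
    exprs = []
    for panel in panel_container.get("panels", []):
        for target in panel.get("targets", []):
            if "expr" in target:
                exprs.append(target["expr"])
        exprs.extend(extract_exprs(panel))  # row panels can nest further panels
    return exprs

# In practice, fetch the JSON with something like:
#   curl -H "Authorization: Bearer $GRAFANA_API_KEY" \
#        https://grafana.example.com/api/dashboards/uid/<uid>
sample = {
    "dashboard": {
        "panels": [
            {"targets": [{"expr": 'up{job="node"}'}]},
            {"type": "row", "panels": [
                {"targets": [{"expr": "rate(http_requests_total[5m])"}]},
            ]},
        ]
    }
}
print(extract_exprs(sample["dashboard"]))
# → ['up{job="node"}', 'rate(http_requests_total[5m])']
```

Note these are the unexpanded templates as stored in the dashboard, not the final queries sent to the datasource.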

How do I handle fields in elasticsearch that contain a '_'?

I am using a Mongo-Connector targeting Elasticsearch. This works great for keeping Elasticsearch up to date, but I have a problem with one of the fields because it starts with an '_'. The data is continually replicated/streamed from MongoDB, so if I run a rename/reindex, new documents will start showing up with underscores again.
Kibana does not support underscores at the start of a field. What is the best practice for handling this?
I have filed an issue with elastic2-doc-manager for Mongo-Connector to support ingest nodes, but this feels like a much bigger issue with Kibana; all my attempts at fixing this using scripted fields and renaming the field have failed.
This seems like a huge problem. I see underscores in data everywhere, which seems like a very poor decision on the side of the Kibana team.
Kibana error: (screenshot omitted)
I have found some GitHub references to this issue, but no workarounds.
Closed issue: fields starting with underscore ( _ ) doesn't show
Never-merged pull request: Lift restriction of not allowing '_'-prefixed fields.
Open issue: Allow fields prefixed with _ (originally #4291)
Fields beginning with _ are reserved for use within Elasticsearch, and Kibana does not support them, at least not yet. A request for this, https://github.com/elastic/kibana/issues/14856, is still open.
Until then, if you would like to use the field in visualizations etc., I believe you need to rename it.
While you can't rename the field easily without using Logstash or Filebeat (and Mongo-Connector doesn't support either of them), you can instead use a scripted field, as below, to create a new field that copies the _-prefixed field's value. That way you can use the new field for visualizations etc. Add a new scripted field, e.g. itemType, with the script below and see if it works.
doc['_itemType.keyword'].value
Please note, though, that only keyword fields can be used like this; text-type fields won't work. If your _itemType field is of type text, modify the mapping to include a subfield keyword of type keyword under _itemType, and try the scripted field again.
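For that last step, the mapping change might look like this (a sketch; my-index is a placeholder, older Elasticsearch versions may need the mapping type in the URL, and the new subfield is only populated for documents indexed or updated after the change):

```
PUT my-index/_mapping
{
  "properties": {
    "_itemType": {
      "type": "text",
      "fields": {
        "keyword": { "type": "keyword", "ignore_above": 256 }
      }
    }
  }
}
```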

kibana error in displaying some data

I'm indexing from MongoDB 2.4.9 to Elasticsearch 1.1.1 using the River Plugin, and of course I'm using Kibana 3.
The documents I have in MongoDB contain a cidr field. The cidr is in the format:
"cidr" : "0.0.0.0/00"
I have a table panel and a terms panel in my Kibana dashboard.
The table panel shows the part 0.0.0.0/
and the terms panel shows the part 00.
I need both panels to show the WHOLE cidr value, like this: 0.0.0.0/00
Does anyone have any idea why these two panels are behaving this way?
Thank you
Elasticsearch is analyzing the input and splitting on the "/". Logstash should be creating a "raw" version of the field; try referencing "cidr.raw" in Kibana.
If you're not using Logstash, you'll need to update the Elasticsearch mapping yourself, either setting the field to not_analyzed or adding the ".raw" subfield.
The reference for using not_analyzed is here. Grab the current mapping, edit it, and post it back.
To add ".raw", check out the logstash default template, which shows you the magic to make a multi_field with ".raw".