Conditional filtering logs in Grafana

I have a Loki instance connected as a data source to Grafana, collecting log records from a web app. Only some of those logs (the ones related to the web server) contain request_id=XYZ, and I would like to filter for specific log records using a $request_id variable.
I can't parse request_id directly as a label, since not all of the logs contain this key-value pair.
In order to do the filtering I make a query like this:
{compose_project="$var1", host="$var2"} | regexp `request_id=(?P<request_id>\w+)` | request_id="$request_id"
It works nicely; however, when no request_id is passed by the user, i.e. the $request_id variable is empty, I can't see all of the logs: only the lines WITHOUT request_id in the text are listed.
In a perfect scenario, with $request_id unset I would like to see all of the logs. Now I'm wondering: is it possible to somehow apply the request_id="$request_id" filter conditionally, only when the regex matches an occurrence of the extracted label in the log? Or is there maybe another way to accomplish this?
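One possible workaround (a sketch, not from the original question) is to drop the label filter and use a regex line filter instead, since an empty regex matches every line:

{compose_project="$var1", host="$var2"} |~ "$request_id"

When $request_id is empty this shows all logs; when it is set, only lines containing the id are kept. The trade-off (an assumption about the log format) is that the id is then matched anywhere in the line rather than specifically in the request_id=... pair.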

Related

Fluent Bit prometheus_scrape input is not a record

I expose kube-state-metrics on an endpoint and scrape it using the prometheus_scrape input plugin (Fluent Bit 2.0). I want to select some of these metrics and send them to an Azure Log Analytics workspace as logs, but it seems like the scraped data is not a record, neither the whole dump nor the individual lines. When I write a regex parser and apply it via a filter, it gets applied no matter what key I specify in the filter, which is weird. And it seems like the lines are still not records, because even a Lua script can't operate on them; it can't even print them to stdout.
2022-12-02T15:48:19.388264036Z kube_pod_container_status_restarts_total{namespace="kube-system",pod="ama-logs-t6smx",uid="99825b27-919d-4943-bc7d-b87b56081297",container="ama-logs"} = 0
2022-12-02T15:48:19.388264036Z kube_pod_container_status_restarts_total{namespace="kube-system",pod="ama-logs-t6smx",uid="16195b27-915d-3963-bc7d-b86b56557297",container="ama-logs-prometheus"} = 0
2022-12-02T15:48:19.388264036Z kube_pod_container_status_restarts_total{namespace="kube-system",pod="aks-secrets-store-csi-driver-mk47n",uid="d7924927-caf4-39f3-a28b-356af3144f50",container="liveness-probe"} = 0
I tried dropping or altering the records with a Lua script, but it simply does not do anything to them; they still get printed to the screen as if I had done nothing with the script.
Is there any way to make them records? Why is this not working?
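For reference, this is roughly what a working Lua filter looks like (a minimal sketch; the callback name cb_print is an assumption, not from the question). If the scraped lines were ordinary records, this would print every key/value pair and pass each record through unchanged:

-- Minimal Fluent Bit Lua filter callback: receives (tag, timestamp, record),
-- where record is a Lua table of key/value pairs.
function cb_print(tag, timestamp, record)
    for k, v in pairs(record) do
        print(tag, k, tostring(v))
    end
    -- return code 0 = keep the record unmodified
    return 0, timestamp, record
end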

I need to pass multiple values in the search box of a Grafana application logs dashboard

I need to pass multiple values in the search box of a Grafana application logs dashboard. Currently I am using the query below to search for a keyword in the logs panel.
kubernetes.namespace:$namespace AND kubernetes.pod.name:$pod AND stream:$log_stream AND message:/.*$search.*/
Datasource: Elasticsearch 7.6.1
This takes only a single string and does not accept multiple strings. I need a regex that can search for anything (special characters, alphanumerics, etc.) in the logs.
Use the advanced variable format in the query; there is one suitable for Lucene/Elasticsearch. E.g.:
${variable:lucene}
Doc: https://grafana.com/docs/grafana/latest/variables/advanced-variable-format-options/#lucene---elasticsearch
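Applied to the query from the question, that would look something like this (a sketch; note the term form replaces the /.*$search.*/ regex, which cannot take multiple values directly):

kubernetes.namespace:$namespace AND kubernetes.pod.name:$pod AND stream:$log_stream AND message:${search:lucene}

With a multi-value $search variable set to, say, error and timeout, ${search:lucene} expands to ("error" OR "timeout"), so each selected value becomes a separate match on the message field.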

Firestore Security rule always returns null for resource

I am trying to create some Firestore security rules. However, every rule I write that involves anything other than pulling the current user's document from the users collection results in an error. There is some difference I am missing.
Here is the query and the data. The resource object is always null. Any get() call that pulls from the designs collection using the designId variable also results in null.
You're putting a pattern into the form, which is not valid. You need to provide the specific document for which you want to simulate a read or write. This means you need to copy the ID of an actual document into that field. It should be something like "/designs/j8R...Lkh", except you provide the real value.
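To illustrate (a hypothetical rule sketch; the ownerId field is an assumption, not from the question): with a rule like the one below, resource and any get() call only resolve when the simulator is given a concrete path such as /designs/j8R...Lkh, never the pattern /designs/{designId} itself.

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /designs/{designId} {
      // resource is the concrete document being read; simulating
      // against a pattern instead of a real document ID leaves it null
      allow read: if resource.data.ownerId == request.auth.uid;
    }
  }
}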

How to match a list of values from Database1 with a column in Database2 using JDBC Request in JMeter?

I am quite new to JMeter, so I am looking for the best approach to do this: I want to get a list of messageIDs from Database1, check whether these messageID values can be found in Database2, and then check the ErrorMessage column for these IDs against what I expect.
I have the JDBC Request working for extracting the list of messageIDs from Database1. JMeter returns the list to me, but now I'm stuck. I am not sure how to handle the "Variable names" and "Result variable names" fields in the JDBC Request and how to use them in the next Throughput Controller loop for the JDBC Request against Database2.
My JDBC request looks like this (PostgreSQL):
SELECT messageID FROM database1
ORDER BY created DESC
FETCH FIRST 20 ROWS ONLY
Variable names: messageid
Result variable names: resultDB1
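With that configuration, JMeter creates messageid_1, messageid_2, etc. holding the individual values, and resultDB1 holds the whole result set as a list of maps, one map per row keyed by column name, conceptually like this (the IDs are made up):

[{messageid=abc123}, {messageid=def456}, ...]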
Then I use a BeanShell Assertion to check whether the connection to the database is present and whether the response is empty.
But now I have to connect to a different database, so I need a new Throughput Controller with a new JDBC configuration, Request, etc. in it, and I don't know how to pass the messageid list on to this new request.
What I thought about was writing the list of results from Database1 to a file and then reading the values from that file for Database2, but that seems unnecessarily complicated; there should already be a solution for this in JMeter. Also, I am running my JMeter tests on a remote Linux server, so I don't want to complicate things by creating files and saving them somewhere.
You can convert your resultDB1 into a JMeter Property like:
props.put("resultDB1", vars.getObject("resultDB1"));
As per JMeter Documentation:
Properties are not the same as variables. Variables are local to a thread; properties are common to all threads
So basically JMeter Properties are a subset of Java Properties, which are global for the whole JVM.
Once done you will be able to access the value in other Thread Groups like:
ArrayList resultDB1 = (ArrayList)props.get("resultDB1");
ArrayList resultDB2 = (ArrayList)vars.getObject("resultDB2");
//your code to compare 2 result sets here
Also be aware that since JMeter 3.1 you should be using JSR223 test elements and the Groovy language for scripting, so consider migrating to the JSR223 Assertion at the next available opportunity.
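Put together, a JSR223 Assertion (Groovy) in the second Thread Group could compare the two result sets roughly like this sketch (the messageid column key and the failure message are assumptions based on the question):

// resultDB1 was stored as a property in the first Thread Group
def resultDB1 = (ArrayList) props.get('resultDB1')
// resultDB2 comes from the JDBC Request in this Thread Group
def resultDB2 = (ArrayList) vars.getObject('resultDB2')

// each entry is a map of column name -> value
def idsInDb2 = resultDB2.collect { it.get('messageid') } as Set
def missing = resultDB1.findAll { !(it.get('messageid') in idsInDb2) }

if (!missing.isEmpty()) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('messageIDs not found in Database2: ' + missing)
}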

How do I handle fields in Elasticsearch that start with a '_'?

I am using Mongo-Connector targeting Elasticsearch. This works great for keeping Elasticsearch up to date, but I have a problem with one of the fields because it starts with an '_'. The data is being replicated/streamed from MongoDB continually, so if I run a rename/reindex, the new documents will start showing up with underscores again.
Kibana does not support underscores at the start of a field. What is the best practice for handling this?
I have filed an issue with elastic2-doc-manager asking Mongo-Connector to support ingest nodes, but this feels like a much bigger issue with Kibana: all my attempts at fixing it using scripted fields and renaming the field have failed.
This seems like a huge problem. I see underscores in data everywhere; this looks like a very poor decision on the part of the Kibana team.
Kibana error: [screenshot of the error message]
I have found some GitHub references to this issue, but no workarounds.
Closed issue: "fields starting with underscore ( _ ) doesn't show"
Never-merged pull request: "Lift restriction of not allowing '_'-prefixed fields"
Open issue: "Allow fields prefixed with _" (originally #4291)
Fields beginning with _ are reserved for use within Elasticsearch, and Kibana does not support fields starting with _, at least not yet. A request for this, https://github.com/elastic/kibana/issues/14856, is still open.
Until then, if you would like to use the field in visualizations etc., I believe you need to rename it.
While you can't rename the field easily without using Logstash or Filebeat, and Mongo-Connector doesn't support either of them, you can instead use a scripted field as below to create a new field and copy the _ field's value. That way you can use the new field to visualize etc. Add a new scripted field, for example itemType, with the below script and see if it works.
doc['_itemType.keyword'].value
Please note, though, that only keyword fields can be used like this; text fields won't work. If your _itemType field is of type text, modify the mapping to include a sub-field named keyword of type keyword under _itemType, and then try the scripted field again.
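That mapping change could look roughly like this (a sketch; my-index is a placeholder for the real index name, and existing documents would need to be reindexed or updated before the sub-field is populated):

PUT my-index/_mapping
{
  "properties": {
    "_itemType": {
      "type": "text",
      "fields": {
        "keyword": {
          "type": "keyword"
        }
      }
    }
  }
}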