Kibana error in displaying some data - MongoDB

I'm indexing from MongoDB 2.4.9 into Elasticsearch 1.1.1 using the River Plugin. And of course, I'm using Kibana 3.
The documents I have in MongoDB contain a cidr field. The cidr is in the format:
"cidr" : "0.0.0.0/00"
I have a table panel and a terms panel in my Kibana dashboard.
The table panel shows the part 0.0.0.0/
and the terms panel shows the part 00.
I need both panels to show the WHOLE cidr value! Like this: 0.0.0.0/00
Does anyone have any idea why these two panels are behaving this way?
Thank you

Elasticsearch is analyzing the input and splitting it on the "/". Logstash would normally be creating a "raw" version of the field; try referencing "cidr.raw" in Kibana.
If you're not using Logstash, you'll need to update the Elasticsearch mapping yourself, either setting the field to not_analyzed or adding the ".raw" field.
The reference for using not_analyzed is here. Grab the current mapping, edit it, and post it back.
To add ".raw", check out the Logstash default template, which shows you the magic to make a multi_field with ".raw".

Related

Grafana - How to set a default field value for a visualization with a cloudwatch query as the data source

I'm new to Grafana, so I might be missing something obvious. But I have a custom CloudWatch metric that records HTTP response codes into buckets (e.g. 2xx, 3xx, etc.).
My Grafana visualization is using a query to pull and group data from CloudWatch, and the resulting fields are dynamic: 2xx (us-east-1), 2xx (us-west-1), 3xx (us-east-1), etc.
I then use transformations to aggregate those values for a global view of the data.
The problem is, I can't create the transformation until the data exists. I'd like to have a 5xx field, but since that data is sporadic, it doesn't show up in the UI and I can't find a way to force "5xx (...)" to exist and have it get used when/if those response codes start occurring.
Is there a way to create placeholder fields somehow to achieve this?
You can't create it in the UI, but you still have the option to edit it in the panel model directly. That model is JSON representing the whole panel. Edit it manually: in the panel menu click Inspect > Panel JSON and create and customize another item in the transformations section. It is not a very convenient way to edit a panel, but it will achieve your goal.
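As a rough sketch, a hand-added entry in that transformations array might look like the following; the organize transformation and the field names here are illustrative assumptions, not taken from the question, and the exact id/options depend on which transformation you need:

"transformations": [
  {
    "id": "organize",
    "options": {
      "renameByName": {
        "5xx (us-east-1)": "5xx"
      }
    }
  }
]

After saving the panel JSON, the transformation shows up in the Transform tab like any other.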

I need to have multiple values passed in the search box in a Grafana application logs dashboard

I need to have multiple values passed in the search box in a Grafana application logs dashboard. Currently I am using the below query to search for a keyword in the logs panel.
kubernetes.namespace:$namespace AND kubernetes.pod.name:$pod AND stream:$log_stream AND message:/.*$search.*/
Datasource: Elasticsearch 7.6.1
This takes only a single string and does not accept multiple strings. I need a regex to search for anything (including special characters, alphanumerics, etc.) in the logs.
Use the advanced variable format in the query, which is suitable for Lucene/Elasticsearch. E.g.:
${variable:lucene}
Doc: https://grafana.com/docs/grafana/latest/variables/advanced-variable-format-options/#lucene---elasticsearch
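Applied to the query above, each variable reference would use that format (a sketch: with a multi-value variable, ${pod:lucene} expands to a quoted OR group such as ("pod-1" OR "pod-2")):

kubernetes.namespace:${namespace:lucene} AND kubernetes.pod.name:${pod:lucene} AND stream:${log_stream:lucene} AND message:${search:lucene}

Note the original /.*$search.*/ regex wrapper is dropped here, because the quoted OR expansion does not nest cleanly inside a Lucene regex.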

How do I handle fields in Elasticsearch that contain a '_'?

I am using a Mongo-Connector targeting Elasticsearch. This works great for keeping Elasticsearch up to date, but I have a problem with one of the fields because it contains an '_'. The data is being replicated/streamed from MongoDB continually, so if I run a rename/reindex, the new documents will start showing up with underscores again.
Kibana does not support underscores at the start of a field name. What is the best practice for handling this?
I have filed an issue with elastic2-doc-manager for Mongo-Connector to support ingest nodes, but this feels like a much bigger issue with Kibana. All my attempts at fixing this using scripted fields and renaming the field have failed.
This seems like a huge problem. I see underscores in data everywhere; it seems like a very poor decision on the side of the Kibana team.
Kibana error: (screenshot not shown)
I have found some GitHub references to this issue, but no workarounds.
Closed issue: fields starting with underscore ( _ ) doesn't show
Never-merged pull request: Lift restriction of not allowing '_'-prefixed fields.
Open issue: Allow fields prefixed with _ (originally #4291)
Fields beginning with _ are reserved for use within Elasticsearch, and Kibana does not currently support fields with a leading _, at least not yet. A request for this - https://github.com/elastic/kibana/issues/14856 - is still open.
Until then, if you would like to use the field in visualizations etc., I believe you need to rename it.
While you can't rename the field easily without using Logstash or Filebeat, and Mongo-Connector doesn't support either of them, you can instead use a scripted field as below to create a new field that copies the _-prefixed field's value. That way you can use the new field to visualize etc. Add a new scripted field, for example itemType, with the below script and see if it works.
doc['_itemType.keyword'].value
Please note though that only keyword fields can be used like this; text type fields won't work. If your _itemType field is of type text, modify the mapping to include a sub-field keyword of type keyword under _itemType, as sketched below, and try the scripted field again.
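A minimal sketch of that mapping change for Elasticsearch 5.x+ (the index and type names myindex and mytype are assumptions; substitute the ones Mongo-Connector created):

curl -XPUT 'http://localhost:9200/myindex/_mapping/mytype' -H 'Content-Type: application/json' -d '
{
  "properties": {
    "_itemType": {
      "type": "text",
      "fields": {
        "keyword": { "type": "keyword", "ignore_above": 256 }
      }
    }
  }
}'

Adding a sub-field to an existing field is allowed, but it is only populated for documents indexed after the change, so existing documents need a reindex before the scripted field returns values.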

Titan - How to Use 'Lucene' Search Backend

I am attempting to use the Lucene search backend with Titan. I am setting the index.search.backend property to lucene like so:
TitanFactory.Builder config = TitanFactory.build();
config.set("storage.backend", "hbase");
config.set("storage.hostname", "node1");
config.set("storage.hbase.table", "titan");
config.set("index.search.backend", "lucene");
config.set("index.search.directory", "/tmp/foo");
TitanGraph graph = config.open();
GraphOfTheGodsFactory.load(graph);
graph.getVertices().forEach(v -> System.out.println(v.toString()));
Of course, this does not work because this setting is of the GLOBAL_OFFLINE variety. The logs make me aware of this. Titan ignores my 'lucene' setting and then attempts to use Elasticsearch as the search backend.
WARN com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration
- Local setting index.search.backend=lucene (Type: GLOBAL_OFFLINE)
is overridden by globally managed value (elasticsearch). Use
the ManagementSystem interface instead of the local configuration to control
this setting.
After some reading, I understand that I need to use the Management System to set the index.search.backend. I need some code that looks something like the following.
graph.getManagementSystem().set("index.search.backend", "lucene");
graph.getManagementSystem().set("index.search.directory", "/tmp/foo");
graph.getManagementSystem().commit();
I am confused on how to integrate this in my original example code above. Since this is a GLOBAL_OFFLINE setting, I cannot set this on an open graph. At the same time, I do not know how to get a graph unless I open one first. How do I set the search backend correctly?
There is no inmemory search backend. The supported search backends are Lucene, Solr, and Elasticsearch.
Lucene is a good option for a small scale, single machine search backend. You need to set 2 properties to do this, index.search.backend and index.search.directory:
index.search.backend=lucene
index.search.directory=/path/to/titansearchindexdir
As you've noted, the search backend is a GLOBAL_OFFLINE setting, so you should configure it before initially creating your graph. Since you've already created a titan table in your HBase, either disable and drop the titan table, or point your graph configuration at a new storage.hbase.table.
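Putting that together, a sketch of the original code pointed at a fresh table (titan2 is a hypothetical name) so the lucene setting is honored at initial creation:

// Build against a brand-new HBase table so the GLOBAL_OFFLINE
// index.search.backend value is written at initial graph creation.
TitanFactory.Builder config = TitanFactory.build();
config.set("storage.backend", "hbase");
config.set("storage.hostname", "node1");
config.set("storage.hbase.table", "titan2"); // hypothetical fresh table name
config.set("index.search.backend", "lucene");
config.set("index.search.directory", "/tmp/foo");
TitanGraph graph = config.open(); // managed value is now lucene, not elasticsearch
GraphOfTheGodsFactory.load(graph);

Once the graph has been created this way, the setting is stored globally and subsequent opens pick up Lucene without the warning.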

Link with context from Grafana to Kibana (retain time frame and lucene query)

I have Grafana set up with an Elasticsearch datasource, and I am graphing 404 HTTP status codes from my webserver.
I want to implement a drill down link to the Kibana associated with my Elasticsearch instance. The required URL is of this form:
https://my.elasticsearch.com/_plugin/kibana/#/discover?_g=(refreshInterval:(display:Off,section:0,value:0),time:(from:now-12h,mode:quick,to:now))&_a=(columns:!(_source),filters:!(),index:'cwl-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'status:404')),sort:!('#timestamp',desc))
For the from: and to: fields, I want to use the current "from" and "to" values that Grafana is using. And for the query: field, I want to use the value from the "Lucene query" of the associated metric.
Does Grafana expose some context object from which I can pull these values, and thus generate the necessary URL?
Or is there some other way?
It's now possible, starting with Grafana 7.1.2. A complete working example:
https://kibana/app/kibana#/discover/my-search?_g=(time:(from:'${__from:date}',to:'${__to:date}'))&_a=(query:(language:lucene,query:'host:${host:lucene}'))
https://github.com/grafana/grafana/issues/25396
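To wire this up in a dashboard, one option (a sketch; the exact location in the JSON varies by Grafana version, and the link title is arbitrary) is a panel link in the panel JSON, which interpolates ${__from:date}, ${__to:date}, and ${host:lucene} when clicked:

"links": [
  {
    "title": "Open in Kibana",
    "targetBlank": true,
    "url": "https://kibana/app/kibana#/discover/my-search?_g=(time:(from:'${__from:date}',to:'${__to:date}'))&_a=(query:(language:lucene,query:'host:${host:lucene}'))"
  }
]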