Syncing data from MongoDB to Elasticsearch using Logstash

I am trying to sync a MongoDB database to Elasticsearch. I am using the logstash-input-mongodb and logstash-output-elasticsearch plugins.
The issue is that the mongodb input plugin does not extract all of the information from a document inserted into MongoDB, so I see only a few of its fields in Elasticsearch. I also get the entire document as a log entry in the Elasticsearch index. I tried manipulating the filters in the Logstash config file and changing the Elasticsearch output, but could not make it work.
Any help or suggestion would be great.
Edit:
Mongo schema:
A: {
  B: 'sometext',
  C: { G: 'someText', H: 'some text' }
},
D: [
  { E: 'sometext', F: 'sometext' },
  { E: 'sometext', F: 'sometext' },
  { E: 'sometext', F: 'sometext' }
]
plugin:
input {
  mongodb {
    uri => 'mongodb://localhost:27017/testDB'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'testCOllection'
    batch_size => 1000
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    index => "testdb_testColl"
    hosts => ["localhost:9200"]
  }
}
output to elastic:
{
  // some metadata
  A_B: 'sometext',
  A_C_G: 'someText',
  A_C_H: 'some text',
  log_entry: 'contains complete document inserted to mongoDB'
}
We are not getting property D of the Mongo collection in Elasticsearch.
I hope this explains the problem in more detail.

Because your configuration looked good to me, I checked the issues of the phutchins/logstash-input-mongodb repo and found this one: "array not stored to elasticsearch", which pretty much describes your problem. It is still an open issue, but you might want to try the workaround suggested by ivancruzbht. That workaround uses the Logstash ruby filter to parse the log_entry field, which you confirmed contains all the fields, including D.
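For reference, the workaround in that issue uses a ruby filter to turn log_entry back into JSON and copy its fields onto the event. A sketch along those lines (the gsub replacements and error handling here are illustrative approximations of ivancruzbht's snippet, not a verified copy; check the issue for the exact code):

```
filter {
  ruby {
    code => "
      require 'json'
      begin
        # log_entry holds the document as a Ruby hash string, so rewrite
        # the =>'s and BSON::ObjectId wrappers into valid JSON first
        entry = event.get('log_entry').gsub('=>', ':')
                     .gsub(/BSON::ObjectId\('([^']+)'\)/, '\"\1\"')
        JSON.parse(entry).each { |k, v| event.set(k, v) }
      rescue StandardError
        event.tag('_log_entry_parse_failure')
      end
    "
  }
}
```

With this in place, nested properties such as D should appear as real fields on the event rather than only inside the log_entry string.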

Related

Displaying mongo data with Binary data

Relatively new to Mongo, more familiar with MySQL. (This is all done in the shell, mongosh.)
I have a collection that has Binary fields in it like this:
db> db.myColl.find()
[
  {
    _id: ObjectId("633c6e6af5c0fc6e55d6ad44"),
    MyID: '2',
    Data: Binary(Buffer.from("7b22....", "hex"), 0),
    OtherFieldsFollow.......,
  }
]
I've found that I can iterate over the results using a forEach loop like this:
db.myColl.find().forEach(function(x) { console.log(x.Data.toString()) })
However, I'm looking for something a bit more global. In MySQL I can apply transforms to each field in my select, e.g. SELECT UNHEX(Data), MyID FROM myColl.
Is there anything like that in Mongo, so that I can see the whole document with Data decoded to a string without having to iterate and manually console.log each field?
Slightly related: how do I index the list of documents returned from find()?
var docs = db.myColl.find()
docs[0] => nothing
console.log(docs[0]) => nothing
How do I access individual documents from a find result?

Sync data from MongoDB to Elasticsearch via Logstash

I just want to sync data from MongoDB to Elasticsearch using Logstash. It works well: when a new record arrives in MongoDB, Logstash pushes it into Elasticsearch. But when I update a record in MongoDB, the change does not appear in Elasticsearch, and when I delete a record, nothing happens either. I want to change the config file so that when any record is updated or deleted in Mongo, it is reflected in Elasticsearch as well.
input {
  mongodb {
    uri => 'mongodb://xxxxxx:32769/database'
    placeholder_db_dir => '/usr/share/logstash/bin/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'tags'
  }
}
filter {
  mutate {
    rename => { "_id" => "mongo_id" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    index => "mongo_data"
    hosts => ["https://xxxxxxxx:8443"]
    ssl => true
    doc_as_upsert => true
  }
}
That is not a use-case that the input was designed to support. The documentation states "This was designed for parsing logs that were written into mongodb. This means that it may not re-parse db entries that were changed and already parsed." For "may not" read "will not". The code builds a cursor that finds documents with an id greater than the last id it read. It never looks for updates or deletions. Note also that the test is "greater than the last id" and the way it initializes the last id means it never reads the first document in the collection.
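The cursor behaviour described above can be sketched without a real MongoDB, using a plain Ruby array as a stand-in collection (the documents and ids here are invented for illustration):

```ruby
# Stand-in for a MongoDB collection, ordered by _id.
documents = [
  { _id: 1, msg: 'first' },
  { _id: 2, msg: 'second' },
  { _id: 3, msg: 'third' }
]

# The plugin seeds its placeholder database with an existing _id,
# effectively the first document's id on an initial run.
last_id = documents.first[:_id]

# Equivalent of the plugin's strictly greater-than cursor,
# find(_id: { '$gt' => last_id }): the seed document itself is never
# read, and a document updated in place (same _id) is never re-read.
new_docs = documents.select { |d| d[:_id] > last_id }

puts new_docs.map { |d| d[:msg] }.inspect  # prints ["second", "third"]
```

Deletions are invisible for the same reason: a deleted document simply stops appearing in a query that only ever looks forward from the last id seen.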

How can I specify the fields to insert in the MongoDB output in Logstash

I have the output config below, but it inserts the raw data into MongoDB. How can I insert only a few selected fields? For InfluxDB we have a data_points attribute with which I can do this, but the MongoDB plugin doesn't seem to have any such feature.
mongodb {
  collection => "logs"
  database => "test"
  uri => "mongodb://localhost:27017"
  codec => line {
    enable_metric => "false"
    format => "data1:%{val1}, data2:%{val2}, data3:%{val3}"
  }
}
You can use the Logstash prune filter for this. The prune filter's whitelist_names setting removes all fields that are not enumerated in the array.
filter {
  prune {
    whitelist_names => ["field1", "field2", "field3"]
  }
}
Something I think is really cool about the prune filter is that it also accepts regular expressions, removing any field whose name does not match. So instead of the above, you could have:
filter {
  prune {
    whitelist_names => ["^field\d+"]
  }
}
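To see which fields a whitelist regex like that would keep, here is a plain-Ruby sketch of the matching (the field names are invented for the example; prune itself does the real work inside Logstash):

```ruby
# Candidate field names a Logstash event might carry; with
# whitelist_names => ["^field\d+"], only names matching the
# pattern survive the prune.
fields  = ['field1', 'field12', 'message', 'host']
pattern = /^field\d+/

kept = fields.select { |name| name.match?(pattern) }
puts kept.inspect  # prints ["field1", "field12"]
```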
Another note: the prune filter does not come installed by default. You must run bin/logstash-plugin install logstash-filter-prune first.

Perl module for Elasticsearch Percolator

I'm trying to use the Elasticsearch percolator with Perl, and I have found this cool module.
The percolation methods are listed here.
As far as I can tell, they are just read methods; hence it is only possible to read the queries index and see if a query already exists, count the queries matched, etc.
Unless I'm missing something, it is not possible to add queries via the percolator interface, so what I did was use the normal method to create a document against the .percolator type, as follows:
my $e = Search::Elasticsearch->new( nodes => 'localhost:9200' );
$e->create(
    index => 'my_index',
    type  => '.percolator',
    id    => $max_idx,
    body  => {
        query => {
            match => {
                ...whatever the query is....
            },
        },
    },
);
Is that the best way of adding a query to the percolator index via the perl module ?
Thanks!
As per DrTech's answer, the code I posted looks to be the correct way of doing it.

logstash mongodb output and ISODate type

I am having some trouble trying to convert a date field into MongoDB's ISODate format.
I have a RabbitMQ queue with JSON messages in it. These messages have a Date property like this:
Date: "2014-05-01T14:53:34.25677Z"
My Logstash service reads the RabbitMQ queue and injects the messages into MongoDB.
Here is my Logstash config file:
input {
  rabbitmq {
    ...
    codec => json
  }
}
output {
  mongodb {
    codec => json
    collection => "log"
    isodate => true
    database => "Test"
    uri => "mongodb://localhost:27017"
  }
}
My problem is that my Date property is inserted as a string instead of as a date. How can I tell Logstash to insert my Date field as an ISODate field into MongoDB?
Thank you
You should use the Logstash date filter to convert the string into a date prior to inserting it into MongoDB: http://logstash.net/docs/1.4.2/filters/date
I don't know your full schema, but it should look something like this:
filter {
  date {
    match => [ "Date", "ISO8601" ]
  }
}
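As a quick sanity check that the timestamp from the question really is ISO 8601 (so the "ISO8601" match pattern applies), Ruby's own Time parser accepts it as-is:

```ruby
require 'time'

# The Date value from the RabbitMQ message in the question.
raw = '2014-05-01T14:53:34.25677Z'

# Time.iso8601 parses it, fractional seconds and trailing Z included.
t = Time.iso8601(raw)
puts t.utc?  # prints true
puts t.year  # prints 2014
```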
Note the use of "ISO8601"; that appears to match the format you are receiving, but you may need to play around with it a bit. As you test this, I'd strongly suggest using the stdout output option for test runs, to easily see what is being done prior to insertion into MongoDB:
output {
  stdout { codec => rubydebug }
}