Real-time sync between MongoDB and Elasticsearch

Using logstash-input-mongodb, I am able to insert records into Elasticsearch in real time. But when it comes to updates, they are not reflected as expected. Can anyone guide me on this?
logstash-mongodb.conf
input {
  mongodb {
    uri => 'mongodb://127.0.0.1:27017/test-db'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'mycol'
    batch_size => 5000
    generateId => true
  }
}
filter {
  mutate { remove_field => "_id" }
}
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "test-index"
  }
}
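For updates in MongoDB to overwrite the already-indexed document rather than create a new one, the elasticsearch output needs a stable document_id together with upsert behaviour, as the answer to the similar question further down in this thread does. A minimal sketch, assuming the filter renames the MongoDB _id to a field such as doc_id instead of removing it (the field and index names are placeholders):
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "test-index"
    action => "update"
    doc_as_upsert => true
    document_id => "%{doc_id}"
  }
}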

Related

JDBC Logstash Elastic Kibana

I'm using the JDBC input plugin to ingest data from MongoDB into Elasticsearch.
My config is:
input {
  jdbc {
    jdbc_driver_class => "mongodb.jdbc.MongoDriver"
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mongodb_unityjdbc_free.jar"
    jdbc_user => ""
    jdbc_password => ""
    jdbc_connection_string => "jdbc:mongodb://localhost:27017/pritunl"
    schedule => "* * * * *"
    jdbc_page_size => 100000
    jdbc_paging_enabled => true
    statement => "select * from servers_output"
  }
}
filter {
  mutate {
    copy => { "_id" => "[@metadata][id]" }
    remove_field => ["_id"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "pritunl"
    document_id => "%{[@metadata][_id]}"
  }
  stdout {}
}
In Kibana I see only one hit, but in stdout I see many records from the MongoDB collection. What should I do to see them all?
The problem is that all your documents are saved with the same _id, so even though you're sending different records to ES, the same document keeps being overwritten internally - thus you get one hit in Kibana.
There is a typo in your configuration that causes this issue.
You're copying "_id" into "[@metadata][id]",
but you're trying to read "[@metadata][_id]", with an underscore.
Removing the underscore when reading the value for document_id should fix your issue.
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "pritunl"
    document_id => "%{[@metadata][id]}"
  }
  stdout {}
}

Elasticsearch searching with Perl client

I'm attempting to do something that should be simple, but I cannot get it to work. I've looked and searched all over for detailed documentation on the Perl Search::Elasticsearch module. I can only find the CPAN docs, and as far as searching is concerned it is barely mentioned. I've searched here and cannot find a duplicate question.
I have Elasticsearch and Filebeat. Filebeat is sending syslog to Elasticsearch. I just want to search for messages with matching text and a date range. I can find the messages, but when I try to add the date range the query fails. Here is the query from Kibana Dev Tools:
GET _search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "message": "metrics" }},
        { "range": { "timestamp": { "gte": "now-15m" }}}
      ]
    }
  }
}
I don't get exactly what I'm looking for but there isn't an error.
Here is my attempt with Perl:
my $results = $e->search(
  body => {
    query => {
      bool => {
        filter => {
          term => { message => 'metrics' },
          range => { timestamp => { 'gte' => 'now-15m' }}
        }
      }
    }
  }
);
This is the error.
[Request] ** [http://x.x.x.x:9200]-[400]
[parsing_exception]
[range] malformed query, expected [END_OBJECT] but found [FIELD_NAME],
with: {"col":69,"line":1}, called from sub Search::Elasticsearch::Role::Client::Direct::__ANON__
at ./elasticsearchTest.pl line 15.
With vars: {'body' => {'status' => 400,'error' => {
'root_cause' => [{'col' => 69,'reason' => '[range]
malformed query, expected [END_OBJECT] but found [FIELD_NAME]',
'type' => 'parsing_exception','line' => 1}],'col' => 69,
'reason' => '[range] malformed query, expected [END_OBJECT] but found [FIELD_NAME]',
'type' => 'parsing_exception','line' => 1}},'request' => {'serialize' => 'std',
'path' => '/_search','ignore' => [],'mime_type' => 'application/json',
'body' => {
'query' => {
'bool' =>
{'filter' => {'range' => {'timestamp' => {'gte' => 'now-15m'}},
'term' => {'message' => 'metrics'}}}}},
'qs' => {},'method' => 'GET'},'status_code' => 400}
Can someone help me figure out how to search with the Search::Elasticsearch Perl module?
Multiple filter clauses must be passed as separate JSON objects within an array (as in your initial JSON query), not as multiple keys in the same JSON object. The Perl data structure must mirror this:
filter => [
  { term => { message => 'metrics' }},
  { range => { timestamp => { 'gte' => 'now-15m' }}}
]
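Put together, a complete call would look roughly like this (a sketch; the node address is taken from the error output above and the client construction is assumed):
use Search::Elasticsearch;

# Connect to the cluster (node address as in the error output above).
my $e = Search::Elasticsearch->new( nodes => 'http://x.x.x.x:9200' );

# Both filter clauses go into a single array reference, mirroring the JSON array.
my $results = $e->search(
  body => {
    query => {
      bool => {
        filter => [
          { term  => { message   => 'metrics' } },
          { range => { timestamp => { gte => 'now-15m' } } },
        ],
      },
    },
  },
);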

How to sync changes from MongoDB to Elasticsearch (i.e. reflect changes in Elasticsearch when CRUD operations are performed in MongoDB) using Logstash

I just want to sync the data from MongoDB to Elasticsearch using Logstash. It works well when a new record comes into MongoDB: Logstash pushes it into Elasticsearch. But when I update a record in MongoDB, the change does not reach Elasticsearch. I want to change the config file so that when a record is updated in MongoDB, it is reflected in Elasticsearch as well.
I have already tried setting action => "index" as well as action => "update" in the .conf file.
Here is my mongodata.conf file
input {
  mongodb {
    uri => 'mongodb://localhost:27017/DBname'
    placeholder_db_dir => '/opt/logstash'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => "collectionName"
    batch_size => 500
  }
}
filter {
  mutate {
    remove_field => [ "_id" ]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["localhost:9200"]
    index => "mongo_hi_log_data"
  }
  stdout { codec => rubydebug }
}
I want to sync the data from MongoDB to Elasticsearch using Logstash whenever a record is updated or inserted in MongoDB.
input {
  mongodb {
    uri => 'mongodb://mongo-db-url/your-collection'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'products'
    batch_size => 5000
  }
}
filter {
  date {
    match => [ "logdate", "ISO8601" ]
  }
  mutate {
    # Keep the MongoDB _id as a regular field so it can be used as the document id.
    rename => { "[_id]" => "product_id" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://es-url"]
    user => "elastic"
    password => "changeme"
    index => "your-index"
    # Update the existing document (or insert it if it does not exist yet), keyed on
    # the MongoDB id, so an update in MongoDB overwrites the same document in ES.
    action => "update"
    doc_as_upsert => true
    document_id => "%{product_id}"
    timeout => 30
    workers => 1
  }
}

Logstash-input-mongodb configuration file

I am trying to pull data from MongoDB into Elasticsearch using Logstash.
I am using the logstash-input-mongodb plugin. This is my config file:
input {
  mongodb {
    uri => 'mongodb://localhost:27017/test'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'student'
    batch_size => 202
  }
}
filter {
}
output {
  elasticsearch {
    host => "localhost:9200"
    user => elastic
    password => changeme
    index => "testStudent"
  }
  stdout { codec => rubydebug }
}
I am getting an error saying:
Pipelines YAML file is empty.
Is this because I left the filter part empty?
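An empty filter block is valid, so that is unlikely to be the cause; the message refers to the pipelines.yml file that Logstash reads at startup, which must define at least one pipeline. For reference, a minimal pipelines.yml entry looks roughly like this (the pipeline id and config path are assumptions; adjust them to your installation):
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"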

ELK tagging and type not filtering syslog

ELK stack version
Logstash: 5.1.2
Kibana: 5.1.2
Elasticsearch: 5.1.2
I have the below Logstash configuration to send my router's syslog events to Elasticsearch.
My router is configured to send events to port 5514, and I can see the logs in Kibana.
But I would like to ensure that all events sent to port 5514 are given the type syslog-network, which is then filtered by 11-network-filter.conf and sent to the Elasticsearch logstash-syslog-%{+YYYY.MM.dd} index.
At present all the syslog events are falling under the logstash index.
Any ideas why?
03-network-input.conf
input {
  syslog {
    port => 5514
    type => "syslog-network"
    tags => "syslog-network"
  }
}
11-network-filter.conf
filter {
  if [type] == "syslog-network" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp}%{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
30-elasticsearch-output.conf
output {
  if "file-beats" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if "syslog-network" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-syslog-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
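One way to narrow this down is to confirm that the type and tag are actually set on the events leaving the pipeline, for example by temporarily adding a rubydebug output (a debugging sketch, not part of the original configuration):
output {
  stdout { codec => rubydebug }
}
If the printed events carry neither type => "syslog-network" nor the syslog-network tag, the syslog input on port 5514 is not the one producing them; if they do carry them, the problem lies in the output conditionals.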