Ingesting data in MongoDB with mongodb-output-plugin in Logstash - mongodb

I am trying to ingest data from a txt file into MongoDB (Machine 1), using Logstash (Machine 2).
I set up a DB and a collection with Compass, and I am using the mongodb-output-plugin in Logstash.
Here's the Logstash conf file:
input
{
  file {
    path => "/home/user/Data"
    type => "cisco-asa"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter
{
  grok {
    match => { "message" => "^%{SYSLOGTIMESTAMP:syslog_timestamp} %{HOSTNAME:device_src} %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:cisco_message}" }
  }
  date {
    match => ["syslog_timestamp", "MMM dd HH:mm:ss"]
    target => "@timestamp"
  }
}
output
{
  stdout {
    codec => dots
  }
  mongodb {
    id => "mongo-cisco"
    collection => "Cisco ASA"
    database => "Logs"
    uri => "mongodb+srv://user:pass@192.168.10.9:27017/Logs"
  }
}
Here's the Logstash output:
[2021-03-27T13:29:35,178][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
.............................................................................................................................
[2021-03-27T13:30:06,201][WARN ][logstash.outputs.mongodb ][main][mongo-cisco] Failed to send event to MongoDB, retrying in 3 seconds {:event=>#<LogStash::Event:0x6d0984a>, :exception=>#<Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x6711494c @tag_sets=[], @server_selection_timeout=30, @options={:database=>"Logs", :user=>"username", :password=>"passwd"}>>}
PS: this is my first time using MongoDB
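For reference, a plain (non-SRV) MongoDB connection string for the mongodb output is sketched below; the credentials, host and names are simply the ones from the config above and are only illustrative, so whether this resolves the NoServerAvailable error depends on the actual server setup:
output {
  mongodb {
    id => "mongo-cisco"
    collection => "Cisco ASA"
    database => "Logs"
    # user:pass are separated from the host by '@'; mongodb+srv:// expects a DNS
    # name with SRV records rather than a raw IP, so a plain mongodb:// URI is used here
    uri => "mongodb://user:pass@192.168.10.9:27017/Logs"
  }
}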

Related

JDBC Logstash Elastic Kibana

I'm using the JDBC input plugin to ingest data from MongoDB into Elasticsearch.
My config is:
input {
  jdbc {
    jdbc_driver_class => "mongodb.jdbc.MongoDriver"
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mongodb_unityjdbc_free.jar"
    jdbc_user => ""
    jdbc_password => ""
    jdbc_connection_string => "jdbc:mongodb://localhost:27017/pritunl"
    schedule => "* * * * *"
    jdbc_page_size => 100000
    jdbc_paging_enabled => true
    statement => "select * from servers_output"
  }
}
filter {
  mutate {
    copy => { "_id" => "[@metadata][id]" }
    remove_field => ["_id"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "pritunl"
    document_id => "%{[@metadata][_id]}"
  }
  stdout {}
}
In Kibana I see only one hit, but in stdout I see many records from the MongoDB collection. What should I do to see them all?
The problem is that all your documents are saved with the same "_id", so even though you're sending different records to ES, the same document keeps being overwritten internally - thus you get one hit in Kibana.
A typo in your configuration causes this issue:
You're copying "_id" into "[@metadata][id]",
but you're trying to read "[@metadata][_id]", with an underscore.
Removing the underscore when reading the value for document_id should fix your issue.
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "pritunl"
    document_id => "%{[@metadata][id]}"
  }
  stdout {}
}

How to sync changes from MongoDB to Elasticsearch (i.e. reflect changes in Elastic when a CRUD operation is performed in Mongo) using Logstash

I just want to sync the data from MongoDB to Elasticsearch using Logstash. It works well: when a new record arrives in MongoDB, Logstash pushes it into Elasticsearch. But when I update a record in MongoDB, the change does not show up in Elasticsearch. I want to change the config file so that when any record is updated in Mongo it is reflected in Elastic as well.
I have already tried both action => "index" and action => "update" in the .conf file.
Here is my mongodata.conf file
input {
  mongodb {
    uri => 'mongodb://localhost:27017/DBname'
    placeholder_db_dir => '/opt/logstash'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => "collectionName"
    batch_size => 500
  }
}
filter {
  mutate {
    remove_field => [ "_id" ]
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["localhost:9200"]
    index => "mongo_hi_log_data"
  }
  stdout { codec => rubydebug }
}
I want to sync the data from MongoDB to Elasticsearch using Logstash whenever any record is updated or inserted in MongoDB.
input {
  mongodb {
    uri => 'mongodb://mongo-db-url/your-collection'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'products'
    batch_size => 5000
  }
}
filter {
  date {
    match => [ "logdate", "ISO8601" ]
  }
  mutate {
    rename => { "[_id]" => "product_id" }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://es-url"]
    user => "elastic"
    password => "changeme"
    index => "your-index"
    action => "update"
    doc_as_upsert => true
    document_id => "%{product_id}"
    timeout => 30
    workers => 1
  }
}

Logstash-input-mongodb configuration file

I am trying to pull data from MongoDB into Elasticsearch using Logstash.
I am using the Logstash-input-mongodb plugin. This is my config file:
input {
  mongodb {
    uri => 'mongodb://localhost:27017/test'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'student'
    batch_size => 202
  }
}
filter {
}
output {
  elasticsearch {
    host => "localhost:9200"
    user => elastic
    password => changeme
    index => "testStudent"
  }
  stdout { codec => rubydebug }
}
I am getting an error saying:
Pipelines YAML file is empty.
Is this because I left the filter part empty?
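For what it's worth, that message usually refers to Logstash's pipelines.yml (the file listing the pipelines to run) rather than to an empty filter block; a minimal sketch of such a file, assuming the default conf.d layout (the pipeline id and path below are assumptions, not taken from the question):
# pipelines.yml - a single pipeline pointing at the .conf file above
- pipeline.id: mongodb-to-es
  path.config: "/etc/logstash/conf.d/*.conf"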

Logstash - Custom Timestamp Error

I am trying to ingest a timestamp field in Logstash and I am getting a dateparsefailure message.
My Message -
2014-08-01;11:00:22.123
Pipeline file
input {
  stdin {}
  # beats {
  #   port => "5043"
  # }
}
# optional.
filter {
  date {
    locale => "en"
    match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
  stdout { codec => rubydebug }
}
Can someone tell me what I am missing?
Update 1
I referred to the link - How to remove trailing newline from message field - and now it works.
But in my log message I have multiple values other than the timestamp:
<B 2014-08-01;11:00:22.123 Field1=Value1 Field2=Value2
When I give this as input, it is not working. How do I read part of the log and use it as the timestamp?
Update 2
It works now. I changed the config file as below:
filter {
  kv {
  }
  mutate {
    strip => "message"
  }
  date {
    locale => "en"
    match => ["timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
I am posting the answer below, along with the steps I used to solve the issue, so that I can help people like me.
Step 1 - I read the message as key-value pairs.
Step 2 - I trimmed off the extra space that leads to the parse exception.
Step 3 - I read the timestamp value and the other fields into their respective fields.
input {
  beats {
    port => "5043"
  }
}
# optional.
filter {
  kv { }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
}

ELK tagging and type not filtering syslog

ELK stack version
Logstash: 5.1.2
Kibana: 5.1.2
Elasticsearch: 5.1.2
I have the below logstash configuration to send my router syslog events to elastic search.
My router is configured to send events to port 5514 and I can see the logs in Kibana.
BUT, I would like to ensure all events sent to port 5514 are given the type syslog-network, which is then filtered by 11-network-filter.conf and sent to the Elasticsearch logstash-syslog-%{+YYYY.MM.dd} index.
At present all the syslog events are falling under the logstash index.
Any ideas why?
03-network-input.conf
input {
  syslog {
    port => 5514
    type => "syslog-network"
    tags => "syslog-network"
  }
}
11-network-filter.conf
filter {
  if [type] == "syslog-network" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp}%{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
30-elasticsearch-output.conf
output {
  if "file-beats" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if "syslog-network" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-syslog-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}