ELK tagging and type not filtering syslog - elastic-stack

ELK stack version
Logstash: 5.1.2
Kibana: 5.1.2
Elasticsearch: 5.1.2
I have the Logstash configuration below to send my router's syslog events to Elasticsearch.
My router is configured to send events to port 5514, and I can see the logs in Kibana.
BUT, I would like to ensure all events sent to port 5514 are given the type syslog-network, which is then matched by 11-network-filter.conf and sent to the Elasticsearch logstash-syslog-% index.
At present all the syslog events are falling under the logstash index.
Any ideas why?
03-network-input.conf
input {
  syslog {
    port => 5514
    type => "syslog-network"
    tags => ["syslog-network"]
  }
}
11-network-filter.conf
filter {
  if [type] == "syslog-network" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp}%{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
30-elasticsearch-output.conf
output {
  if "file-beats" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else if "syslog-network" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-syslog-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}
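When events fall through to the final else like this, a quick way to see what is actually arriving is to dump each event and inspect its [type] and [tags] values. A minimal debugging sketch (the stdout block is an addition for troubleshooting, not part of the original configuration):

```
output {
  # Temporary: print every event so the [type] and [tags] values
  # arriving from port 5514 can be inspected directly.
  stdout { codec => rubydebug }
}
```

If the printed events show the expected "syslog-network" tag, the problem is in the output conditionals; if not, the events are reaching Logstash through a different input than expected.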

Related

Ingesting data in MongoDB with mongodb-output-plugin in Logstash

I am trying to ingest data from a txt file into MongoDB (Machine 1), using Logstash (Machine 2).
I set a DB and a collection with Compass and I am using the mongodb-output-plugin in Logstash.
Here's the Logstash conf file:
input {
  file {
    path => "/home/user/Data"
    type => "cisco-asa"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => { "message" => "^%{SYSLOGTIMESTAMP:syslog_timestamp} %{HOSTNAME:device_src} %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDY>
  }
  date {
    match => ["syslog_timestamp", "MMM dd HH:mm:ss" ]
    target => "@timestamp"
  }
}
output {
  stdout {
    codec => dots
  }
  mongodb {
    id => "mongo-cisco"
    collection => "Cisco ASA"
    database => "Logs"
    uri => "mongodb+srv://user:pass@192.168.10.9:27017/Logs"
  }
}
Here's a screenshot of the Logstash output:
[2021-03-27T13:29:35,178][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
.............................................................................................................................
[2021-03-27T13:30:06,201][WARN ][logstash.outputs.mongodb ][main][mongo-cisco] Failed to send event to MongoDB, retrying in 3 seconds {:event=>#<LogStash::Event:0x6d0984a>, :exception=>#<Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x6711494c @tag_sets=[], @server_selection_timeout=30, @options={:database=>"Logs", :user=>"username", :password=>"passwd"}>>}
PS: this is my first time using MongoDB
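One thing worth checking here (an assumption based on the URI shown, not something confirmed by the log output): the mongodb+srv:// scheme performs a DNS SRV lookup and therefore expects a resolvable hostname, not a raw IP address. When connecting directly to an IP and port, the plain mongodb:// scheme is the usual form:

```
mongodb {
  id => "mongo-cisco"
  collection => "Cisco ASA"
  database => "Logs"
  # Plain scheme: no SRV/DNS lookup, connects directly to the given host:port
  uri => "mongodb://user:pass@192.168.10.9:27017/Logs"
}
```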

How to check if the source is kafka or beat in logstash?

I have two sources of data for my logs: one is Beats and the other is Kafka, and I want to create ES indexes based on the source. If Kafka, prefix the index name with kafka; if Beats, prefix the index name with beat.
input {
  beats {
    port => 9300
  }
}
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["my-topic"]
    codec => json
  }
}
output {
  # if kafka
  elasticsearch {
    hosts => "http://localhost:9200"
    user => "elastic"
    password => "password"
    index => "[kafka-topic]-my-index"
  }
  # else if beat
  elasticsearch {
    hosts => "http://localhost:9200"
    user => "elastic"
    password => "password"
    index => "[filebeat]-my-index"
  }
}
Add tags in your inputs and use them to filter the output.
input {
  beats {
    port => 9300
    tags => ["beats"]
  }
}
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["my-topic"]
    codec => json
    tags => ["kafka"]
  }
}
output {
  if "beats" in [tags] {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "password"
      index => "beat-my-index"
    }
  }
  if "kafka" in [tags] {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "password"
      index => "kafka-my-index"
    }
  }
}

Real time sync between mongodb and elastic search

Using logstash-input-mongodb, I am able to insert records in real time. But when it comes to updates, they are not synced as expected. Can anyone guide me on this?
logstash-mongodb.conf
input {
  mongodb {
    uri => 'mongodb://127.0.0.1:27017/test-db'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'mycol'
    batch_size => 5000
    generateId => true
  }
}
filter {
  mutate { remove_field => "_id" }
}
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "test-index"
  }
}
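If updated documents do reach Logstash but appear as duplicates in Elasticsearch rather than replacing the old copy, pinning the ES document id to a stable field from Mongo makes each re-ingest overwrite in place. A sketch, assuming the original Mongo id survives in a field named mongo_id (the exact field name depends on the plugin's configuration, so treat it as a placeholder):

```
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "test-index"
    # Stable id: the same Mongo document always maps to the same ES
    # document, so repeated ingests update rather than duplicate.
    # "mongo_id" is an assumed field name here.
    document_id => "%{mongo_id}"
  }
}
```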

Logstash - Custom Timestamp Error

I am trying to input a timestamp field in Logstash and I am getting a dateparsefailure message.
My message:
2014-08-01;11:00:22.123
Pipeline file
input {
  stdin {}
  #beats {
  #  port => "5043"
  #}
}
# optional.
filter {
  date {
    locale => "en"
    match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
  stdout { codec => rubydebug }
}
Can someone tell me what I am missing?
Update 1
I referred to the link How to remove trailing newline from message field and now it works.
But in my log message I have multiple values other than the timestamp:
<B 2014-08-01;11:00:22.123 Field1=Value1 Field2=Value2
When I give this as input, it is not working. How can I read just a part of the log and use it as the timestamp?
Update 2
It works now.
I changed the config file as below:
filter {
  kv { }
  mutate {
    strip => "message"
  }
  date {
    locale => "en"
    match => ["timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
I am posting the answer and the steps I used to solve the issue below, so that I can help people like me.
Step 1 - I read the message as key-value pairs.
Step 2 - I trimmed off the extra space that leads to the parse exception.
Step 3 - I read the timestamp value and the other fields into their respective fields.
input {
  beats {
    port => "5043"
  }
}
# optional.
filter {
  kv { }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
}
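A pipeline like this can be exercised without Beats by swapping the input for stdin and the output for rubydebug, then pasting sample lines into the console. A local test sketch along those lines:

```
input {
  stdin {}
}
filter {
  # Parse Field1=Value1 style pairs from the line
  kv { }
  mutate {
    # Trim the trailing whitespace/newline that otherwise breaks the date match
    strip => "message"
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  # Print each parsed event, including the resulting @timestamp
  stdout { codec => rubydebug }
}
```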

Logstash email alerts

I configured Logstash to send email alerts when certain combinations of words appear in the log message. I get the alerts, but instead of receiving the message field value in the alert, I get the word "@message".
How can I solve this problem?
Here is my logstash config file:
root@srv-syslog:~# cat /etc/logstash/conf.d/central.conf
input {
  syslog {
    type => "syslog"
    port => 5144
  }
  tcp {
    type => "cisco_asa"
    port => 5145
  }
  tcp {
    type => "cisco_ios"
    port => 5146
  }
}
output {
  elasticsearch {
    bind_host => "127.0.0.1"
    port => "9200"
    protocol => http
  }
  if "executed the" in [message] {
    email {
      from => "logstash_alert@company.local"
      subject => "logstash alert"
      to => "myemail@company.local"
      via => "smtp"
      body => "Here is the event line that occured: %{@message}"
    }
  }
}
The field name in this case is message, not @message.
See this demo:
input {
  generator {
    count => 1
    lines => ["Example line."]
  }
}
filter {
  mutate {
    add_field => {
      "m1" => "%{message}"
      "m2" => "%{@message}"
    }
  }
}
output {
  stdout {
    codec => rubydebug {}
  }
}
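Running that demo should show m1 resolved to the event's message while m2 is left as the literal text, because %{@message} references a field that does not exist and Logstash leaves unresolved sprintf references as-is. Abridged rubydebug output (other fields such as @timestamp and host omitted):

```
{
    "message" => "Example line.",
         "m1" => "Example line.",
         "m2" => "%{@message}"
}
```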
In your case, you should just need to fix this one line:
body => "Here is the event line that occured: %{message}"
Remove the @ sign. The field is message, not @message.