Logstash email alerts

I configured Logstash to send email alerts when certain combinations of words appear in the log message. I get the alerts, but instead of receiving the value of the message field, the alert contains the literal word "@message".
How can I solve this problem?
Here is my logstash config file:
root@srv-syslog:~# cat /etc/logstash/conf.d/central.conf
input {
  syslog {
    type => "syslog"
    port => 5144
  }
  tcp {
    type => "cisco_asa"
    port => 5145
  }
  tcp {
    type => "cisco_ios"
    port => 5146
  }
}
output {
  elasticsearch {
    bind_host => "127.0.0.1"
    port => "9200"
    protocol => "http"
  }
  if "executed the" in [message] {
    email {
      from => "logstash_alert@company.local"
      subject => "logstash alert"
      to => "myemail@company.local"
      via => "smtp"
      body => "Here is the event line that occurred: %{@message}"
    }
  }
}

The field name in this case is message, not @message.
See demo:
input {
  generator {
    count => 1
    lines => ["Example line."]
  }
}
filter {
  mutate {
    add_field => {
      "m1" => "%{message}"
      "m2" => "%{@message}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
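Running that demo prints something like the following (abridged; the exact set of fields varies by Logstash version). Note that m1 resolves while m2 stays literal, because a %{...} reference to a field that does not exist is left as-is:
{
       "message" => "Example line.",
    "@timestamp" => 2017-01-01T00:00:00.000Z,
      "@version" => "1",
            "m1" => "Example line.",
            "m2" => "%{@message}"
}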
In your case, you should just need to fix the one line:
body => "Here is the event line that occurred: %{message}"

Remove the @ sign. The field is message, not @message.
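For reference, the corrected email block from the question's config then reads (addresses and subject kept from the question):
email {
  from => "logstash_alert@company.local"
  subject => "logstash alert"
  to => "myemail@company.local"
  via => "smtp"
  body => "Here is the event line that occurred: %{message}"
}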

Related

How to check if the source is Kafka or Beats in Logstash?

I have two sources of data for my logs: one is Beats and one is Kafka, and I want to create ES indexes based on the source. If Kafka, prefix the index name with kafka; if Beats, prefix the index name with beat.
input {
  beats {
    port => 9300
  }
}
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["my-topic"]
    codec => json
  }
}
output {
  # if kafka
  elasticsearch {
    hosts => "http://localhost:9200"
    user => "elastic"
    password => "password"
    index => "[kafka-topic]-my-index"
  }
  # else if beat
  elasticsearch {
    hosts => "http://localhost:9200"
    user => "elastic"
    password => "password"
    index => "[filebeat]-my-index"
  }
}
Add tags in your inputs and use them to filter the output.
input {
  beats {
    port => 9300
    tags => ["beats"]
  }
}
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["my-topic"]
    codec => json
    tags => ["kafka"]
  }
}
output {
  if "beats" in [tags] {
    # elasticsearch output for beats here
  }
  if "kafka" in [tags] {
    # elasticsearch output for kafka here
  }
}
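Filled in with the elasticsearch settings from the question, the output section might look like this (the index names are assumptions based on the question's prefix scheme):
output {
  if "beats" in [tags] {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "password"
      index => "beat-my-index"
    }
  }
  if "kafka" in [tags] {
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "password"
      index => "kafka-my-index"
    }
  }
}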

Logstash displaying weird characters in its output

While getting the output from a Kafka stream, Logstash also displays other characters (\u0018, \u0000, \u0002, etc.).
I tried adding a key_deserializer_class to the Logstash conf file, but that didn't help much.
input {
  kafka {
    bootstrap_servers => "broker1-kafka.net:9092"
    topics => ["TOPIC"]
    group_id => "T-group"
    jaas_path => "/opt/kafka_2.11-1.1.0/config/kafka_client_jaas.conf"
    key_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
    sasl_mechanism => "SCRAM-SHA-256"
    security_protocol => "SASL_PLAINTEXT"
  }
}
output { stdout { codec => rubydebug } }
Output:
{
    "@timestamp" => 2019-04-10T06:09:53.918Z,
       "message" => "(TOPIC\u0002U42019-04-10 06:09:47.01739142019-04-10T06:09:53.738000(00000021290065792800\u0002\u0004C1\u0000\u0000\u0002\u001EINC000014418569\u0002\u0010bppmUser\u0002����\v\u0000\u0002\u0010bppmUser\u0002֢��\v\u0002\u0002\u0002\u0002.\u0002\u0018;1000012627;\u0002<AGGAA5V0FEEW7APPOPCYPOR3RPPOLL\u0000\",
      "@version" => "1"
}
Is there any way to not get these characters in the output?

Logstash - Custom Timestamp Error

I am trying to input a timestamp field in Logstash and I am getting a dateparsefailure message.
My message:
2014-08-01;11:00:22.123
Pipeline file
input {
  stdin {}
  #beats {
  #  port => "5043"
  #}
}
# optional.
filter {
  date {
    locale => "en"
    match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
  stdout { codec => rubydebug }
}
Can someone tell me what I am missing?
Update 1
I referred to the link - How to remove trailing newline from message field - and now it works.
But my log message has multiple values other than the timestamp:
<B 2014-08-01;11:00:22.123 Field1=Value1 Field2=Value2
When I give this as input, it is not working. How can I read a part of the log and use it as the timestamp?
Update 2
It works now. I changed the config file as below:
filter {
  kv { }
  mutate {
    strip => "message"
  }
  date {
    locale => "en"
    match => ["timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
I am posting the answer and the steps I used to solve the issue below, so that it can help people facing the same problem.
Step 1 - I read the message as key-value pairs.
Step 2 - I trimmed off the extra space that led to the parse exception.
Step 3 - I read the timestamp value and the other fields into their respective fields.
input {
  beats {
    port => "5043"
  }
}
# optional.
filter {
  kv { }
  mutate {
    strip => "message"
  }
  date {
    match => [ "timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS" ]
    remove_field => [ "timestamp1" ]
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
}
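For completeness, the timestamp still has to land in its own field before the date filter can parse it, since kv only captures the key=value pairs. A minimal sketch of that extraction (the field name timestamp1 and the grok pattern are assumptions based on the sample line above) might look like:
filter {
  # Pull the leading timestamp out of lines like:
  #   <B 2014-08-01;11:00:22.123 Field1=Value1 Field2=Value2
  grok {
    match => { "message" => "<B %{NOTSPACE:timestamp1} %{GREEDYDATA:kvpairs}" }
  }
  # Parse only the key=value portion
  kv { source => "kvpairs" }
  date {
    match => [ "timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS" ]
    remove_field => [ "timestamp1" ]
  }
}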

ELK tagging and type not filtering syslog

ELK stack version:
Logstash: 5.1.2
Kibana: 5.1.2
Elasticsearch: 5.1.2
I have the below Logstash configuration to send my router syslog events to Elasticsearch.
My router is configured to send events to port 5514, and I can see the logs in Kibana.
BUT, I would like to ensure all events sent to port 5514 are given the type syslog-network, which is then filtered by 11-network-filter.conf and sent to the Elasticsearch logstash-syslog-%{+YYYY.MM.dd} index.
At present all the syslog events are falling under the logstash index.
Any ideas why?
03-network-input.conf
input {
  syslog {
    port => 5514
    type => "syslog-network"
    tags => "syslog-network"
  }
}
11-network-filter.conf
filter {
  if [type] == "syslog-network" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp}%{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
30-elasticsearch-output.conf
output {
  if "file-beats" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if "syslog-network" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-syslog-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash-%{+YYYY.MM.dd}"
    }
  }
}

Add a field if match

I'm trying to monitor an IRC server, and I'm looking for a way to create a new numeric field (example: Alert_level) only if a message matches a specific word.
Example: Message: ABC | Alert_level: 1; Message: ZYX | Alert_level: 3.
Here is the running code:
input {
  irc {
    channels => "#xyz"
    host => "a.b.c"
    nick => "myusername"
    catch_all => true
    get_stats => true
  }
}
output {
  stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => "localhost"
    index => "logstash-irc-%{+YYYY.MM.dd}"
  }
}
Thank you!
As @Val suggested above, you might need to use the grok filter in order to match something from the input. For example, your filter could look something like this:
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:somedata}" }
  }
  # change the condition accordingly
  if "ZYX" in [message] {
    mutate {
      # Alert_level is the new field name; the value comes from the ZYX example
      add_field => { "Alert_level" => "3" }
    }
    # the conversion needs a second mutate: within one mutate, convert runs
    # before add_field, so the field would not exist yet
    mutate {
      convert => { "Alert_level" => "integer" }
    }
  }
}
NOTE that you have to do the conversion in order to get a numeric field through Logstash: add_field always creates string values, so you can't directly create a numeric one. The above is just a sample so that you can reproduce it. Do change the grok match according to your requirement. Hope it helps!
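If several words map to different levels, as in the ABC/ZYX example from the question, one possible extension (a sketch; the words and level values are taken from the question's example) is to chain conditionals and convert once at the end:
filter {
  if "ABC" in [message] {
    mutate { add_field => { "Alert_level" => "1" } }
  } else if "ZYX" in [message] {
    mutate { add_field => { "Alert_level" => "3" } }
  }
  # convert only when one of the branches actually added the field
  if [Alert_level] {
    mutate { convert => { "Alert_level" => "integer" } }
  }
}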