Unable to ingest the syslog-logstash.conf for remove & replace functions - elastic-stack

I am just a newbie to ELK and am trying some testing on this. I'm able to run some tests, but when I try a filter with grok & mutate to remove & replace some fields from my syslog output I run into the error below:
21:58:47.976 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, {, ,, ] at line 21, column 9 (byte 496) after filter {\n if [type] == \"syslog\" {\n grok {\n match => { \"message\" => \"%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\\[%{POSINT:pid}\\])?: %{GREEDYDATA:syslog_message}\" }\n }\n date {\n match => [ \"syslog_timestamp\", \"MMM d HH:mm:ss\", \"MMM dd HH:mm:ss\" ]\n }\n mutate {\n remove_field => [\n \"message\",\n \"pid\",\n \"port\"\n "}
Below is my config file:
# cat logstash-syslog2.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:syslog_message}" }
}
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
mutate {
remove_field => [
"message",
"pid",
"port"
"_grokparsefailure"
]
}
mutate {
replace => [
"#source_host", "%{allLogs_hostname}"
"#message", "%{allLogs_message}"
]
}
mutate {
remove => [
"allLogs_hostname",
"syslog_message",
"syslog_timestamp"
]
}
}
output {
if [type] == "syslog" {
elasticsearch {
hosts => "localhost:9200"
index => "%{type}-%{+YYYY.MM.dd}"
}
}
}
Please suggest what I'm doing wrong and help me understand the remove & replace functions for Logstash.
PS: my ELK version is 5.4

The config you posted has a lot of syntax errors. Logstash has its own config language and expects the config file to abide by its rules; the Logstash documentation has a complete reference for the config language.
I made some corrections to your config file and posted it here. I have added comments explaining what was wrong in the config file itself.
input
{
file
{
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
}
filter
{
if [type] == "syslog"
{
grok
{
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:syslog_message}" }
}
date
{
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
# I have merged this with the remove_field option below
#mutate {
# remove_field => [
# "message",
# "pid",
# "port",
# "_grokparsefailure"
# ]
#}
mutate
{
# The replace option only accepts a hash value type, which has the syntax shown below.
# For more details visit the link below:
# https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-replace
replace => {
"#source_host" => "%{allLogs_hostname}"
"#message" => "%{allLogs_message}"
}
}
mutate
{
# Mutate does not have a remove option; I guess your intention is to remove fields from the event,
# hence the remove_field option is used here.
# The remove_field option only accepts an array as its value type, as shown below.
# For details read the link below:
# https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-remove_field
remove_field => [
"message",
"pid",
"port",
"_grokparsefailure",
"allLogs_hostname",
"syslog_message",
"syslog_timestamp"
]
}
}
}
output
{
if [type] == "syslog"
{
elasticsearch
{
# The hosts option expects URI values (here wrapped in an array); originally you provided a plain string as the value.
# For more info please read the link below:
# https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-hosts
hosts => ["localhost:9200"]
index => "%{type}-%{+YYYY.MM.dd}"
}
}
}
You can test whether the config file is syntactically correct by using the Logstash command line option -t; it parses the config and reports any syntax errors:
bin\logstash -f 'path-to-your-config-file' -t
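For example, using the config file name from your question (assuming it is in the current directory):
bin/logstash -f logstash-syslog2.conf -t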
Please let me know if you need any clarification.

You have to add a comma after "port" in your Logstash configuration file:
mutate {
remove_field => [
"message",
"pid",
"port",
"_grokparsefailure"
]
}

Related

Create index with the same name as request path value, using ElasticSearch output

This is my logstash.conf:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
}
stdout {
codec => "rubydebug"
}
}
As a test, I ran this command in PowerShell:
C:\Users\Me\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe -XPUT 'http://127.0.0.1:31311/twitter'
The following output was displayed inside my Logstash terminal:
{
"#timestamp" => 2019-04-09T08:32:09.250Z,
"message" => "",
"#version" => "1",
"headers" => {
"request_path" => "/twitter",
"http_version" => "HTTP/1.1",
"http_user_agent" => "curl/7.64.1",
"request_method" => "PUT",
"http_accept" => "*/*",
"content_length" => "0",
"http_host" => "127.0.0.1:31311"
},
"host" => "127.0.0.1"
}
When I then ran
C:\Users\Me\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe -XGET "http://127.0.0.1:9200/_cat/indices"
inside PowerShell, I saw
yellow open logstash-2019.04.09 1THStdPfQySWl1WPNeiwPQ 5 1 0 0 401b 401b
An index named logstash-2019.04.09 has been created in response to my PUT request, following the ElasticSearch convention.
My question is: If I want the index to have the same value as the {index_name} parameter I pass inside the command .\curl.exe -XPUT 'http://127.0.0.1:31311/{index_name}', how should I configure the ElasticSearch output inside my logstash.conf file?
EDIT: Just to clarify, I want {index_name} to be read dynamically every single time I make a PUT request to create a new index. Is that even possible?
It is possible with the index output configuration option.
This configuration can be dynamic using the %{foo} syntax. Since you want the value of [headers][request_path] to be in the index configuration, you can do something like this:
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "%{[headers][request_path]}"
}
}
For this to work, the value of the [headers][request_path] field must not contain spaces or any of these characters: " * \ < | , > / ?
I recommend that you use the gsub configuration option of the mutate filter. So, to remove all the forward slashes, you should have something like this:
filter{
mutate{
gsub => ["[headers][request_path]","/",""]
}
}
If the request path has several forward slashes, you could replace them with some character that will be accepted by elasticsearch.
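For example, a minimal sketch (the hyphen is just an arbitrary allowed character; the extra gsub entry for the leading slash is my addition so the index name does not start with a dash):
filter {
  mutate {
    gsub => [
      "[headers][request_path]", "^/", "",
      "[headers][request_path]", "/", "-"
    ]
  }
}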
So, your final logstash.conf file should look like this:
input {
http {
host => "127.0.0.1"
port => 31311
}
}
filter{
mutate{
gsub => ["[headers][request_path]","/",""]
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "%{[headers][request_path]}"
}
stdout {
codec => "rubydebug"
}
}
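With that config, repeating the request from the question, for example
.\curl.exe -XPUT 'http://127.0.0.1:31311/twitter'
should result in an index named twitter (the gsub strips the leading slash) instead of the default logstash-YYYY.MM.dd index.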
You can do so by adding an index configuration setting to your elasticsearch output section, e.g.:
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "yourindexnamehere"
}
stdout {
codec => "rubydebug"
}
}

Set Logstash Date Filter from current date to last 3 days

I have the complete database in one index, and on a daily basis I need to fetch the last 3 days of records and store them in CSV format. The target is that every day it takes the records from the last 3 days and stores them in a CSV file. How do I set the range from the current date back to the last 3 days using only logstash.config?
My Logstash config file:
input {
elasticsearch {
hosts => "**Endpoint URL**"
index => "**Index NAME**"
user => "***"
password => "***"
query => '{ "query": { "query_string": { "query": "*" } } }'
}
}
filter {
csv {
separator => ","
autodetect_column_names => true
autogenerate_column_names => true
}
}
output {
stdout {
codec => json_lines
}
csv {
fields => []
path => "C:/ELK_csv/**cvs_File_Name**.csv"
}
}
I need to add a date filter range:
{"query":{"bool":{"must":[{"range":{"createddate":{"gte":"","lt":""}}}],"must_not":[],"should":[]}},"from":0,"size":5000,"sort":[],"aggs":{}}
where gte starts from the current date and lt goes back to the last 3 days.
Working logstash.config file:
input {
elasticsearch {
hosts => "**ELK ENDPOINT URL**"
index => "**INDEX NAME**"
user => "***"
password => "***"
query => '{ "query":{"bool":{"must":[{"range":{"createddate":{"gt":"now-3d/d","lte":"now/d"}}}],"must_not":[],"should":[]}},"from":0,"size":10000,"sort":[],"aggs":{} }'
}
}
filter {
csv {
separator => ","
autodetect_column_names => true
autogenerate_column_names => true
}
}
output {
stdout {
codec => json_lines
}
csv {
fields => [**FIELDS NAMES**]
path => "C:/ELK6.4.2/logstash-6.4.2/bin/tmp/**CSV_3days**.csv"
}
}

Logstash - Custom Timestamp Error

I am trying to input a timestamp field in Logstash and I am getting a dateparsefailure message.
My Message -
2014-08-01;11:00:22.123
Pipeline file
input {
stdin{}
#beats {
# port => "5043"
# }
}
# optional.
filter {
date {
locale => "en"
match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
}
output {
elasticsearch {
hosts => [ "127.0.0.1:9200" ]
}
stdout { codec => rubydebug }
}
Can someone tell me what I am missing?
Update 1
I referred to the link "How to remove trailing newline from message field" and now it works.
But in my log message I have multiple values other than the timestamp:
<B 2014-08-01;11:00:22.123 Field1=Value1 Field2=Value2
When I give this as input, it is not working. How do I read a part of the log and use it as the timestamp?
Update 2
It works now; I changed the config file as below:
filter {
kv
{
}
mutate {
strip => "message"
}
date {
locale => "en"
match => ["timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS"]
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
}
I am posting the answer below, along with the steps I used to solve the issue, so that I can help people like me.
Step 1 - I read the message in the form of key and value pairs.
Step 2 - I trimmed off the extra space that leads to the parse exception.
Step 3 - I read the timestamp value and other fields into their respective fields.
input {
beats {
port => "5043"
}
}
# optional.
filter {
kv { }
date {
match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
remove_field => [ "timestamp" ]
}
}
output {
elasticsearch {
hosts => [ "127.0.0.1:9200" ]
}
}
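As a rough sketch of how this hangs together (the sample line below is hypothetical; it assumes the shipper sends key=value pairs and relies on the kv filter's default handling of quoted values):
timestamp="2014-08-01 11:00:22,123" Field1=Value1 Field2=Value2
Here kv extracts timestamp, Field1 and Field2 as event fields, date parses the timestamp value into @timestamp, and remove_field then drops the original timestamp field.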

ELK tagging and type not filter syslog

ELK stack version
Logstash: 5.1.2
Kibana: 5.1.2
Elasticsearch:5.1.2
I have the below logstash configuration to send my router syslog events to elastic search.
My router is configured to send events to port 5514 and I can see the logs in Kibana.
But I would like to ensure that all events sent to port 5514 are given the type syslog-network, which is then filtered by 11-network-filter.conf and sent to the Elasticsearch logstash-syslog-%{+YYYY.MM.dd} index.
At present all the syslog events are falling under the logstash index.
Any ideas why?
03-network-input.conf
input {
syslog {
port => 5514
type => "syslog-network"
tags => "syslog-network"
}
}
11-network-filter.conf
filter {
if [type] == "syslog-network" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp}%{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
30-elasticsearch-output.conf
output {
if "file-beats" in [tags] {
elasticsearch {
hosts => ["localhost:9200"]
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
}
else if "syslog-network" in [tags] {
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash-syslog-%{+YYYY.MM.dd}"
}
}
else {
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash-%{+YYYY.MM.dd}"
}
}
}

Logstash date ISO8601 convert

I want to convert the Logstash date to this format, 2015-12-03 03:01:00, and also compute [message][3] - [message][1].
The date match doesn't work; how can I do this?
Also, is the %{[message][0]} expression right or not?
filter {
multiline {
.............
}
grok {
match => { "message" => "%{GREEDYDATA:message}" }
overwrite => ["message"]
}
mutate {
gsub => ["message", "\n", " "]
split => ["message", " "]
}
date {
match => [ "%{[message][0]}","ISO8601"} ]
}
}
Message output like this:
"message" => [
[0] "2015-12-03T01:33:22+00:00"
[1]
[2]
[3] "2015-12-03T01:33:24+00:00"
]
Assuming your input is:
2015-12-03T01:33:22+00:00\n\n2015-12-03T01:33:24+00:00
You can grok that without split:
match => { "message" => "%{TIMESTAMP_ISO8601:string1}\\n\\n%{TIMESTAMP_ISO8601:string2}" }
You can then use the date{} filter with string1 or string2 as input.
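A minimal sketch of that last step, assuming string1 is the value you want to drive the event time:
date {
  match => [ "string1", "ISO8601" ]
  target => "@timestamp"
}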