I'm trying to monitor an IRC server, and I'm looking for a way to create a new numeric field (for example, Alert_level) only if a message contains a specific word.
Example: Message: ABC | Alert_level: 1; Message: ZYX | Alert_level: 3.
This is the running code:
input {
  irc {
    channels  => "#xyz"
    host      => "a.b.c"
    nick      => "myusername"
    catch_all => true
    get_stats => true
  }
}
output {
  stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => "localhost"
    index => "logstash-irc-%{+YYYY.MM.dd}"
  }
}
Thank you!
As @Val suggested above, you might need to use the grok filter in order to match something from the input. For example, your filter could look something like this:
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:somedata}" }
  }
  if "ZYX" in [message] {    # change your condition accordingly
    mutate {
      add_field => { "Alert_level" => "12345" }    # Alert_level is the new field name
    }
    mutate {
      convert => { "Alert_level" => "integer" }    # do the conversion in a second mutate, once the field exists
    }
  }
}
Note that you have to do the conversion in order to get a numeric field, since add_field always creates string values and Logstash can't directly create a numeric one here. The conversion is done in a second mutate block because add_field is applied after a mutate's own operations, so a convert in the same block would run before the field exists. The above is just a sample so that you can reproduce it; do change the grok match according to your requirements. Hope it helps!
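For instance, a minimal sketch that maps the two messages from your original example (ABC → 1, ZYX → 3) might look like the following; the keywords and level values are only placeholders to adapt to your real messages:
filter {
  # assign a level depending on which keyword the message contains
  if "ABC" in [message] {
    mutate { add_field => { "Alert_level" => "1" } }
  } else if "ZYX" in [message] {
    mutate { add_field => { "Alert_level" => "3" } }
  }
  # convert in a separate mutate, once the field exists
  if [Alert_level] {
    mutate { convert => { "Alert_level" => "integer" } }
  }
}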
This is my logstash.conf:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
  stdout {
    codec => "rubydebug"
  }
}
As a test, I ran this command in PowerShell:
C:\Users\Me\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe -XPUT
'http://127.0.0.1:31311/twitter'
The following output was displayed inside my Logstash terminal:
{
  "@timestamp" => 2019-04-09T08:32:09.250Z,
  "message" => "",
  "@version" => "1",
  "headers" => {
    "request_path" => "/twitter",
    "http_version" => "HTTP/1.1",
    "http_user_agent" => "curl/7.64.1",
    "request_method" => "PUT",
    "http_accept" => "*/*",
    "content_length" => "0",
    "http_host" => "127.0.0.1:31311"
  },
  "host" => "127.0.0.1"
}
When I then ran
C:\Users\Me\Downloads\curl-7.64.1-win64-mingw\bin> .\curl.exe -XGET
"http://127.0.0.1:9200/_cat/indices"
inside PowerShell, I saw
yellow open logstash-2019.04.09 1THStdPfQySWl1WPNeiwPQ 5 1 0 0 401b 401b
An index named logstash-2019.04.09 has been created in response to my PUT request, following Logstash's default index naming convention.
My question is: If I want the index to have the same value as the {index_name} parameter I pass in the command .\curl.exe -XPUT 'http://127.0.0.1:31311/{index_name}', how should I configure the Elasticsearch output inside my logstash.conf file?
EDIT: Just to clarify, I want {index_name} to be read dynamically every single time I make a PUT request to create a new index. Is that even possible?
It is possible with the index option of the elasticsearch output.
This configuration can be made dynamic using the %{foo} syntax. Since you want the value of [headers][request_path] in the index name, you can do something like this:
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[headers][request_path]}"
  }
}
For this to work, the value of the [headers][request_path] field must not contain any of these characters: space, ", *, \, <, |, ,, >, /, ?.
I recommend that you use the gsub configuration option of the mutate filter. So, to remove all the forward slashes, you should have something like this:
filter {
  mutate {
    gsub => ["[headers][request_path]", "/", ""]
  }
}
If the request path has several forward slashes, you could replace them with some character that Elasticsearch will accept instead of removing them.
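For example, a rough sketch that drops the leading slash and turns the remaining ones into hyphens (the hyphen is just an arbitrary choice of separator) could be:
filter {
  mutate {
    # first remove the leading "/", then replace the inner slashes,
    # e.g. "/logs/app1" becomes "logs-app1" (index names may not start with "-")
    gsub => [
      "[headers][request_path]", "^/", "",
      "[headers][request_path]", "/", "-"
    ]
  }
}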
So, your final logstash.conf file should look like this:
input {
  http {
    host => "127.0.0.1"
    port => 31311
  }
}
filter {
  mutate {
    gsub => ["[headers][request_path]", "/", ""]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[headers][request_path]}"
  }
  stdout {
    codec => "rubydebug"
  }
}
You can do so by adding an index setting to your elasticsearch output section, e.g.:
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "yourindexnamehere"
  }
  stdout {
    codec => "rubydebug"
  }
}
I am trying to parse a timestamp field in Logstash and I am getting a _dateparsefailure tag.
My message:
2014-08-01;11:00:22.123
Pipeline file:
input {
  stdin { }
  # beats {
  #   port => "5043"
  # }
}
# optional.
filter {
  date {
    locale => "en"
    match => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
  stdout { codec => rubydebug }
}
Can someone tell me what I am missing?
Update 1
I referred to the link "How to remove trailing newline from message field" and now it works.
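For anyone hitting the same thing, that fix boils down to stripping the message before the date filter runs; a minimal sketch against the config above (same field names) would be:
filter {
  mutate {
    # remove the trailing newline/whitespace that stdin leaves on the message
    strip => "message"
  }
  date {
    locale => "en"
    match  => ["message", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
  }
}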
But in my real log message I have multiple values other than the timestamp:
<B 2014-08-01;11:00:22.123 Field1=Value1 Field2=Value2
When I give this as input, it does not work. How do I read just part of the log line and use it as the timestamp?
Update 2
It works now. I changed the config file as below:
filter {
  kv { }
  mutate {
    strip => "message"
  }
  date {
    locale => "en"
    match => ["timestamp1", "YYYY-MM-dd;HH:mm:ss.SSS"]
    target => "@timestamp"
    add_field => { "debug" => "timestampMatched" }
  }
}
I am posting the answer and the steps I used to solve the issue below, so that it can help people like me.
Step 1 - I read the message as key-value pairs.
Step 2 - I trimmed off the extra space that caused the parse exception.
Step 3 - I read the timestamp value and the other fields into their respective fields.
input {
  beats {
    port => "5043"
  }
}
# optional.
filter {
  # Step 1: read the message as key-value pairs
  kv { }
  # Step 2: trim the extra whitespace that caused the parse exception
  mutate {
    strip => "message"
  }
  # Step 3: parse the timestamp field into @timestamp
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  elasticsearch {
    hosts => [ "127.0.0.1:9200" ]
  }
}
I have a search form for searching "documents" that has about a dozen search criteria, including "entire_text", "keywords" and "description".
I'm using pg_search_scope, but I have 2 different scopes.
This is in my document.rb:
pg_search_scope :search_entire_text,
                :against => :entire_text,
                :using => {
                  :tsearch => {
                    :prefix => true,
                    :dictionary => "french"
                  }
                }

pg_search_scope :search_keywords,
                :associated_against => {
                  :keywords => [:keyword]
                },
                :using => {
                  :tsearch => {
                    :any_word => true
                  }
                }
Each works fine separately, but I can't do this:
@resultats = Document.search_keywords(params[:ch_document][:keywords]).search_entire_text(params[:ch_document][:entire_text])
Is there any way to work around this?
Thanks
I've never used pg_search_scope, but it looks like you indeed can't combine two pg_search scopes.
What you could do is use :search_entire_text first and then use the resulting ids in a Document.where(id: [1, 2, 3]); that way you can use standard Rails scopes for the remaining keyword searches.
Example:
# If pluck doesn't work you can also use map(&:id)
txt_res_ids = Document.search_entire_text(params[:ch_document][:entire_text]).pluck(:id)
final_results = Document.where(id: txt_res_ids).some_keyword_scope.all
It works. Here's the entire code, in case it can help someone:
Acte.rb (I didn't translate the code to English; the comments explain how it corresponds to the question above)
pg_search_scope :cherche_texte_complet, # i.e. search entire text
                :against => :texte_complet,
                :using => {
                  :tsearch => {
                    :prefix => true,
                    :dictionary => "french"
                  }
                }

pg_search_scope :cherche_mots_clefs, # search keywords
                :associated_against => {
                  :motclefs => [:motcle]
                },
                :using => {
                  :tsearch => {
                    :any_word => true
                  }
                }

def self.cherche_date(debut, fin) # find dates between debut and fin
  where("acte_date BETWEEN :date_debut AND :date_fin", { date_debut: debut, date_fin: fin })
end

def self.cherche_mots(mots)
  if mots.present? # the if ... else is necessary, see controller.rb
    cherche_mots_clefs(mots)
  else
    order("id DESC")
  end
end

def self.ids_texte_compl(ids)
  if ids.any?
    where("id = any (array #{ids})")
  else
    where("id IS NOT NULL")
  end
end
and actes_controller.rb:
ids = Acte.cherche_texte_complet(params[:ch_acte][:texte_complet]).pluck(:id)
@resultats = Acte.cherche_date(params[:ch_acte][:date_debut], params[:ch_acte][:date_fin])
             .ids_texte_compl(ids)
             .cherche_mots(params[:ch_acte][:mots])
Thanks!
Chaining works, in pg_search 2.3.2 at least:
SomeModel.pg_search_based_scope("abc").pg_search_based_scope("xyz")
I have a log in a CSV file with a date field in the pattern "24/09/2014", but when I read the file with Logstash the date field is indexed as a string type.
CSV file example:
fecha,cant,canb,id,zon
24/09/2014,80,20,1.5,2
01/12/2014,50,25,1,3
My Logstash conf file:
input {
  file {
    path => "path/to/data.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
    separator => ","
    columns => ["fecha","cant","canb","id","zon"]
  }
  date {
    match => ["fecha", "dd/MM/yyyy"]
  }
  mutate { convert => ["cant", "integer"] }
  mutate { convert => ["canb", "integer"] }
  mutate { convert => ["id", "float"] }
}
output {
  elasticsearch {
    action => "index"
    host => "localhost"
    index => "barrena"
    workers => 1
  }
  stdout {}
}
Thanks!
The date is copied into a field called @timestamp (the date filter does this), and that field has a date type.
You can safely remove the fecha field once you have used the date filter.
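For example, a small sketch of a date filter that parses fecha and drops it in one go (same pattern as in your config) could be:
date {
  match        => ["fecha", "dd/MM/yyyy"]
  # remove_field is only applied if the date was parsed successfully
  remove_field => ["fecha"]
}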
I configured Logstash to send email alerts when certain combinations of words appear in the log message. I get the alerts, but instead of receiving the message field's value in the alert, I get the literal word "@message".
How can I solve this problem?
Here is my logstash config file:
root@srv-syslog:~# cat /etc/logstash/conf.d/central.conf
input {
  syslog {
    type => "syslog"
    port => 5144
  }
  tcp {
    type => "cisco_asa"
    port => 5145
  }
  tcp {
    type => "cisco_ios"
    port => 5146
  }
}
output {
  elasticsearch {
    bind_host => "127.0.0.1"
    port => "9200"
    protocol => http
  }
  if "executed the" in [message] {
    email {
      from => "logstash_alert@company.local"
      subject => "logstash alert"
      to => "myemail@company.local"
      via => "smtp"
      body => "Here is the event line that occurred: %{@message}"
    }
  }
}
The field name in this case is message, not @message.
See demo:
input {
  generator {
    count => 1
    lines => ["Example line."]
  }
}
filter {
  mutate {
    add_field => {
      "m1" => "%{message}"
      "m2" => "%{@message}"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
In your case, you should just need to fix the one line:
body => "Here is the event line that occured: %{message}"
Remove the # sign. The field is message, not #message.
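For reference, the corrected email section of your output would then look roughly like this (everything else unchanged):
if "executed the" in [message] {
  email {
    from    => "logstash_alert@company.local"
    to      => "myemail@company.local"
    subject => "logstash alert"
    via     => "smtp"
    body    => "Here is the event line that occurred: %{message}"
  }
}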