ElastAlert 2 "No mapping found" with OpenSearch

I'm trying to set up ElastAlert 2 for OpenSearch 2.8.
I wrote this config:
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: /etc/elastalert/rules

# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1

# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15

# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: localhost

# The Elasticsearch port
es_port: 9200

# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1

# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test

# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch

# Connect with TLS to Elasticsearch
use_ssl: True

# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
# es_send_get_body_as: GET

# Optional basic-auth username and password for Elasticsearch
es_username: admin
es_password: password

# Use SSL authentication with client certificates: client_cert must be
# a pem file containing both cert and key for client
verify_certs: False
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key

# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts

# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
...and the rule file:
# Alert when the rate of events exceeds a threshold

# (Optional)
# Elasticsearch host
es_host: localhost

# (Optional)
# Elasticsearch port
es_port: 9200

# (Optional) Connect with SSL to Elasticsearch
use_ssl: True
ssl_show_warn: False
verify_certs: False

# (Optional) basic-auth username and password for Elasticsearch
# es_username: admin
# es_password: ytnhfvgkby

# (Required)
# Rule name, must be unique
name: Loopdetect

# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur within timeframe time
type: any

# (Required)
# Index to search, wildcard supported
index: syslog-20221104

# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1

# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  hours: 24

# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
# filter:
# - term:
#     process.name: "JUSTME"
filter:
- query:
    query_string:
      query: "message: *loop*"

# (Required)
# The alert is used when a match is found
alert:
- "email"

# (required, email specific)
# a list of email addresses to send alerts to
email:
- "myemail"
But when I try to test this rule, I get an error:
elastalert-test-rule rules/loopdetect_alert.yaml
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent.
To send them but remain verbose, use --verbose instead.
WARNING:elasticsearch:POST https://localhost:9200/syslog-20221104/_search?ignore_unavailable=true&size=1 [status:400 request:0.048s]
Error running your filter:
RequestError(400, 'search_phase_execution_exception', {'error': {'root_cause': [{'type': 'query_shard_exception', 'reason': 'No mapping found for [@timestamp] in order to sort on', 'index': 'syslog-20221104', 'index_uuid': 'BG6MQmmYRUyLBY3tEFykEQ'}], 'type': 'search_phase_execution_exception', 'reason': 'all shards failed', 'phase': 'query', 'grouped': True, 'failed_shards': [{'shard': 0, 'index': 'syslog-20221104', 'node': '5spTsU7-QienT8Jn064MMA', 'reason': {'type': 'query_shard_exception', 'reason': 'No mapping found for [@timestamp] in order to sort on', 'index': 'syslog-20221104', 'index_uuid': 'BG6MQmmYRUyLBY3tEFykEQ'}}]}, 'status': 400})
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent.
To send them but remain verbose, use --verbose instead.
INFO:elastalert:1 rules loaded
INFO:apscheduler.scheduler:Adding job tentatively -- it will be properly scheduled when the scheduler starts
WARNING:elasticsearch:POST https://localhost:9200/syslog-20221104/_search?_source_includes=%40timestamp%2C%2A&ignore_unavailable=true&scroll=30s&size=10000 [status:400 request:0.039s]
ERROR:elastalert:Error running query: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [@timestamp] in order to sort on')
{"writeback": {"elastalert_error": {"message": "Error running query: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [#timestamp] in order to sort on')", "traceback": ["Traceback (most recent call last):", " File \"/usr/local/lib/python3.11/dist-packages/elastalert2-2.8.0-py3.11.egg/elastalert/elastalert.py\", line 370, in get_hits", " res = self.thread_data.current_es.search(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/client/utils.py\", line 152, in _wrapped", " return func(*args, params=params, headers=headers, **kwargs)", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/client/__init__.py\", line 1658, in search", " return self.transport.perform_request(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/transport.py\", line 392, in perform_request", " raise e", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/transport.py\", line 358, in perform_request", " status, headers_response, data = connection.perform_request(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/connection/http_requests.py\", line 199, in perform_request", " self._raise_error(response.status_code, raw_data)", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/connection/base.py\", line 315, in _raise_error", " raise HTTP_EXCEPTIONS.get(status_code, TransportError)(", "elasticsearch.exceptions.RequestError: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [#timestamp] in order to sort on')"], "data": {"rule": "Loopdetect", "query": {"query": {"bool": {"filter": {"bool": {"must": [{"range": {"#timestamp": {"gt": "2022-11-03T12:12:39.618168Z", "lte": "2022-11-03T12:27:39.618168Z"}}}, {"query_string": {"query": "message: *loop*"}}]}}}}, "sort": [{"#timestamp": {"order": "asc"}}]}}}}}
But if I fetch the data with curl, it works:
curl -X GET 'https://localhost:9200/syslog-20221104/_search?ignore_unavailable=true&size=1' -u 'admin:password' --insecure
{"took":4,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":10000,"relation":"gte"},"max_score":1.0,"hits":[{"_index":"syslog-20221104","_id":"_bSKQYQB_cpiH2g_hgvj","_score":1.0,"_source":{"host":"10.53.0.35","hostname":"10.53.0.35","message":"Port 2 link up, 100Mbps FULL duplex","source_ip":"91.195.230.4","source_type":"syslog","timestamp":"2022-11-04T07:28:27Z"}}]}}
Please help me understand what I am doing wrong.
Thanks.

I added timestamp_field: timestamp to the rule, and everything works fine!
By default ElastAlert sorts and filters on @timestamp, but as the curl output shows, the documents in this index keep their event time in a field named timestamp, hence the 'No mapping found for [@timestamp]' error.
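For reference, a minimal sketch of the addition to the rule file (the field name comes from the curl output above; timestamp_type is shown commented out because ISO8601 is already ElastAlert's default):

# Point ElastAlert at the index's actual event-time field instead of @timestamp
timestamp_field: timestamp
# The field holds ISO8601 values such as "2022-11-04T07:28:27Z", which matches the default
# timestamp_type: iso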

Related

Grafana - data transformation from journald

I would like to clean up the data gathered by promtail. Specifically, I want the Grafana log dashboard to show only the SYSLOG_TIMESTAMP and MESSAGE fields. The problem is that the Grafana transform doesn't show the fields that are otherwise detected by it. The query I'm using is simple - {name="promtailtest1"}. Any ideas where to start looking?
Detected fields by the Grafana transform:
Detected fields by Grafana:
Log labels
job systemd-journal
name1 promtailtest1
Detected fields
MESSAGE "1"
PRIORITY "5"
SYSLOG_FACILITY "1"
SYSLOG_IDENTIFIER "promtailtest1"
SYSLOG_TIMESTAMP "Mar 1 11:15:25 "
_BOOT_ID "d5f4b43026124bccb1372918ff44fb70"
_GID "1000"
_HOSTNAME "pc"
_MACHINE_ID "cce4800beb84473b9cd93f8d6412880a"
_PID "1902474"
_SELINUX_CONTEXT "unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023"
_SOURCE_REALTIME_TIMESTAMP "1646126125653980"
_TRANSPORT "syslog"
_UID "1000"
ts 2022-03-01T09:15:25.654Z
tsNs 1646126125654005000
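One possible starting point, assuming the journal entries reach Loki as JSON in the log line (the field names below are taken from the listing above): a LogQL line_format expression can cut the displayed line down to just the two fields, without changing what promtail ships.

{name="promtailtest1"} | json | line_format "{{.SYSLOG_TIMESTAMP}} {{.MESSAGE}}"

The json stage extracts the fields as labels, and line_format rebuilds the displayed line from them; the other fields stay available for filtering.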

Having issues with rs.add() in an Ansible playbook for MongoDB

I am using the tasks below in my playbook to initialize the cluster and add the secondaries to the primary:
- name: Initialize replica set
  run_once: true
  delegate_to: host1
  shell: >
    mongo --eval 'printjson(rs.initiate())'

- name: Format secondaries
  run_once: true
  local_action:
    module: debug
    msg: '"{{ item }}:27017"'
  with_items: ['host2', 'host3']
  register: secondaries

- name: Add secondaries
  run_once: true
  delegate_to: host1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add({{ item.msg }}))'
  with_items: secondaries.results
I am getting the below error:
TASK [mongodb-setup : Add secondaries] *******************************
fatal: [host1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'msg'\n\nThe error appears to have been in '/var/lib/awx/projects/_dev/roles/mongodb-setup/tasks/users.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Add secondaries\n ^ here\n"}
Thanks for the response. I have amended my code as below:
- name: Add secondaries
  run_once: true
  delegate_to: host-1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add({{ item }}:27017))'
  with_items:
    - host2
    - host3
but I am getting the below error:
failed: [host-2 -> host-1] (item=host-2) => {"changed": true, "cmd": "/usr/bin/mongo --eval 'printjson(rs.add(host-2:27017))'", "delta": "0:00:00.173077", "end": "2019-08-06 13:29:09.422560", "item": "host-2", "msg": "non-zero return code", "rc": 252, "start": "2019-08-06 13:29:09.249483", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version: 3.2.22\nconnecting to: test\n2019-08-06T13:29:09.419-0500 E QUERY [thread1] SyntaxError: missing ) after argument list @(shell eval):1:37", "stdout_lines": ["MongoDB shell version: 3.2.22", "connecting to: test", "2019-08-06T13:29:09.419-0500 E QUERY [thread1] SyntaxError: missing ) after argument list @(shell eval):1:37"]}
Your issue is not with rs.add() but with the data you loop over. In your last task, your item list is a single string.
# Wrong #
with_items: secondaries.results
You want to pass an actual list from your previously registered result:
with_items: "{{ secondaries.results }}"
That being said, registering the result of a debug task is rather odd. You should use set_fact to store what you need in a var, or better, loop directly over your list of hosts in your task. It also looks like the rs.add function expects a string, so you should quote the argument in your eval. Something like:
- name: Add secondaries
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add("{{ item }}:27017"))'
  with_items:
    - host2
    - host3
And the way you use delegation seems rather strange to me in this context, but it's hard to give any valid clues without a complete playbook example of what you are trying to do (which you might give in a new question if necessary).
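For the set_fact variant mentioned above, a sketch (the host list and fact name are illustrative):

- name: Build the list of secondaries
  run_once: true
  set_fact:
    # Append the MongoDB port to each hostname
    secondary_members: "{{ ['host2', 'host3'] | map('regex_replace', '$', ':27017') | list }}"

- name: Add secondaries
  run_once: true
  delegate_to: host1
  shell: >
    /usr/bin/mongo --eval 'printjson(rs.add("{{ item }}"))'
  with_items: "{{ secondary_members }}"

This keeps the host-to-member mapping in one place and avoids registering debug output.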

Capistrano deployment is not happening after server IP change

Problem: We recently changed the IP address of the staging server. We use Capistrano to deploy a Rails application. Since the IP change, running cap develop deploy (develop is the stage/branch name) no longer works. Please find the config files below.
deploy.rb
# config valid for current version and patch releases of Capistrano
lock "~> 3.10.0"
set :application, "app_name"
set :repo_url, "git@bitbucket.org:repo.git"
set :branch, :develop
set :deploy_to, '/home/deploy/app_name'
set :pty, true
set :linked_files, %w{config/mongoid.yml config/application.yml}
set :linked_dirs, %w{ bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system public/uploads}
set :keep_releases, 5
set :rvm_type, :user
set :rvm_ruby_version, 'ruby-2.3.1' # Edit this if you are using MRI Ruby
set :bundle_binstubs, nil
set :puma_rackup, -> { File.join(current_path, 'config.ru') }
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock" #accept array for multi-bind
set :puma_conf, "#{shared_path}/puma.rb"
set :puma_access_log, "#{shared_path}/log/puma_error.log"
set :puma_error_log, "#{shared_path}/log/puma_access.log"
set :puma_role, :app
set :puma_env, fetch(:rack_env, fetch(:rails_env, 'production'))
set :puma_threads, [0, 8]
set :puma_workers, 0
set :puma_worker_timeout, nil
set :puma_init_active_record, false
set :puma_preload_app, false
# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp
# Default deploy_to directory is /var/www/my_app_name
# set :deploy_to, "/var/www/my_app_name"
# Default value for :format is :airbrussh.
# set :format, :airbrussh
# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: "log/capistrano.log", color: :auto, truncate: :auto
# Default value for :pty is false
# set :pty, true
# Default value for :linked_files is []
# append :linked_files, "config/database.yml", "config/secrets.yml"
# Default value for linked_dirs is []
# append :linked_dirs, "log", "tmp/pids", "tmp/cache", "tmp/sockets", "public/system"
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for local_user is ENV['USER']
# set :local_user, -> { `git config user.name`.chomp }
# Default value for keep_releases is 5
# set :keep_releases, 5
# Uncomment the following to require manually verifying the host key before first deploy.
# set :ssh_options, verify_host_key: :secure
namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
config/deploy/develop.rb
# server-based syntax
# ======================
# Defines a single server with a list of roles and multiple properties.
# You can define all roles on a single server, or split them:
# server "example.com", user: "deploy", roles: %w{app db web}, my_property: :my_value
# server "example.com", user: "deploy", roles: %w{app web}, other_property: :other_value
# server "db.example.com", user: "deploy", roles: %w{db}
server '<new_ip>', user: 'deploy', roles: %w{web app db}
# role-based syntax
# ==================
# Defines a role with one or multiple servers. The primary server in each
# group is considered to be the first unless any hosts have the primary
# property set. Specify the username and a domain or IP for the server.
# Don't use `:all`, it's a meta role.
# role :app, %w{deploy@example.com}, my_property: :my_value
# role :web, %w{user1@primary.com user2@additional.com}, other_property: :other_value
# role :db, %w{deploy@example.com}
# Configuration
# =============
# You can set any configuration variable like in config/deploy.rb
# These variables are then only loaded and set in this stage.
# For available Capistrano configuration variables see the documentation page.
# http://capistranorb.com/documentation/getting-started/configuration/
# Feel free to add new variables to customise your setup.
# Custom SSH Options
# ==================
# You may pass any option but keep in mind that net/ssh understands a
# limited set of options, consult the Net::SSH documentation.
# http://net-ssh.github.io/net-ssh/classes/Net/SSH.html#method-c-start
#
# Global options
# --------------
# set :ssh_options, {
# keys: %w(/home/rlisowski/.ssh/id_rsa),
# forward_agent: false,
# auth_methods: %w(password)
# }
#
# The server-based syntax can be used to override options:
# ------------------------------------
# server "example.com",
# user: "user_name",
# roles: %w{web app},
# ssh_options: {
# user: "user_name", # overrides user setting above
# keys: %w(/home/user_name/.ssh/id_rsa),
# forward_agent: false,
# auth_methods: %w(publickey password)
# # password: "please use keys"
# }
Not sure what we are missing; any help would be appreciated.
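One common culprit after an IP change is a stale SSH host key or broken connectivity for the deploy user, which Capistrano surfaces poorly. A quick sanity check, assuming the default known_hosts location and the deploy user from deploy.rb:

# Drop any cached host key for the old and new addresses
ssh-keygen -R <old_ip>
ssh-keygen -R <new_ip>
# Confirm the deploy user can reach the new server before involving Capistrano
ssh deploy@<new_ip> 'echo ok'
# Then retry the deploy
cap develop deploy

If the plain ssh step fails, the problem is in SSH/DNS/firewall setup rather than in the Capistrano configuration.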

Logstash won't start when adding a match statement in a grok block

I'm having difficulty starting Logstash.
My logstash.conf looks like this:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{WORD:event_type}\t%{NUMBER:server_time}\t%{NUMBER:market_time}\t%{WORD:instrument}\t%{C_NUMBER:last_price}\t%{C_NUMBER:trade_quantity}\t%{C_NUMBER:bid_price}\t%{C_NUMBER:bid_quantity}\t%{C_NUMBER:ask_price}\t%{C_NUMBER:ask_quantity}\t%{GREEDYDATA:flags}\t%{GREEDYDATA:additional_infos}"}
  }
  # ... and other stuff here...
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
Logstash works fine if I comment out the match => line. But with it, Logstash does not start: nothing shows up when I run netstat -na | grep 5044 in the container. It is simply not listening on 5044.
And when I try to run Logstash manually with /opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /etc/logstash/conf.d/filebeat-config.conf, I get the following:
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-27T09:35:25,883][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-27T09:35:25,887][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-27T09:35:26,177][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-27T09:35:26,213][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"5abcdba2-475f-46a9-b192-a343ca15ce89", :path=>"/tmp/logstash/data/uuid"}
[2018-08-27T09:35:26,727][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-27T09:35:29,016][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-27T09:35:29,316][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-27T09:35:29,325][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-27T09:35:29,467][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-27T09:35:29,510][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-27T09:35:29,513][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-27T09:35:29,533][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-27T09:35:29,549][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-27T09:35:29,565][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-27T09:35:29,689][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::FilterDelegator:0x68bd7527 @metric_events_out=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: out value:0, @metric_events_in=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: in value:0, @metric_events_time=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: duration_in_millis value:0, @id=\"e473071da674c7efab2a8ee71c9e682afff58b8a4725d076964bc668f3b2c724\", @klass=LogStash::Filters::Grok, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x5867faed @metric=#<LogStash::Instrument::Metric:0x61ef1454 @collector=#<LogStash::Instrument::Collector:0x51306706 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x5227344a @store=#<Concurrent::Map:0x00000000000fb4 entries=2 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x7efeb9ea>, @fast_lookup=#<Concurrent::Map:0x00000000000fb8 entries=75 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :filters, :e473071da674c7efab2a8ee71c9e682afff58b8a4725d076964bc668f3b2c724, :events]>, @filter=<LogStash::Filters::Grok patterns_dir=>[\"./patterns\"], match=>{\"message\"=>\"%{WORD:event_type}\\\\t%{NUMBER:server_time}\\\\t%{NUMBER:market_time}\\\\t%{WORD:instrument}\\\\t%{C_NUMBER:last_price}\\\\t%{C_NUMBER:trade_quantity}\\\\t%{C_NUMBER:bid_price}\\\\t%{C_NUMBER:bid_quantity}\\\\t%{C_NUMBER:ask_price}\\\\t%{C_NUMBER:ask_quantity}\\\\t%{GREEDYDATA:flags}\\\\t%{GREEDYDATA:additional_infos}\"}, id=>\"e473071da674c7efab2a8ee71c9e682afff58b8a4725d076964bc668f3b2c724\", enable_metric=>true, periodic_flush=>false, patterns_files_glob=>\"*\", break_on_match=>true, named_captures_only=>true, keep_empty_captures=>false, tag_on_failure=>[\"_grokparsefailure\"], timeout_millis=>30000, tag_on_timeout=>\"_groktimeout\">>", :error=>"pattern %{C_NUMBER:last_price} not defined", :thread=>"#<Thread:0x20b6525c run>"}
[2018-08-27T09:35:29,699][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{C_NUMBER:last_price} not defined>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1292:in `loop'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:281:in `block in register'", "org/jruby/RubyArray.java:1734:in `each'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:275:in `block in register'", "org/jruby/RubyHash.java:1343:in `each'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:270:in `register'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:340:in `register_plugin'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `register_plugins'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:729:in `maybe_setup_out_plugins'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:361:in `start_workers'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `run'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:248:in `block in start'"], :thread=>"#<Thread:0x20b6525c run>"}
[2018-08-27T09:35:29,724][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
Also, next to my logstash.conf, I have the patterns directory, which includes a file containing the following:
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
C_NUMBER (?:[+-]?(?:[(0-9)|(*,#,.)]+))
C_NUMBER2 (?:[+-]?(?:[(0-9)|(*,#,.)|null]+))
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>(?>\\.|[^\\]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
TIMESTAMP_CUSTOM %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND}.?%{NUMBER})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
What is wrong with the match => line?
I highly appreciate your help.
You're attempting to use a grok pattern, %{C_NUMBER}, that Logstash doesn't know about. It doesn't appear to be a standard pattern bundled with Logstash. Put %{NUMBER} in that place and restart Logstash.
I was able to resolve the issue by changing patterns_dir => ["./patterns"] to patterns_dir => ["/etc/logstash/conf.d/patterns"].
The match line referenced a grok pattern that Logstash couldn't find because of the relative path to the patterns directory.
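For reference, a sketch of the corrected filter block, assuming the patterns file lives at /etc/logstash/conf.d/patterns as in the fix above:

filter {
  grok {
    # Use an absolute path: a relative "./patterns" resolves against Logstash's
    # working directory, not against the directory of the config file.
    patterns_dir => ["/etc/logstash/conf.d/patterns"]
    match => { "message" => "%{WORD:event_type}\t%{NUMBER:server_time}\t%{NUMBER:market_time}\t%{WORD:instrument}\t%{C_NUMBER:last_price}\t%{C_NUMBER:trade_quantity}\t%{C_NUMBER:bid_price}\t%{C_NUMBER:bid_quantity}\t%{C_NUMBER:ask_price}\t%{C_NUMBER:ask_quantity}\t%{GREEDYDATA:flags}\t%{GREEDYDATA:additional_infos}" }
  }
}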

Handling attributes in InSpec

I was trying to create some basic InSpec tests to validate a set of HTTP URLs. I started like this:
control 'http-url-checks' do
  impact 1.0
  title 'http-url-checks'
  desc '
    Specify the URLs which need to be up and working.
  '
  tag 'http-url-checks'

  describe http('http://example.com') do
    its('status') { should eq 200 }
    its('body') { should match /abc/ }
    its('headers.name') { should eq 'header' }
  end

  describe http('http://example.net') do
    its('status') { should eq 200 }
    its('body') { should match /abc/ }
    its('headers.name') { should eq 'header' }
  end
end
The URLs are hard-coded in the control, which isn't a lot of fun. I'd like to move them to an 'attributes' file of some sort and loop through them in the control file.
My attempt was to use the 'files' folder structure inside the profile. I created a file, httpurls.yml, with the following content:
- url: http://example.com
- url: http://example.net
...and in my control file, I had this construct:
my_urls = yaml(content: inspec.profile.file('httpurls.yml')).params
my_urls.each do |s|
  describe http(s['url']) do
    its('status') { should eq 200 }
  end
end
However, when I execute the compliance profile, I get an error, 'httpurls.yml not found' (not sure about the exact error message, though).
What am I doing wrong?
Is there a better way to achieve what I am trying to do?
The secret is to use profile attributes, as defined near the bottom of this page:
https://www.inspec.io/docs/reference/profiles/
First, create a profile attributes YML file. I name mine profile-attribute.yml.
Second, put your array of values in the YML file, like so:
urls:
- http://example.com
- http://example.net
Third, create an attribute at the top of your InSpec tests:
my_urls = attribute('urls', description: 'The URLs that I am validating.')
Fourth, use your attribute in your InSpec test:
my_urls.each do |url|
  describe http(url) do
    its('status') { should eq 200 }
  end
end
Finally, when you call your InSpec test, point to your YML file using --attrs:
inspec exec mytest.rb --reporter=cli --attrs profile-attribute.yml
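If you want the profile to also run without --attrs, the attribute can carry a default (a sketch; the default list here is illustrative):

my_urls = attribute(
  'urls',
  default: ['http://example.com', 'http://example.net'],
  description: 'The URLs that I am validating.'
)

A value supplied via --attrs profile-attribute.yml overrides the default.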
There is another way to do this using files (instead of the profile attributes and the --attrs flag). You can use JSON or YAML.
First, create the JSON and/or YAML file and put them in the files directory. A simple example of the JSON file might look like this:
{
  "urls": ["https://www.google.com", "https://www.apple.com"]
}
And a simple example of the YAML file might look like this:
urls:
- https://www.google.com
- https://www.apple.com
Second, include code at the top of your InSpec file to read and parse the JSON and/or YAML, like so:
jsoncontent = inspec.profile.file("tmp.json")
jsonparams = JSON.parse(jsoncontent)
jsonurls = jsonparams['urls']
yamlcontent = inspec.profile.file("tmp.yaml")
yamlparams = YAML.load(yamlcontent)
yamlurls = yamlparams['urls']
Third, use the variables in your InSpec tests, like so:
jsonurls.each do |jsonurl|
  describe http(jsonurl) do
    puts "json url is " + jsonurl
    its('status') { should eq 200 }
  end
end

yamlurls.each do |yamlurl|
  describe http(yamlurl) do
    puts "yaml url is " + yamlurl
    its('status') { should eq 200 }
  end
end
(NOTE: the puts line is for debugging.)
The result is what you would expect:
json url is https://www.google.com
json url is https://www.apple.com
yaml url is https://www.google.com
yaml url is https://www.apple.com
Profile: InSpec Profile (inspec-file-test)
Version: 0.1.0
Target: local://
http GET on https://www.google.com
✔ status should eq 200
http GET on https://www.apple.com
✔ status should eq 200
http GET on https://www.google.com
✔ status should eq 200
http GET on https://www.apple.com
✔ status should eq 200