Logstash won't start when adding a match statement in a grok block - docker-compose
I'm having difficulty starting Logstash.
My logstash.conf looks like this:
input {
  beats {
    port => "5044"
  }
}

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{WORD:event_type}\t%{NUMBER:server_time}\t%{NUMBER:market_time}\t%{WORD:instrument}\t%{C_NUMBER:last_price}\t%{C_NUMBER:trade_quantity}\t%{C_NUMBER:bid_price}\t%{C_NUMBER:bid_quantity}\t%{C_NUMBER:ask_price}\t%{C_NUMBER:ask_quantity}\t%{GREEDYDATA:flags}\t%{GREEDYDATA:additional_infos}" }
  }
  # ... and other stuff here...
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
Logstash works fine if I comment out the match => line. But with it, it does not start: nothing shows up when I run netstat -na | grep 5044 inside the container, so it is simply not listening on port 5044.
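(As a quick sanity check, the pipeline configuration can also be validated without starting Logstash; a minimal sketch, assuming the same container paths used below:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat-config.conf --config.test_and_exit

This validates the config syntax; whether it also catches an undefined grok pattern depends on the Logstash version, since custom patterns are compiled when the filter registers.)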
And when I try to run Logstash manually with /opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /etc/logstash/conf.d/filebeat-config.conf, I get the following:
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-27T09:35:25,883][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-27T09:35:25,887][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-27T09:35:26,177][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-27T09:35:26,213][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"5abcdba2-475f-46a9-b192-a343ca15ce89", :path=>"/tmp/logstash/data/uuid"}
[2018-08-27T09:35:26,727][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-27T09:35:29,016][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-27T09:35:29,316][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-27T09:35:29,325][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-27T09:35:29,467][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-27T09:35:29,510][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-27T09:35:29,513][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-27T09:35:29,533][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-27T09:35:29,549][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-27T09:35:29,565][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-27T09:35:29,689][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::FilterDelegator:0x68bd7527 @metric_events_out=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: out value:0, @metric_events_in=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: in value:0, @metric_events_time=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: duration_in_millis value:0, @id=\"e473071da674c7efab2a8ee71c9e682afff58b8a4725d076964bc668f3b2c724\", @klass=LogStash::Filters::Grok, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x5867faed @metric=#<LogStash::Instrument::Metric:0x61ef1454 @collector=#<LogStash::Instrument::Collector:0x51306706 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x5227344a @store=#<Concurrent::Map:0x00000000000fb4 entries=2 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x7efeb9ea>, @fast_lookup=#<Concurrent::Map:0x00000000000fb8 entries=75 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :filters, :e473071da674c7efab2a8ee71c9e682afff58b8a4725d076964bc668f3b2c724, :events]>, @filter=<LogStash::Filters::Grok patterns_dir=>[\"./patterns\"], match=>{\"message\"=>\"%{WORD:event_type}\\\\t%{NUMBER:server_time}\\\\t%{NUMBER:market_time}\\\\t%{WORD:instrument}\\\\t%{C_NUMBER:last_price}\\\\t%{C_NUMBER:trade_quantity}\\\\t%{C_NUMBER:bid_price}\\\\t%{C_NUMBER:bid_quantity}\\\\t%{C_NUMBER:ask_price}\\\\t%{C_NUMBER:ask_quantity}\\\\t%{GREEDYDATA:flags}\\\\t%{GREEDYDATA:additional_infos}\"}, id=>\"e473071da674c7efab2a8ee71c9e682afff58b8a4725d076964bc668f3b2c724\", enable_metric=>true, periodic_flush=>false, patterns_files_glob=>\"*\", break_on_match=>true, named_captures_only=>true, keep_empty_captures=>false, tag_on_failure=>[\"_grokparsefailure\"], timeout_millis=>30000, tag_on_timeout=>\"_groktimeout\">>", :error=>"pattern %{C_NUMBER:last_price} not defined", :thread=>"#<Thread:0x20b6525c run>"}
[2018-08-27T09:35:29,699][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{C_NUMBER:last_price} not defined>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1292:in `loop'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:281:in `block in register'", "org/jruby/RubyArray.java:1734:in `each'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:275:in `block in register'", "org/jruby/RubyHash.java:1343:in `each'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:270:in `register'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:340:in `register_plugin'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `register_plugins'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:729:in `maybe_setup_out_plugins'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:361:in `start_workers'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `run'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:248:in `block in start'"], :thread=>"#<Thread:0x20b6525c run>"}
[2018-08-27T09:35:29,724][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
Also, next to my logstash.conf, I have a patterns directory containing a file with the following:
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
C_NUMBER (?:[+-]?(?:[(0-9)|(*,#,.)]+))
C_NUMBER2 (?:[+-]?(?:[(0-9)|(*,#,.)|null]+))
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>(?>\\.|[^\\]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
TIMESTAMP_CUSTOM %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND}.?%{NUMBER})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
What is wrong with the match => line?
Any help is highly appreciated.
You're attempting to use a grok pattern, %{C_NUMBER}, that Logstash doesn't know about; it is not a standard pattern bundled with Logstash. Put NUMBER in its place and restart Logstash.
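For example, a sketch of the match line with the stock NUMBER pattern swapped in (note that NUMBER will not match values containing commas, * or #, which is presumably what the custom C_NUMBER was written for, so some fields may need DATA instead):

match => { "message" => "%{WORD:event_type}\t%{NUMBER:server_time}\t%{NUMBER:market_time}\t%{WORD:instrument}\t%{NUMBER:last_price}\t%{NUMBER:trade_quantity}\t%{NUMBER:bid_price}\t%{NUMBER:bid_quantity}\t%{NUMBER:ask_price}\t%{NUMBER:ask_quantity}\t%{GREEDYDATA:flags}\t%{GREEDYDATA:additional_infos}" }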
I was able to resolve the issue by changing patterns_dir => ["./patterns"] to patterns_dir => ["/etc/logstash/conf.d/patterns"].
The match line is referencing a grok pattern that Logstash didn't find because of the relative path to the patterns directory.
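For reference, a minimal sketch of the corrected filter block under that layout (the absolute path is the one from the fix above; adjust it to wherever the patterns file actually lives):

filter {
  grok {
    patterns_dir => ["/etc/logstash/conf.d/patterns"]
    match => { "message" => "%{WORD:event_type}\t%{NUMBER:server_time}\t%{NUMBER:market_time}\t%{WORD:instrument}\t%{C_NUMBER:last_price}\t%{C_NUMBER:trade_quantity}\t%{C_NUMBER:bid_price}\t%{C_NUMBER:bid_quantity}\t%{C_NUMBER:ask_price}\t%{C_NUMBER:ask_quantity}\t%{GREEDYDATA:flags}\t%{GREEDYDATA:additional_infos}" }
  }
}

Relative paths in patterns_dir resolve against Logstash's working directory, which inside a container is usually not the directory holding the config file, so an absolute path is the safer choice.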
Related
ElastAlert2 No mapping found
I'm trying to set up ElastAlert for OpenSearch 2.8. I wrote this config:

# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: /etc/elastalert/rules

# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1

# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15

# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: localhost

# The Elasticsearch port
es_port: 9200

# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1

# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test

# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch

# Connect with TLS to Elasticsearch
use_ssl: True

# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
# es_send_get_body_as: GET

# Option basic-auth username and password for Elasticsearch
es_username: admin
es_password: password

# Use SSL authentication with client certificates client_cert must be
# a pem file containing both cert and key for client
verify_certs: False
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key

# The index on es_host which is used for metadata storage
# This can be a unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
writeback_alias: elastalert_alerts

# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
...

And this rule file:

# Alert when the rate of events exceeds a threshold

# (Optional)
# Elasticsearch host
es_host: localhost

# (Optional)
# Elasticsearch port
es_port: 9200

# (Optional) Connect with SSL to Elasticsearch
use_ssl: True
ssl_show_warn: False
verify_certs: False

# (Optional) basic-auth username and password for Elasticsearch
# es_username: admin
# es_password: ytnhfvgkby

# (Required)
# Rule name, must be unique
name: Loopdetect

# (Required)
# Type of alert.
# the frequency rule type alerts when num_events events occur with timeframe time
type: any

# (Required)
# Index to search, wildcard supported
index: syslog-20221104

# (Required, frequency specific)
# Alert when this many documents matching the query occur within a timeframe
num_events: 1

# (Required, frequency specific)
# num_events must occur within this amount of time to trigger an alert
timeframe:
  hours: 24

# (Required)
# A list of Elasticsearch filters used for find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
# filter:
# - term:
#     process.name: "JUSTME"
filter:
- query:
    query_string:
      query: "message: *loop*"

# (Required)
# The alert is used when a match is found
alert:
- "email"

# (required, email specific)
# a list of email addresses to send alerts to
email:
- "myemail"

But when I try to check this rule,
I get this error:

elastalert-test-rule rules/loopdetect_alert.yaml
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent. To send them but remain verbose, use --verbose instead.
WARNING:elasticsearch:POST https://localhost:9200/syslog-20221104/_search?ignore_unavailable=true&size=1 [status:400 request:0.048s]
Error running your filter:
RequestError(400, 'search_phase_execution_exception', {'error': {'root_cause': [{'type': 'query_shard_exception', 'reason': 'No mapping found for [@timestamp] in order to sort on', 'index': 'syslog-20221104', 'index_uuid': 'BG6MQmmYRUyLBY3tEFykEQ'}], 'type': 'search_phase_execution_exception', 'reason': 'all shards failed', 'phase': 'query', 'grouped': True, 'failed_shards': [{'shard': 0, 'index': 'syslog-20221104', 'node': '5spTsU7-QienT8Jn064MMA', 'reason': {'type': 'query_shard_exception', 'reason': 'No mapping found for [@timestamp] in order to sort on', 'index': 'syslog-20221104', 'index_uuid': 'BG6MQmmYRUyLBY3tEFykEQ'}}]}, 'status': 400})
INFO:elastalert:Note: In debug mode, alerts will be logged to console but NOT actually sent. To send them but remain verbose, use --verbose instead.
INFO:elastalert:1 rules loaded
INFO:apscheduler.scheduler:Adding job tentatively -- it will be properly scheduled when the scheduler starts
WARNING:elasticsearch:POST https://localhost:9200/syslog-20221104/_search?_source_includes=%40timestamp%2C%2A&ignore_unavailable=true&scroll=30s&size=10000 [status:400 request:0.039s]
ERROR:elastalert:Error running query: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [@timestamp] in order to sort on')
{"writeback": {"elastalert_error": {"message": "Error running query: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [@timestamp] in order to sort on')", "traceback": ["Traceback (most recent call last):", " File \"/usr/local/lib/python3.11/dist-packages/elastalert2-2.8.0-py3.11.egg/elastalert/elastalert.py\", line 370, in get_hits", " res = self.thread_data.current_es.search(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/client/utils.py\", line 152, in _wrapped", " return func(*args, params=params, headers=headers, **kwargs)", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/client/__init__.py\", line 1658, in search", " return self.transport.perform_request(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/transport.py\", line 392, in perform_request", " raise e", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/transport.py\", line 358, in perform_request", " status, headers_response, data = connection.perform_request(", " ^^^^^^^^^^^^^^^^^^^^^^^^^^^", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/connection/http_requests.py\", line 199, in perform_request", " self._raise_error(response.status_code, raw_data)", " File \"/usr/local/lib/python3.11/dist-packages/elasticsearch/connection/base.py\", line 315, in _raise_error", " raise HTTP_EXCEPTIONS.get(status_code, TransportError)(", "elasticsearch.exceptions.RequestError: RequestError(400, 'search_phase_execution_exception', 'No mapping found for [@timestamp] in order to sort on')"], "data": {"rule": "Loopdetect", "query": {"query": {"bool": {"filter": {"bool": {"must": [{"range": {"@timestamp": {"gt": "2022-11-03T12:12:39.618168Z", "lte": "2022-11-03T12:27:39.618168Z"}}}, {"query_string": {"query": "message: *loop*"}}]}}}}, "sort": [{"@timestamp": {"order": "asc"}}]}}}}}

But if I try to get the data with curl, it works:

curl -X GET 'https://localhost:9200/syslog-20221104/_search?ignore_unavailable=true&size=1' -u 'admin:password' --insecure
{"took":4,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":10000,"relation":"gte"},"max_score":1.0,"hits":[{"_index":"syslog-20221104","_id":"_bSKQYQB_cpiH2g_hgvj","_score":1.0,"_source":{"host":"10.53.0.35","hostname":"10.53.0.35","message":"Port 2 link up, 100Mbps FULL duplex","source_ip":"91.195.230.4","source_type":"syslog","timestamp":"2022-11-04T07:28:27Z"}}]}}

Please help me understand what I am doing wrong. Thanks.
I added timestamp_field: timestamp, and everything works fine!
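For reference, a minimal sketch of where that option goes in the rule file (this assumes the source documents carry their event time in a plain timestamp field, as the curl output above suggests):

# (Optional)
# Use the documents' own time field instead of the default @timestamp
timestamp_field: timestamp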
Sync MongoDB data to Elasticsearch using Logstash
I want to sync my MongoDB data (local MongoDB) to Elasticsearch (local Elasticsearch) using the logstash-input-mongodb plugin. I have installed the plugin using bin/logstash-plugin install logstash-input-mongodb. Then I created a mongodata.conf file in the /usr/share/logstash directory. When I execute the conf file, it shows:

Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties

My config file is:

input {
  mongodb {
    uri => "mongodb://localhost:27017/reporterDB"
    placeholder_db_dir => "/opt/logstash-mongodb/"
    placeholder_db_name => "logstash_sqlite.db"
    collection => "iam_ms_test"
    batch_size => 5000
  }
}

filter {
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    action => "index"
    hosts => "localhost:9200"
    user => elastic
    password => changeme
    index => "mongo_log"
    document_type => "document_type"
    document_id => "%{id}"
  }
}

I am getting the following lines in the logstash-plain.log file:

[2019-11-01T15:41:00,869][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-11-01T15:41:00,871][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>750, :thread=>"#<Thread:0x351f7fd1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 run>"}
[2019-11-01T15:41:01,068][INFO ][logstash.inputs.mongodb ] Registering MongoDB input
[2019-11-01T15:41:01,116][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::MongoDB uri=>\"mongodb://localhost:27017/anchorReports\", placeholder_db_dir=>\"/opt/logstash-mongodb/\", placeholder_db_name=>\"logstash_sqlite.db\", collection=>\"hi_p5m\", batch_size=>5000, id=>\"ec7682e8c6c5676deca84d5072c5f7865120a107ffce81ce21caa878c6e4ed09\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_441f95b8-cc8a-4b9e-a45f-657ed2011e2b\", enable_metric=>true, charset=>\"UTF-8\">, since_table=>\"logstash_since\", since_column=>\"_id\", since_type=>\"id\", parse_method=>\"flatten\", isodate=>false, retry_delay=>3, generateId=>false, unpack_mongo_id=>false, message=>\"Default message...\", interval=>1>", :error=>"Java::JavaSql::SQLException: path to '/opt/logstash-mongodb/logstash_sqlite.db': '/opt/logstash-mongodb' does not exist", :thread=>"#<Thread:0x351f7fd1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 run>"}
[2019-11-01T15:41:01,869][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Sequel::DatabaseConnectionError: Java::JavaSql::SQLException: path to '/opt/logstash-mongodb/logstash_sqlite.db': '/opt/logstash-mongodb' does not exist>, :backtrace=>["org.sqlite.core.CoreConnection.open(org/sqlite/core/CoreConnection.java:190)", "org.sqlite.core.CoreConnection.<init>(org/sqlite/core/CoreConnection.java:74)", "org.sqlite.jdbc3.JDBC3Connection.<init>(org/sqlite/jdbc3/JDBC3Connection.java:24)", "org.sqlite.jdbc4.JDBC4Connection.<init>(org/sqlite/jdbc4/JDBC4Connection.java:23)", "org.sqlite.SQLiteConnection.<init>(org/sqlite/SQLiteConnection.java:45)", "org.sqlite.JDBC.createConnection(org/sqlite/JDBC.java:114)", "org.sqlite.JDBC.connect(org/sqlite/JDBC.java:88)"

I want the records in Elasticsearch under the index "mongo_log". I also want to know the uses of placeholder_db_dir and placeholder_db_name, and what these values should be when using MongoDB as the input database.
Problem solved! The directory under /opt (the placeholder_db_dir, /opt/logstash-mongodb) had not been created, so I created it manually. After that I gave write permission to the directory, so that when Logstash runs it can create the placeholder file inside it.
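A sketch of the commands involved (assuming Logstash runs as the logstash user, which may differ on your system):

# create the placeholder directory the mongodb input expects
sudo mkdir -p /opt/logstash-mongodb
# give the Logstash user permission to create the sqlite file inside it
sudo chown logstash:logstash /opt/logstash-mongodb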
Problem with Logstash configuration file using MongoDB input plugin
I am getting this error while running the conf file:

[2019-10-24T16:17:09,572][ERROR][logstash.javapipeline ][main] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>java.lang.ClassCastException: org.jruby.RubyNil cannot be cast to org.jruby.RubyFixnum, :backtrace=>["org.jruby.runtime.invokedynamic.MathLinker.fixnum_op_equal(MathLinker.java:237)", "java.lang.invoke.MethodHandle.invokeWithArguments(Unknown Source)", "org.jruby.runtime.invokedynamic.MathLinker.fixnumOperator(MathLinker.java:171)", "D_3a_.Elastic.logstash_minus_7_dot_4_dot_0.vendor.bundle.jruby.$2_dot_5_dot_0.gems.mongo_minus_2_dot_10_dot_2.lib.mongo.server_selector.selectable.RUBY$method$initialize$0(D:/Elastic/logstash-7.4.0/vendor/bundle/jruby/2.5.0/gems/mongo-2.10.2/lib/mongo/server_selector/selectable.rb:46)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:91)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:90)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)",

... and a lot of other rows.

This is my config file:

input {
  mongodb {
    uri => "mongodb://localhost:27017/Tesi"
    placeholder_db_dir => "D:\Elastic\logstash-7.4.0"
    placeholder_db_name => "commenti_sqlite.db"
    collection => "Commenti_youtube"
    batch_size => 5000
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

What's wrong with my configuration file? Are there other ways to pass MongoDB data to Logstash?
logstash-input-mongodb loop on a "restarting error" - Timestamp
I'm trying to use the MongoDB plugin as input for Logstash. Here is my simple configuration:

input {
  mongodb {
    uri => 'mongodb://localhost:27017/testDB'
    placeholder_db_dir => '/Users/TEST/Documents/WORK/ELK_Stack/LogStash/data/'
    collection => 'logCollection_ALL'
    batch_size => 50
  }
}

filter {}

output {
  stdout {}
}

But I'm facing a "loop issue", probably due to a "timestamp" field, and I don't know what to do:

[2018-04-25T12:01:35,998][WARN ][logstash.inputs.mongodb ] MongoDB Input threw an exception, restarting {:exception=>#<TypeError: wrong argument type String (expected LogStash::Timestamp)>}

along with a DEBUG log:

[2018-04-25T12:01:34.893000 #2900] DEBUG -- : MONGODB | QUERY | namespace=testDB.logCollection_ALL selector={:_id=>{:$gt=>BSON::ObjectId('5ae04f5917e7979b0a000001')}} flags=[:slave_ok] limit=50 skip=0 project=nil | runtime: 39.0000ms

How can I parametrize my Logstash config to get my output in the stdout console?
It's because the @timestamp field has the ISODate data type. You must remove this field from all documents:

db.getCollection('collection1').update({}, {$unset: {"@timestamp": 1}}, {multi: true})
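A quick sanity check afterwards (same hypothetical collection name as above); the count should drop to 0 once the field is gone:

// count documents that still carry the offending field
db.getCollection('collection1').find({"@timestamp": {$exists: true}}).count()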
Logstash MongoDB connection issue
I am unable to push data to MongoDB using Logstash. My config file looks like this:

input {
  file {
    type => "log"
    path => "d:\logs\*.txt"
  }
}

output {
  mongodb {
    database => "abhi1"
    collection => "plain"
    uri => "mongodb://127.0.0.1:27017"
  }
}

The command used for executing the configuration file is logstash -f ./conf/demo.conf.

ERROR:

[2015-09-08T16:26:04.883000 #4528] DEBUG -- : MONGODB | COMMAND | namespace=admin.$cmd selector={:ismaster=>1} flags=[] limit=-1 skip=0 project=nil | runtime 46.9999ms

Hoping to get a workaround soon. Thanks.