Logstash 5.0 with input websocket - unable to stdout json data - sockets

I have the following environment:
/usr/share/logstash# bin/logstash --path.settings=/etc/logstash -f /etc/logstash/conf.d/stream.conf -V
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties.
logstash 5.0.0
jruby 1.7.25 (1.9.3p551) 2016-04-13 867cb81 on Java HotSpot(TM) 64-Bit Server VM 1.8.0_111-b14 +jit [linux-amd64]
java 1.8.0_111 (Oracle Corporation)
jvm Java HotSpot(TM) 64-Bit Server VM / 25.111-b14
I installed the following two repositories:
logstash-input-websocket version 3.0.2
ruby-ftw from http://github.com/jordansissel/ruby-ftw
I built both gems from their .gemspec files.
My Gemfile under the /usr/share/logstash folder was modified to add these two lines:
gem "logstash-input-websocket", :path => "/home/xav/source/logstash-input-websocket-master"
gem "ftw", :path => "/home/xav/source/ruby-ftw-master"
I know my Logstash configuration is OK (I checked it with the -t option).
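Concretely, the check was along these lines (reconstructed invocation, not a verbatim copy of my terminal):
bin/logstash --path.settings=/etc/logstash -f /etc/logstash/conf.d/stream.conf -t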
Furthermore, I deliberately modified my stream.conf file to omit the url setting for the websocket input, to make sure the plugin was actually being used.
I got the expected error in the /var/log/logstash/logstash-plain.log file, as shown below:
xav@xav-VirtualBox:/var/log/logstash$ tail -f logstash-plain.log
[2016-11-01T11:40:28,998][ERROR][logstash.inputs.websocket] Missing a required setting for the websocket input plugin:
input {
websocket {
url => # SETTING MISSING
...
}
}
[2016-11-01T11:40:29,011][ERROR][logstash.agent ] fetched an invalid config {:config=>"\ninput {\n websocket {\n mode => client\n}\n}\n\noutput {\n\n stdout { }\n}\n\n", :reason=>"Something is wrong with your configuration."
I then edited stream.conf to add the wss URL I want to read the JSON output from:
input {
  websocket {
    mode => client
    url => "wss://<my-url-to-websocket/something>"
  }
}
output {
  stdout { }
}
I ran Logstash again. Everything seems to be working fine, BUT I don't get anything on stdout. The log file output is:
11-01T12:09:25,968][DEBUG][logstash.runner ] -------- Logstash Settings (* means modified) ---------
11-01T12:09:25,980][DEBUG][logstash.runner ] node.name: "xav-VirtualBox"
11-01T12:09:25,981][DEBUG][logstash.runner ] *path.config: "/etc/logstash/conf.d/stream.conf"
11-01T12:09:25,981][DEBUG][logstash.runner ] *path.data: "/var/lib/logstash" (default: "/usr/share/logstash/data")
11-01T12:09:25,981][DEBUG][logstash.runner ] config.test_and_exit: false
11-01T12:09:25,981][DEBUG][logstash.runner ] config.reload.automatic: false
11-01T12:09:25,981][DEBUG][logstash.runner ] config.reload.interval: 3
11-01T12:09:25,982][DEBUG][logstash.runner ] metric.collect: true
11-01T12:09:25,982][DEBUG][logstash.runner ] pipeline.id: "main"
11-01T12:09:25,982][DEBUG][logstash.runner ] pipeline.workers: 1
11-01T12:09:25,982][DEBUG][logstash.runner ] pipeline.output.workers: 1
11-01T12:09:25,983][DEBUG][logstash.runner ] pipeline.batch.size: 125
11-01T12:09:25,983][DEBUG][logstash.runner ] pipeline.batch.delay: 5
11-01T12:09:25,983][DEBUG][logstash.runner ] pipeline.unsafe_shutdown: false
11-01T12:09:25,983][DEBUG][logstash.runner ] path.plugins: []
11-01T12:09:25,984][DEBUG][logstash.runner ] config.debug: false
11-01T12:09:25,984][DEBUG][logstash.runner ] *log.level: "debug" (default: "info")
11-01T12:09:25,984][DEBUG][logstash.runner ] version: false
11-01T12:09:25,984][DEBUG][logstash.runner ] help: false
11-01T12:09:25,984][DEBUG][logstash.runner ] log.format: "plain"
11-01T12:09:25,984][DEBUG][logstash.runner ] http.host: "127.0.0.1"
11-01T12:09:25,984][DEBUG][logstash.runner ] http.port: 9600..9700
11-01T12:09:25,986][DEBUG][logstash.runner ] http.environment: "production"
11-01T12:09:25,986][DEBUG][logstash.runner ] *path.settings: "/etc/logstash" (default: "/usr/share/logstash/config")
11-01T12:09:25,986][DEBUG][logstash.runner ] *path.logs: "/var/log/logstash" (default: "/usr/share/logstash/logs")
11-01T12:09:25,987][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
11-01T12:09:26,039][DEBUG][logstash.agent ] Agent: Configuring metric collection
11-01T12:09:26,043][DEBUG][logstash.instrument.periodicpoller.os] PeriodicPoller: Starting {:polling_interval=>1, :polling_timeout=>60}
11-01T12:09:26,049][DEBUG][logstash.instrument.periodicpoller.jvm] PeriodicPoller: Starting {:polling_interval=>1, :polling_timeout=>60}
11-01T12:09:26,122][DEBUG][logstash.agent ] Reading config file {:config_file=>"/etc/logstash/conf.d/stream.conf"}
11-01T12:09:26,197][DEBUG][logstash.codecs.json ] config LogStash::Codecs::JSON/#id = "json_982d8975-bd47-4f64-8a68-0da7e7a59a55"
11-01T12:09:26,198][DEBUG][logstash.codecs.json ] config LogStash::Codecs::JSON/#enable_metric = true
11-01T12:09:26,198][DEBUG][logstash.codecs.json ] config LogStash::Codecs::JSON/#charset = "UTF-8"
11-01T12:09:26,202][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/#mode = "client"
11-01T12:09:26,202][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/#url = "wss://REDACTED"
11-01T12:09:26,202][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/#id = "93ebfb1d0936097ee295b418952f2dab3abb3ef8-1"
11-01T12:09:26,203][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/#enable_metric = true
11-01T12:09:26,203][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/#codec = <LogStash::Codecs::JSON id=>"json_982d8975-bd47-4f64-8a68-0da7e7a59a55", enable_metric=>true, charset=>"UTF-8">
11-01T12:09:26,203][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/#add_field = {}
11-01T12:09:26,211][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/#id = "line_a255b752-4d76-4933-b4d3-e76e427bbddb"
11-01T12:09:26,214][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/#enable_metric = true
11-01T12:09:26,215][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/#charset = "UTF-8"
11-01T12:09:26,215][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/#delimiter = "\n"
11-01T12:09:26,218][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/#id = "93ebfb1d0936097ee295b418952f2dab3abb3ef8-2"
11-01T12:09:26,219][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/#enable_metric = true
11-01T12:09:26,219][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/#codec = <LogStash::Codecs::Line id=>"line_a255b752-4d76-4933-b4d3-e76e427bbddb", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">
11-01T12:09:26,219][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/#workers = 1
11-01T12:09:26,238][DEBUG][logstash.agent ] starting agent
11-01T12:09:26,242][DEBUG][logstash.agent ] starting pipeline {:id=>"main"}
11-01T12:09:26,650][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
11-01T12:09:26,654][INFO ][logstash.pipeline ] Pipeline main started
11-01T12:09:26,676][DEBUG][logstash.agent ] Starting puma
11-01T12:09:26,682][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
11-01T12:09:26,688][DEBUG][logstash.api.service ] [api-service] start
11-01T12:09:26,748][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
11-01T12:09:27,038][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:27 -0400}
11-01T12:09:28,053][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:28 -0400}
11-01T12:09:29,060][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:29 -0400}
11-01T12:09:30,100][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:30 -0400}
What am I missing?

Related

Connecting Ditto to InfluxDB via Kafka

I am running Kafka and InfluxDB on Docker.
I have created a digital twin on Ditto that correctly updates when I send a message over MQTT.
I want the data to be sent from Ditto to InfluxDB, but once I create the bucket in InfluxDB it shows no data whatsoever.
I have followed this guide: https://www.influxdata.com/blog/getting-started-apache-kafka-influxdb/
(I know this is for a Python program, but the steps should be the same; I just use the Telegraf Kafka consumer plugin instead of the one used in the guide.)
I have created the connection and the Telegraf configuration file, but nothing happens in InfluxDB.
Here is the telegraf.conf:
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## ex: urls = ["https://us-west-2-1.aws.cloud2.influxdata.com"]
urls = ["http://localhost:8086"]
## API token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "digital"
## Destination bucket to write into.
bucket = "arduino"
## The value of this tag will be used to determine the bucket. If this
## tag is not set the 'bucket' option is used as the default.
# bucket_tag = ""
## If true, the bucket tag will not be added to the metric.
# exclude_bucket_tag = false
## Timeout for HTTP messages.
# timeout = "5s"
## Additional HTTP headers
# http_headers = {"X-Special-Header" = "Special-Value"}
## HTTP Proxy override, if unset values the standard proxy environment
## variables are consulted to determine which proxy, if any, should be used.
# http_proxy = "http://corporate.proxy:3128"
## HTTP User-Agent
# user_agent = "telegraf"
## Content-Encoding for write request body, can be set to "gzip" to
## compress body or "identity" to apply no encoding.
# content_encoding = "gzip"
## Enable or disable uint support for writing uints influxdb 2.0.
# influx_uint_support = false
## Optional TLS Config for use on HTTP connections.
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
# Read metrics from Kafka topics
[[inputs.kafka_consumer]]
## Kafka brokers.
brokers = ["localhost:9092"]
## Topics to consume.
topics = ["arduino"]
## When set this tag will be added to all metrics with the topic as the value.
# topic_tag = ""
## Optional Client id
# client_id = "Telegraf"
## Set the minimal supported Kafka version. Setting this enables the use of new
## Kafka features and APIs. Must be 0.10.2.0 or greater.
## ex: version = "1.1.0"
# version = ""
## Optional TLS Config
# enable_tls = false
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## Use TLS but skip chain & host verification
# insecure_skip_verify = false
## SASL authentication credentials. These settings should typically be used
## with TLS encryption enabled
# sasl_username = "kafka"
# sasl_password = "secret"
## Optional SASL:
## one of: OAUTHBEARER, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512, GSSAPI
## (defaults to PLAIN)
# sasl_mechanism = ""
## used if sasl_mechanism is GSSAPI (experimental)
# sasl_gssapi_service_name = ""
# ## One of: KRB5_USER_AUTH and KRB5_KEYTAB_AUTH
# sasl_gssapi_auth_type = "KRB5_USER_AUTH"
# sasl_gssapi_kerberos_config_path = "/"
# sasl_gssapi_realm = "realm"
# sasl_gssapi_key_tab_path = ""
# sasl_gssapi_disable_pafxfast = false
## used if sasl_mechanism is OAUTHBEARER (experimental)
# sasl_access_token = ""
## SASL protocol version. When connecting to Azure EventHub set to 0.
# sasl_version = 1
# Disable Kafka metadata full fetch
# metadata_full = false
## Name of the consumer group.
# consumer_group = "telegraf_metrics_consumers"
## Compression codec represents the various compression codecs recognized by
## Kafka in messages.
## 0 : None
## 1 : Gzip
## 2 : Snappy
## 3 : LZ4
## 4 : ZSTD
# compression_codec = 0
## Initial offset position; one of "oldest" or "newest".
# offset = "oldest"
## Consumer group partition assignment strategy; one of "range", "roundrobin" or "sticky".
# balance_strategy = "range"
## Maximum length of a message to consume, in bytes (default 0/unlimited);
## larger messages are dropped
max_message_len = 1000000
## Maximum messages to read from the broker that have not been written by an
## output. For best throughput set based on the number of metrics within
## each message and the size of the output's metric_batch_size.
##
## For example, if each message from the queue contains 10 metrics and the
## output metric_batch_size is 1000, setting this to 100 will ensure that a
## full batch is collected and the write is triggered immediately without
## waiting until the next flush_interval.
# max_undelivered_messages = 1000
## Maximum amount of time the consumer should take to process messages. If
## the debug log prints messages from sarama about 'abandoning subscription
## to [topic] because consuming was taking too long', increase this value to
## longer than the time taken by the output plugin(s).
##
## Note that the effective timeout could be between 'max_processing_time' and
## '2 * max_processing_time'.
# max_processing_time = "100ms"
## The default number of message bytes to fetch from the broker in each
## request (default 1MB). This should be larger than the majority of
## your messages, or else the consumer will spend a lot of time
## negotiating sizes and not actually consuming. Similar to the JVM's
## `fetch.message.max.bytes`.
# consumer_fetch_default = "1MB"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
The Kafka connection as shown in the Ditto explorer:
{
"id": "0ab4b527-617f-4f4f-8bac-4ffa4b5a8471",
"name": "Kafka 2.x",
"connectionType": "kafka",
"connectionStatus": "open",
"uri": "tcp://192.168.109.74:9092",
"sources": [
{
"addresses": [
"arduino"
],
"consumerCount": 1,
"qos": 1,
"authorizationContext": [
"nginx:ditto"
],
"enforcement": {
"input": "{{ header:device_id }}",
"filters": [
"{{ entity:id }}"
]
},
"acknowledgementRequests": {
"includes": []
},
"headerMapping": {},
"payloadMapping": [
"Ditto"
],
"replyTarget": {
"address": "theReplyTopic",
"headerMapping": {},
"expectedResponseTypes": [
"response",
"error",
"nack"
],
"enabled": true
}
}
],
"targets": [
{
"address": "topic/key",
"topics": [
"_/_/things/twin/events",
"_/_/things/live/messages"
],
"authorizationContext": [
"nginx:ditto"
],
"headerMapping": {}
}
],
"clientCount": 1,
"failoverEnabled": true,
"validateCertificates": true,
"processorPoolSize": 1,
"specificConfig": {
"saslMechanism": "plain",
"bootstrapServers": "localhost:9092"
},
"tags": []
}
The policy file for Ditto:
{
"policyId": "my.test:policy1",
"entries": {
"owner": {
"subjects": {
"nginx:ditto": {
"type": "nginx basic auth user"
}
},
"resources": {
"thing:/": {
"grant": ["READ","WRITE"],
"revoke": []
},
"policy:/": {
"grant": ["READ","WRITE"],
"revoke": []
},
"message:/": {
"grant": ["READ","WRITE"],
"revoke": []
}
}
},
"observer": {
"subjects": {
"ditto:observer": {
"type": "observer user"
}
},
"resources": {
"thing:/features": {
"grant": ["READ"],
"revoke": []
},
"policy:/": {
"grant": ["READ"],
"revoke": []
},
"message:/": {
"grant": ["READ"],
"revoke": []
}
}
}
}
}
When Telegraf reads data from Kafka, it needs to transform it into time-series metrics that InfluxDB can digest. You have correctly selected the JSON parser, but there may be additional configuration required, or even the use of the more powerful json_v2 parser, to set the tags and fields based on the JSON data.
My suggestion is to use the [[outputs.file]] output to see if anything is even getting through (probably nothing will show up). Then do the following (see the sketch after this list):
determine what your JSON looks like in Kafka;
decide what you want that JSON to look like as time-series data in InfluxDB;
use the json_v2 parser to set the appropriate tags and fields.
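A minimal sketch of those two debugging aids, assuming a hypothetical Ditto event payload that carries a thingId string and a numeric value (the measurement name, tag path, and field path below are illustrative, not taken from the question):
# Print every parsed metric to stdout so you can see whether anything arrives at all
[[outputs.file]]
  files = ["stdout"]
  data_format = "influx"
# Same Kafka input, but parsed with the more flexible json_v2 parser
[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  topics = ["arduino"]
  data_format = "json_v2"
  [[inputs.kafka_consumer.json_v2]]
    measurement_name = "ditto"
    [[inputs.kafka_consumer.json_v2.tag]]
      path = "thingId"                        # hypothetical tag taken from the message
    [[inputs.kafka_consumer.json_v2.field]]
      path = "value.properties.temperature"   # hypothetical field path
      type = "float"
If metrics show up in the file output but not in InfluxDB, the problem is on the output side (token, org, bucket); if nothing shows up at all, look at the parser configuration or the Kafka connection first.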

Jfrog artifactory CI/CD integration with Azure - Artifact version control

I am using the ArtifactoryGenericDownload@3 task to download a .whl file from JFrog Artifactory. However, I only want to download the latest version, which is python/de-cf-dnalib/0.7.0, but this cannot be hardcoded because the version needs to be updated from time to time. Could you please suggest a solution for how to add version control to my code?
- task: ArtifactoryGenericDownload@3
  inputs:
    connection: "JFROG"
    specSource: "taskConfiguration"
    fileSpec: |
      {
        "files": [
          {
            "pattern": "python/*.whl",
            "target": "./$(Pipeline.Workspace)/de-cf-dnalib"
          }
        ]
      }
    failNoOp: true
result:
{
"files": [
{
"pattern": "python/de-cf-dnalib/*.whl",
"target": ".//datadisk/agents-home/...work/744/de-cf-dnalib"
}
]
}
Executing JFrog CLI Command: /datadisk/hostedtoolcache/jfrog/1.53.2/x64/jfrog rt dl --url="https://jfrog.io/artifactory" --access-token=*** --spec="/datadisk/agents-home/agent-0/azl-da-d-02-0/_work/744/s/downloadSpec1656914680005.json" --fail-no-op=true --dry-run=false --insecure-tls=false --threads=3 --retries=3 --validate-symlinks=false --split-count=3 --min-split=5120
[Info] Searching items to download...
[Info] [Thread 2] Downloading python/de-cf-dnalib/0.5.0/de_cf_dnalib-0.5.0-py3-none-any.whl
[Info] [Thread 1] Downloading python/de-cf-dnalib/0.6.0/de_cf_dnalib-0.6.0-py3-none-any.whl
[Info] [Thread 0] Downloading python/de-cf-dnalib/0.7.0.dev0/de_cf_dnalib-0.7.0.dev0-py3-none-any.whl
[Info] [Thread 2] Downloading python/de-cf-dnalib/0.7.0/de_cf_dnalib-0.7.0-py3-none-any.whl
{
"status": "success",
"totals": {
"success": 4,
"failure": 0
}
}
fileSpec also supports filtering by Artifactory Query Language (AQL) instead of a pattern.
With AQL you can sort by version or creation date and keep only the most recently created files, for example:
items.find({
  "repo": "my-repo",
  "name": {"$match": "*.jar"}
}).include("name","created").sort({"$desc": ["created"]}).limit(2)
You can read more about AQL in the following link:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language
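As a rough sketch, the same idea can also be expressed directly in the download File Spec using its sorting properties (the "sortBy", "sortOrder", and "limit" keys come from the JFrog File Spec schema; the repository path and target are taken from the question, the rest is illustrative):
{
  "files": [
    {
      "pattern": "python/de-cf-dnalib/*.whl",
      "target": "./$(Pipeline.Workspace)/de-cf-dnalib",
      "sortBy": ["created"],
      "sortOrder": "desc",
      "limit": 1
    }
  ]
}
Whether the task's fileSpec input accepts these keys depends on the JFrog CLI version bundled with the Azure extension, so treat this as a starting point rather than a drop-in answer.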

Error while generating node info file with database.runMigration

I added database.runMigration: true to my build.gradle file but I'm getting this error when running deployNodes. What's causing this?
[ERROR] 14:05:21+0200 [main] subcommands.ValidateConfigurationCli.logConfigurationErrors$node - Error(s) while parsing node configuration:
- for path: "database.runMigration": Unknown property 'runMigration'
Here's the deployNodes task from my build.gradle:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    ext.drivers = ['.jdbc_driver']
    ext.extraConfig = [
        'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
        'dataSourceProperties.dataSource.user' : "corda",
        'dataSourceProperties.dataSource.password' : "corda1234",
        'database.transactionIsolationLevel' : 'READ_COMMITTED',
        'database.runMigration' : "true"
    ]
    nodeDefaults {
        projectCordapp {
            deploy = false
        }
        cordapp project(':cordapp-contracts-states')
        cordapp project(':cordapp')
    }
    node {
        name "O=HUS,L=Helsinki,C=FI"
        p2pPort 10008
        rpcSettings {
            address "localhost:10009"
            adminAddress "localhost:10049"
        }
        webPort 10017
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
        extraConfig = ext.extraConfig + [
            'dataSourceProperties.dataSource.url' :
                "jdbc:postgresql://localhost:5432/hus_db?currentSchema=corda_schema"
        ]
        drivers = ext.drivers
    }
}
database.runMigration is a Corda Enterprise-only property.
To control database schema creation in Corda Open Source, use initialiseSchema instead.
initialiseSchema
Boolean which indicates whether to update the database schema at startup (or create the schema when the node starts for the first time). If set to false, on startup the node will validate that it is running against a compatible database schema.
Default: true
You can refer to the link below for the other database properties you can set:
https://docs.corda.net/corda-configuration-file.html
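A minimal sketch of the change in the deployNodes task, assuming the open-source node accepts the property via extraConfig in exactly the same way runMigration was being passed:
ext.extraConfig = [
    'dataSourceProperties.dataSourceClassName' : "org.postgresql.ds.PGSimpleDataSource",
    'dataSourceProperties.dataSource.user' : "corda",
    'dataSourceProperties.dataSource.password' : "corda1234",
    'database.transactionIsolationLevel' : 'READ_COMMITTED',
    // replaces 'database.runMigration' for Corda Open Source
    'database.initialiseSchema' : "true"
]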

kamon statsd not sending metrics when I run my scala application as a docker container

When I run my Scala application using the 'sbt run' command, it sends Kamon metrics to the graphite/grafana container. I then created a Docker image for my Scala application and ran it as a Docker container.
Now it is not sending metrics to the graphite/grafana container. Both my application container and the graphite/grafana container are running on the same Docker network.
The command I used to run the Grafana image is: docker run --network smart -d -p 80:80 -p 81:81 -p 2003:2003 -p 8125:8125/udp -p 8126:8126 8399049ce731
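For reference, attaching the application container to that same network would look roughly like this (the image name is a placeholder, not my actual image):
docker run --network smart -d my-scala-app:latest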
The Kamon configuration in application.conf is:
kamon {
auto-start=true
metric {
tick-interval = 1 seconds
filters {
akka-actor {
includes = ["*/user/*"]
excludes = [ "*/system/**", "*/user/IO-**", "**/kamon/**" ]
}
akka-router {
includes = ["*/user/*"]
excludes = [ "*/system/**", "*/user/IO-**", "**/kamon/**" ]
}
akka-dispatcher {
includes = ["*/user/*"]
excludes = [ "*/system/**", "*/user/IO-**", "*kamon*",
"*/kamon/*", "**/kamon/**" ]
}
trace {
includes = [ "**" ]
excludes = [ ]
}
}
}
# needed for "[error] Exception in thread "main"
# java.lang.ClassNotFoundException: local"
internal-config {
akka.actor.provider = "akka.actor.LocalActorRefProvider"
}
statsd {
hostname = "127.0.0.1"
port = 8125
# Subscription patterns used to select which metrics will be pushed
# to StatsD. Note that first, metrics collection for your desired
# entities must be activated under the kamon.metrics.filters settings.
subscriptions {
histogram = [ "**" ]
min-max-counter = [ "**" ]
gauge = [ "**" ]
counter = [ "**" ]
trace = [ "**" ]
trace-segment = [ "**" ]
akka-actor = [ "**" ]
akka-dispatcher = [ "**" ]
akka-router = [ "**" ]
system-metric = [ "**" ]
http-server = [ "**" ]
}
metric-key-generator = kamon.statsd.SimpleMetricKeyGenerator
simple-metric-key-generator {
application = "my-application"
include-hostname = true
hostname-override = none
metric-name-normalization-strategy = normalize
}
}
modules {
kamon-scala.auto-start = yes
kamon-statsd.auto-start = yes
kamon-system-metrics.auto-start = yes
}
}
Your help will be very much appreciated.
It is necessary to add the AspectJ weaver as a Java agent when starting the application: -javaagent:aspectjweaver.jar
You can add the following settings to your project's SBT configuration:
.settings(
retrieveManaged := true,
libraryDependencies += "org.aspectj" % "aspectjweaver" % aspectJWeaverV)
The AspectJ weaver JAR will then be copied to ./lib_managed/jars/org.aspectj/aspectjweaver/aspectjweaver-[aspectJWeaverV].jar in your project root.
You can then reference this JAR in your Dockerfile:
COPY ./lib_managed/jars/org.aspectj/aspectjweaver/aspectjweaver-*.jar /app-workdir/aspectjweaver.jar
WORKDIR /app-workdir
CMD ["java", "-javaagent:aspectjweaver.jar", "-jar", "app.jar"]

Fiware Cygnus: no data has been persisted in MongoDB

I am trying to use Cygnus with MongoDB, but no data has been persisted in the database.
Here is the notification received in Cygnus:
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Starting transaction (1437482681-118-0000000000)
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Received data ({ "subscriptionId" : "55a73819d0c457bb20b1d467", "originator" : "localhost", "contextResponses" : [ { "contextElement" : { "type" : "enocean", "isPattern" : "false", "id" : "enocean:myButtonA", "attributes" : [ { "name" : "ButtonValue", "type" : "", "value" : "ON", "metadatas" : [ { "name" : "TimeInstant", "type" : "ISO8601", "value" : "2015-07-20T21:29:56.509293Z" } ] } ] }, "statusCode" : { "code" : "200", "reasonPhrase" : "OK" } } ]})
15/07/21 14:48:01 INFO handlers.OrionRestHandler: Event put in the channel (id=1454120446, ttl=10)
Here is my agent configuration:
cygnusagent.sources = http-source
cygnusagent.sinks = OrionMongoSink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /home/egm_demo/usr/fiware-cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
#cygnusagent.sinks.mongo-sink.db_prefix = kura
# prefix for the MongoDB collections
#cygnusagent.sinks.mongo-sink.collection_prefix = button
# true if collection names are based on a hash, false for human-readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
# ============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Here is my rule:
{
"grouping_rules": [
{
"id": 1,
"fields": [
"button"
],
"regex": ".*",
"destination": "kura",
"fiware_service_path": "/kuraspath"
}
]
}
Any ideas of what I have missed? Thanks in advance for your help!
This configuration parameter is wrong:
cygnusagent.sinks = OrionMongoSink
According to your configuration, it must be mongo-sink (I mean, you are configuring a Mongo sink named mongo-sink when you configure lines such as cygnusagent.sinks.mongo-sink.type).
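In other words, a minimal sketch of the fix (keeping the rest of the configuration as posted) is simply:
cygnusagent.sinks = mongo-sink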
In addition, I would recommend not using the grouping rules feature; it is an advanced feature for sending the data to a collection different from the default one, and in a first stage I would play with the default behaviour. So my recommendation is to leave the path to the file in cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file, but comment out all the JSON within it :)