Hyperledger fabric channel creation error - unknown consortium name - docker-compose

I've been following the BYFN (build your first network) tutorial from Hyperledger Fabric v1.4.4 and adapting it so that it has only a single organisation with a single orderer (though my idea is to use an etcdraft orderer rather than a solo one) and, so far, two peers.
I can generate the certificates and artifacts and then start the network (using docker-compose -f docker-compose-cli.yaml up) without any errors, using the commands provided in the tutorial (I've changed some peer and orderer names from the tutorial to fit my project). However, when I try to create my first channel with the command
peer channel create -o orderer.trading.com:7050 -c trading-channel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/trading.com/orderers/orderer.trading.com/msp/tlscacerts/tlsca.trading.com-cert.pem
I get the following error:
Error: got unexpected status: BAD_REQUEST -- Unknown consortium name: TradingConsortium
I've checked the channel-artifacts genesis.block and channel.tx, and both reference the consortium.
It is also defined under the Profiles section of the configtx.yaml file, as shown below.
What could be causing this, and how would I go about debugging it?
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
# Section: Organizations
#
# - This section defines the different organizational identities which will
# be referenced later in the configuration.
#
################################################################################
Organizations:

    # SampleOrg defines an MSP using the sampleconfig. It should never be used
    # in production but may be used as a template for other definitions
    - &OrdererOrg
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: OrdererOrg

        # ID to load the MSP definition as
        ID: OrdererMSP

        # MSPDir is the filesystem path which contains the MSP configuration
        MSPDir: crypto-config/ordererOrganizations/trading.com/msp

        # Policies defines the set of policies at this level of the config tree
        # For organization policies, their canonical path is usually
        #   /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"

    - &OrgTrader
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: OrgTraderMSP

        # ID to load the MSP definition as
        ID: OrgTraderMSP

        MSPDir: crypto-config/peerOrganizations/orgtrader.trading.com/msp

        # Policies defines the set of policies at this level of the config tree
        # For organization policies, their canonical path is usually
        #   /Channel/<Application|Orderer>/<OrgName>/<PolicyName>
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrgTraderMSP.admin', 'OrgTraderMSP.peer', 'OrgTraderMSP.client')"
            Writers:
                Type: Signature
                Rule: "OR('OrgTraderMSP.admin', 'OrgTraderMSP.client')"
            Admins:
                Type: Signature
                Rule: "OR('OrgTraderMSP.admin')"
            Endorsement:
                Type: Signature
                Rule: "OR('OrgTraderMSP.peer')"

        # leave this flag set to true.
        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.orgtrader.trading.com
              Port: 7051
################################################################################
#
# SECTION: Capabilities
#
# - This section defines the capabilities of fabric network. This is a new
# concept as of v1.1.0 and should not be utilized in mixed networks with
# v1.0.x peers and orderers. Capabilities define features which must be
# present in a fabric binary for that binary to safely participate in the
# fabric network. For instance, if a new MSP type is added, newer binaries
# might recognize and validate the signatures from this type, while older
# binaries without this support would be unable to validate those
# transactions. This could lead to different versions of the fabric binaries
# having different world states. Instead, defining a capability for a channel
# informs those binaries without this capability that they must cease
# processing transactions until they have been upgraded. For v1.0.x if any
# capabilities are defined (including a map with all capabilities turned off)
# then the v1.0.x peer will deliberately crash.
#
################################################################################
Capabilities:
    # Channel capabilities apply to both the orderers and the peers and must be
    # supported by both.
    # Set the value of the capability to true to require it.
    Channel: &ChannelCapabilities
        # V2_0 capability ensures that orderers and peers behave according
        # to v2.0 channel capabilities. Orderers and peers from
        # prior releases would behave in an incompatible way, and are therefore
        # not able to participate in channels at v2.0 capability.
        # Prior to enabling V2.0 channel capabilities, ensure that all
        # orderers and peers on a channel are at v2.0.0 or later.
        V2_0: true

    # Orderer capabilities apply only to the orderers, and may be safely
    # used with prior release peers.
    # Set the value of the capability to true to require it.
    Orderer: &OrdererCapabilities
        # V2_0 orderer capability ensures that orderers behave according
        # to v2.0 orderer capabilities. Orderers from
        # prior releases would behave in an incompatible way, and are therefore
        # not able to participate in channels at v2.0 orderer capability.
        # Prior to enabling V2.0 orderer capabilities, ensure that all
        # orderers on channel are at v2.0.0 or later.
        V2_0: true

    # Application capabilities apply only to the peer network, and may be safely
    # used with prior release orderers.
    # Set the value of the capability to true to require it.
    Application: &ApplicationCapabilities
        # V2_0 application capability ensures that peers behave according
        # to v2.0 application capabilities. Peers from
        # prior releases would behave in an incompatible way, and are therefore
        # not able to participate in channels at v2.0 application capability.
        # Prior to enabling V2.0 application capabilities, ensure that all
        # peers on channel are at v2.0.0 or later.
        V2_0: true
################################################################################
#
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:

    # Policies defines the set of policies at this level of the config tree
    # For Application policies, their canonical path is
    #   /Channel/Application/<PolicyName>
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        LifecycleEndorsement:
            Type: ImplicitMeta
            Rule: "MAJORITY Endorsement"
        Endorsement:
            Type: ImplicitMeta
            Rule: "MAJORITY Endorsement"

    Capabilities:
        <<: *ApplicationCapabilities
################################################################################
#
# SECTION: Orderer
#
# - This section defines the values to encode into a config transaction or
# genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    OrdererType: etcdraft

    Addresses:
        - orderer.trading.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 99 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:

    # Policies defines the set of policies at this level of the config tree
    # For Orderer policies, their canonical path is
    #   /Channel/Orderer/<PolicyName>
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        # BlockValidation specifies what signatures must be included in the block
        # from the orderer for the peer to validate it.
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"
################################################################################
#
# CHANNEL
#
# This section defines the values to encode into a config transaction or
# genesis block for channel related parameters.
#
################################################################################
Channel: &ChannelDefaults
    # Policies defines the set of policies at this level of the config tree
    # For Channel policies, their canonical path is
    #   /Channel/<PolicyName>
    Policies:
        # Who may invoke the 'Deliver' API
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        # Who may invoke the 'Broadcast' API
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        # By default, who may modify elements at this config level
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"

    # Capabilities describes the channel level capabilities, see the
    # dedicated Capabilities section elsewhere in this file for a full
    # description
    Capabilities:
        <<: *ChannelCapabilities
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:

    ChemartOrgsChannel:
        Consortium: TradingConsortium
        <<: *ChannelDefaults
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *OrgTrader
            Capabilities:
                <<: *ApplicationCapabilities

    MultiNodeEtcdRaft:
        <<: *ChannelDefaults
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            OrdererType: etcdraft
            EtcdRaft:
                Consenters:
                    - Host: orderer.trading.com
                      Port: 7050
                      ClientTLSCert: crypto-config/ordererOrganizations/trading.com/orderers/orderer.trading.com/tls/server.crt
                      ServerTLSCert: crypto-config/ordererOrganizations/trading.com/orderers/orderer.trading.com/tls/server.crt
            Addresses:
                - orderer.trading.com:7050
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - <<: *OrdererOrg
        Consortiums:
            TradingConsortium:
                Organizations:
                    - *OrgTrader
This is my crypto-config.yaml file:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: trading.com
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer

# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # OrgTrader
  # ---------------------------------------------------------------------------
  - Name: OrgTrader
    Domain: orgtrader.trading.com
    EnableNodeOUs: true
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration. Most users will want to use Template, below
    #
    # Specs is an array of Spec entries. Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN. By default, this is the template:
    #
    #                   "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.orgtrader.trading.com"
    #     CommonName: foo27.org5.trading.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive. You may define both
    # sections and the aggregate nodes will be created for you. Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
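For what it's worth, here is a minimal debugging sketch, assuming the artifacts live in ./channel-artifacts and that FABRIC_CFG_PATH points at the directory holding this configtx.yaml (the system-channel ID below is only an example; the profile and channel names are taken from the files above). A common cause of this particular error is that the orderer is still running from a genesis block generated before the consortium was added: the orderer validates channel creation against the consortiums baked into the genesis block it booted from, not against configtx.yaml on disk.
# See which consortium is actually encoded in the artifacts
configtxgen -inspectBlock ./channel-artifacts/genesis.block | grep -i consortium
configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx | grep -i consortium
# Regenerate both artifacts from the same configtx.yaml ...
export FABRIC_CFG_PATH=$PWD
configtxgen -profile MultiNodeEtcdRaft -channelID trading-sys-channel -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile ChemartOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID trading-channel
# ... then recreate the containers so the orderer boots from the new genesis block
docker-compose -f docker-compose-cli.yaml down --volumes
docker-compose -f docker-compose-cli.yaml up -d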

Related

Trying to understand the behavior of the JMX exporter blacklist with an example from Confluent for monitoring Kafka components

I am trying to understand how the blacklist mechanism of the JMX exporter works.
I took an example from here: https://github.com/confluentinc/jmx-monitoring-stacks/blob/7.2-post/shared-assets/jmx-exporter/confluent_ksql.yml
At the top of it there is the following blacklist:
blacklistObjectNames:
- "io.confluent.ksql.metrics:name=*"
- kafka.streams:type=kafka-metrics-count
# This will ignore the admin client metrics from KSQL server and will blacklist certain metrics
# that do not make sense for ingestion.
- "kafka.admin.client:*"
- "kafka.consumer:type=*,id=*"
- "kafka.consumer:type=*,client-id=*"
- "kafka.consumer:type=*,client-id=*,node-id=*"
- "kafka.producer:type=*,id=*"
- "kafka.producer:type=*,client-id=*"
- "kafka.producer:type=*,client-id=*,node-id=*"
- "kafka.streams:type=stream-processor-node-metrics,thread-id=*,task-id=*,processor-node-id=*"
- "kafka.*:type=kafka-metrics-count,*"
- "io.confluent.ksql.metrics:type=_confluent-ksql-rest-app-command-runner,*"
Yet among the rules we have things like:
# "kafka.consumer:type=app-info,client-id=*"
# "kafka.producer:type=app-info,client-id=*"
- pattern: "kafka.(.+)<type=app-info, client-id=(.+)><>(.+): (.+)"
  value: 1
  name: kafka_$1_app_info
  labels:
    client_type: $1
    client_id: $2
    $3: $4
  type: UNTYPED
Isn't that rule supposed to not match anything, given the blacklist entry
- "kafka.producer:type=*,client-id=*"

Telegraf connection to Mosquitto using TLS

In my system (with a Raspberry Pi) I have some sensors that publish data to Mosquitto; I'm using Telegraf to transfer the data to an InfluxDB database and Grafana to show the data.
During testing without a TLS connection (in Mosquitto) everything worked correctly, but when I activated TLS I started to have a problem with Telegraf.
The sensors send their data to the broker using client.key, client.crt and ca.crt.
In the broker I can see the data from the sensors, so I think the problem is not there.
In Telegraf (which I suppose acts as a client) I tried to configure the TLS connection.
Looking at the telegraf.service status, it is active and running. Looking at the journal I don't see connection errors, but I can't see any data from the broker.
In telegraf.conf I set the certificates as you can see below. Instead of .pem files I used the same files that I use for the sensors and the other clients connected to the system: the extension is different and I don't know if the problem is there.
Here is the configuration of Telegraf (mqtt_consumer):
# # Read metrics from MQTT topic(s)
[[inputs.mqtt_consumer]]
# ## Broker URLs for the MQTT server or cluster. To connect to multiple
# ## clusters or standalone servers, use a seperate plugin instance.
# ## example: servers = ["tcp://localhost:1883"]
# ## servers = ["ssl://localhost:1883"]
# ## servers = ["ws://localhost:1883"]
servers = ["tcp://192.168.1.58:8883"]
#
# ## Topics that will be subscribed to.
topics = [
"sensors/#"
]
#
# ## The message topic will be stored in a tag specified by this value. If set
# ## to the empty string no topic tag will be created.
# # topic_tag = "topic"
#
# ## QoS policy for messages
# ## 0 = at most once
# ## 1 = at least once
# ## 2 = exactly once
# ##
# ## When using a QoS of 1 or 2, you should enable persistent_session to allow
# ## resuming unacknowledged messages.
# # qos = 0
#
# ## Connection timeout for initial connection in seconds
# # connection_timeout = "30s"
#
# ## Maximum messages to read from the broker that have not been written by an
# ## output. For best throughput set based on the number of metrics within
# ## each message and the size of the output's metric_batch_size.
# ##
# ## For example, if each message from the queue contains 10 metrics and the
# ## output metric_batch_size is 1000, setting this to 100 will ensure that a
# ## full batch is collected and the write is triggered immediately without
# ## waiting until the next flush_interval.
# # max_undelivered_messages = 1000
#
# ## Persistent session disables clearing of the client session on connection.
# ## In order for this option to work you must also set client_id to identify
# ## the client. To receive messages that arrived while the client is offline,
# ## also set the qos option to 1 or 2 and don't forget to also set the QoS when
# ## publishing.
# # persistent_session = false
#
# ## If unset, a random client ID will be generated.
client_id = ""
#
# ## Username and password to connect MQTT server.
#username = ""
#password = ""
#
# ## Optional TLS Config
tls_ca = "/etc/telegraf/ca.crt"
tls_cert = "/etc/telegraf/client.crt"
tls_key = "/etc/telegraf/client.key"
# ## Use TLS but skip chain & host verification
# insecure_skip_verify = false
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
How can I check the connection to the broker from Telegraf? Is the configuration correct, or should I use only .pem files?
Your MQTT URL starts with tcp:// but it should start with ssl:// for an MQTT over SSL connection.
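For reference, a minimal sketch of the relevant part of telegraf.conf with that change (the address, topic and certificate paths are taken from your config):
[[inputs.mqtt_consumer]]
  ## ssl:// tells the MQTT client to perform a TLS handshake using the
  ## tls_ca / tls_cert / tls_key settings below
  servers = ["ssl://192.168.1.58:8883"]
  topics = ["sensors/#"]
  tls_ca = "/etc/telegraf/ca.crt"
  tls_cert = "/etc/telegraf/client.crt"
  tls_key = "/etc/telegraf/client.key"
  data_format = "influx"
Regarding the certificate files: Telegraf (Go's TLS stack) expects PEM-encoded files regardless of whether the extension is .pem, .crt or .key, so reusing the sensors' files should be fine as long as they are PEM. To watch the connection attempt, you can run Telegraf in the foreground with debug logging, e.g. telegraf --config /etc/telegraf/telegraf.conf --debug, and look for the mqtt_consumer connect/disconnect messages.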

How to remove Filebeat metadata

I am using Filebeat to forward incoming logs from HAProxy to a Kafka topic, but Filebeat adds a lot of metadata to each Kafka message, which consumes more memory and which I want to avoid.
Example of a message sent to Kafka from Filebeat, where it adds metadata, host and a lot of other things:
{
  "@timestamp": "2017-03-27T08:14:09.508Z",
  "beat": {
    "hostname": "stage-kube03",
    "name": "stage-kube03",
    "version": "5.2.1"
  },
  "input_type": "log",
  "message": {
    "message": {
      "activityType": null
    },
    "offset": 3783008,
    "source": "/var/log/audit.log",
    "type": "log"
  }
}
How do I control/reduce the additional metadata Filebeat adds to the Kafka message along with the log line payload? Below is my filebeat.yml file:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/haproxy.log
    #exclude_files: [".gz$"]
  #fields:
  #  codec: plain
  #  token: USER_TOKEN
  #  type: haproxy_log
  #fields_under_root: true
    #- c:\programdata\elasticsearch\logs\*

  processors:
    - drop_event:
    #    fields: ["prospector","event","dataset"]

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
  exclude_lines: ['^source']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
#index.codec: best_compression
#_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Enabled ilm (beta) to use index lifecycle management instead daily indices.
#ilm.enabled: false
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
output.kafka:
  hosts: ["10.12.0.90:9092"]
  topic: "data-meter-topic"
  codec.json:
    pretty: true
You need to remove the add_host_metadata and add_cloud_metadata processors you're adding explicitly, and drop the remaining fields with the drop_fields processor:
I've tested your configuration and changed the following:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

output.console:
  pretty: true

processors:
  - drop_fields:
      fields: ["agent", "log", "input", "host", "ecs" ]
  #- add_host_metadata: ~
  #- add_cloud_metadata: ~
The result:
{
  "@timestamp": "2020-11-27T15:55:17.098Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.10.0"
  },
  "message": "2020-11-27 00:29:58 status installed libc-bin:amd64 2.28-10"
}
According to the documentation, you can't remove some of the metadata, namely the @timestamp and type fields (the same goes for the @metadata field):
The drop_fields processor specifies which fields to drop if a certain condition is fulfilled. The condition is optional. If it's missing, the specified fields are always dropped. The @timestamp and type fields cannot be dropped, even if they show up in the drop_fields list.
EDIT:
Since you appear to be running filebeat 5.2.1, I've tried the following configuration with even better success than filebeat 7.x:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log

output.console:
  pretty: true

processors:
  - drop_fields:
      fields: ["log_type", "input_type", "offset", "beat", "source"]
Result:
{
  "@timestamp": "2020-11-30T09:51:40.404Z",
  "message": "2020-11-27 00:29:58 status half-configured vim:amd64 2:8.1.0875-5",
  "type": "log"
}
EDIT2:
Conversely, because you've posted a filebeat 6.8.0 version output, I've also tested with this very same version:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

output.console:
  pretty: true

processors:
  - drop_fields:
      fields: ["beat", "source", "prospector", "offset", "host", "log", "input", "event", "fileset" ]
  #- add_host_metadata: ~
  #- add_cloud_metadata: ~
Output:
{
  "@timestamp": "2020-11-30T10:08:26.176Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.8.0"
  },
  "message": "2020-11-27 00:29:58 status unpacked vim:amd64 2:8.1.0875-5"
}

Not able to send Filebeat output to MongoDB

I have added output.mongodb in the filebeat.yml file, but it shows the error "Exiting: error initializing publisher: output type mongodb undefined".
Does anyone here have a different, fail-safe approach for this requirement, where I want to redirect the Filebeat output directly to a MongoDB database?
filebeat.yml file:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/test.log
#- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  reload.period: 5s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 2
#index.codec: best_compression
#_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.27.3.235:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
#============================= Elastic Cloud ==================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
# # Array of hosts to connect to.
# # hosts: ["10.27.3.235:9200"]
# hosts: ["http://10.27.3.235:9200"]
# index: "filebeatSYS-%{[agent.version]}-%{+yyyy.MM.dd}"
# setup.template:
# name: 'api-access'
# pattern: 'api-access-*'
# enabled: false
#
# # Optional protocol and basic auth credentials.
# #protocol: "https"
# #username: "elastic"
# #password: "changeme"
# #index: "filebeat-%{+yyyy.MM.dd}"
#-------------------------- MongoDB output ------------------------------
output.mongodb:
  enabled: true
  # URL format, according to mgo.v2 doc : [mongodb://][user:pass#]host1[:port1][,host2[:port2],...][/database][?options]
  # More info : https://godoc.org/gopkg.in/mgo.v2#Dial
  hosts: ["mongodb://<my-db-url-inserted-here>:27017"]
  # The mongodb database to push to
  db: "<my-db-here>"
  # The database collection to push to
  # Could be configured like key/keys of the Redis output : https://www.elastic.co/guide/en/beats/filebeat/current/redis-output.html#_key_2
  collection: "filebeat"
  # https://www.elastic.co/guide/en/beats/filebeat/current/redis-output.html#_loadbalance
  loadbalance: true
  # https://www.elastic.co/guide/en/beats/filebeat/current/redis-output.html#_timeout_4
  timeout: 5s
  # https://www.elastic.co/guide/en/beats/filebeat/current/redis-output.html#_max_retries_4
  max_retries: 5
  # https://www.elastic.co/guide/en/beats/filebeat/current/redis-output.html#_bulk_max_size_4
  bulk_max_size: 2048
#----------------------------- Logstash output --------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
#================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
You get the error
Exiting: error initializing publisher: output type mongodb undefined
because Filebeat does not support this kind of output. Take a look at the Output Configuration doc of Filebeat. There is no output for MongoDB mentioned. Filebeat supports only the following outputs:
Elasticsearch
Logstash
Kafka
Redis
File
Console
Elastic Cloud
By defining
output.mongodb:
Filebeat crashes because 'mongodb' is an unknown/undefined configuration-field in the output-element.
Does anyone here has any different fail safe approach towards this requirement where I want to redirect filebeat output directly to mongo database?
Logstash has a dedicated MongoDB output plugin, so you could send the data from Filebeat to Logstash, which then sends it on to your MongoDB (this approach is not direct, but it is a valid workaround); a sketch of that chain is shown below.
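A minimal sketch, assuming Logstash is reachable on localhost:5044 and the plugin has been installed with bin/logstash-plugin install logstash-output-mongodb; the database and collection names are placeholders taken from your config:
# filebeat.yml - ship to Logstash instead of the unsupported output.mongodb
output.logstash:
  hosts: ["localhost:5044"]
# Logstash pipeline, e.g. /etc/logstash/conf.d/filebeat-to-mongo.conf
input {
  beats {
    port => 5044
  }
}
output {
  mongodb {
    uri => "mongodb://<my-db-url-inserted-here>:27017"
    database => "<my-db-here>"
    collection => "filebeat"
  }
}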

IAM nested stack fails to complete due to undefined resource policies

I have created a nested IAM stack, which consists of 3 templates:
- iam-policies
- iam-roles
- iam-users/groups
The master stack template looks like this:
Resources:
  Policies:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_policies.yaml
  UserGroups:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_user_groups.yaml
  Roles:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_roles.yaml
The policy ARNs are exported via the Outputs section, like this:
Outputs:
  StackName:
    Description: Name of the Stack
    Value: !Ref AWS::StackName
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy
In the Roles template the policy ARNs are imported like this:
CodeBuildRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub ${EnvironmentName}-CodeBuildRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Action:
            - 'sts:AssumeRole'
          Effect: Allow
          Principal:
            Service:
              - codebuild.amazonaws.com
    Path: /
    ManagedPolicyArns:
      - !GetAtt
        - Policies
        - Outputs.CodeBuildServiceRolePolicy
But when I try to create the stack, it fails, saying the Roles stack cannot be created because:
Template error: instance of Fn::GetAtt references undefined resource Policies
How can I force the creation of the policies first, so the second and third templates can use them to create the roles and users/groups? Or is the issue elsewhere?
Thanks, A
Your question,
How can I force the creation of the policies first so the second and
third template can use the policies to create roles and user/ groups?
Or is the issue elsewhere?
You can use "DependsOn" attribute. It automatically determines which resources in a template can be parallelized and which have dependencies that require other operations to finish first. You can use DependsOn to explicitly specify dependencies, which overrides the default parallelism and directs CloudFormation to operate on those resources in a specified order.
In your case second and third template DependsOn Policies
More details : DependsOn
The reason you aren't able to access the outputs is that you haven't exported the outputs for other stacks.
Update your Outputs section with the data you want to export (Ref - Outputs).
Then use the Fn::ImportValue function in the dependent stacks to consume the required data (Ref - ImportValue). A minimal sketch of both suggestions combined is shown below.
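This sketch is illustrative rather than a drop-in copy of your templates; the export name is a placeholder and the trimmed role properties are marked with a comment:
# iam_policies.yaml - export the policy ARN so other stacks can import it
Outputs:
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy
    Export:
      Name: iam-CodeBuildServiceRolePolicy
# iam_roles.yaml - import the ARN instead of Fn::GetAtt on a resource named
# "Policies", which does not exist inside this child template
CodeBuildRole:
  Type: AWS::IAM::Role
  Properties:
    # RoleName, AssumeRolePolicyDocument and Path as in your template
    ManagedPolicyArns:
      - !ImportValue iam-CodeBuildServiceRolePolicy
# master stack - make sure the Policies stack finishes first
Roles:
  Type: AWS::CloudFormation::Stack
  DependsOn: Policies
  Properties:
    TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_roles.yaml
Alternatively, the master stack can pass !GetAtt Policies.Outputs.CodeBuildServiceRolePolicy into the Roles stack as a Parameter; either way, the GetAtt on Policies has to live in the master template, not in the child template.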
Hope this helps.