Filebeat add tags per file directory

I have log files of the same kind, matched by the same grok patterns, but they live in different folders and I want to tag them according to the folder. How would I go about it?
Would something like this work:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/IBM/WebSphere/server_one/SystemOut.log
  tags: ["server_one"]
  paths:
    - /opt/IBM/WebSphere/server_two/SystemOut.log
  tags: ["server_two"]

You can add custom fields to the events and then use conditional filtering in Logstash.
To do this, define multiple prospectors in the Filebeat configuration and group the files that need the same processing under the same prospector, so that the same custom fields are added to them.
More reference: https://www.elastic.co/guide/en/beats/filebeat/1.1/multiple-prospectors.html
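For illustration, a minimal sketch of that approach, using the paths from the question and an invented custom field named server (any field name works):
filebeat.inputs:
- type: log
  paths:
    - /opt/IBM/WebSphere/server_one/SystemOut.log
  fields:
    server: server_one   # hypothetical custom field; in Logstash you can branch on [fields][server]
- type: log
  paths:
    - /opt/IBM/WebSphere/server_two/SystemOut.log
  fields:
    server: server_two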

Defining multiple log inputs, each with its own tags, worked.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/IBM/WebSphere/server_one/SystemOut.log
  tags: ["server_one"]
- type: log
  enabled: true
  paths:
    - /opt/IBM/WebSphere/server_two/SystemOut.log
  tags: ["server_two"]

Related

Multi-line Filebeat templates don’t work with filebeat.inputs - type: filestream

I ran into a multiline processing problem in Filebeat when the filebeat.inputs: section specifies type: filestream: the lines of the file are not combined according to multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'. In the output I see that continuation lines are not appended to the previous message; instead a new single-line message is created for each individual line of the log file.
If I specify type: log in filebeat.inputs:, everything works correctly and a multiline message is created in accordance with multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'.
What is specified incorrectly in my config?
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - C:\logs\GT\TTL\*\*.log
  fields_under_root: true
  fields:
    instance: xml
    system: ttl
    subsystem: GT
    account: abc
  multiline.type: pattern
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
To get it working you should have something like this:
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - C:\logs\GT\TTL\*\*.log
  fields_under_root: true
  fields:
    instance: xml
    system: ttl
    subsystem: GT
    account: abc
  parsers:
    - multiline:
        type: pattern
        pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
        negate: true
        match: after
There are two reasons why this works:
The general documentation on multiline handling, at the time of this writing, has not been updated to reflect the changes made for the filestream input type. The information on setting up multiline for filestream is under "parsers" on this page: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html
The documentation I just mentioned is also wrong (at least at the time of this writing). The example configures the multiline parser without indenting its children, which will not work because the parser will not pick up any of the values underneath it. This issue is also being discussed here: https://discuss.elastic.co/t/filebeat-filestream-input-parsers-multiline-fails/290543/13 and I expect it will be fixed sometime in the future.
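To make that second point concrete, the difference is purely one of YAML nesting (an excerpt only, not a full config):
# As shown in the documentation (does not work):
parsers:
  - multiline:
    type: pattern        # same level as "multiline", so it becomes a sibling key and the parser is left without settings
# Correctly nested (works):
parsers:
  - multiline:
      type: pattern
      pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
      negate: true
      match: after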

Custom field name not showing in Filebeat

Below is how I'm trying to add a custom field in my Filebeat 7.2.0 configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  processors:
    - add_fields:
        fields:
          application: oasis
and with this, I'm expecting a new field called application whose value will be 'oasis'.
But I don't get any.
I also tried
fields:
  application: oasis/'oasis'
Help me with this.
If you want to add a custom field to every log event, you should put the fields configuration at the same level as type. Try the following:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  fields.application: oasis
There are two ways to add custom fields in Filebeat: using the fields option and using the add_fields processor.
To add fields using the fields option, your configuration needs to be something like the one below.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  fields:
    custom_field: 'custom field value'
  fields_under_root: true
To add fields using the add_fields processor, you can try the following configuration.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  processors:
    - add_fields:
        target: ''
        fields:
          custom_field: 'custom field value'
Both configurations will create a field named custom_field with the value custom field value in the root of your document.
The fields option is set per input, while the add_fields processor can be applied to all the data exported by the Filebeat instance.
Just remember to pay attention to the indentation of your configuration; if it is wrong, Filebeat won't work correctly or may not even start.
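To make the scope difference concrete: the fields option always sits inside a specific input, while a processors block defined at the top level of filebeat.yml (outside any input) applies to every event the instance ships. A minimal sketch; the environment field is just an invented example:
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  fields:
    custom_field: 'custom field value'   # only on events from this input
  fields_under_root: true

processors:                              # top level: applied to all events from all inputs
  - add_fields:
      target: ''
      fields:
        environment: 'production'        # hypothetical example field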

Filebeat collects logs and pushes them to Kafka error

1. My version information
jdk-8u191-linux-x64.tar.gz
kibana-6.5.0-linux-x86_64.tar.gz
elasticsearch-6.5.0.tar.gz
logstash-6.5.0.tar.gz
filebeat-6.5.0-linux-x86_64.tar.gz
kafka_2.11-2.1.0.tgz
zookeeper-3.4.12.tar.gz
2. Problem description
I have a log file in XML format. I use Filebeat to collect this file and push it to Kafka, but the content comes out garbled.
Here's my filebeat configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/reporttg/ChannelServer.log
  include_lines: ['\<\bProcID.*\<\/ProcID\b\>']
### Filebeat modules
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
### Elasticsearch template setting
setup.template.settings:
  index.number_of_shards: 3
### Kibana
setup.kibana:
### Kafka
output.kafka:
  enabled: true
  hosts: ["IP:9092", "IP:9092", "IP:9092"]
  topic: houry
### Processors
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
My log content
<OrigDomain>ECIP</OrigDomain>
<HomeDomain>UCRM</HomeDomain>
<BIPCode>BIP2A011</BIPCode>
<BIPVer>0100</BIPVer>
<ActivityCode>T2000111</ActivityCode>
<ActionCode>1</ActionCode>
<ActionRelation>0</ActionRelation>
<Routing>
<RouteType>01</RouteType>
<RouteValue>13033935743</RouteValue>
</Routing>
<ProcID>PROC201901231142020023206514</ProcID>
<TransIDO>SSP201901231142020023206513</TransIDO>
<TransIDH>2019012311420257864666</TransIDH>
<ProcessTime>20190123114202</ProcessTime>
<Response>
<RspType>0</RspType>
<RspCode>0000</RspCode>
<RspDesc>success</RspDesc>
</Response>
Testing the regular expression.
3. Start Filebeat and view the Kafka content.
4. I tested that it is normal for Filebeat to collect the content and push it to Logstash.
How should this problem be solved?
If you don't want to include the XML tags, I would recommend using a regex group,
something like:
'<ProcID>(PROC[0-9]+)<\/ProcID>'
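For context, a hedged sketch of where such a pattern sits in the Filebeat config. Note that include_lines only selects which lines are shipped; it does not extract the capture group itself, so the grouping would be applied downstream, for example in a Logstash grok filter:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/reporttg/ChannelServer.log
  # Ship only the ProcID lines; extracting the captured value happens downstream.
  include_lines: ['<ProcID>PROC[0-9]+</ProcID>']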

Serverless CloudFormation template error instance of Fn::GetAtt references undefined resource

I'm trying to set up a new repo and I keep getting the error
The CloudFormation template is invalid: Template error: instance of Fn::GetAtt
references undefined resource uatLambdaRole
in my uat stage; however, the dev stage with the exact same format works fine.
I have a resource file for each of these environments.
dev
devLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: dev-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
uat
uatLambdaRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: uat-lambda-role # The name of the role to be created in aws
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AWSLambdaFullAccess
      # Documentation states the below policy is included automatically when you add VPC configuration but it is currently bugged.
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
In my serverless.yml my role is defined as
role: ${self:custom.stage}LambdaRole
and the stage is set as
custom:
  stage: ${opt:stage, self:provider.stage}
Running serverless deploy --stage dev --verbose succeeds, but running serverless deploy --stage uat --verbose fails with the error. Can anyone see what I'm doing wrong? The uat resource was copied directly from the dev one with only the stage name change.
Here is a screenshot of the directory the resource files are in
I had the same issue; eventually I discovered that my SQS queue name wasn't the same in all 3 places. The three places where the SQS name has to match are shown below:
...
functions:
  mylambda:
    handler: sqsHandler.handler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - mySqsName # <= Make sure that these match
              - Arn
resources:
  Resources:
    mySqsName: # <= Make sure that these match
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "mySqsName" # <= Make sure that these match
        FifoQueue: true
Ended up here with the same error message. My issue ended up being that I got the "resources" and "Resources" keys in serverless.yml backwards.
Correct:
resources: # <-- lowercase "r" first
  Resources: # <-- uppercase "R" second
    LambdaRole:
      Type: AWS::IAM::Role
      Properties:
        ...
🤦‍♂️
I missed copying a key part of my config here, the actual reference to my Resources file:
resources:
  Resources: ${file(./serverless-resources/${self:provider.stage}-resources.yml)}
The issue was that I had copied this from a guide and had accidentally used self:provider.stage rather than self:custom.stage. When I changed this, it deployed.
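In other words, a minimal sketch of the corrected reference (same file layout as above):
resources:
  Resources: ${file(./serverless-resources/${self:custom.stage}-resources.yml)}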
Indentation Issue
In general, when YAML isn't working I start by checking the indentation.
I hit this issue too; in my case one of my resources was indented too much, which put the resource in the wrong node/object. Resources should be indented two levels in, as they belong in the Resources sub-node of the resources node.
For more info on this see the YAML docs.

IAM nested stack fails to complete due to undefined resource policies

I have created a nested IAM stack, which consists of 3 templates:
- iam-policies
- iam-roles
- iam user/groups
The master stack template looks like this:
Resources:
  Policies:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_policies.yaml
  UserGroups:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_user_groups.yaml
  Roles:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_roles.yaml
The policy ARNs are exported via the Outputs section like this:
Outputs:
  StackName:
    Description: Name of the Stack
    Value: !Ref AWS::StackName
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy
In the Roles template, the policy ARNs are imported like this:
CodeBuildRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub ${EnvironmentName}-CodeBuildRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Action:
            - 'sts:AssumeRole'
          Effect: Allow
          Principal:
            Service:
              - codebuild.amazonaws.com
    Path: /
    ManagedPolicyArns:
      - !GetAtt
        - Policies
        - Outputs.CodeBuildServiceRolePolicy
But when I try to create the stack, it fails, saying the Roles stack cannot be created because
Template error: instance of Fn::GetAtt references undefined resource Policies
How can I force the creation of the policies first so that the second and third templates can use the policies to create the roles and users/groups? Or is the issue elsewhere?
merci A
Your question:
How can I force the creation of the policies first so the second and
third template can use the policies to create roles and user/ groups?
Or is the issue elsewhere?
You can use the DependsOn attribute. CloudFormation automatically determines which resources in a template can be parallelized and which have dependencies that require other operations to finish first. With DependsOn you can explicitly specify dependencies, which overrides the default parallelism and directs CloudFormation to operate on those resources in a specified order.
In your case, the second and third nested stacks should depend on Policies.
More details: DependsOn
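As a sketch, applied to the master stack template from the question; only the DependsOn lines are new:
Resources:
  Policies:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_policies.yaml
  UserGroups:
    Type: AWS::CloudFormation::Stack
    DependsOn: Policies          # wait for the Policies stack before creating this one
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_user_groups.yaml
  Roles:
    Type: AWS::CloudFormation::Stack
    DependsOn: Policies          # same here
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_roles.yaml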
The reason you aren't able to access the outputs is that you haven't exported them for the other stacks.
Update your Outputs with the data you want to export. Ref - Outputs for the same.
Then use the Fn::ImportValue function in the dependent stacks to consume the required data. Ref - ImportValue for the same.
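A hedged sketch of that approach, assuming an export named CodeBuildServiceRolePolicyArn (any name that is unique in the account and region works):
# In iam_policies.yaml
Outputs:
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy
    Export:
      Name: CodeBuildServiceRolePolicyArn

# In iam_roles.yaml, instead of the Fn::GetAtt
    ManagedPolicyArns:
      - !ImportValue CodeBuildServiceRolePolicyArn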
Hope this helps.