Custom field name not showing in Filebeat - elastic-stack

Below is how I'm trying to add a custom field name in my Filebeat 7.2.0:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  processors:
    - add_fields:
        fields:
          application: oasis
With this, I'm expecting a new field called application whose entries will all be 'oasis', but I don't get any.
I also tried
  fields:
    application: oasis
and with the value quoted as 'oasis'.
Help me with this.

If you want to add a custom field to every log event, you should put the fields configuration at the same level as type. Try the following:
- type: log
  enabled: true
  paths:
    - D:\Oasis\Logs\Admin_Log\*
    - D:\Oasis\Logs\ERA_Log\*
    - D:\OasisServices\Logs\*
  fields.application: oasis

There are two ways to add custom fields in Filebeat: using the fields option and using the add_fields processor.
To add fields using the fields option, your configuration needs to be something like the one below.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  fields:
    custom_field: 'custom field value'
  fields_under_root: true
To add fields using the add_fields processor, you can try the following configuration.
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
  processors:
    - add_fields:
        target: ''
        fields:
          custom_field: 'custom field value'
Both configurations will create a field named custom_field with the value custom field value in the root of your document.
The fields option is set per input, while the add_fields processor, when defined at the top level of the configuration, applies to all the data exported by the Filebeat instance.
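For example, to apply the processor to every event regardless of which input produced it, the processors section can be moved to the top level of filebeat.yml. A minimal sketch of that variant (the path and field value are illustrative):
filebeat.inputs:
- type: log
  paths:
    - 'D:/path/to/your/files/*'
# Top-level processors apply to the events of every input in this Filebeat instance.
processors:
  - add_fields:
      target: ''
      fields:
        custom_field: 'custom field value'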
Just remember to pay attention to the indentation of your configuration; if it is wrong, Filebeat won't work correctly or may not even start.

Related

Multi-line Filebeat templates don’t work with filebeat.inputs - type: filestream

I ran into a multiline processing problem in Filebeat when filebeat.inputs specifies type: filestream: the log lines are not grouped according to multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'. In the output I see that continuation lines are not appended to the previous message; instead, new single-line messages are created from the individual lines of the log file.
If I specify type: log in filebeat.inputs, everything works correctly and a multiline message is created in accordance with multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'.
What is specified incorrectly in my config?
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - C:\logs\GT\TTL\*\*.log
  fields_under_root: true
  fields:
    instance: xml
    system: ttl
    subsystem: GT
    account: abc
  multiline.type: pattern
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
To get it working you should have something like this:
filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - C:\logs\GT\TTL\*\*.log
  fields_under_root: true
  fields:
    instance: xml
    system: ttl
    subsystem: GT
    account: abc
  parsers:
    - multiline:
        type: pattern
        pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
        negate: true
        match: after
There are two reasons why this works:
The general documentation on multiline handling, at the time of this writing, has not been updated to reflect the changes made for the filestream input type. You can find the information about setting up multiline for filestream under parsers on this page: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html
The documentation mentioned above is also wrong (at least at the time of this writing). The example shows the multiline parser configured without indenting its children, which will not work because the parser will not initialize any of the values underneath it. This issue is also being discussed here: https://discuss.elastic.co/t/filebeat-filestream-input-parsers-multiline-fails/290543/13 and I expect it will be fixed at some point.

Filebeat add tags per file directory

I have the same kind of log files, matched with the same grok patterns, but they are in different folders and I want to tag them accordingly. How would I go about it?
Would something like this work:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/IBM/WebSphere/server_one/SystemOut.log
  tags: ["server_one"]
  paths:
    - /opt/IBM/WebSphere/server_two/SystemOut.log
  tags: ["server_two"]
You can add custom fields to the events and then use conditional filtering in Logstash.
In order to do this, define multiple prospectors in the Filebeat configuration and group the files that need the same processing under the same prospector, so that the same custom fields can be added to them.
More reference: https://www.elastic.co/guide/en/beats/filebeat/1.1/multiple-prospectors.html
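A minimal sketch of that fields-based approach, with one input per directory (the field name server and its values are illustrative); Logstash can then branch on that field with a conditional:
filebeat.inputs:
# One input per directory, each adding its own custom field at the root of the event.
- type: log
  paths:
    - /opt/IBM/WebSphere/server_one/SystemOut.log
  fields:
    server: server_one
  fields_under_root: true
- type: log
  paths:
    - /opt/IBM/WebSphere/server_two/SystemOut.log
  fields:
    server: server_two
  fields_under_root: true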
Having multiple type: log input entries with different tags worked:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/IBM/WebSphere/server_one/SystemOut.log
  tags: ["server_one"]
- type: log
  enabled: true
  paths:
    - /opt/IBM/WebSphere/server_two/SystemOut.log
  tags: ["server_two"]

Do AWS SAM templates support LifecycleConfiguration settings?

Does anyone know if SAM templates support LifecycleConfiguration settings? I see that within standard CloudFormation definitions you can define the lifecycle of objects like this:
BucketName: "Mys3Bucket"
LifecycleConfiguration:
  Rules:
    - AbortIncompleteMultipartUpload:
        DaysAfterInitiation: 7
      Status: Enabled
    - ExpirationInDays: 14
      ...
But this seems to fail when used in a SAM template. Am I doing something wrong or is this not part of the serverless application model definition?
It works for me using the SAM CLI 1.15.0, although documentation seems sparse (hence my landing on this question while trying to figure it out).
The SAM template snippet below successfully creates a bucket and sets an appropriate lifecycle rule.
Resources:
  Bucket1:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Sub "${BucketName}"
      AccessControl: Private
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          - ExpirationInDays: 6
            Status: Enabled
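For reference, a sketch that combines the two rule types from the original question into the same working structure (the bucket name and day counts are illustrative):
Resources:
  Bucket1:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: my-example-bucket
      LifecycleConfiguration:
        Rules:
          # Abort incomplete multipart uploads after 7 days
          - AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 7
            Status: Enabled
          # Expire objects after 14 days
          - ExpirationInDays: 14
            Status: Enabled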

AWS CloudFormation function call fails: Fn::ImportValue must not depend on any resources, imported values, or Fn::GetAZs

I have a CloudFormation template (mainVPC) that creates a few subnets in a VPC and exports the subnets with the names "PrivateSubnetA", "PrivateSubnetB", ...
I have a different CloudFormation template that creates a DBSubnetGroup. I want to use "PrivateSubnetA" and "PrivateSubnetB" as default values if the user does not provide data. CloudFormation does not support imported values in parameters, so I put in a placeholder default value (XXXX) and added a Conditions section to see whether the user has provided some input:
Conditions:
  userNotProvidedSubnetA: !Equals
    - !Ref PrivateSubnetA
    - XXXX
  userNotProvidedSubnetB: !Equals
    - !Ref PrivateSubnetB
    - XXXX
This helps me figure out whether the user has provided data. Now I want to use the default values if the user has not provided any, and the user-provided values otherwise. Below is the code for that:
DBSubnetGroup:
  Type: 'AWS::RDS::DBSubnetGroup'
  Properties:
    DBSubnetGroupDescription: RDS Aurora Cluster Subnet Group
    SubnetIds:
      - !If
        - userNotProvidedSubnetA
        - Fn::ImportValue:
            !Sub '${fmMainVpc}-PrivateSubnetA'
        - !Ref PrivateSubnetA
      - !If
        - userNotProvidedSubnetB
        - Fn::ImportValue:
            !Sub '${fmMainVpc}-PrivateSubnetB'
        - !Ref PrivateSubnetB
This fails with the error "Template error: the attribute in Fn::ImportValue must not depend on any resources, imported values, or Fn::GetAZs".
ImportValue is not used anywhere else in the template.
Is there a way to use exported values as default values (the default values cannot be hardcoded; they come as exported values from a run of another stack), while still giving users the option to provide their own values to create the resources?
Thanks.
This can also be caused by a misnamed parameter reference inside Fn::ImportValue. For example, if I have the parameter NetworkStackName defined below but mis-reference it in the Fn::ImportValue statement (as NetworkName), I will get this error. I would need to change NetworkName to match the name declared in Parameters, NetworkStackName, to fix the error.
Parameters:
  NetworkStackName:
    Type: String
    Default: happy-network-topology
Resources:
  MySQLDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: MySQL
      DBSubnetGroupName:
        Fn::ImportValue:
          !Sub "${NetworkName}-DBSubnetGroup"
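For reference, the corrected import simply references the parameter name exactly as declared in Parameters:
      DBSubnetGroupName:
        Fn::ImportValue:
          !Sub "${NetworkStackName}-DBSubnetGroup"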
I had a problem where I needed to get my artifact bucket name from my prerequisite stack. I first tried this:
Fn::ImportValue:
  - 'arn:aws:s3:::${ArtifactStore}/*'
It turns out you can do the following instead, and it will work. Hope this helps someone out one day!
- !Sub
  - 'arn:aws:s3:::${BucketName}/*'
  - BucketName: !ImportValue 'ArtifactStore'
Currently, CloudFormation does not support dynamic default values: it is not possible to have a dynamic default value because the template has not been executed at the time the parameters are being collected. However, you can use an SSM parameter as a workaround, something like this:
Parameters:
  PagerDutyUrl:
    Type: AWS::SSM::Parameter::Value<String>
    Description: The Pagerduty url
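A minimal sketch of that workaround with a Default value added (the SSM parameter name /team/pagerduty/url is hypothetical; CloudFormation resolves its stored value when the stack runs):
Parameters:
  PagerDutyUrl:
    Type: AWS::SSM::Parameter::Value<String>
    Description: The Pagerduty url
    Default: /team/pagerduty/url   # hypothetical SSM parameter name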
Going back to your current CloudFormation template, I am thinking that the value ${fmMainVpc} might not be initialized correctly.
In my case, I had the following resource:
# removed for brevity
Subnets:
  - !ImportValue: parent-stack-subnet-a
  - !ImportValue: parent-stack-subnet-b
I forgot to remove the : when changing the syntax from Fn::ImportValue to the shorthand !ImportValue. Confusing error message, but removing the : resolved it because that was incorrect usage on my part.
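With the colon removed, the working shorthand form looks like this:
Subnets:
  - !ImportValue parent-stack-subnet-a
  - !ImportValue parent-stack-subnet-b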

IAM nested stack fails to complete due to undefined resource policies

I have created a nested IAM stack, which consists of 3 templates:
- iam-policies
- iam-roles
- iam-users/groups
The master stack template looks like this:
Resources:
  Policies:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_policies.yaml
  UserGroups:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_user_groups.yaml
  Roles:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_roles.yaml
The policy ARNs are exported via the Outputs section like this:
Outputs:
  StackName:
    Description: Name of the Stack
    Value: !Ref AWS::StackName
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy
In the Roles template the policy ARNs are imported like this:
CodeBuildRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: !Sub ${EnvironmentName}-CodeBuildRole
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Action:
            - 'sts:AssumeRole'
          Effect: Allow
          Principal:
            Service:
              - codebuild.amazonaws.com
    Path: /
    ManagedPolicyArns:
      - !GetAtt
        - Policies
        - Outputs.CodeBuildServiceRolePolicy
But when I try to create the stack, it fails, saying the Roles stack cannot be created because:
Template error: instance of Fn::GetAtt references undefined resource Policies
How can I force the creation of the policies first so that the second and third templates can use them to create the roles and users/groups? Or is the issue elsewhere?
merci A
Your question,
How can I force the creation of the policies first so the second and
third template can use the policies to create roles and user/ groups?
Or is the issue elsewhere?
You can use "DependsOn" attribute. It automatically determines which resources in a template can be parallelized and which have dependencies that require other operations to finish first. You can use DependsOn to explicitly specify dependencies, which overrides the default parallelism and directs CloudFormation to operate on those resources in a specified order.
In your case second and third template DependsOn Policies
More details : DependsOn
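A minimal sketch of the master stack with the explicit dependencies added (resource names and template URLs taken from the question):
Resources:
  Policies:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_policies.yaml
  Roles:
    Type: AWS::CloudFormation::Stack
    DependsOn: Policies   # wait for the Policies stack before creating Roles
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_roles.yaml
  UserGroups:
    Type: AWS::CloudFormation::Stack
    DependsOn: Policies   # wait for the Policies stack before creating UserGroups
    Properties:
      TemplateURL: https://s3.amazonaws.com/xxx/iam/iam_user_groups.yaml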
The reason why you aren't able to access the outputs is that you haven't exported them for other stacks to use.
Update your Outputs section with the data you want to export. Ref - Outputs for the same.
Then use the Fn::ImportValue function in the dependent stacks to consume the required data. Ref - ImportValue for the same.
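A sketch of what that could look like, assuming the export is given the same name as the output (the export name is otherwise free to choose): add an Export to the policies template's Outputs, then consume it with the import function in the roles template.
# In the policies template
Outputs:
  CodeBuildServiceRolePolicy:
    Description: ARN of the managed policy
    Value: !Ref CodeBuildServiceRolePolicy
    Export:
      Name: CodeBuildServiceRolePolicy   # export name is illustrative
# In the roles template, under the CodeBuildRole properties
ManagedPolicyArns:
  - !ImportValue CodeBuildServiceRolePolicy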
Hope this helps.