Filebeat: drop fields on Kubernetes

I'm trying to remove some fields. I'm using Filebeat 7.14 on Kubernetes, and I tried the following, as described in the docs:
processors:
  - drop_fields:
      when:
        contains
      fields: ["host.os.name", "host.os.codename", "host.os.family"]
      ignore_missing: false
The container failed with:

ERROR instance/beat.go:989
Exiting: Failed to start crawler:
starting input failed: Error while initializing input:
missing or invalid condition
failed to initialize condition
Even with the when condition and ignore_missing removed:
- drop_fields:
    fields: ["host.os.name", "host.os.codename", "host.os.family"]
The fields are still present.

You don't seem to have a condition set under the when. Take a look at https://www.elastic.co/guide/en/beats/filebeat/7.14/defining-processors.html#conditions and make sure you've got something for it to match.
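
For reference, here is a minimal sketch of a syntactically valid processor block; the contains condition and its kubernetes.namespace match value are hypothetical examples, and the first variant simply drops the fields unconditionally:

processors:
  # Unconditional: always drop the fields
  - drop_fields:
      fields: ["host.os.name", "host.os.codename", "host.os.family"]
      ignore_missing: true

  # Conditional: when needs a named condition with something to match
  - drop_fields:
      when:
        contains:
          kubernetes.namespace: "kube-system"   # hypothetical match value
      fields: ["host.os.name", "host.os.codename", "host.os.family"]
      ignore_missing: true

If the fields are still present even without a condition, check the processor order: the host.os.* fields are added by the add_host_metadata processor, so a drop_fields that runs before it in the processors list has nothing to drop yet; placing drop_fields after add_host_metadata should help.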

Falco on k8s: when adding an exception, some fields become null

If I add an exception to the rule 'The docker client is executed in a container' like:
exceptions:
  - name: kube_mon
    fields: [container.image.repository, k8s.ns.name, k8s.pod.name]
    comps: [=, =, startswith]
    values:
      - [repo/myimg, myns, my-pod-]
I start receiving warnings in which the mentioned fields are null (instead of the events being filtered out entirely):
Screenshot: https://i.stack.imgur.com/1RTiJ.png
The same exceptions added to the rule 'Contact K8S API Server From Container' work fine, and my pods are filtered out from logging.
How can I solve it?
Thanks.
Falco 0.31.1
Chart falco-1.17.4
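
For context, a minimal sketch of how such an exception attaches to the stock rule in a custom rules file; the append mechanism shown here is an assumption about how the rule is being extended:

# falco_rules.local.yaml
- rule: The docker client is executed in a container
  append: true    # assumption: exception appended to the stock rule
  exceptions:
    - name: kube_mon
      fields: [container.image.repository, k8s.ns.name, k8s.pod.name]
      comps: [=, =, startswith]
      values:
        - [repo/myimg, myns, my-pod-]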

Unable to view the contents of a file on Ansible managed nodes

I'm trying to view the contents of a file from a managed node and the control node. The syntax works fine for localhost (172.17.254.200) but not for the remote hosts. Below is the task I have written using the lookup/query plugin; can you please suggest a fix?
---
- name: Report Test
  hosts: all
  roles:
    - patching
  tasks:
    - name: Display the Pre and Post check Differences
      debug:
        msg: "{{ query('file', '/tmp/check/{{ inventory_hostname }}_Comparison') }}"
Below is the output:

TASK [patching : Display the Pre and Post check Differences] ***********************************************************************************************************
ok: [172.17.254.200] =>
  msg:
  - |-
    free_m - YES
    sysctl_all - YES
    uptime - YES
[WARNING]: Unable to find '/tmp/check/172.17.254.207_Comparison' in expected paths (use -vvvvv to see paths)
fatal: [172.17.254.207]: FAILED! =>
  msg: 'An unhandled exception occurred while running the lookup plugin ''file''. Error was a <class ''ansible.errors.AnsibleError''>, original message: could not locate file in lookup: /tmp/check/172.17.254.207_Comparison'
[WARNING]: Unable to find '/tmp/check/172.17.254.208_Comparison' in expected paths (use -vvvvv to see paths)
fatal: [172.17.254.208]: FAILED! =>
  msg: 'An unhandled exception occurred while running the lookup plugin ''file''. Error was a <class ''ansible.errors.AnsibleError''>, original message: could not locate file in lookup: /tmp/check/172.17.254.208_Comparison'
Lookups are executed on the Ansible controller (as pointed out by @Vladimir Botka). If you just want to view the contents of a file on remote hosts, you can cat the file through Ansible and debug the stdout_lines:
- command: "cat /tmp/check/{{ inventory_hostname }}_Comparison"
  register: file_cat
  changed_when: false

- debug:
    var: file_cat.stdout_lines
lookup and query "execute and are evaluated on the Ansible control machine."
Use slurp instead. Quoting the docs:
"This module returns an ‘in memory’ base64 encoded version of the file, take into account that this will require at least twice the RAM as the original file size."
For larger files, use fetch. Quoting the docs:
"It is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname."
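
A minimal sketch of the slurp approach, assuming the same per-host path as above (file_slurp is a hypothetical register name):

- name: Read the comparison file from the managed node
  slurp:
    src: "/tmp/check/{{ inventory_hostname }}_Comparison"
  register: file_slurp

- name: Display the decoded contents
  debug:
    msg: "{{ file_slurp.content | b64decode }}"

slurp returns the file base64-encoded in content, hence the b64decode filter in the debug task.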

Is there any placeholder notation in mta.yaml that removes spaces from the CF org name parameter?

We are using MTA to structure our application, deploying it with the SAP Cloud SDK Pipeline and the Transport Management landscape.
In the mta.yaml, we reference the org (organization) parameter value using the placeholder notation ${org}.
The issue is that the org name contains spaces (e.g. "Sample Org Name"), and that causes an error during application deployment to Cloud Foundry.
We do not want to rename the org.
Is there any other placeholder notation that removes the spaces between the characters?
We have observed that ${default-host} removes the spaces from the organization name, but its scope is limited to modules, not resources.
We need the substitution variable in the resources scope.
We would appreciate any help in resolving the issue.
Please find a snippet of the mta.yaml and the error message below.
resources:
  - name: uaa_test_app
    parameters:
      path: ./xs-security.json
      service-plan: application
      service: xsuaa
      config:
        xsappname: 'test-app-${org}-${space}'
    type: org.cloudfoundry.managed-service
Error Message:

Service operation failed: Controller operation failed: 502 Updating service "uaa_test_app" failed: Bad Gateway: Service broker error: Service broker xsuaa failed with: org.springframework.cloud.servicebroker.exception.ServiceBrokerException: Error updating application null (Error parsing xs-security.json data: Inconsistent xs-security.json: Invalid xsappname "Test-App-Sample Org Name-test": May only include characters 'a'-'z', 'A'-'Z', '0'-'9', '_', '-', '\', and '/'.)

Visualize Jobber tasks on ELK (via Filebeat)

A Jobber Docker container (running periodic tasks) writes to stdout, which is captured by Filebeat (with Docker autodiscovery enabled) and then sent to Logstash (within an ELK stack) or directly to Elasticsearch.
In Kibana, the document looks like this:
@timestamp Jan 20, 2020 @ 20:15:07.752
...
agent.type filebeat
container.image.name jobber_jobber
...
message {
  "job": {
    "command": "curl http://my.service/run",
    "name": "myperiodictask",
    "status": "Good",
    "time": "0 */5 * * * *"
  },
  "startTime": 1579540500,
  "stdout": "{\"startDate\":\"2020-01-20T16:35:00.000Z\",\"endDate\":\"2020-01-20T17:00:00.000Z\",\"zipped\":true,\"size\":3397}",
  "succeeded": true,
  "user": "jobberuser",
  "version": "1.4"
}
...
Note: the 'message' field above is a plain string containing a JSON object; it is shown formatted here for readability.
My goal is to be able to query Elasticsearch on the message fields, so I can, for instance, filter by Jobber task.
How can I make that happen?
I know Filebeat uses plugins and container tags to apply this or that filter: are there any for Jobber? If not, how do I do this?
Even better would be to exploit the fields of the Jobber task result (under the 'stdout' field)! Could you please point me to ways to implement that?
Filebeat provides processors to handle such tasks.
Below is a configuration that handles the needs "decode the JSON in the 'message' field" and "decode the JSON in the 'stdout' field within it" (both using the decode_json_fields processor), along with the other Jobber-related needs.
Note that the given example filters the events going through Filebeat by a 'custom-tag' label given to the Docker container hosting the Jobber process. The docker.container.labels.custom-tag: jobber condition should be replaced according to your use case.
filebeat.yml:

processors:
  # === Jobber events processing ===
  - if:
      equals:
        docker.container.labels.custom-tag: jobber
    then:
      # Drop Jobber events which are not job results
      - drop_event:
          when:
            not:
              regexp:
                message: "{.*"
      # JSON-decode the event's message part
      - decode_json_fields:
          when:
            regexp:
              message: "{.*"
          fields: ["message"]
          target: "jobbertask"
      # JSON-decode the message's stdout part
      - decode_json_fields:
          when:
            has_fields: ["jobbertask.stdout"]
          fields: ["jobbertask.stdout"]
          target: "jobbertask.result"
      # Drop the now-decoded source fields
      - drop_fields:
          fields: ["message"]
      - drop_fields:
          when:
            has_fields: ["jobbertask.stdout"]
          fields: ["jobbertask.stdout"]
The decoded fields are placed under the "jobbertask" field. This is to avoid index-mapping collisions on the root fields. Feel free to replace "jobbertask" with any other field name, taking care to avoid mapping collisions.
In my case, this works whether Filebeat sends the events to Logstash or to Elasticsearch directly.
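
For illustration, after these processors the sample event above should end up looking roughly like this in Kibana (field layout inferred from the configuration and the sample message, not captured from an actual run):

@timestamp Jan 20, 2020 @ 20:15:07.752
...
jobbertask.job.command curl http://my.service/run
jobbertask.job.name myperiodictask
jobbertask.job.status Good
jobbertask.succeeded true
jobbertask.result.startDate 2020-01-20T16:35:00.000Z
jobbertask.result.endDate 2020-01-20T17:00:00.000Z
jobbertask.result.zipped true
jobbertask.result.size 3397
...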

StackCreate ValidationError: Condition token can only be used in Conditions block

I am trying to apply my CloudFormation template and I am getting the following cryptic error:
botocore.exceptions.ClientError: An error occurred (ValidationError)
when calling the CreateStack operation: Template error: Condition
token can only be used in Conditions block
The stack trace is:

File "/Users/user/.env/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
  return self._make_api_call(operation_name, kwargs)
File "/Users/user/.env/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
  raise error_class(parsed_response, operation_name)
The code looks like:

cf_client = session.client('cloudformation')
cf_client.create_stack(
    StackName=stack_name,
    TemplateBody=template_body,
    Parameters=aws_parameters,
    TimeoutInMinutes=10,
    OnFailure='DELETE',
    Tags=aws_tags,
    Capabilities=['CAPABILITY_IAM'],
)
The CloudFormation template is massive and not appropriate to paste here. It stands up an application with service discovery, App Mesh, Fargate, etc.
What is this Condition they're referring to and what is wrong?
The error is rather cryptic and unhelpful, but in my case it was a typo in my ECS task definition.
My container has a DependsOn relationship, and I had misconstructed it:

DependsOn:
  - ContainerName: envoy
  - Condition: HEALTHY

DependsOn is a list of maps, so there should not be a - in front of Condition.
This corrects my problem:

DependsOn:
  - ContainerName: envoy
    Condition: HEALTHY
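
For context, here is a minimal sketch of where this sits in a task definition; the logical ID, container names, images, and health check are hypothetical:

MyTaskDefinition:                             # hypothetical logical ID
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app                            # hypothetical
    RequiresCompatibilities: [FARGATE]
    Cpu: '256'
    Memory: '512'
    NetworkMode: awsvpc
    ContainerDefinitions:
      - Name: app                             # hypothetical application container
        Image: repo/app:latest                # hypothetical image
        DependsOn:
          - ContainerName: envoy              # one map per dependency:
            Condition: HEALTHY                # name and condition together
      - Name: envoy
        Image: envoyproxy/envoy:v1.20.0       # hypothetical image tag
        Essential: true
        HealthCheck:
          Command: ["CMD-SHELL", "curl -s http://localhost:9901/ready || exit 1"]
          Interval: 10
          Retries: 3

Note that the HEALTHY condition requires the dependency container to define a HealthCheck; otherwise the dependency can never be satisfied.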