ELK: Filebeat cuts some text from the message - elastic-stack

I have ELK (filebeat -> logstash -> elasticsearch <- kibana) running on Windows 10. I fed it the following two log lines, then found that Filebeat does not send the whole text; some text at the front of each line is cut off.
2018-04-27 10:42:49 [http-nio-8088-exec-1] - INFO - app-info - injectip ip 192.168.16.89
2018-04-27 10:42:23 [RMI TCP Connection(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'
In the Filebeat console, I noticed the following output:
2018-05-24T09:02:50.361+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
"@timestamp": "2018-05-24T01:02:50.361Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.2.3"
},
"source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
"offset": 97083,
"message": "xec-1] - INFO - app-info - injectip ip 192.168.16.89",
"prospector": {
"type": "log"
},
"beat": {
"name": "DESKTOP-M4AFV3I",
"hostname": "DESKTOP-M4AFV3I",
"version": "6.2.3"
}
}
and
2018-05-24T09:11:10.374+0800 DEBUG [publish] pipeline/processor.go:275 Publish event: {
"@timestamp": "2018-05-24T01:11:10.373Z",
"@metadata": {
"beat": "filebeat",
"type": "doc",
"version": "6.2.3"
},
"prospector": {
"type": "log"
},
"beat": {
"version": "6.2.3",
"name": "DESKTOP-M4AFV3I",
"hostname": "DESKTOP-M4AFV3I"
},
"source": "e:\\sjj\\xxx\\YKT\\ELK\\twoFormats.log",
"offset": 97272,
"message": "n(10)-127.0.0.1] - INFO - org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring FrameworkServlet 'dispatcherServlet'"
}
In the console output one can see that the front of the message is cut off: in the first case '2018-04-27 10:42:49 [http-nio-8088-e' is missing, and in the second case '2018-04-27 10:42:23 [RMI TCP Connectio' is missing.
Why does Filebeat do this? It makes my regex throw a parse exception in Logstash.
My filebeat.yml file is as follows:
#=========================== Filebeat prospectors =============================
filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  #enabled: false
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - e:\sjj\xxx\YKT\ELK\twoFormats.log

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java stack traces or C-style line continuation.

  # The regexp pattern that has to be matched. This pattern matches all lines
  # starting with whitespace.
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]
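For comparison, multiline setups for logs whose entries begin with a timestamp usually invert the logic, so that every line NOT starting with a date is appended to the previous event. A minimal sketch (the pattern below is illustrative, not taken from the question):

multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after

With negate: true, any line that does not match the timestamp pattern is attached to the line before it.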

Replace a line in a config file using variables with ansible

This question is similar to Replace a line in a config file with ansible. The difference is that my playbook first copies a file to a destination and then edits that same file after it has been copied. Also, I'm using variables to replace the string; however, it isn't changing the lines that contain the string site_name in the conf file.
Playbook:
---
- hosts: server-test2
  become: true
  vars:
    site_name: bokucasinon.com
  tasks:
    - name: Configuring nginx for the new site
      template:
        src: ../provision-server/nginx.j2
        dest: /etc/nginx/conf.d/{{site_name}}.conf
        mode: 064
    - name: Configuring nginx for the new site
      become: true
      lineinfile:
        dest: /etc/nginx/conf.d/{{site_name}}.conf
        regexp: '^(.*)site_name(.*)$'
        line: "{{site_name}}"
        backrefs: yes
Output:
TASK [Configuring nginx for the new site] **************************************************************
task path: /home/melvmagr/repos/ansible/provision-server/wp-db-nginx-conf.yml:10
ok: [server-test2] => {"changed": false, "checksum": "904d19dde94ad38672d751246fd2680ce297244d", "dest": "/etc/nginx/conf.d/bokucasinon.com.conf", "gid": 0, "group": "root", "mode": "0064", "owner": "root", "path": "/etc/nginx/conf.d/bokucasinon.com.conf", "size": 4232, "state": "file", "uid": 0}
TASK [Configuringg nginx for the new site] *************************************************************
task path: /home/melvmagr/repos/ansible/provision-server/wp-db-nginx-conf.yml:15
ok: [server-test2] => {"backup": "", "changed": false, "msg": ""}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************
server-test2 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
As one can see, changed=0, and upon checking the conf file it still contains site_name instead of bokucasinon.com.
Another thing I tried was the replace module, but I got the same output:
replace:
  path: /etc/nginx/conf.d/{{site_name}}.conf
  regexp: '(^site_name)(.*)$'
  replace: '{{site_name}}'
Any ideas why this is happening or what I'm doing wrong?
Thanks in advance
I appreciate all the help, but after lots of trial and error I've managed to find what I was looking for. I did indeed need to use the ansible.builtin.replace module. The lineinfile module I was using is not made for changing ALL the lines that contain a particular string (reference: https://www.middlewareinventory.com/blog/ansible-lineinfile-examples/). To put things into perspective, I changed my playbook to the following:
- name: Configuring nginx for the new site
  become: true
  template:
    src: ../provision-server/nginx.j2
    dest: /etc/nginx/conf.d/{{site_name}}.conf
    mode: 064
- name: Configuring nginx for the new site
  become: yes
  become_user: root
  ansible.builtin.replace:
    path: /etc/nginx/conf.d/{{site_name}}.conf
    regexp: 'sitename.com'
    replace: "{{site_name}}"
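The behavioral difference, for anyone hitting the same wall: lineinfile rewrites at most one line (the last match), while replace rewrites every match in the file. A minimal sketch of the replace form, with an illustrative path and placeholder rather than the exact files above:

- name: Substitute the placeholder on every line that contains it
  ansible.builtin.replace:
    path: /etc/nginx/conf.d/example.conf   # illustrative path
    regexp: 'site_name'
    replace: 'bokucasinon.com'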

Mystery "guest" user for RabbitMQ

I know the "guest" user is the default for RabbitMQ, but I thought I'd configured everything to use different names.
My stack is Django / Celery / RabbitMQ, running in Docker.
First up, the error. I just get loads of these, every few seconds:
rabbitmq_1 | 2020-07-29 08:28:00.775 [warning] <0.1234.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:05.775 [warning] <0.1240.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:10.776 [warning] <0.1246.0> HTTP access denied: user 'guest' - invalid credentials
rabbitmq_1 | 2020-07-29 08:28:15.776 [warning] <0.1252.0> HTTP access denied: user 'guest' - invalid credentials
rabbitMQ Dockerfile
FROM rabbitmq:management-alpine
ENV RABBITMQ_USER rabbit_user
ENV RABBITMQ_PASSWORD rabbit_user
ADD rabbitmq.conf /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.conf /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
rabbitmq.conf
management.load_definitions = /etc/rabbitmq/definitions.json
definitions.json
{
  "users": [
    {
      "name": "rabbit_user",
      "password": "rabbit_user",
      "tags": ""
    },
    {
      "name": "admin",
      "password": "admin",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "\/phoenix"
    }
  ],
  "permissions": [
    {
      "user": "rabbit_user",
      "vhost": "\/phoenix",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "parameters": [],
  "policies": [],
  "exchanges": [],
  "bindings": [],
  "queues": [
    {
      "name": "high_prio",
      "vhost": "\/phoenix",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    },
    {
      "name": "low_prio",
      "vhost": "\/phoenix",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ]
}
docker-compose.yml
rabbitmq:
  build:
    context: ./rabbitmq
    dockerfile: Dockerfile
  # image: rabbitmq:3-management-alpine
  ports:
    - "15672:15672" # RabbitMQ management plugin
  environment:
    - RABBITMQ_DEFAULT_USER=rabbit_user
    - RABBITMQ_DEFAULT_PASS=rabbit_user
    - RABBITMQ_DEFAULT_VHOST=phoenix
  expose:
    - "5672" # Port exposed between docker containers
  depends_on:
    - db
    - cache
celery_worker:
  <<: *django
  command: bash -c "celery -A phoenix.celery worker --loglevel=INFO -n worker1@%h"
  environment:
    - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
    - EMAIL_HOST_PASSWORD=${EMAIL_HOST_PASSWORD}
    - DJANGO_SETTINGS=${DJANGO_SETTINGS}
    # HC the rabbit user. Not secure obvs, but OK for PoC.
    - RABBITMQ_DEFAULT_USER=rabbit_user
    - RABBITMQ_DEFAULT_PASS=rabbit_user
  ports: []
  links:
    - rabbitmq
    - cache
  depends_on:
    - db
    - cache
    - rabbitmq
settings.py
CELERY_BROKER_URL = "amqp://rabbit_user:rabbit_user@rabbitmq:5672/phoenix"
CELERY_BROKER_VHOST = "phoenix"
CELERY_RESULT_BACKEND = "django-db"
CELERY_CACHE_BACKEND = "default"
CELERY_TIME_ZONE = TIME_ZONE
I had it all working before, when I just pulled the default RabbitMQ container in the docker-compose YAML file. Now I've created a specific Dockerfile for RabbitMQ and set up rabbit_user and the vhost "phoenix". It all seems to be working: tasks run, and I see the message stats in the Rabbit console, but I'm suffering these random "guest" login attempts. The word "guest" appears nowhere in my codebase, so somewhere RabbitMQ is falling back to the default instead of "rabbit_user", but I can't see where.
Rather typical that I solved this by "fixing" something else.
I noticed in my RMQ panel that the low_prio and high_prio queues had the vhost "/phoenix", while the Celery workers had the vhost "phoenix" (from my reading, I'd thought the RMQ config required the leading slash). I amended this so that all queues were allocated to "phoenix", and the mystery guest login disappeared.
I can only assume that since Celery was configured for the vhost "phoenix", "/phoenix" was treated as a different vhost with no users assigned to it, so RabbitMQ fell back to the "guest" default.
I'm not entirely sure why things were connecting to it - I'd sent nothing to those queues yet - but in case somebody else has this issue, this is what solved it for me.
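For anyone comparing the files side by side, a sketch of the alignment that resolved it (all names are from the question; the point is a single identical vhost string, with no leading slash anywhere):

# docker-compose.yml: the vhost the broker creates and Celery connects to
environment:
  - RABBITMQ_DEFAULT_VHOST=phoenix
# definitions.json must then declare the same vhost, i.e. "name": "phoenix"
# rather than "/phoenix", and each queue's and permission's "vhost" must match.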

How to access CloudWatch Event data from triggered Fargate task?

I read the docs on how to Run an Amazon ECS Task When a File is Uploaded to an Amazon S3 Bucket. However, this document stops short of explaining how to get the bucket/key values from the triggering event from within the Fargate task code itself. How can that be done?
I'm not sure if you still need an answer for this one, but I did something similar to what Steven1978 mentioned, only using CloudFormation.
The config you're looking for is the InputTransformer. Here is an example YAML CloudFormation template for an Event Rule:
rEventRuleForFileUpload:
  Type: AWS::Events::Rule
  Properties:
    Description: "EventRule"
    State: "ENABLED"
    EventPattern:
      source:
        - "aws.s3"
      detail-type:
        - 'AWS API Call via CloudTrail'
      detail:
        eventSource:
          - s3.amazonaws.com
        eventName:
          - "PutObject"
          - "CompleteMultipartUpload"
        requestParameters:
          bucketName: "{YOUR_BUCKET_NAME}"
    Targets:
      - Id: '{YOUR_ECS_CLUSTER_ID}'
        Arn: !Sub "arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:cluster/${NAME_OF_YOUR_CLUSTER_RESOURCE}"
        RoleArn: !GetAtt {YOUR_ROLE}.Arn
        EcsParameters:
          TaskCount: 1
          TaskDefinitionArn: !Ref {YOUR_TASK_DEFINITION}
          LaunchType: FARGATE
          {... WHATEVER CONFIG YOU MIGHT HAVE...}
        InputTransformer:
          InputPathsMap:
            s3_bucket: "$.detail.requestParameters.bucketName"
            s3_key: "$.detail.requestParameters.key"
          InputTemplate: '{ "containerOverrides": [ { "name": "{THE_NAME_OF_YOUR_CONTAINER_DEFINITION}", "environment": [ { "name": "EVENT_BUCKET", "value": <s3_bucket> }, { "name": "EVENT_OBJECT_KEY", "value": <s3_key> }] } ] }'
With this approach, you'll be able to read the S3 bucket name (EVENT_BUCKET) and the S3 object key (EVENT_OBJECT_KEY) as environment variables inside your container.
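To tie this back to the original question: the "name" in the InputTemplate must match a container definition in the task, and the injected values arrive as ordinary environment variables that the task code reads like any other (for example os.environ["EVENT_BUCKET"] in Python). A hypothetical sketch of the matching task definition, with illustrative resource and image names:

rTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      - Name: "{THE_NAME_OF_YOUR_CONTAINER_DEFINITION}" # must match the InputTemplate
        Image: "{YOUR_IMAGE}"
        # EVENT_BUCKET and EVENT_OBJECT_KEY need no defaults here; the
        # InputTransformer's containerOverrides supplies them per event.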
The info isn't very clear, indeed, but here are some sources I used to finally get it working:
Container Override:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html
InputTransformer:
https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html#API_InputTransformer_Contents

How to pass extra configuration to RabbitMQ with Helm?

I'm using this chart: https://github.com/helm/charts/tree/master/stable/rabbitmq to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all queues mirrored across 2 nodes in the cluster.
Here's the command I use to run Helm: helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq
And here's the content of rabbitmq-values.yaml:
persistence:
  enabled: true
resources:
  requests:
    memory: 256Mi
    cpu: 100m
replicas: 3
rabbitmq:
  extraConfiguration: |-
    {
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
However, the nodes fail to start due to parsing errors and stay in a crash loop. Here's the output of kubectl logs rabbitmq-local-0:
BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
If I remove the rabbitmq.extraConfiguration part, the nodes start properly, so it must be something wrong with the way I'm typing in the policy. Any idea what I'm doing wrong?
Thank you.
The JSON policy document can't be pasted into extraConfiguration directly: that value is appended to rabbitmq.conf, which uses the key = value sysctl format, hence the "Syntax error ... after line 14" in the log. However, according to https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions, it is possible to load a JSON definitions file instead. So we ended up with this setup, which works:
rabbitmq-values.yaml:
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration: |-
    management.load_definitions = /app/load_definition.json
rabbitmq-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
The secret must be created in Kubernetes before the Helm chart is installed, along these lines: kubectl apply -f ./rabbitmq-secret.yaml.
You can also rely on the chart's own config machinery. If needed, use extraSecrets to let the chart create the secret for you; this way, you don't need to create it manually before deploying a release. For example:
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
https://github.com/helm/charts/tree/master/stable/rabbitmq
Instead of using extraConfiguration, use advancedConfiguration: that section is meant for the classic Erlang config format, so structured configuration like this belongs there.
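For context, advancedConfiguration is rendered into advanced.config, which holds Erlang terms rather than JSON or key = value pairs. A minimal sketch of its shape in the values file (the tcp_listeners entry is purely illustrative):

rabbitmq:
  advancedConfiguration: |-
    [
      {rabbit, [
        {tcp_listeners, [5672]}
      ]}
    ].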

Is it possible to change a variable's value in ansible?

I wrote a playbook that reads the content of two files. The first one dynamically holds the switch interfaces that are running the CDP protocol.
example.cdp:
0/0
14/0
The second one (.cfg) is a file that also dynamically contains a bunch of interfaces that I need to push to a device with the Cisco "shutdown" command, to test my master/backup environment. If any interfaces from example.cdp are in it, I need to remove them, because I cannot lose communication with this device: the management is in-band.
example.cfg:
interface FastEthernet0/0
shutdown
interface FastEthernet1/0
shutdown
interface FastEthernet2/0
shutdown
interface FastEthernet2/1
shutdown
...
interface FastEthernet14/0
shutdown
playbook:
- name: Looping file
  debug:
    msg: "{{ item }}"
  register: items
  with_file:
    - ~/ANSIBLE/{{ inventory_hostname }}.cfg

- debug: var=items.results[0].item

- name: capturing interfaces with cdp
  raw: egrep '[0-9]+\/[0-9]+ ' -o ~/ANSIBLE/{{ inventory_hostname }}.cdp
  register: cdp

- debug: var=cdp.stdout_lines

- set_fact:
    cdp: "{{cdp.stdout_lines}}"

- debug: var=cdp

- name: Removing interfaces with cdp
  raw: sed 's/interface FastEthernet{{item}}//' ~/ANSIBLE/{{ inventory_hostname }}.cfg
  with_items:
    - "{{cdp}}"
  register: items

- debug: var=items

- name: Applying The Shutdown Template
  ios_config:
    lines:
      - "{{ items.results[0].item }}"
    provider: "{{cli}}"
  register: shut1

- debug: var=shut1
  tags: shut1
running the playbook:
<169.255.0.1> EXEC sed 's/interface FastEthernet0/0 //' ~/ANSIBLE/169.255.0.1.cfg
failed: [169.255.0.1] (item=0/0 ) => {
"changed": true,
"failed": true,
"item": "0/0 ",
"rc": 1,
"stderr": "sed: -e expression #1, char 30: unknown option to `s'\n",
"stdout": "",
"stdout_lines": []
}
<169.255.0.1> EXEC sed 's/interface FastEthernet14/0 //' ~/ANSIBLE/169.255.0.1.cfg
failed: [169.255.0.1] (item=14/0 ) => {
"changed": true,
"failed": true,
"item": "14/0 ",
"rc": 1,
"stderr": "sed: -e expression #1, char 31: unknown option to `s'\n",
"stdout": "",
"stdout_lines": []
}
As you can see, the problem is the content of the var "cdp". The interface names contain the "/" symbol, which is the delimiter used in the sed command, so I would need to backslash-escape it to solve my problem in Ansible. Is there a way to open a variable and do some regex substitution on it?
sed can use any character as the regex delimiter, so to solve your issue quickly, switch to a different one (for instance the # character):
sed 's#interface FastEthernet{{item}}##' ~/ANSIBLE/{{ inventory_hostname }}.cfg
I have the impression that templating would be a better way to write your tasks, though; see the sketch below.
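For example, a minimal sketch of the same edit with the replace module instead of raw sed, which sidesteps shell delimiters and quoting entirely (the loop and paths are carried over from the question; note that replace edits the file on the managed host):

- name: Removing interfaces with cdp
  ansible.builtin.replace:
    path: ~/ANSIBLE/{{ inventory_hostname }}.cfg
    regexp: 'interface FastEthernet{{ item }}'
    replace: ''
  with_items: "{{ cdp }}"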