How to collect more than 22 event ids with winlogbeat? - elastic-stack

I've got a task to collect over 500 event IDs from a domain controller with Winlogbeat, but Windows has a limit of 22 event IDs per query. I'm using version 6.1.2. I've tried with processors like this:
winlogbeat.event_logs:
  - name: Security
    processors:
      - drop_event.when.not.or:
          - equals.event_id: 4618
          ...
but with these settings the client doesn't work and nothing shows up in the logs. If I run it from the exe file, it just starts and stops with no error.
If I try to do it the way it is written in the official manual:
winlogbeat.event_logs:
  - name: Security
    event_id: ...
    processors:
      - drop_event.when.not.or:
          - equals.event_id: 4618
          ...
the client just crashes with "invalid event log key processors found". I've also tried to create a new custom view and collect events from there, but apparently it has the same 22-event query limit.
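One thing that might be worth trying (not something confirmed in this thread) is to leave the Security query unfiltered and drop unwanted events with a global processor instead; processors defined at the top level of winlogbeat.yml apply to all collected events, and because the filtering happens after collection rather than inside the Windows query, the 22-ID limit would not come into play. A minimal sketch, with example event IDs only:

winlogbeat.event_logs:
  - name: Security

# Global (top-level) processors apply to every event Winlogbeat collects.
processors:
  - drop_event:
      when:
        not:
          or:
            - equals:
                event_id: 4618   # example ID
            - equals:
                event_id: 4624   # example ID; extend this list with the rest of the IDs

The trade-off is that Winlogbeat still reads every Security event from the channel and only discards the unwanted ones in its own pipeline.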

Related

How do I modify existing jobs to switch owner?

I installed Rundeck v3.3.5 (on CentOS 7 via RPM) to replace an old Rundeck instance that was decommissioned. I did the export/import of projects (which worked brilliantly) while connected to the new server as the default admin user. The imported jobs run properly on the correct schedule. I subsequently configured the new server to use LDAP authentication and configured ACLs for users/roles. That also works properly.
However, I see an error like this in the service.log:
ERROR services.NotificationService - Error sending notification email to foo@bar.com for Execution 9358 Error executing tag <g:render>: could not initialize proxy [rundeck.Workflow#9468] - no Session
My thought is to switch job owners from admin to a user that exists in LDAP. I mean, I would like to switch job owners regardless, but I'm also hoping it addresses the error.
Is there a way in the web interface or using rd that I can bulk-modify jobs to switch the owner?
It turns out that the error in the log was caused by notification settings in an included job. I didn't realize that notifications were configured on the parameterized shared job definition, but they were; removing the notification settings stopped the error from being added to /var/log/rundeck/service.log.
To illustrate the problem, here are chunks of YAML I've edited to show just the important parts. Here's the common job:
- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  notification:
    onstart:
      email:
        attachType: file
        recipients: ops@company.com
        subject: Actual work being started
  notifyAvgDurationThreshold: null
  options:
    - enforced: true
      name: do_the_job
      required: true
      values:
        - yes
        - no
      valuesListDelimiter: ','
    - enforced: true
      name: fail_a_lot
      required: true
      values:
        - yes
        - no
      valuesListDelimiter: ','
  scheduleEnabled: false
  sequence:
    commands:
      - description: The actual work
        script: |-
          #!/bin/bash
          echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
    keepgoing: false
    strategy: node-first
  timeout: '60'
  uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
And here's the job that calls it (the one that is scheduled and causes an error to show up in the log when it runs):
- description: Do the job
  group: jobs/individual
  name: do_the_job
  ...
  notification:
    onfailure:
      email:
        recipients: ops@company.com
        subject: '[Rundeck] Failure of ${job.name}'
  notifyAvgDurationThreshold: null
  ...
  sequence:
    commands:
      - description: Call the job that does the work
        jobref:
          args: -do_the_job yes -fail_a_lot no
          group: jobs/common
          name: do_the_work
If I remove the notification settings from the common job, the error in the log goes away. I'm not sure whether sending notifications from an included job is simply unsupported. It would be useful to me if it were, since I could keep notification settings in a single place. However, I can understand why it presents a problem for the scheduler/executor.
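For illustration, after the fix the common job simply loses its notification block, so the top of the definition ends up like this (the options and sequence sections are unchanged):

- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  notifyAvgDurationThreshold: null
  options:
  ...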

How can one configure retry for IOExceptions in Spring Cloud Gateway?

I see that the Retry filter supports retries based on HTTP status codes. I would like to configure retries for I/O exceptions such as connection resets. Is that possible with Spring Cloud Gateway 2?
I was using 2.0.0.RC1. It looks like the latest build snapshot has support for retrying based on exceptions. Fingers crossed for the next release.
Here is an example that retries twice for 500 series errors or IOExceptions:
filters:
  - name: Retry
    args:
      retries: 2
      series:
        - SERVER_ERROR
      exceptions:
        - java.io.IOException
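For context, this filter block sits inside a route definition in application.yml; here is a minimal sketch with a made-up route id and backend URI:

spring:
  cloud:
    gateway:
      routes:
        - id: backend-route            # hypothetical route id
          uri: http://localhost:8080   # hypothetical backend service
          predicates:
            - Path=/api/**
          filters:
            - name: Retry
              args:
                retries: 2
                series:
                  - SERVER_ERROR
                exceptions:
                  - java.io.IOException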

Saltstack - Schedule to ensure service is running not working

I'm trying to set up a Saltstack schedule that will check to ensure that a service is running on the minion. However, it doesn't seem like service.running is working as a function on the schedule.
Here's my run.sls file:
test-service-sched:
  schedule.present:
    - name: test-service-sched
    - function: service.running
    - seconds: 60
    - job_kwargs:
        name: test-service
    - persist: True
    - enabled: True
    - run_on_start: True
And I execute the following: salt 'service*' state.apply run
This ends up with the following error on the minion:
2017-03-28 02:47:11,493 [salt.utils.schedule ][ERROR ][6172] Unhandled exception running service.running
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/salt/utils/schedule.py", line 826, in handle_func
    message=self.functions.missing_fun_string(func))
  File "/usr/lib/python2.6/site-packages/salt/utils/error.py", line 36, in raise_error
    raise ex(message)
Exception: 'service.running' is not available.
I haven't seen anything in the documentation that says I can't run service.running from a schedule. Is it a known limitation of Salt? Or am I just doing it wrong?
I can use cmd.run, but it ends up spamming the logs with errors if the service is already running.
So, I was pointed in the right direction on the Salt Google Group. There's a difference between execution modules and state modules: service.running is a state module function, while the scheduler can only call execution module functions directly, so I had to reference it indirectly through state.apply. I used 2 files:
schedule.sls:
service_schedule:
  schedule.present:
    - function: state.apply
    - minutes: 1
    - job_args:
        - running
running.sls:
service_running:
  service.running:
    - name: test_service
Now, running salt 'service*' state.apply schedule did exactly what I wanted it to.
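For reference, the same schedule can also be declared statically in the minion configuration (or in pillar) instead of through schedule.present; a sketch, with a made-up file location:

# /etc/salt/minion.d/schedule.conf (example location; pillar works too)
schedule:
  service_schedule:
    function: state.apply
    minutes: 1
    args:
      - running

Either way, the minion runs state.apply running once a minute, and since service.running only reports a change when it actually has to start the service, it doesn't spam the logs the way cmd.run did.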

Cannot figure out how to get the twilio API to work in home assistant

OK, so I am using Home Assistant to automatically send myself a message at certain times of the day using the Twilio API.
https://home-assistant.io/getting-started/troubleshooting-configuration/
It is all done in the configuration.yaml file, so here is what mine looks like:
notify:
  - name: Cody Wirth
    platform: twilio_sms
    account_sid: AC8a4f2f40331bdad5c95265f2cefe26a2
    auth_token: 33a693e18dcad513d4791c51f1071227
    from_number: "+16142896777"

automation:
  - alias: Send message at a given time
    trigger:
      platform: time
      hours: 24
      minutes: 47
      seconds: 15
    action:
      service: notify.twilio_sms
      data:
        message: 'The sun has set'
        target:
          - "+16147059227"
Is there anything wrong with my syntax? Is there something I need to configure on Twilio's end to make the messages come through to my phone? Nothing at all happens when the automation tries to send the message.
Ok, so this is the error that it returns:
"17-01-12 08:17:44 WARNING (MainThread) [homeassistant.core] Unable to find service notify/twilio_sms"
To be able to use twilio, you first need to enable/configure the twilio component.
The automation does not configure or start the twilio component, it just uses it.
You can find the configuration documentation on the twilio component page:
twilio:
  account_sid: ACCOUNT_SID_FROM_TWILIO
  auth_token: AUTH_TOKEN_FROM_TWILIO
After you have configured it (and restarted Home Assistant), your automation should be able to send notifications through it.
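Putting both pieces together, a minimal combined configuration.yaml could look like the sketch below; the notifier name, phone numbers, and trigger time are placeholders rather than values from this thread:

twilio:
  account_sid: ACCOUNT_SID_FROM_TWILIO
  auth_token: AUTH_TOKEN_FROM_TWILIO

notify:
  - name: my_twilio              # hypothetical name; creates the service notify.my_twilio
    platform: twilio_sms
    from_number: "+15550001111"  # placeholder Twilio number

automation:
  - alias: Send message at a given time
    trigger:
      platform: time
      hours: 21                  # example time of day
      minutes: 47
      seconds: 15
    action:
      service: notify.my_twilio
      data:
        message: 'The sun has set'
        target:
          - "+15552223333"       # placeholder destination number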

VSTS Build jobs freeze sporadically

We're using Visual Studio Team Services online with an in-house build agent. While running a job, the build agent will randomly freeze: the job is still active, but there are no updates to the console, no errors in the event logs, etc. If I open the agent's _diag folder and look at the log, it just repeats what is below until it decides to continue work.
17:02:19.850546 LogFileTimer_Callback - enter (20)
17:02:19.850546 LogFileTimer_Callback - processing job 7b9229d0-524e-4138-b6b3-33f630d109c6
17:02:19.850546 LogFileTimer_Callback - found 0 records for job 7b9229d0-524e-4138-b6b3-33f630d109c6
17:02:19.850546 LogFileTimer_Callback - leave
17:02:20.100159 StatusTimer_Callback - enter (27)
17:02:20.100159 StatusTimer_Callback - processing job 7b9229d0-524e-4138-b6b3-33f630d109c6
17:02:20.100159 StatusTimer_Callback - leave
17:02:20.240566 ConsoleTimer_Callback - enter (17)
17:02:20.240566 ConsoleTimer_Callback - Inside Lock
17:02:20.240566 ConsoleTimer_Callback - processing job 7b9229d0-524e-4138-b6b3-33f630d109c6
17:02:20.240566 ConsoleTimer_Callback - leave
17:02:20.755392 ConsoleTimer_Callback - enter (22)
17:02:20.755392 ConsoleTimer_Callback - Inside Lock
17:02:20.755392 ConsoleTimer_Callback - processing job 7b9229d0-524e-4138-b6b3-33f630d109c6
17:02:20.755392 ConsoleTimer_Callback - leave
17:02:20.864598 StatusTimer_Callback - enter (18)
17:02:20.864598 StatusTimer_Callback - processing job 7b9229d0-524e-4138-b6b3-33f630d109c6
17:02:20.864598 StatusTimer_Callback - leave
We have tried deleting the work folder and uninstalling and reinstalling the agent, and it still seems to freeze on random jobs. Any idea what else I could look into to find out why this is happening?
I just checked one of the logs and found that these entries appear throughout the log file during normal activity, such as restoring packages, uploading logs, or retrieving files.
They don't indicate an error. You may try to create a new agent on another machine to see whether the same behavior occurs there.