celery beat timezone problems

So, I have been using celery/beat for a number of years, and have been manually offsetting the schedule of my tasks due to DST issues etc. As my codebase has grown, the script that I run to change the times keeps getting bigger, and I have decided to sort the problem out properly.
So, in short, my system clock updates automatically; from my shell I can run:
┌─[luke@freebsd] - [~/py3-apps/intranet] - [Thu Mar 29, 12:24]
└─[$]> date
Thu Mar 29 12:37:22 BST 2018
So presently, a task set to run at 10:30am actually runs at 11:30am. I thought this would be easy to fix, so I added the following to my configuration:
CELERY_TIMEZONE = Europe/London
CELERY_ENABLE_UTC = False
I run my celery beat schedule via:
celery worker --beat -A pyramid_celery.celery_app --ini development.ini -n celeryIntranetAPI
I thought this would solve my problems; however, my cron tasks still run an hour behind. How can I make celery keep up with the system clock?
Note I have tried:
CELERY_TIMEZONE = UTC
CELERY_ENABLE_UTC = True
As per a few suggestions, but this did not work either.
Can anyone shed some light on how I can tie my celery cron timings to the system clock?
This was fixed in celery here: https://github.com/celery/celery/commit/be55de622381816d087993f1c7f9afcf7f44ab33

Turns out this was a bug in celery, fixed in the commit linked above.
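For reference, the configuration the question is aiming for looks like this in plain Python settings (a sketch only: the asker's values actually live in development.ini via pyramid_celery, and the task/module names here are made up):

from celery.schedules import crontab

CELERY_TIMEZONE = 'Europe/London'
CELERY_ENABLE_UTC = False

CELERYBEAT_SCHEDULE = {
    'morning-task': {                              # illustrative entry name
        'task': 'intranet.tasks.morning_task',     # illustrative task path
        'schedule': crontab(hour=10, minute=30),   # intended to fire at 10:30 local time
    },
}

On a celery version that includes the fix linked above, the crontab should be evaluated in CELERY_TIMEZONE rather than UTC, so the entry fires at 10:30 Europe/London time.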

Related

celery flower web UI shows received and started times in UTC, not in my configured time zone

I have already searched for similar questions, but mine is different.
The tasks' received time and started time are still shown as UTC;
checking the worker config in flower, the timezone is Asia/Shanghai.
My celery_task config already sets:
enable_utc = False
timezone = 'Asia/Shanghai'
(The new celery version uses the config style above; the old style was:)
CELERY_TIMEZONE = 'Asia/Shanghai'
CELERY_ENABLE_UTC = False
I also tried the old-style config; it likewise had no effect.
I also tried running the flower command as: flower -A celery_task
Still no effect.
So how can I get the web UI to show the received time and started time in the right time zone?
Flower was not loading the celery config. The flower start command must use "celery -A celery_task": the '-A xxxxx' option has to come after 'celery', not after 'flower'.
Only then is the config loaded correctly.
The problem comes from 'flower/views/tasks.py' line 118 and
'flower/static/js/flower.js' line 397.
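In other words, the app has to be passed to the celery command itself so flower picks up the loaded config; with the module name from the question that is roughly:

flower -A celery_task          # config not picked up, times stay in UTC
celery -A celery_task flower   # config loaded, times follow timezone = 'Asia/Shanghai'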

Fail2Ban not working on Ubuntu 16.04 (Date issues)

I have a problem with Fail2Ban
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Matched time template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Got time 1519352628.000000 for "'Feb 23 10:23:48'" using template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Processing line with time:1519352628.0 and ip:158.140.140.217
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Ignore line since time 1519352628.0 < 1519381428.727771 - 600
It says "Ignore line" because the time skew is greater than the inspection period. However, this is not the case.
If indeed 1519352628.0 is derived from Feb 23, 10:23:48, then the other date: 1519381428.727771 must be wrong.
I have run tests for 'invalid user' hitting this repeatedly. But Fail2ban is always ignoring the line.
I am positive I am getting Filter Matches within 1 second.
This is Ubuntu 16.04 and Fail2ban 0.9.3
Thanks for any help you might have!
Looks like there is a time zone issue on your machine that might cause the confusion. Try to set the correct time zone and restart both rsyslogd and fail2ban.
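On Ubuntu 16.04 that amounts to something like the following (the zone name is only an example - pick whichever zone actually matches the machine; the service names assume the stock systemd units):

sudo timedatectl set-timezone Europe/Berlin
sudo systemctl restart rsyslog fail2ban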
Regarding your debug log:
1519352628.0 = Feb 23 02:23:48
-> timestamp parsed from line in log file with time Feb 23 10:23:48 - 08:00 time zone offset!
1519381428.727771 = Feb 23 10:23:48
-> timestamp of current time when fail2ban processed the log.
Coincidentally, this is the same time as the time in the log file. That's what makes it so confusing in this case.
1519381428.727771 - 600 = Feb 23 10:13:48
-> limit for how long to look backwards in time in the log file since you've set findtime = 10m in jail.conf.
Fail2ban 'correctly' ignores the log entry, which appears to be older than 10 minutes because of the 8-hour time zone offset.
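The two epoch values are easy to double-check with a couple of lines of Python (just a sanity check of the arithmetic above, nothing fail2ban-specific):

from datetime import datetime, timezone

print(datetime.fromtimestamp(1519352628.0, tz=timezone.utc))        # 2018-02-23 02:23:48+00:00
print(datetime.fromtimestamp(1519381428.727771, tz=timezone.utc))   # 2018-02-23 10:23:48.727771+00:00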
btw:
If you need IPv6 support for banning, consider upgrading fail2ban to v0.10.x.
And there is also a brand new fail2ban version v0.11 (not yet marked stable, but running without issue for 1+ month on my machines) that has this wonderful new auto-increment bantime feature.
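If you do try v0.11, the auto-increment feature is switched on in jail.local with options roughly like these (option names as I remember them from the 0.11 jail.conf - treat them as an assumption and check your copy):

[DEFAULT]
bantime.increment = true
bantime.factor = 2
bantime.maxtime = 1w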

Django celery db scheduler not working after version upgrade

I'm upgrading celery and django-celery from:
celery==2.4.5
django-celery==2.3.3
To:
celery==3.0.24
django-celery==3.0.23
After the pip upgrade I ran the migrations and all was well.
I then restarted celery worker and celery beat with the below commands:
django-admin.py celery worker --loglevel=DEBUG --config=portal.settings.development -E
django-admin.py celery beat --loglevel=DEBUG --config=portal.settings.development
The celery beat initial output shows it knows about the tasks:
Configuration ->
. broker -> amqp://zonza:**@localhost:5672/zonza
. loader -> djcelery.loaders.DjangoLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]@%DEBUG
. maxinterval -> now (0s)
[INFO] Wed, 18 Jun 2014 13:31:18 +0000 celery.beat 2184 140177823078144 beat: Starting...
[2014-06-18 13:31:18,332: DEBUG/MainProcess] DatabaseScheduler: intial read
[2014-06-18 13:31:18,332: INFO/MainProcess] Writing entries...
[2014-06-18 13:31:18,333: DEBUG/MainProcess] DatabaseScheduler: Fetching database schedule
[2014-06-18 13:31:18,366: DEBUG/MainProcess] Current schedule:
<ModelEntry: SOON_EXPIRY_ALERT SOON_EXPIRY_ALERT(*[], **{}) {4}>
<ModelEntry: celery.backend_cleanup celery.backend_cleanup(*[], **{}) {4}>
<ModelEntry: REFRESH_DB_CACHE REFRESH_DB_CACHE(*[], **{}) {4}>
Now none of my Periodic Tasks run :/ Any ideas?
edit: if I change the scheduler setting to the default 'celery.beat.PersistentScheduler', the tasks work. But I think we need to use the djcelery one in this project for a number of reasons.
edit2: after about 40 minutes of nothing, the tasks now start running properly. This obviously is not ideal, and I have no idea why.
It should be in the changelogs somewhere, but Celery changed from storing dates in local time to storing them in UTC.
The database scheduler is not able to automatically convert to the new format, so you need to reset the last_run_at fields for every periodic task.
Something like:
UPDATE djcelery_periodictask SET last_run_at = NULL;
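If you would rather not touch raw SQL, the same reset can be done through the ORM (a sketch, assuming the stock django-celery models):

from djcelery.models import PeriodicTask

# Clear the pre-upgrade local-time values so beat recomputes them in UTC,
# then restart beat so it re-reads the schedule.
PeriodicTask.objects.update(last_run_at=None)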

Fabric takes a long time with ssh

I am running fabric to automate deployment. It is painfully slow.
My local environment:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > uname -a
Darwin sh.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May 1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64
My fab file:
#!/usr/bin/env python
import logging
import paramiko as ssh
from fabric.api import env, run
env.hosts = [ 'examplesite']
env.use_ssh_config = True
#env.forward_agent = True
logging.basicConfig(level=logging.INFO)
ssh.util.log_to_file('/tmp/paramiko.log')
def uptime():
    run('uptime')
Here is the portion of the debug logs:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > date;fab -f /Users/bob/code/somenv/somenv/fabfile/pefabfile.py uptime
Sun Aug 11 22:25:03 EDT 2013
[examplesite] Executing task 'uptime'
[examplesite] run: uptime
DEB [20130811-22:25:23.610] thr=1 paramiko.transport: starting thread (client mode): 0x13e4650L
INF [20130811-22:25:23.630] thr=1 paramiko.transport: Connected (version 2.0, client OpenSSH_5.9p1)
DEB [20130811-22:25:23.641] thr=1 paramiko.transport: kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-grou
It takes 20 seconds before paramiko even starts the thread. Surely "Executing task 'uptime'" does not take that long. I can manually log in through ssh, type in uptime, and exit in 5-6 seconds. I'd appreciate any help on how to extract more debug information. I made the changes mentioned here, but it made no difference.
Try:
env.disable_known_hosts = True
See:
https://github.com/paramiko/paramiko/pull/192
&
Slow public key authentication with paramiko
Maybe it is a problem with DNS resolution and/or IPv6.
A few things you can try:
replacing the server name by its IP address in env.hosts
disabling IPv6
use another DNS server (e.g. OpenDNS)
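Applied to the fabfile in the question, the two answers above amount to roughly this (203.0.113.10 is just a placeholder for examplesite's real address):

from fabric.api import env, run

env.hosts = ['203.0.113.10']       # IP instead of hostname, sidestepping slow DNS/IPv6 lookups
env.use_ssh_config = True
env.disable_known_hosts = True     # skip the slow known_hosts check in older paramiko

def uptime():
    run('uptime')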
For anyone looking at this post-2014: paramiko, which was the slow component when checking known hosts, shipped a fix in March 2014 (v1.13); Fabric allowed it as a requirement in v1.9.0, and the fix was backported to v1.8.4 and v1.7.4.
So, upgrade!
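For the setup in the question that is just something like (version floors taken from the note above):

pip install --upgrade 'paramiko>=1.13' 'fabric>=1.9,<2'   # stay on the 1.x API used by this fabfile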

Can a sleeping Perl program be killed using kill(SIGKILL, $$)?

I am running a Perl program; a module in the program is triggered by an external process to kill all of the child processes and terminate its execution.
This works fine.
But when a certain function, say xyz(), is executing, there is a sleep(60) statement on a condition.
Right now the function executes repeatedly, as it is waiting for some value.
When I trigger the kill process as mentioned above, the kill does not take place.
Does anybody have a clue as to why this is happening?
I don't understand how you are trying to kill a process from within itself (the $$ in your question title) when it's sleeping.
If you are killing from a DIFFERENT process, then that process has its own $$. You first need to find out the PID of the original process to kill (by trawling the process list or by somehow communicating it from the original process).
Killing a sleeping process works very well
$ ( date ; perl5.8 -e 'sleep(100);' ; date ) &
Wed Sep 14 09:48:29 EDT 2011
$ kill -KILL 8897
Wed Sep 14 09:48:54 EDT 2011
This also works with other "killish" signals ('INT', 'ABRT', 'QUIT', 'TERM')
UPDATE: Upon re-reading, maybe the issue you meant is that the "triggered by an external process" part doesn't happen. If that's the case, you need to do the following (sketched below):
Set up a CATCHABLE signal handler in your process before going to sleep ($SIG{'INT'}) - SIGKILL cannot be caught by a handler.
Send SIGINT from said "external process".
Do all the needed cleanup once sleep() is interrupted by SIGINT, in the SIGINT handler.
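A minimal sketch of that arrangement (Perl; the handler body and cleanup sub are illustrative, not from the question):

# in the long-running program, before xyz() goes to sleep:
$SIG{INT} = sub {
    cleanup_children();   # illustrative: whatever cleanup the module normally does
    exit 0;
};
sleep(60);                # returns early if SIGINT arrives

# from the external process:
#   kill -INT <pid-of-the-perl-program>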