Celery Flower web UI shows received and started times in UTC, not my configured time zone - celery

I have already searched similar questions, but mine is different:
the tasks' received and started times are still shown in UTC.
In the worker config that Flower displays, the timezone is Asia/Shanghai.
My celery_task config also already sets:
enable_utc = False
timezone = 'Asia/Shanghai'
That is the new-style configuration; the old-style equivalent is:
CELERY_TIMEZONE = 'Asia/Shanghai'
CELERY_ENABLE_UTC = False
I tried the old-style settings too, with the same result.
I also tried running Flower with: flower -A celery_task
Still no effect.
So how can I get the received and started times on the web page to show in the right time zone?

Flower was not loading the Celery config. The start command must be "celery -A celery_task flower": the '-A xxxxx' option has to come after 'celery', not after 'flower'.
Only then is the config loaded correctly.
The relevant code is at line 118 of 'flower\views\tasks.py'
and line 397 of 'flower\static\js\flower.js'.
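For reference, a minimal sketch of such a setup (the module layout and broker URL are illustrative assumptions, not taken from the original project):

# celery_task.py - minimal sketch; the broker URL is a placeholder
from celery import Celery

app = Celery('celery_task', broker='redis://localhost:6379/0')
app.conf.enable_utc = False            # report task times in the configured zone
app.conf.timezone = 'Asia/Shanghai'

# Start Flower so it inherits this config ('-A' after 'celery', not 'flower'):
#   celery -A celery_task flower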

Related

celery beat timezone problems

So, I have been using celery/beat for a number of years, and have been manually offsetting the schedule of my tasks due to DST issues etc. As my codebase has grown, the script I run to change the times keeps getting bigger, and I have decided to sort the problem out.
In short, my system clock updates automatically; from my shell I can run:
┌─[luke@freebsd] - [~/py3-apps/intranet] - [Thu Mar 29, 12:24]
└─[$]> date
Thu Mar 29 12:37:22 BST 2018
So at present, a task scheduled for 10:30am actually runs at 11:30am. I thought this would be easy to fix, so I added the following to my configuration:
CELERY_TIMEZONE = 'Europe/London'
CELERY_ENABLE_UTC = False
When I run my celery beat schedule, via:
celery worker --beat -A pyramid_celery.celery_app --ini development.ini -n celeryIntranetAPI
I thought this would solve my problems, but my cron tasks are still an hour behind. How can I make celery keep up with the system clock?
Note I have tried:
CELERY_TIMEZONE = 'UTC'
CELERY_ENABLE_UTC = True
As per a few suggestions, but this did not work either.
Can anyone shed some light on how I can link my celery cron timings to the system clock?
Turns out this was a bug in celery, fixed in this commit: https://github.com/celery/celery/commit/be55de622381816d087993f1c7f9afcf7f44ab33
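With a fixed celery version, a DST-aware schedule can be written directly against the configured time zone. A minimal sketch (the task name and schedule are illustrative assumptions, not from the question):

# celeryconfig sketch, celery 3.x-style setting names
from celery.schedules import crontab

CELERY_TIMEZONE = 'Europe/London'
CELERY_ENABLE_UTC = False

CELERYBEAT_SCHEDULE = {
    'morning-report': {
        'task': 'tasks.morning_report',           # hypothetical task
        'schedule': crontab(hour=10, minute=30),  # 10:30 Europe/London, DST-aware
    },
}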

PlayFramework Hangs After Days

The server runs fine at first, but after a few days it hangs with no error logs, and from then on no request gets a response.
This is the start command with its options:
sudo /opt/dev -Dhttps.port=443 -Dhttp.port=9000 -J-Xms3277m -J-Xmx3277m -J-XX:ParallelGCThreads=2 -J-Xmn2574M -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-server &
/opt/dev is the script file generated from activator stage
===========server info==========
linux: Ubuntu 14.04.5 LTS
ram: 4G
openjdk version "1.8.0_141"
===========process info========
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME COMMAND
15037 root 20 0 5978800 2.280g 31216 S 0.0 58.3 63:33.82 java
===========port info ===================
tcp6 :::9000 :::* LISTEN 15037/java
tcp6 :::443 :::* LISTEN 15037/java
===========other info==========
play version 2.3.2
scala version 2.11.1
akka setting
akka.jvm-exit-on-fatal-error = false
play.akka.jvm-exit-on-fatal-error = false
akka.default-dispatcher.fork-join-executor.pool-size-max =64
akka.actor.debug.receive = on
===========================================
These steps could help identify the problem, or at least serve as first steps in that direction.
Start by adding -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/where/to/put/hprof; given your start script's parameters, I think you need to use -J-XX instead of -XX. This will create a heap dump in case of an OOM.
Add logging at the start and end of your endpoints so you can check whether Play even receives the requests.
While Play is unresponsive, check the open file descriptors and compare the count against your limits: find the PID of your java process and run sudo ls -al /proc/7333/fd/ | wc -l; to see your limits, use ulimit -a (see the sketch after these steps).
It would also be good to keep an eye on the Akka queues, in case you use the same dispatcher both for frontend requests and for some back-office processing (the dispatcher could be filled up with long-running background tasks).
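A rough sketch of the file-descriptor check above (Python, stdlib only; the PID is taken from the process info in the question, and reading another user's /proc entries requires root):

import os

pid = 15037  # java PID from the process info above
open_fds = len(os.listdir(f'/proc/{pid}/fd'))  # currently open descriptors
with open(f'/proc/{pid}/limits') as limits:
    nofile = next(line for line in limits if line.startswith('Max open files'))

print(f'open fds: {open_fds}')
print(nofile.rstrip())  # soft and hard limits for that process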
I would do all the diagnostic steps that Evgeny suggested, plus:
Change "akka.jvm-exit-on-fatal-error" and "play.akka.jvm-exit-on-fatal-error" to true, this may be masking your problem.
Take a stack dump of the running process when it is in this state and use that to identify the problem or post it here. See How to get a complete stack trace of a running java program that is taking 100% cpu?

Fail2Ban not working on Ubuntu 16.04 (Date issues)

I have a problem with Fail2Ban
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Matched time template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Got time 1519352628.000000 for "'Feb 23 10:23:48'" using template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Processing line with time:1519352628.0 and ip:158.140.140.217
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Ignore line since time 1519352628.0 < 1519381428.727771 - 600
It says "ignoring Line" because the time skew is greater than the inspection period. However, this is not the case.
If indeed 1519352628.0 is derived from Feb 23, 10:23:48, then the other date: 1519381428.727771 must be wrong.
I have run tests for 'invalid user' hitting this repeatedly. But Fail2ban is always ignoring the line.
I am positive I am getting Filter Matches within 1 second.
This is Ubuntu 16.04 and Fail2ban 0.9.3
Thanks for any help you might have!
Looks like there is a time zone issue on your machine that might cause the confusion. Try to set the correct time zone and restart both rsyslogd and fail2ban.
Regarding your debug log:
1519352628.0 = Feb 23 02:23:48
-> the timestamp parsed from the log line reading Feb 23 10:23:48, i.e. with a -08:00 time zone offset!
1519381428.727771 = Feb 23 10:23:48
-> the timestamp of the current time, when fail2ban processed the log.
Coincidentally this is the same wall-clock time as in the log file, which is what makes this case so confusing.
1519381428.727771 - 600 = Feb 23 10:13:48
-> the limit for how far back in the log file to look, since you've set findtime = 10m in jail.conf.
Fail2ban 'correctly' ignores the log entry because, with the -08:00 time zone offset applied, it appears to be older than 10 minutes.
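You can double-check the arithmetic with a few lines of stdlib Python (a sketch using the timestamps from the debug log):

from datetime import datetime, timezone

parsed = 1519352628.0        # timestamp fail2ban parsed from the log line
now = 1519381428.727771      # current time when fail2ban processed it
print(datetime.fromtimestamp(parsed, timezone.utc))  # 2018-02-23 02:23:48+00:00
print(datetime.fromtimestamp(now, timezone.utc))     # 2018-02-23 10:23:48+00:00
print(now - parsed)  # 28800.727771 -> an 8-hour skew, far beyond findtime = 600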
btw:
If you need IPv6 support for banning, consider upgrading fail2ban to v0.10.x.
And there is also a brand new fail2ban version v0.11 (not yet marked stable, but running without issue for 1+ month on my machines) that has this wonderful new auto-increment bantime feature.

Django celery db scheduler not working after version upgrade

I'm upgrading celery and django-celery from:
celery==2.4.5
django-celery==2.3.3
To:
celery==3.0.24
django-celery==3.0.23
After the pip upgrade I ran the migrations and all was well.
I then restarted celery worker and celery beat with the below commands:
django-admin.py celery worker --loglevel=DEBUG --config=portal.settings.development -E
django-admin.py celery beat --loglevel=DEBUG --config=portal.settings.development
The celery beat initial output shows it knows about the tasks:
Configuration ->
. broker -> amqp://zonza:**@localhost:5672/zonza
. loader -> djcelery.loaders.DjangoLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]@%DEBUG
. maxinterval -> now (0s)
[INFO] Wed, 18 Jun 2014 13:31:18 +0000 celery.beat 2184 140177823078144 beat: Starting...
[2014-06-18 13:31:18,332: DEBUG/MainProcess] DatabaseScheduler: intial read
[2014-06-18 13:31:18,332: INFO/MainProcess] Writing entries...
[2014-06-18 13:31:18,333: DEBUG/MainProcess] DatabaseScheduler: Fetching database schedule
[2014-06-18 13:31:18,366: DEBUG/MainProcess] Current schedule:
<ModelEntry: SOON_EXPIRY_ALERT SOON_EXPIRY_ALERT(*[], **{}) {4}>
<ModelEntry: celery.backend_cleanup celery.backend_cleanup(*[], **{}) {4}>
<ModelEntry: REFRESH_DB_CACHE REFRESH_DB_CACHE(*[], **{}) {4}>
Now none of my periodic tasks run :/ Any ideas?
Edit: if I change the scheduler setting to the default 'celery.beat.PersistentScheduler', the tasks do run. But I think we need to use the djcelery one in this project for a number of reasons.
Edit 2: after about 40 minutes of nothing, the tasks started running properly. This is obviously not ideal, and I have no idea why.
It should be in the changelogs somewhere, but Celery changed from storing dates in local time to storing them in UTC.
The database scheduler is not able to automatically convert to the new format, so you need to reset the last_run_at fields for every periodic task.
Something like:
UPDATE djcelery_periodictask SET last_run_at = NULL;
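The same reset can be done through the Django ORM; a sketch assuming django-celery's models (which back the DatabaseScheduler):

from djcelery.models import PeriodicTask

# Clear last_run_at so the scheduler recomputes it under the new UTC handling
PeriodicTask.objects.update(last_run_at=None)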

Custom Munin plugin won't report

I've built my first Munin plugin to report the size of our Redis queue, but it won't report for some reason. Every other plugin on the node, including other Redis-centric plugins, works fine.
Here's the plugin code:
#!/bin/sh
case $1 in
config)
cat <<'EOM'
multigraph redis_queue_size
graph_title Redis Queue Size
graph_info The size of Redis queue
graph_category redis
graph_vlabel Messages
redisqueue.label redisqueue
redisqueue.type GAUGE
redisqueue.min 0
EOM
exit 0;;
esac
queuelength=$(redis-cli llen mykeyname)
echo "redisqueue.value $queuelength"
The plugin is in /usr/share/munin/plugins/redis_queue_
The plugin is symlinked to /etc/munin/plugins/redis_queue_
I made sure to restart the service
$ sudo service munin-node force-reload
If I run sudo munin-run redis_queue_ I get the correct output:
redisqueue.value 1567595
If I run munin-node-config I get the following:
redis_queue_ | yes |
If I connect to the instance from the master using telnet to fetch the plugin, I get:
$ telnet 10.101.21.56 4949
Trying 10.101.21.56...
Connected to 10.101.21.56.
Escape character is '^]'.
# munin node at redis01.example.com
fetch redis_queue_
redisqueue.value 1035336
The master shows an empty graph for it, but the "last updated" time isn't increasing. I initially had the plugin configured a little differently (it wasn't producing good output), so all the values are -nan. Once I fixed the output I expected the plugin to start working, but all efforts have failed.
Everything looks right, yet there are still no values in the graph.
Edit: Munin v1.4.6