Celery multi not working as expected

This is my project folder structure:
api/
-- __init__.py
-- jobs/
---- __init__.py
---- celery.py
---- celeryconfig.py
---- tasks.py
-- api_helpers/
-- views/
tasks.py has a task called ExamineColumns.
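For context, a minimal sketch of what those modules might contain (assumed for illustration; the question does not show them):
# api/jobs/celery.py (assumed contents)
from celery import Celery
app = Celery('jobs')
app.config_from_object('api.jobs.celeryconfig')
# api/jobs/tasks.py (assumed contents)
from api.jobs.celery import app
@app.task(name='ExamineColumns')
def examine_columns(table):
    # placeholder body; the real task inspects a table's columns
    return table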
I launch the worker using celery worker -A api.jobs --loglevel=Info
It works fine and I can run the tasks.
This is the output of the celery inspect command:
$ celery inspect registered
-> ranjith-ThinkPad-T420: OK
* ExamineColumns
* celery.backend_cleanup
* celery.chain
* celery.chord
* celery.chord_unlock
* celery.chunks
* celery.group
* celery.map
* celery.starmap
But when I try multi mode, it simply does not work. I am trying to run it with:
celery multi start w1 -c3 -A api.jobs --loglevel=Info
But it does not start at all.
$ celery inspect registered
Error: No nodes replied within time constraint.
I am not sure why it is not working.

You can try running it as:
/usr/bin/celery multi start w1 w2 --uid=www --loglevel=INFO --pidfile=/var/run/%n.pid --logfile=/var/log/%n.log --quiet
--uid must be a user/group from your server. Running as root is not recommended.
--quiet suppresses output to the console.
%n expands to the node name, so %n.log becomes w1.log and w2.log.
To check, you can use ps aux | grep celery. The result will look like this:
www ... /usr/local/bin/python2.7 -m celery.bin.celeryd -n w1.domain.ru --loglevel=DEBUG --logfile=/var/log/w1.log --pidfile=/var/run/w1.pid
www ... /usr/local/bin/python2.7 -m celery.bin.celeryd -n w2.domain.ru --loglevel=DEBUG --logfile=/var/log/w2.log --pidfile=/var/run/w2.pid

Related

How do I specify celery app location in the celery -A argument?

I have an app.py with a celery definition located in /foo/app.py.
/foo/app.py
from agent import create_app, ext_celery
app = create_app()
celery = ext_celery.celery
if __name__ == '__main__':
app.run()
If I cd into /foo and run celery -A app.celery worker everything starts as expected.
If I am somewhere else, like ~, the following fails: celery -A /foo/app.celery worker
How do I give a path to the celery -A argument?
I am trying to specify celery as a service, but it fails because it is not being run in the project folder.
You can always use $PYTHONPATH. Something like PYTHONPATH=/foo celery -A app.celery worker should work.
Or alternatively:
export PYTHONPATH=/foo
celery -A app.celery worker
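In effect, PYTHONPATH just prepends /foo to Python's module search path; a rough Python equivalent of what the shell setting arranges:
import sys
sys.path.insert(0, '/foo')  # what PYTHONPATH=/foo does at interpreter startup
from app import celery      # the app in /foo/app.py is now importable as app.celery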

Error: no such option: -A

When I run ./celery.sh, I get:
You are using -A as an option of the worker sub-command:
celery worker -A celeryapp <...>
The support for this usage was removed in Celery 5.0. Instead you should use -A as a global option:
celery -A celeryapp worker <...>
Usage: celery worker [OPTIONS]
Try 'celery worker --help' for help.
Error: no such option: -A
As mentioned in the release notes (breaking changes) documentation:
The global options can no longer be positioned after the sub-command.
Instead, they must be positioned as an option for the celery command
That means that you need to change from:
celery worker -A celeryapp <...>
to
celery -A celeryapp worker <...>
Is the first argument to Celery the same as your file name?
app = Celery('tasks', broker='redis://localhost:6379/0')
I changed 'tasks' to match my filename and ran the command again, and it worked:
celery -A myfilename worker
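For illustration, a minimal sketch of that setup (the file name tasks.py and the add task are hypothetical):
# tasks.py -- the first argument to Celery matches this module's name
from celery import Celery
app = Celery('tasks', broker='redis://localhost:6379/0')
@app.task
def add(x, y):
    return x + y
This worker would then be started with celery -A tasks worker.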

Celery: start worker and beat at once

Got stuck for a while using a custom scheduler for celery beat:
celery -A myapp worker -S redbeat.RedBeatScheduler -E -B -l info
My thought was that this would launch both the celery worker and celery beat using redbeat.RedBeatScheduler as the scheduler. It even says beat: Starting..., but apparently it does not use the specified scheduler; no cron tasks are executed like this.
However, when I split this command into a separate worker and beat, that is:
celery -A myapp worker -E -l info
celery -A myapp beat -S redbeat.RedBeatScheduler
Everything works as expected.
Is there any way to merge those two commands?
I do not think the Celery worker has the -S parameter like beat does. Here is what --help says:
--scheduler SCHEDULER
Scheduler class to use. Default is
celery.beat.PersistentScheduler
So I suggest you use the --scheduler option and run celery -A myapp worker --scheduler redbeat.RedBeatScheduler -E -B -l info instead.
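Alternatively, a sketch (assuming your Celery app object is named app inside myapp, with an assumed broker URL) that sets the scheduler in the app configuration, so the beat embedded by -B picks it up without a command-line flag:
# myapp.py (assumed) -- configure beat's scheduler on the app itself
from celery import Celery
app = Celery('myapp', broker='redis://localhost:6379/0')  # broker URL assumed
app.conf.beat_scheduler = 'redbeat.RedBeatScheduler'
With that in place, celery -A myapp worker -E -B -l info should use RedBeat.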

How to run celery flower with a config file?

For my project, I want to use a flower config file instead of command-line options.
I wrote a file named flowerconfig.py, as follows:
# RabbitMQ management
broker_api = 'http://user:passwd@localhost:15672/api/'
# Enable debug logging
logging = 'DEBUG'
# view address
address = '0.0.0.0'
port = 10006
basic_auth = ["user:passwd"]
persistent = True
db = "var/flower_db"
But when I run flower with the command flower --conf=flowerconfig, the broker does not work.
I replaced the command with celery flower -A celery_worker.celery_app --conf=flowerconfig (celery_worker is my celery module). The broker then runs normally, but the basic auth from flowerconfig still does not work.
So I don't know whether flower supports a config file, or whether there is another method.
The versions:
flower==0.9.2
celery==4.2.1
You can create a bash script to run it. For example:
#!/bin/bash
celery -A project flower \
--basic_auth=monitor:password \
--persistent=True \
--max_tasks=9999 \
-l info \
--address=0.0.0.0 \
--broker=redis://localhost:6379/0

Upstart script: shell arithmetic in script stanza producing incorrect values; equivalent /bin/sh script works

I have an upstart init script, but my dev/testing/production machines have different numbers of CPUs/cores. I'd like to compute the number of worker processes as 4 * the number of cores within the init script.
The upstart docs say that the script stanzas use /bin/sh syntax.
I created a /bin/sh script to see what was going on. I'm getting drastically different results from my upstart script.
The script stanza from my upstart script:
script
# get the number of cores
CORES=`lscpu | grep -v '#' | wc -l`
# set the number of worker processes to 4 * num cores
WORKERS=$(($CORES * 4))
echo exec gunicorn -b localhost:8000 --workers $WORKERS tutalk_site.wsgi > tmp/gunicorn.txt
end script
which outputs:
exec gunicorn -b localhost:8000 --workers 76 tutalk_site.wsgi
My equivalent /bin/sh script:
#!/bin/sh
CORES=`lscpu -p | grep -v '#' | wc -l`
WORKERS=$(($CORES * 4))
echo exec gunicorn -b localhost:8000 --workers $WORKERS tutalk_site.wsgi
which outputs:
exec gunicorn -b localhost:8000 --workers 8 tutalk_site.wsgi
I'm hoping this is a rather simple problem and a few other pairs of eyes will locate the issue.
Any help would be appreciated.
I suppose I should have answered this several days ago. I first attempted using environment variables instead but didn't have any luck.
I solved the issue by replacing the computation with a Python one-liner:
WORKERS=$(python -c "import os; print os.sysconf('SC_NPROCESSORS_ONLN') * 2")
and that worked out just fine.
I'm still curious why my Bourne-shell script came up with the correct value while the upstart script, whose docs say to use Bourne-shell syntax, didn't.
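For reference, the one-liner above uses the Python 2 print statement; a Python 3-compatible sketch of the same computation:
# hypothetical count_workers.py, equivalent to the inline one-liner
import os
# SC_NPROCESSORS_ONLN is the number of processors currently online
print(os.sysconf('SC_NPROCESSORS_ONLN') * 2)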