The support for this usage was removed in Celery 5.0. Instead you should use `-A` as a global option: celery -A celeryapp worker <...>

Running ./celery.sh fails with:
You are using -A as an option of the worker sub-command:
celery worker -A celeryapp <...>
The support for this usage was removed in Celery 5.0. Instead you should use -A as a global option:
celery -A celeryapp worker <...>
Usage: celery worker [OPTIONS]
Try 'celery worker --help' for help.
Error: no such option: -A

As mentioned in the release notes (breaking changes) documentation:
The global options can no longer be positioned after the sub-command.
Instead, they must be positioned as an option for the celery command
That means that you need to change from:
celery worker -A celeryapp <...>
to
celery -A celeryapp worker <...>

Is the first argument to Celery the same as your file name?
app = Celery('tasks', broker='redis://localhost:6379/0')
I changed 'tasks' to match my filename and ran the command again, and then it worked:
celery -A myfilename worker
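To make that concrete, a minimal sketch of such a module, assuming the file is named myfilename.py and using a Redis broker URL purely as an example:
# myfilename.py
from celery import Celery

# Keeping the app's main name equal to the module name avoids the mismatch
# described above when running `celery -A myfilename worker`.
app = Celery('myfilename', broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y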

How do I specify celery app location in the celery -A argument?

I have an app.py with a celery definition located in /foo/app.py.
/foo/app.py
from agent import create_app, ext_celery  # application factory and Celery extension from the local agent package
app = create_app()
celery = ext_celery.celery  # module-level Celery instance that -A app.celery points to
if __name__ == '__main__':
app.run()
If I cd into /foo and run celery -A app.celery worker, everything starts as expected.
If I am somewhere else, like ~, the following fails: celery -A /foo/app.celery worker
How do I give a path to the celery -A argument?
I am trying to set up celery as a service, but it fails because it is not being run from the project folder.
You can always use $PYTHONPATH. Something like PYTHONPATH=/foo celery -A app.celery worker should work.
Or alternatively:
export PYTHONPATH=/foo
celery -A app.celery worker
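Since the goal is to run Celery as a service, here is a minimal systemd unit sketch that applies the same idea; the paths, user, and unit layout are assumptions, not taken from the question:
[Unit]
Description=Celery worker for the app in /foo
After=network.target

[Service]
# Make /foo importable so that -A app.celery resolves, mirroring the PYTHONPATH trick above.
Environment=PYTHONPATH=/foo
WorkingDirectory=/foo
ExecStart=/usr/bin/celery -A app.celery worker

[Install]
WantedBy=multi-user.target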

Celery start worker and beats at once

Got stuck for a while using a custom scheduler for celery beat:
celery -A myapp worker -S redbeat.RedBeatScheduler -E -B -l info
My thought was that this would launch both the celery worker and celery beat, using redbeat.RedBeatScheduler as the scheduler. It even prints beat: Starting..., yet apparently it does not use the specified scheduler: no cron tasks are executed this way.
However, when I split this into separate worker and beat commands:
celery -A myapp worker -E -l info
celery -A myapp beat -S redbeat.RedBeatScheduler
Everything works as expected.
Is there any way to merge those two commands?
I do not think the Celery worker has the -S parameter like beat does. Here is what --help says:
--scheduler SCHEDULER
Scheduler class to use. Default is
celery.beat.PersistentScheduler
So I suggest you use the --scheduler option and run celery -A myapp worker --scheduler redbeat.RedBeatScheduler -E -B -l info instead.
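Alternatively, a sketch of pinning the scheduler in the app's configuration so the command line stays short (assuming myapp exposes the Celery app object as app):
# in myapp's Celery configuration module
app.conf.beat_scheduler = 'redbeat.RedBeatScheduler'
With that set, celery -A myapp worker -E -B -l info should start the embedded beat with RedBeat and no scheduler flag.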

kubectl exec fails with the error "Unable to use a TTY - input is not a terminal or the right kind of file"

I am running a jenkins pipeline with the following command:
kubectl exec -it kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s#1585031458
which runs fine in the terminal of the machine the pipeline runs on, but in the actual pipeline I get the following error: "Unable to use a TTY - input is not a terminal or the right kind of file"
Any tips on how to go about resolving this?
When the flags -it are used with kubectl exec, it enables the TTY interactive mode. Given the error that you mentioned, it seems that Jenkins doesn't allocate a TTY.
Since you are running the command in a Jenkins job, I would assume that your command is not necessarily interactive. A possible solution for the problem would be to simply remove the -t flag and try to execute the following instead:
kubectl exec -i kafkacat-5f8fcfcc57-2txhc -- kafkacat -b cord-kafka -C -t BBSim-OLT-0-Events -o s#1585031458
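If you want to confirm that the pipeline environment really has no TTY, a quick shell sketch using the -t test (run it as a step in the same job):
# Prints "no tty" inside a Jenkins step; prints "tty on stdin" in an interactive terminal.
if [ -t 0 ]; then echo "tty on stdin"; else echo "no tty"; fi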
For Windows Git Bash:
alias kubectl='winpty kubectl'
$ kubectl exec -it <container>
Or just use winpty before the desired command.
For Windows Git Bash users: use PowerShell, NOT Git Bash.
Remove the -t option. That requests a TTY, which as you noted does not exist in Jenkins.
Just a hint for anyone who gets stuck like I did with kafkacat suddenly returning no data after removing the -t.
It turns out that if there is no TTY, kafkacat defaults to producer mode. I had never used the -C flag because consumer is the default, but in this case it is required.

Forcing LSF to execute jobs on different hosts

I have a setup consisting of 3 workers and a management node, which I use for submitting tasks. I would like to concurrently execute a setup script on all workers:
bsub -q queue -n 3 -m 'h0 h1 h2' -J "%J_%I" mpirun setup.sh
As far as I understand, I could use the 'ptile' resource constraint to force execution on all workers:
bsub -q queue -n 3 -m 'h0 h1 h2' -J "%J_%I" -R 'span[ptile=1]' mpirun setup.sh
However, occasionally my script gets executed several times on the same worker.
Is this expected behavior, or is there a bug in my setup? Is there a better way to enforce multi-worker execution?
Your understanding of span[ptile=1] is correct. LSF will only use 1 core per host for your job. If there aren't enough hosts based on the -n then the job will pend until something frees up.
However, occasionally my script gets executed several times on the same worker.
I suspect it's something in your script, e.g., LSF appends to the stdout file by default, so output left over from an earlier run can make it look like the job executed twice. Use -oo to overwrite.
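For example, a sketch of the same submission writing a fresh per-job output file (-oo truncates the file instead of appending; %J expands to the job ID, and the rest of the flags are taken from the question):
bsub -q queue -n 3 -m 'h0 h1 h2' -J "%J_%I" -R 'span[ptile=1]' -oo setup_%J.out mpirun setup.sh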

Celery multi not working as expected

I have this project folder structure:
api/
  __init__.py
  jobs/
    __init__.py
    celery.py
    celeryconfig.py
    tasks.py
  api_helpers/
  views/
tasks.py has a task called ExamineColumns.
I launch the worker using celery worker -A api.jobs --loglevel=Info
It works fine and I can run the tasks.
This is the output of the celery inspect command:
$ celery inspect registered
-> ranjith-ThinkPad-T420: OK
* ExamineColumns
* celery.backend_cleanup
* celery.chain
* celery.chord
* celery.chord_unlock
* celery.chunks
* celery.group
* celery.map
* celery.starmap
But when I try multi mode, it simply does not work. I am running:
celery multi start w1 -c3 -A api.jobs --loglevel=Info
But it does not start at all.
$ celery inspect registered
Error: No nodes replied within time constraint.
I am not sure why it is not working.
You can try running it as:
/usr/bin/celery multi start w1 w2 --uid=www --loglevel=INFO --pidfile=/var/run/%n.pid --logfile=/var/log/%n.log --quiet
--uid must be a user/group that exists on your server; running as root is not recommended.
--quiet suppresses output to the console.
%n is replaced with the node name, so %n.log becomes w1.log and w2.log.
To check, you can use ps aux | grep celery. The result will look like this:
www ... /usr/local/bin/python2.7 -m celery.bin.celeryd -n w1.domain.ru --loglevel=DEBUG --logfile=/var/log/w1.log --pidfile=/var/run/w1.pid
www ... /usr/local/bin/python2.7 -m celery.bin.celeryd -n w2.domain.ru --loglevel=DEBUG --logfile=/var/log/w2.log --pidfile=/var/run/w2.pid
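To stop or restart the same nodes later, a short sketch; the --pidfile pattern must match the one used at start:
celery multi stop w1 w2 --pidfile=/var/run/%n.pid
celery multi restart w1 w2 --pidfile=/var/run/%n.pid --logfile=/var/log/%n.log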