How do I specify celery app location in the celery -A argument?

I have an app.py with a celery definition located in /foo/app.py.
/foo/app.py
from agent import create_app, ext_celery

app = create_app()
celery = ext_celery.celery

if __name__ == '__main__':
    app.run()
If I cd into /foo and run celery -A app.celery worker, everything starts as expected.
If I am somewhere else, like ~, the following fails: celery -A /foo/app.celery worker
How do I give a path to the celery -A argument?
I am trying to run celery as a service, but it fails because it is not started from the project folder.

You can always use $PYTHONPATH. Something like PYTHONPATH=/foo celery -A app.celery worker should work.
Or alternatively:
export PYTHONPATH=/foo
celery -A app.celery worker
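For the service case specifically, one option is a small wrapper script that sets PYTHONPATH and changes into the project directory before exec'ing celery, and then pointing the service definition (systemd ExecStart, supervisord command, etc.) at that script. A minimal sketch; the script name, paths and log level are assumptions, not taken from the question:
#!/bin/bash
# start_worker.sh -- hypothetical wrapper so the service can start celery
# from any working directory
export PYTHONPATH=/foo
cd /foo
exec celery -A app.celery worker --loglevel=INFO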


Running -A after the worker sub-command, for example from a ./celery.sh wrapper script, fails with:
Usage: celery worker [OPTIONS]
Try 'celery worker --help' for help.
Error: no such option: -A
You are using -A as an option of the worker sub-command:
celery worker -A celeryapp <...>
The support for this usage was removed in Celery 5.0. Instead you should use -A as a global option:
celery -A celeryapp worker <...>
As mentioned in the release notes (breaking changes) documentation:
The global options can no longer be positioned after the sub-command.
Instead, they must be positioned as an option for the celery command
That means that you need to change from:
celery worker -A celeryapp <...>
to
celery -A celeryapp worker <...>
Is your first argument to Celery the same as your file name?
app = Celery('tasks', broker='redis://localhost:6379/0')
I changed 'tasks' to match my filename and ran the command again, and then it worked.
celery -A myfilename worker
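For illustration, a minimal sketch of how the names line up; note that what -A actually has to match is the module (file) name, and the broker URL and task below are just placeholders:
# tasks.py -- the value passed to -A must match this module (file) name
from celery import Celery

# 'tasks' is the app's main name; keeping it equal to the module name avoids confusion
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y
With that file, celery -A tasks worker starts the worker, because -A tasks resolves the tasks.py module and finds the Celery app instance inside it.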

Celery start worker and beat at once

Got stuck for a while using a custom scheduler for celery beat:
celery -A myapp worker -S redbeat.RedBeatScheduler -E -B -l info
My thought was that this would launch both the celery worker and celery beat, using redbeat.RedBeatScheduler as the scheduler. It even says beat: Starting..., but apparently it does not use the specified scheduler. No cron tasks are executed this way.
However, when I split this into separate worker and beat commands:
celery -A myapp worker -E -l info
celery -A myapp beat -S redbeat.RedBeatScheduler
Everything works as expected.
Is there any way to merge those two commands?
I do not think the Celery worker has the -S parameter like beat does. Here is what --help says:
--scheduler SCHEDULER
Scheduler class to use. Default is
celery.beat.PersistentScheduler
So I suggest you use the --scheduler option and run celery -A myapp worker --scheduler redbeat.RedBeatScheduler -E -B -l info instead.
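If the goal is really just one command to launch, another approach (a sketch of mine, not an official Celery feature; the script name is hypothetical) is a small wrapper that starts beat and the worker as the two separate processes that already work, keeping the scheduler option on the beat process:
#!/bin/bash
# start_celery.sh -- run beat (with the RedBeat scheduler) and the worker together
celery -A myapp beat -S redbeat.RedBeatScheduler --loglevel=info &
BEAT_PID=$!

# make sure beat is stopped when the worker exits
trap 'kill $BEAT_PID 2>/dev/null' EXIT

celery -A myapp worker -E --loglevel=info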

How to run celery flower with a config file?

For my project, I want to use a Flower config file instead of command-line options.
I wrote a file named flowerconfig.py, as follows:
# RabbitMQ management
broker_api = 'http://user:passwd@localhost:15672/api/'
# Enable debug logging
logging = 'DEBUG'
# view address
address = '0.0.0.0'
port = 10006
basic_auth = ["user:passwd"]
persistent = True
db = "var/flower_db"
But when I run Flower with the command flower --conf=flowerconfig, the broker does not work.
When I replace the command with celery flower -A celery_worker.celery_app --conf=flowerconfig (celery_worker is my celery file), the broker works normally, but the basic_auth from flowerconfig still does not work.
So I don't know whether Flower supports a config file, or if there is another method.
The versions:
flower==0.9.2
celery==4.2.1
You can create a bash script to run it. For example:
#!/bin/bash
celery -A project flower \
--basic_auth=monitor:password \
--persistent=True \
--max_tasks=9999 \
-l info \
--address=0.0.0.0 \
--broker=redis://localhost:6379/0
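If you would rather keep the config-file approach, Flower's --conf option should also accept an explicit path, so the command is not tied to the project directory; this is worth verifying against your Flower version, and the path below is only an illustration:
celery flower -A celery_worker.celery_app --conf=/path/to/flowerconfig.py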

How do I avoid a docker container stopping after the application is stopped

There is a docker container with a Postgres server. Once postgres is stopped or has crashed (it doesn't matter which), I need to check some environment variables and the state of a few files.
By default, the container stops after an application is finished.
I know there is a way to change the default behavior in the Dockerfile, but I can no longer find it.
If somebody knows it, please give me a Dockerfile example like this:
FROM something
RUN something ...
ENTRYPOINT [something]
You can simply run a non-exiting process at the end of the entrypoint to keep the container alive, even if the main process exits.
For example, use
tail -f 'some log file'
There isn't an "option" to keep a container running when the main process has stopped or died. You can run something different in the container while debugging the actual startup scripts. Sometimes you need to override an entrypoint to do this.
docker run -ti $IMAGE /bin/sh
docker run -ti --entrypoint=/bin/sh $IMAGE
If the main process will not stay running when you docker start the existing container, you won't be able to use that container interactively; otherwise you could:
docker start $CID
docker exec -ti $CID sh
To get files from an existing container, you can docker cp anything you need out of the stopped container.
docker cp $CID:/a/path /some/local/path
You can also docker export a tar archive of the complete container.
docker export $CID -o $CID.tar
tar -tvf $CID.tar | grep afile
The environment Docker injects can be seen with docker inspect, but this won't give you anything the process has added to the environment.
docker inspect $CID --format '{{ json .Config.Env }}'
In general, Docker requires a process to keep running in the foreground. Otherwise, it assumes that the application has stopped and shuts the container down. Below, I outline a few ways that I'm aware of to prevent a container from stopping:
Use a process manager such as runit or systemd to run a process inside a container:
As an example, here you can find a Redhat article about running systemd within a docker container.
A few possible approaches for debugging purposes:
a) Add an artificial sleep or pause to the entrypoint:
For example, in bash, you can use this to create an infinite pause:
while true; do sleep 1; done
b) For a fast workaround, one can run the tail command in the container:
As an example, with the command below, we start a new container in detached/background mode (-d) and execute tail -f /dev/null inside it. As a result, the container will run forever.
docker run -d ubuntu:18.04 tail -f /dev/null
And if the main process crashed or exited, you can still look up the ENV variables or check files with exec and basic commands like cd and ls. A few relevant commands for that:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' name-of-container
docker exec -it name-of-container bash
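To actually answer the request for a Dockerfile, here is a minimal sketch for the Postgres case. It assumes the stock postgres image and its standard docker-entrypoint.sh; the image tag is a placeholder, adjust to your setup:
FROM postgres:15

# Shell-form ENTRYPOINT: run the image's normal entrypoint, and if postgres
# stops or crashes, fall back to a non-exiting process so the container stays
# up and you can still `docker exec -it <container> bash` to check env and files.
ENTRYPOINT docker-entrypoint.sh postgres || true; tail -f /dev/null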

Celery multi not working as expected

I have this project folder structure:
api/
-- __init__.py
-- jobs/
   -- __init__.py
   -- celery.py
   -- celeryconfig.py
   -- tasks.py
-- api_helpers/
-- views/
tasks.py has a task called ExamineColumns.
I launch the worker using celery worker -A api.jobs --loglevel=Info
It works fine and I can run the tasks.
This is the output of the celery inspect command:
$ celery inspect registered
-> ranjith-ThinkPad-T420: OK
* ExamineColumns
* celery.backend_cleanup
* celery.chain
* celery.chord
* celery.chord_unlock
* celery.chunks
* celery.group
* celery.map
* celery.starmap
But when I try multi mode, it simply does not work. I am trying to run it with:
celery multi start w1 -c3 -A api.jobs --loglevel=Info
But it does not start at all.
$ celery inspect registered
Error: No nodes replied within time constraint.
I am not sure why it is not working.
You can try to run as:
/usr/bin/celery multi start w1 w2 --uid=www --loglevel=INFO --pidfile=/var/run/%n.pid --logfile=/var/log/%n.log --quiet
--uid must be a user/group on your server; running as root is not recommended.
--quiet suppresses output to the console.
%n is replaced with the node name, so the log files become w1.log and w2.log.
To check, run ps aux | grep celery. The result will look like this:
www ... /usr/local/bin/python2.7 -m celery.bin.celeryd -n w1.domain.ru --loglevel=DEBUG --logfile=/var/log/w1.log --pidfile=/var/run/w1.pid
www ... /usr/local/bin/python2.7 -m celery.bin.celeryd -n w2.domain.ru --loglevel=DEBUG --logfile=/var/log/w2.log --pidfile=/var/run/w2.pid
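If the workers do start this way, the matching stop and restart commands reuse the same node names and pidfile pattern (a hedged example, not part of the original answer; adjust paths to your setup):
celery multi stop w1 w2 --pidfile=/var/run/%n.pid
celery multi restart w1 w2 --pidfile=/var/run/%n.pid --logfile=/var/log/%n.log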