Check celery config with command line - celery

I have celery running in a docker container and I want to check that the option CELERY_TASK_RESULT_EXPIRES = '3600' has been applied.
I tried celery inspect conf and celery inspect stats, but those commands never return. Other than that, Celery is running fine and doing its work.

You can get that from celery inspect. Try this
celery -A app inspect report --timeout 10
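If the broker responds, the same inspection can be done from Python inside the container, which also lets you read the setting straight off the app object. A minimal sketch (the module name tasks and the key names are assumptions about your project):

from tasks import app  # import your Celery application object

# ask the running workers for their effective configuration, with a timeout
replies = app.control.inspect(timeout=10).conf()
if replies:
    for worker, conf in replies.items():
        # the key is 'result_expires' with new-style setting names,
        # 'CELERY_TASK_RESULT_EXPIRES' with old-style ones
        print(worker, conf.get('result_expires'))

# or simply check what the app itself loaded
print(app.conf.result_expires)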

Found Flower. It is installed and started with:
pip install flower
flower -A celery-app-name --port=5555
Celery can then be queried via Flower's REST API. The following returns the workers' configuration:
curl -w "\n" http://localhost:5555/api/workers
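The same endpoint can be queried from Python. A minimal sketch that just dumps the JSON so you can search it for the expiry setting (the exact layout of the response varies between Flower versions, so treat the key names as assumptions):

import json
import requests

resp = requests.get("http://localhost:5555/api/workers", timeout=10)
resp.raise_for_status()
# pretty-print everything and look for the result-expiry setting in the output
print(json.dumps(resp.json(), indent=2, sort_keys=True))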

Related

celery not working with pid file in daemon

In celery.service, when I use ExecStart=/usr/local/bin/pipenv run celery -A proj worker -B it works well,
but when I use ExecStart=/usr/local/bin/pipenv run celery -A proj worker -B multi start w1 --pidfile=/var/run/celery/beat.pid --logfile=/var/run/celery/beat.log --loglevel=info it does not work.
I am running it with systemd (celery.service).
Can anyone tell me why it does not work with a pid file?
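For comparison, the systemd example in the Celery documentation runs celery multi as a forking service, because multi daemonizes the workers and then exits. A minimal sketch of that shape (node name, paths, and the pipenv wrapper are placeholders taken from the question, not a verified drop-in fix):

[Service]
Type=forking
User=celery
# creates /run/celery (i.e. /var/run/celery) writable by the service user
RuntimeDirectory=celery
WorkingDirectory=/opt/proj
ExecStart=/usr/local/bin/pipenv run celery multi start w1 -A proj --pidfile=/var/run/celery/w1.pid --logfile=/var/log/celery/w1.log --loglevel=info
ExecStop=/usr/local/bin/pipenv run celery multi stopwait w1 --pidfile=/var/run/celery/w1.pid

Without Type=forking, systemd treats the exiting multi launcher as the service itself and tears down the detached workers; the pid directory also has to exist and be writable, which RuntimeDirectory= takes care of.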

using multi-CPU platforms with locust

I am running htop on the same machine where Locust is running. During the tests I have been running this morning, I see one CPU (of 4) hit 100% while the other CPUs are largely idle. I have also observed up to 8 Locust tasks running. This is not running distributed. How does Locust implement threading and multiprocessing to maximize the available capabilities of the machine?
See https://docs.locust.io/en/stable/running-locust-distributed.html
This applies both to running distributed over multiple machines and to using multiple cores on a single machine.
You need one worker process per core in order to fully utilize the machine.
You can use this bash script for running locust in distributed mode:
echo -e "\nStart LOCUST MASTER\n"
locust -f locust_scenario.py --headless -L $LOG_LEVEL --logfile=$LOG --master-bind-port=$MASTER_PORT \
--master-bind-host=$MASTER_IP -u $COUNT_OF_USERS --print-stats --master --expect-workers=$cores --host=$SERVER_HOST&
PID_MASTER=$!
echo "LOCAST MASTER PID = $PID_MASTER"
sleep 5
# start SLAVE (clients)
echo -e "\nStart LOCUST SLAVES\n"
PID_SLAVES=( )
for ((i = 1; i <= $cores; i++));do
locust -f locust_scenario.py --worker --master-host=$MASTER_IP --master-port=$MASTER_PORT -L $LOG_LEVEL --logfile=$LOG &
PID_SLAVES+=( $! )
done
echo "LOCAST SLAVE PIDs = ${PID_SLAVES[#]}"
I think the best option for running Locust on multiple cores locally is to run the Locust master and workers with Docker and a docker-compose file, as described at https://docs.locust.io/en/stable/running-in-docker.html
Simply mount your locustfile.py inside the containers and start everything with docker-compose; a sketch of such a compose file follows the command below. The number of workers can easily be changed with:
docker-compose up --scale worker=4
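A rough sketch of what such a compose file can look like, modelled on the Locust docs (the mount path, target host, and service names are placeholders, not a verified copy of the documented example):

version: "3"
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master --host http://target-host
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master

With this layout, the scale command above starts one master container and four worker containers, which maps to one worker per core on a 4-core machine.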

celery ImportError: No module named 'tasks'

I am trying to learn how to use Celery so that I can later integrate it into my Flask app. I am just trying to execute the basic example found in the Celery docs. I have created a file called task.py, and from within the folder where task.py exists I am running celery -A tasks worker --loglevel=info, but it is giving an error. I can't seem to figure out what is wrong.
from celery import Celery
app = Celery('tasks', broker='amqp://localhost')
@app.task
def add(x, y):
    return x + y
The error I am seeing:
celery -A tasks worker --loglevel=info
ImportError: No module named 'tasks'
Try executing the command from the application folder level. If your tasks.py is inside flask_app/configs/tasks.py, then run the following command from inside the flask_app folder:
celery worker --app=configs.tasks:app --loglevel=info
If you want to daemonize Celery, use the following command:
celery multi start worker --app=configs.tasks:app --loglevel=info
(multi start will daemonize Celery.)
Also be sure to activate the virtualenv before running the command, if the application is running inside one.
I am successfully running Celery in Django with django-celery and had faced the same issue.
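For reference, a sketch of the layout that the --app=configs.tasks:app invocation above assumes (the directory and module names come from the answer, not from the original question):

flask_app/
├── configs/
│   ├── __init__.py      (makes configs an explicit, importable package)
│   └── tasks.py         (defines app = Celery('tasks', ...))
└── ...

The worker command has to be run from a directory where that module path is importable. Note also that the question creates task.py but starts the worker with -A tasks; the name passed to -A must match the module (file) name.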

Limit number of processes in Celery with supervisor

I'm running Celery in a small instance in AWS Elastic Beanstalk.
However, when I do top, I see there are 3 celery processes running. I want to have only one.
I'm running this using supervisor and in my config file I have (only showing relevant lines):
[program:celeryd]
directory=/opt/python/current/app/src
command=/opt/python/run/venv/bin/celery worker -A ..."
user=celery
numprocs=1
killasgroup=true
I've also followed the suggestion in this answer and created a file /etc/default/celeryd with this content:
# Extra arguments to celeryd
CELERYD_OPTS="--concurrency=1"
After restarting Celery (with supervisorctl -c config-file-path.conf restart celeryd), I see the 3 processes again. Any ideas? Thanks!
You are starting the worker with the celery command, so changing /etc/default/celeryd won't have any effect on it. Moreover, celeryd is deprecated.
When a worker is started, Celery launches a parent process and n (= concurrency) child processes.
You can start the worker with
[program:celery]
command=/opt/python/run/venv/bin/celery worker -c 1 -A foo
This will start a worker with a concurrency of 1, so there will be 2 processes: the parent plus one pool child.
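As a quick sanity check after restarting through supervisor (the program name celery matches the block above; the bracketed grep pattern just keeps grep from matching itself):

supervisorctl -c config-file-path.conf restart celery
ps aux | grep [c]elery

The second command should now list exactly two celery processes.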

Celery multi doesn't start workers

I'm trying to start multiple workers on my server with the command from the Celery docs: celery multi start Leslie -E.
But it only shows:
celery multi v3.1.17 (Cipater)
> Starting nodes...
> Leslie@test: OK
and exits.
And there are no workers in output of ps aux | grep celery.
Also, I tried to start it on my local machine and it works fine; I see 5 workers as expected.
So, what is the reason?
I had unsatisfactory results with the celery multi command. I think that supervisord works a lot better. You can find an example supervisord config file here
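The linked example is not reproduced here, but a minimal supervisord program block for a Celery worker typically looks something like this (paths and the app name are placeholders):

[program:celery]
directory=/opt/app
command=/opt/app/venv/bin/celery worker -A proj --loglevel=INFO
user=celery
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true

supervisord then keeps the worker in the foreground and restarts it if it dies, which sidesteps the daemonization step that celery multi performs.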