celery not working with pid file in daemon - celery

In celery.service, when I use ExecStart=/usr/local/bin/pipenv run celery -A proj worker -B it works well.
But when I use ExecStart=/usr/local/bin/pipenv run celery -A proj worker -B multi start w1 --pidfile=/var/run/celery/beat.pid --logfile=/var/run/celery/beat.log --loglevel=info it doesn't work.
I am running it with systemd (the unit file is celery.service).
Can anyone tell me why it does not work with the pid file?
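For reference: celery multi forks and detaches into the background, so under systemd's default Type=simple the unit appears to exit immediately; the daemonization example in the Celery docs therefore runs celery multi as a Type=forking service and lets systemd create the pid and log directories. Note also that worker and multi are separate subcommands, so combining them in one ExecStart as above is likely part of the problem. A rough sketch along the lines of the docs (user, paths and node name here are assumptions, not your actual setup):

[Unit]
Description=Celery worker
After=network.target

[Service]
Type=forking
User=celery
Group=celery
WorkingDirectory=/path/to/proj
RuntimeDirectory=celery
LogsDirectory=celery
ExecStart=/usr/local/bin/pipenv run celery multi start w1 -A proj -B --pidfile=/var/run/celery/w1.pid --logfile=/var/log/celery/w1.log --loglevel=info
ExecStop=/usr/local/bin/pipenv run celery multi stopwait w1 --pidfile=/var/run/celery/w1.pid

[Install]
WantedBy=multi-user.target

Also note that /var/run is usually a tmpfs, so /var/run/celery has to be recreated at every boot; RuntimeDirectory=celery takes care of that.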

Check celery config with command line

I have Celery running in a Docker container and I want to check that the option CELERY_TASK_RESULT_EXPIRES = '3600' has been applied.
I tried using celery inspect conf and celery inspect stats, but the commands never finish. Other than that, Celery is running fine and doing its work.
You can get that from celery inspect. Try this:
celery -A app inspect report --timeout 10
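If you only care about that one setting, the conf sub-command dumps the active configuration, so you can grep it; assuming the same app name, something like this should work:
celery -A app inspect conf --timeout 10 | grep -i result_expires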
I found Flower. It is installed and started with:
pip install flower
flower -A celery-app-name --port=5555
Celery can then be accessed via Flower's REST API. The following will give the workers' config:
curl -w "\n" http://localhost:5555/api/workers
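The response is JSON, so it can be piped through a pretty-printer to make the worker config easier to read, e.g. (assuming Python is available on the host):
curl -s http://localhost:5555/api/workers | python -m json.tool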

Celery keeps processes open

I have a problem with Celery. I have set Celery to run daemonized. This is the command:
/home/user/sitios/incidencias/env/bin/python3 -m celery worker --concurrency=4 --time-limit=200 --app=incidencias --loglevel=INFO --logfile=/var/log/celery/worker1%I.log --pidfile …
The problem is that Celery runs too many processes and eventually brings the system down.
Here is an htop snapshot:
I have set a time limit of 200, but it seems to be ignored. How can I prevent this? I'm a noob with Celery.
Note: I have tasks calling tasks.
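As an aside worth checking: --time-limit only caps how long a single task may run; the size of the worker pool comes from --concurrency. One way to see what the running worker's pool actually looks like (app name taken from the command above, adjust if yours differs):
celery --app=incidencias inspect stats --timeout 10
The pool section of the output lists the max concurrency and the pids of the child processes.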

Limit number of processes in Celery with supervisor

I'm running Celery in a small instance in AWS Elastic Beanstalk.
However, when I do top, I see there are 3 celery processes running. I want to have only one.
I'm running this using supervisor and in my config file I have (only showing relevant lines):
[program:celeryd]
directory=/opt/python/current/app/src
command=/opt/python/run/venv/bin/celery worker -A ...
user=celery
numprocs=1
killasgroup=true
I've also followed the suggestion in this answer and created a file /etc/default/celeryd with this content:
# Extra arguments to celeryd
CELERYD_OPTS="--concurrency=1"
After restarting Celery (with supervisorctl -c config-file-path.conf restart celeryd), I see the 3 processes again. Any ideas? Thanks!
You are starting the worker with the celery command, so changing /etc/default/celeryd won't have any effect on it. Moreover, celeryd is deprecated.
When a worker is started, Celery launches a parent process plus n (= concurrency) child processes.
You can start the worker with:
[program:celery]
command=/opt/python/run/venv/bin/celery worker -c 1 -A foo
This will start a worker with a concurrency of 1, so there will be 2 processes.
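A quick way to confirm this after restarting (plain ps, nothing supervisor-specific) is to count the worker processes; with -c 1 you should see exactly two, the parent and one pool child:
ps -ef | grep "[c]elery worker"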

Celery multi doesn't start workers

I'm trying to start multiple workers on my server with the command from the Celery docs: celery multi start Leslie -E.
But it only shows:
celery multi v3.1.17 (Cipater)
> Starting nodes...
> Leslie@test: OK
and exits.
And there are no workers in output of ps aux | grep celery.
I also tried to start it on my local machine and it works fine; I see 5 workers as expected.
So, what is the reason?
I had unsatisfactory results with the celery multi command. I think supervisord works a lot better. You can find an example supervisord config file here.
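For illustration, a minimal supervisord program section for a single worker might look like this (the virtualenv path, project name and log paths are placeholders, not taken from the question):

[program:celery]
command=/path/to/venv/bin/celery worker -A proj --loglevel=INFO --concurrency=1
directory=/path/to/proj
user=celery
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker-error.log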

Can't kill celery processes started by Supervisor

I am running a VPS on Digital Ocean with Ubuntu 14.04.
I set up Supervisor to run a bash script that exports environment vars and then starts Celery:
#!/bin/bash
DJANGODIR=/webapps/myproj/myproj
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export REDIS_URL="redis://localhost:6379"
...
celery -A connectshare worker --loglevel=info --concurrency=1
Now I've noticed that Supervisor does not seem to be killing these processes when I do supervisorctl stop. Furthermore, when I try to kill the processes manually, they won't stop. How can I set up a better script for Supervisor, and how can I kill the processes that are running?
You should configure the stopasgroup=true option in the supervisord.conf file, so that not only the parent process but also the child processes are killed.
Sending kill -9 should kill the process. If supervisorctl stop doesn't stop your process, you can try setting stopsignal to one of the other values, for example QUIT or KILL.
You can see more in the supervisord documentation.
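Put together, a sketch of the relevant supervisord section for this setup might be (the script path and program name are guesses based on the wrapper script above, not your actual config):

[program:connectshare-celery]
command=/webapps/myproj/start_celery.sh
directory=/webapps/myproj/myproj
stopasgroup=true
killasgroup=true
stopsignal=QUIT
stopwaitsecs=60

Another common fix is to exec the final celery command in the wrapper script, so the signal from Supervisor reaches Celery directly instead of the bash parent.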