Can't kill celery processes started by Supervisor - celery

I am running a VPS on Digital Ocean with Ubuntu 14.04.
I set up supervisor to run a bash script that exports environment vars and then starts celery:
#!/bin/bash
DJANGODIR=/webapps/myproj/myproj
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export REDIS_URL="redis://localhost:6379"
...
celery -A connectshare worker --loglevel=info --concurrency=1
Now I've noticed that supervisor does not seem to be killing these processes when I do supervisorctl stop. Furthermore, when I try to manually kill the processes they won't stop. How can I set up a better script for supervisor and how can I kill the processes that are running?

You should configure the stopasgroup=true option in your supervisord.conf file, so that stopping the program kills not only the parent process but also its child processes.

Sending kill -9 should kill the process. If supervisorctl stop doesn't stop your process, you can try setting stopsignal to one of the other values, for example QUIT or KILL.
You can read more in the supervisord documentation.
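As a sketch, the relevant program section could look like the following (the program name and the celery binary path are inferred from the question's script, which activates ../bin/activate from /webapps/myproj/myproj, so they may need adjusting):

```
[program:celery]
directory=/webapps/myproj/myproj
command=/webapps/myproj/bin/celery -A connectshare worker --loglevel=info --concurrency=1
; send the stop signal (and SIGKILL, if it comes to that)
; to the whole process group, so child processes die too
stopasgroup=true
killasgroup=true
```

For workers that are already orphaned, you can kill the whole process group by hand: find the group with `ps -o pgid= -p <worker-pid>`, then run `kill -9 -- -<PGID>` (the `--` tells kill the negative argument is a process group ID).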

Related

Why does `killall node` executed on SSH terminal require a restart of the VS Code instance running on Windows - Is there a better option?

I'm running VS Code on Windows, and SSH into a Ubuntu machine.
Executing killall node and sending the command to the remote machine causes the local VS Code instance to require restart - Presumably to rebind the SSH connection locally(?).
This is bad for workflow.
Is there a better way to kill all node processes on the remote machine without forcing VS Code to reconnect?
lsof -i -P -n | grep LISTEN reveals that we might be able to get away with just killing IPv6-bound processes - Can these be targeted as a group (something like killall node ipv6)?
A note that killall node is the only way to ensure the node process is killed and a port conflict doesn't arise. Every other conceivable method (kill -9 on the process, etc., both on the command line and in the code base through SIGINT) has been tried.
One suggestion is to try the pkill command, and to learn about pgrep, on the remote machine.
pkill -9 -f node
Also suggesting to write a bash script:
nodes_killer
#!/bin/bash
killall node
Then give nodes_killer execution permissions:
chmod a+x nodes_killer
Then try to call nodes_killer remotely. This might shield your VS Code instance from the killall command.
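Another way to sketch this - assuming the VS Code Remote server's node processes can be recognized by "vscode-server" in their command line, which is a guess you should verify with ps on your own machine - is to filter the process list before killing, so VS Code's own server survives:

```shell
#!/bin/bash
# Read lines of "PID CMDLINE" and print the PIDs whose command line
# does NOT mention vscode-server.
filter_node_pids() {
    awk '$0 !~ /vscode-server/ { print $1 }'
}

# Usage on the remote machine (uncomment the last step to actually kill):
# ps -C node -o pid=,args= | filter_node_pids  # dry run: list targets
# ps -C node -o pid=,args= | filter_node_pids | xargs -r kill -9
```

Doing a dry run first (just listing the PIDs) is safer than killing blindly, since the marker string is an assumption.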

Limit number of processes in Celery with supervisor

I'm running Celery in a small instance in AWS Elastic Beanstalk.
However, when I do top, I see there are 3 celery processes running. I want to have only one.
I'm running this using supervisor and in my config file I have (only showing relevant lines):
[program:celeryd]
directory=/opt/python/current/app/src
command=/opt/python/run/venv/bin/celery worker -A ...
user=celery
numprocs=1
killasgroup=true
I've also followed the suggestion in this answer and created a file /etc/default/celeryd with this content:
# Extra arguments to celeryd
CELERYD_OPTS="--concurrency=1"
After restarting Celery (with supervisorctl -c config-file-path.conf restart celeryd), I see the 3 processes again. Any ideas? Thanks!
You are starting the worker with the celery command, so changing /etc/default/celeryd won't have any effect on it. Moreover, celeryd is deprecated.
When a worker is started, celery launches a parent process and n (the concurrency value) child processes.
You can start the worker with
[program:celery]
command=/opt/python/run/venv/bin/celery worker -c 1 -A foo
This will start a worker with concurrency of 1 and there will be 2 processes.
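Putting it together, a minimal corrected supervisor section might look like this (paths are taken from the question; foo stands in for the application name elided there):

```
[program:celery]
directory=/opt/python/current/app/src
command=/opt/python/run/venv/bin/celery worker -c 1 -A foo
user=celery
numprocs=1
killasgroup=true
```

With -c 1 you should then see exactly two celery processes in top: the parent and one pool worker.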

supervisord: How to stop supervisord on PROCESS_STATE_FATAL

I'm using supervisord to manage multiple processes in a docker container.
However, one process is always the 'master', and the others are monitoring and reporting processes.
What I want to do is kill supervisord if the master process fails to start after startretries.
What I tried to do is use eventlistener to kill the process:
[eventlistener:master]
events=PROCESS_STATE_FAIL
command=supervisorctl stop all
But I don't think the events subsystem is this sophisticated. I think I need to actually write an event listener to handle the events.
Is that correct? Is there a simpler way to kill the entire supervisord if one of the processes dies?
Thanks
Another try:
[eventlistener:quit_on_failure]
events=PROCESS_STATE_FATAL
command=sh -c 'echo "READY"; while read -r line; do echo "$line"; supervisorctl shutdown; done'
Especially for docker containers, it would literally be a killer feature to have a simple, straightforward shutdown on errors. The container should go down when its processes die.
Answered by:
supervisord event listener
The command parameter MUST be an event handler; it can't be an arbitrary command.
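Concretely, a handler has to speak supervisor's event listener protocol: print READY, read the event header line, consume the number of payload bytes given by its len token, and answer "RESULT 2\nOK". A minimal shell sketch of such a handler (it assumes len: is the header's last token, which matches the documented header format):

```shell
#!/bin/bash
# Minimal supervisord event listener: shuts supervisord down as soon
# as an event (e.g. PROCESS_STATE_FATAL) arrives.

# Extract the payload length from a header line such as:
# "ver:3.0 server:supervisor serial:21 pool:q poolserial:10 eventname:PROCESS_STATE_FATAL len:71"
event_len() {
    echo "${1##*len:}"
}

listen() {
    while printf 'READY\n' && read -r header; do
        head -c "$(event_len "$header")" > /dev/null  # discard the payload
        supervisorctl shutdown
        printf 'RESULT 2\nOK'
    done
}

# Only start the loop when invoked with "run", so the helper above can
# be sourced and tested independently.
if [ "${1:-}" = "run" ]; then listen; fi
```

It would be wired up in the config as `command=/path/to/this-script.sh run` under the [eventlistener:quit_on_failure] section, with events=PROCESS_STATE_FATAL.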

celery stdout/stderr logging while running under supervisor

I'm running celery worker with some concurrency level (e.g. 4) under supervisord:
[program:wgusf-wotwgs1.celery]
command=/home/httpd/wgusf-wotwgs1/app/bin/celery -A roles.frontend worker -c 4 -l info
directory=/home/httpd/wgusf-wotwgs1/app/src
numprocs=1
stdout_logfile=/home/httpd/wgusf-wotwgs1/logs/supervisor_celery.log
stderr_logfile=/home/httpd/wgusf-wotwgs1/logs/supervisor_celery.log
autostart=true
autorestart=true
startsecs=3
killasgroup=true
stopsignal=QUIT
user=wgusf-wotwgs1
The problem: some of the worker's stdout messages (about receiving and successfully executing tasks) are missing from the logfile. But when running the celery worker with the same concurrency level from a shell, everything seems fine and messages steadily appear for all tasks.
Any ideas how to fix this behavior?
I think it's because, by default, celery logs to stderr instead of stdout.
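If that's the case, one thing worth trying (a sketch based on supervisor's standard options, not a verified fix for this exact setup) is to stop pointing stdout_logfile and stderr_logfile at the same file, and instead merge stderr into stdout so a single writer owns the logfile:

```
[program:wgusf-wotwgs1.celery]
command=/home/httpd/wgusf-wotwgs1/app/bin/celery -A roles.frontend worker -c 4 -l info
directory=/home/httpd/wgusf-wotwgs1/app/src
; merge the worker's stderr into its stdout and log to one file,
; instead of having two logfile writers race on the same path
redirect_stderr=true
stdout_logfile=/home/httpd/wgusf-wotwgs1/logs/supervisor_celery.log
```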

Supervisord can't stop celery, how to do the same using Monit

I can't stop my celery worker using Supervisord, in the config file, it looks like this:
command=/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO
and when I try to stop it using the following command:
sudo service supervisord stop
It shows that the worker has stopped, while in fact it has not.
One more problem: when you restart a program outside supervisord's scope, supervisord totally loses control over that program, because of the parent-child relationship between supervisord and its child processes.
My question is: how to run celery workers using Monit?
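A Monit sketch for this (the pidfile location is an assumption, and whether manage.py celery accepts --detach and --pidfile depends on your celery/django-celery versions, so treat every path and flag here as something to verify):

```
check process celery_worker with pidfile /var/run/celery/worker.pid
  start program = "/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO --detach --pidfile=/var/run/celery/worker.pid"
  stop program = "/bin/sh -c 'kill -TERM $(cat /var/run/celery/worker.pid)'"
```

The pidfile is what lets Monit track the right process across restarts, so the worker must be started in a way that actually writes it.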