Sending a general signal to a child process of supervisord

I am using supervisord to manage a bunch of processes. Is it possible to use supervisorctl to send arbitrary signals to these processes without actually stopping them and setting stopsignal?

Until 3.2.0 (released November 2015), supervisorctl had no support for sending arbitrary signals to the processes it manages.
From 3.2.0 onwards, use supervisorctl signal:
signal <signal name> <name> Signal a process
signal <signal name> <gname>:* Signal all processes in a group
signal <signal name> <name> <name> Signal multiple processes or groups
signal <signal name> all Signal all processes
so
supervisorctl signal HUP all
would send SIGHUP to all processes managed by supervisor.
Before 3.2.0, you could instead use supervisorctl status to list the pids of the managed processes, then use kill to send signals to those pids. With a little sed magic, you can even extract those pids in a form acceptable as input to the kill command:
kill -HUP `bin/supervisorctl status | sed -n '/RUNNING/s/.*pid \([[:digit:]]\+\).*/\1/p'`
would also send SIGHUP to all active processes under supervisord control.
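If you would rather not parse shell output, the same pre-3.2.0 workaround can be scripted against supervisord's XML-RPC interface. This is only a minimal sketch, assuming an inet_http_server is enabled on localhost:9001 without authentication (adjust the URL and credentials to match your supervisord.conf):
import os
import signal
import xmlrpc.client

# Connect to supervisord's XML-RPC endpoint (URL is an assumption; see the
# [inet_http_server] section in supervisord.conf for the actual host/port).
server = xmlrpc.client.ServerProxy("http://localhost:9001/RPC2")

# getAllProcessInfo() returns one dict per managed process, including
# its pid and state name.
for proc in server.supervisor.getAllProcessInfo():
    if proc["statename"] == "RUNNING":
        os.kill(proc["pid"], signal.SIGHUP)  # send SIGHUP to the child
Run it as the same user that owns the child processes (or as root) so os.kill is permitted.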

As of 3.2.0, you CAN now send arbitrary signals to processes!
$ supervisord --version
3.2.0
$ supervisorctl signal help
Error: signal requires a signal name and a process name
signal <signal name> <name> Signal a process
signal <signal name> <gname>:* Signal all processes in a group
signal <signal name> <name> <name> Signal multiple processes or groups
signal <signal name> all Signal all processes
$ supervisorctl signal HUP gateway
gateway: signalled

There is a third-party plugin for supervisor called mr.laforge which
Lets you easily make sure that supervisord and specific processes controlled by it are running from within shell and Python scripts. Also adds a kill command to supervisor that makes it possible to send arbitrary signals to child processes.


Dash Celery setup

I have a docker-compose setup for my Dash application. I need suggestions on the preferred way to set up my Celery image.
I am using Celery for the following use cases, and these are cancellable/abortable/revocable tasks:
Upload file
Model training
Create train, test set
Case-1: Create one service named celery,
command: ["celery", "-A", "tasks", "worker", "--loglevel=INFO", "--pool=prefork", "--concurrency=3", "--statedb=/celery/worker.state"]
So here we are using the default queue, a single (main) worker, and 3 child worker processes (i.e. it can execute 3 tasks simultaneously).
Now, if I revoke any task, will it kill the main worker or just the child worker process executing that task?
Case-2: Create three services named celery-{task_name}, i.e. celery-upload etc.,
command: ["celery", "-A", "tasks", "worker", "--loglevel=INFO", "--pool=prefork", "--concurrency=1", "--statedb=/celery/worker.state", "--queues=upload_queue", "--hostname=celery_worker_upload_queue"]
So here we are using a custom queue, a single (main) worker, and 1 child worker process (i.e. it can execute 1 task) in its container. This way there is one service for each task.
Now, if I revoke any task, will it kill only the main worker, or just the single child worker process executing that task in its container, while the rest of the Celery containers stay alive?
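For reference, per-task queues like these can also be declared on the app side via task_routes. A minimal sketch, assuming placeholder task names such as tasks.upload_file (each dedicated worker then consumes only its queue via --queues=<name>):
from celery import Celery

# Broker URL and task names below are illustrative placeholders.
celery_app = Celery("tasks", broker="redis://localhost:6379/0")
celery_app.conf.task_routes = {
    "tasks.upload_file": {"queue": "upload_queue"},
    "tasks.model_training": {"queue": "model_queue"},
    "tasks.create_train_test": {"queue": "split_queue"},
}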
I tried using the signals below with task.revoke(terminate=True):
SIGKILL and SIGTERM
In this case, I observed that both @worker_process_shutdown.connect and @task_revoked.connect get fired.
Does this mean the main worker and the child worker process for which the revoke command was issued (or all child processes, since the main worker is down) are down?
SIGUSR1
In this case, I observed that only @task_revoked.connect gets fired.
Does this mean the main worker is still running/alive and only the child worker process for which the revoke command was issued is down?
Which case is preferred?
Is it possible to combine both cases? I.e. having a single Celery service with individual (main) workers, individual child worker processes and individual queues, or
having a single Celery service with a single (main) worker, individual/dedicated child worker processes and individual queues for the respective tasks?
One more doubt: as I understand it, Celery is required for the tasks listed above. Now say I have a button for cleaning a dataframe; will this too require Celery?
I.e. wherever I am dealing with dataframes, do I need to use Celery?
Please suggest.
UPDATE-2
worker processes = child-worker-process
This is how I am using it, as below:
import signal
from celery import states
from celery.contrib.abortable import AbortableTask
from celery.exceptions import Ignore, SoftTimeLimitExceeded
from celery.result import result_from_tuple
# Start button
result = background_task_job_one.apply_async(args=(n_clicks,), queue="upload_queue")
# Cancel button
result = result_from_tuple(data, app=celery_app)
result.revoke(terminate=True, signal=signal.SIGUSR1)
# Task
@celery_app.task(bind=True, name="job_one", base=AbortableTask)
def background_task_job_one(self, n_clicks):
    msg = "Aborted"
    status = False
    try:
        msg = job(n_clicks)  # Long running task
        status = True
    except SoftTimeLimitExceeded:
        self.update_state(task_id=self.request.id, state=states.REVOKED)
        msg = "Aborted"
        status = True
        raise Ignore()
    finally:
        print("FINALLY")
    return status, msg
Is this an OK way to handle cancellation of a running task? Can you elaborate on/explain this line: [In practice you should not send signals directly to worker processes.]
Just for clarification on the line [In prefork concurrency (the default) you will always have at least two processes running - Celery worker (coordinator) and one or more Celery worker-processes (workers)]:
does this mean
celery -A app worker -P prefork -> 1 main worker and 1 child-worker-process? And is it the same as below?
celery -A app worker -P prefork -c 1 -> 1 main worker and 1 child-worker-process
Earlier, I tried using the class AbortableTask and calling abort(); it was successfully updating the state and status to ABORTED, but the task was still alive/running.
I read that to terminate a currently executing task, it is a must to pass terminate=True.
This is working: the task stops executing, and I need to update the task state and status manually to REVOKED (otherwise it stays at the default PENDING). The only hard decision to make is whether to use SIGKILL, SIGTERM or SIGUSR1. I found that with SIGUSR1 the main worker process stays alive and only the child worker process executing that task is revoked.
Also, luckily I found this link: I can set up a single Celery service with multiple dedicated child-worker-processes, each with its dedicated queue.
Case-3: Celery multi
command: ["celery", "multi", "show", "start", "default", "model", "upload", "-c", "1", "-l", "INFO", "-Q:default", "default_queue", "-Q:model", "model_queue", "-Q:upload", "upload_queue", "-A", "tasks", "-P", "prefork", "-p", "/proj/external/celery/%n.pid", "-f", "/proj/external/celery/%n%I.log", "-S", "/proj/external/celery/worker.state"]
But I am getting an error:
celery service exited code 0
command: bash -c "celery multi start default model upload -c 1 -l INFO -Q:default default_queue -Q:model model_queue -Q:upload upload_queue -A tasks -P prefork -p /proj/external/celery/%n.pid -f /proj/external/celery/%n%I.log -S /proj/external/celery/worker.state"
Here I am also getting an error:
celery | Usage: python -m celery worker [OPTIONS]
celery | Try 'python -m celery worker --help' for help.
celery | Error: No such option: -p
celery | * Child terminated with exit code 2
celery | FAILED
Some doubts: what is preferred, 1 worker or multiple workers?
With multiple workers and dedicated queues, creating a Docker service for each task increases the size of the docker-compose file and the number of services. So I am trying a single Celery service with multiple dedicated child-worker-processes with their dedicated queues, which makes it easy to abort/revoke/cancel a task.
But I am getting an error with Case-3, i.e. celery multi.
Please suggest.
If you revoke a task, it may terminate the worker process that was executing the task. The Celery worker (coordinator) will continue working, as it needs to coordinate the other worker processes. If the life of the container is tied to the Celery worker, then the container will continue running.
In practice you should not send signals directly to worker processes.
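For example, rather than signalling a pool process yourself, revoke by task id and let the coordinator deliver the signal to whichever child is running the task. A minimal sketch, with a placeholder app and task id:
from celery import Celery

# Broker URL and task id are placeholders for illustration only.
celery_app = Celery("tasks", broker="redis://localhost:6379/0")

celery_app.control.revoke(
    "some-task-id",    # the id returned by apply_async()
    terminate=True,    # also terminate the child process running the task
    signal="SIGUSR1",  # delivered to the child; with prefork this typically
                       # surfaces as SoftTimeLimitExceeded inside the task
)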
In prefork concurrency (the default) you will always have at least two processes running - Celery worker (coordinator) and one or more Celery worker-processes (workers).
To answer the last question we may need more details. It would be easiest if you could run a Celery task when all dataframes are available. If that is not the case, then perhaps run individual tasks to process the dataframes. It is worth having a look at Celery workflows to see if you can build a chunked workflow. Keep it simple: start with the assumption that you have all dataframes available at once, and build from there.
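As an illustration of the chunked idea, here is a minimal sketch with a placeholder clean_frame task; Celery's chunks primitive groups many small dataframe jobs into a handful of tasks instead of sending one message per dataframe:
from celery import Celery

# App, broker and task below are placeholders used purely for illustration.
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def clean_frame(frame_id):
    # clean the single dataframe identified by frame_id
    return frame_id

# 100 frame ids split into chunks of 10; each chunk executes as one task.
clean_frame.chunks(zip(range(100)), 10).group().apply_async(queue="default_queue")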

Supervisor kills Prefect agent with SIGTERM unexpectedly

I'm using a Raspberry Pi 4, v10 (buster).
I installed supervisor per the instructions here: http://supervisord.org/installing.html
Except I changed "pip" to "pip3" because I want to monitor running things that use the python3 kernel.
I'm using Prefect, and the supervisord.conf is running the program with command=/home/pi/.local/bin/prefect "agent local start" (I tried this with and without double quotes)
Looking at the supervisord.log file it seems like the Prefect agent does start; I see the ASCII art that normally shows up when I start it from the command line. But then it shows it was terminated by SIGTERM: not expected, WARN received SIGTERM indicating exit request.
I saw this post: Supervisor gets a SIGTERM for some reason, quits and stops all its processes but I don't even have that 10Periodic file it references.
Anyone know why/how Supervisor processes are getting killed by sigterm?
It could be that your process exits immediately because you don’t have an API key in your command and this is required to connect your agent to the Prefect Cloud API. Additionally, it’s a best practice to always assign a unique label to your agents, below is an example with “raspberry” as a label.
You can also check the logs/status:
supervisorctl status
Here is a command you can try; you can also specify a directory in your supervisor config (not sure whether environment variables are needed, but I saw them used by another Raspberry Pi supervisor user):
[program:prefect-agent]
command=prefect agent local start -l raspberry -k YOUR_API_KEY --no-hostname-label
directory=/home/pi/.local/bin/prefect
user=pi
environment=HOME="/home/pi/.local/bin/prefect",USER="pi"

Slurm: How to restart failed worker job

If one is running an array job on a slurm cluster, how can one restart a failed worker job?
In a Sun Grid Engine queue, one can add #$ -r y to the job file to indicate the job should be restarted if it fails--what is the Slurm equivalent of this flag?
You can use --requeue
#SBATCH --requeue ### On failure, requeue for another try
--requeue
Specifies that the batch job should be eligible for requeuing. The job may be requeued explicitly by a system administrator, after node failure, or upon preemption by a higher priority job. When a job is requeued, the batch script is initiated from its beginning. Also see the --no-requeue option. The JobRequeue configuration parameter controls the default behavior on the cluster.
See more here: https://slurm.schedmd.com/sbatch.html#lbAE

Upstart server killed using -9 or -15 but child processes are still alive

The Upstart service is responsible for creating Gearman workers which run in parallel, as many as there are CPUs, with the help of GNU Parallel. To understand the problem you can read my Stack Overflow post, which describes how to run workers in parallel:
Fork processes indefinetly using gnu-parallel which catch individual exit errors and respawn
Upstart service: workon.conf
# workon
description "worker load"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
exec seq 1000000 | parallel -N0 --joblog out.log ./worker
end script
Alright, so the above service is started:
$ sudo service workon start
workon start/running, process 4620
4620 is the process id of service workon.
4 workers will be spawned, one per CPU core. For example:
___________________
Name | PID
worker 1011
worker 1012
worker 1013
worker 1014
perl 1000
perl is the process which is running gnu-parallel.
And, gnu-parallel is responsible for running parallel worker processes.
Now, the problem is this.
If I kill the workon service:
$ sudo kill 4620
The service has an instruction to respawn if killed, so it restarts. But the processes created by the service are not killed, which means it creates a new set of processes. Now we have 2 perl and 8 workers.
Name | PID
worker 1011
worker 1012
worker 1013
worker 1014
worker 2011
worker 2012
worker 2013
worker 2014
perl 1000
perl 2000
If you ask me whether the old processes abandoned by the service are zombies,
well, the answer is no. They are alive, because I tested them. Every time the service dies it creates a new set.
Well, this is one problem. Another problem is with GNU Parallel.
Let's say I started the service fresh and it is running fine.
I ran this command to kill GNU Parallel, i.e. perl:
$ sudo kill 1000
This doesn't kill the workers, and they are again left without any parent. But the workon service intercepts the death of perl and respawns a new set of workers. This time we have 1 perl and 8 workers. All 8 workers are alive, 4 of them with a parent and 4 orphaned.
Now, how do I solve this problem? I want to kill all processes created by the service whenever it crashes.
Well, I was able to solve this issue with post-stop. It is an event listener, I believe, which executes after a service ends. In my case, if I run kill -9 <pid> (pid of the service), the post-stop block is executed after the service process is killed. So I can write the necessary code there to remove all the processes spawned by the service.
Here is my code using post-stop:
post-stop script
exec killall php & killall perl
end script

Using Celery queues with multiple apps

How do you use a Celery queue with the same name for multiple apps?
I have an application with N client databases, which all require Celery task processing on a specific queue M.
For each client database, I have a separate celery worker that I launch like:
celery worker -A client1 -n client1#%h -P solo -Q long
celery worker -A client2 -n client2#%h -P solo -Q long
celery worker -A client3 -n client3#%h -P solo -Q long
When I ran all the workers at once and tried to kick off a task to client1, I found it never seemed to execute. Then I killed all workers except for the first, and now the first worker receives and executes the task. It turned out that even though each worker's app used a different BROKER_URL, using the same queue caused them to steal each other's tasks.
This surprised me, because if I don't specify -Q, meaning Celery pulls from the "default" queue, this doesn't happen.
How do I prevent this with my custom queue? Is the only solution to include a client ID in the queue name? Or is there a more "proper" solution?
For multiple applications I use different Redis databases, like:
redis://localhost:6379/0
redis://localhost:6379/1
etc.
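A minimal sketch of what that can look like per app, also giving each client its own queue name so workers cannot consume each other's tasks even if a broker is ever shared (app and queue names are illustrative):
from celery import Celery

# One app per client, each pointed at its own Redis database and given a
# client-prefixed default queue (task_default_queue is the new-style name
# for CELERY_DEFAULT_QUEUE).
client1 = Celery("client1", broker="redis://localhost:6379/0")
client1.conf.task_default_queue = "client1_long"

client2 = Celery("client2", broker="redis://localhost:6379/1")
client2.conf.task_default_queue = "client2_long"
Each worker is then started against its own queue, e.g. celery worker -A client1 -Q client1_long, so the queue name itself carries the client ID.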