Making the right worker execute a task via send_task - celery

How do I make Celery send a task to the right worker when using send_task?
For instance, given the following services:
service_add.py
from celery import Celery

celery = Celery('service_add', backend='redis://localhost', broker='pyamqp://')

@celery.task
def add(x, y):
    return x + y

service_sub.py
from celery import Celery

celery = Celery('service_sub', backend='redis://localhost', broker='pyamqp://')  # Redis backend, RabbitMQ for messaging

@celery.task
def sub(x, y):
    return x - y
the following code is guaranteed to fail:
main.py
from celery.execute import send_task
result1 = send_task('service_sub.sub',(1,1)).get()
result2 = send_task('service_sub.sub',(1,1)).get()
It fails with the exception celery.exceptions.NotRegistered: 'service_sub.sub', because Celery dispatches the tasks to the workers in a round-robin fashion, even though service_sub is registered with only one of them.
For the question to be complete, here's how I ran the processes and the config file:
celery -A service_sub worker --loglevel=INFO --pool=solo -n worker1
celery -A service_add worker --loglevel=INFO --pool=solo -n worker2
celeryconfig.py
## Broker settings.
broker_url = 'pyamqp://'
# List of modules to import when the Celery worker starts.
imports = ('service_add.py','service_sub.py')

If you're using two different apps, service_add / service_sub, only to route tasks to dedicated workers, I would like to suggest another solution. If that's not the case and you still need two (or more) apps, I would suggest better encapsulating each broker and backend, e.g. amqp://localhost:5672/add_vhost and redis://localhost/1. Having a dedicated vhost in RabbitMQ and a dedicated database id (1) in Redis might do the trick.
Having said that, I think the right solution in such a case is to use the same Celery application (not split it into two applications) and use a router:
task_routes = {'tasks.service_add': {'queue': 'add'}, 'tasks.service_sub': {'queue': 'sub'}}
add it to the configuration:
app.conf.task_routes = task_routes
and run each worker with -Q (which queue to read messages from):
celery -A shared_app worker --loglevel=INFO --pool=solo -n worker1 -Q add
celery -A shared_app worker --loglevel=INFO --pool=solo -n worker2 -Q sub
Note that this approach has additional benefits, for example if you want to have dependencies between tasks (canvas).
There are more ways to define routes; you can read more about them here.
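For illustration, here is a minimal sketch of such a single shared app (the module and task names are assumptions, not taken from the question):
# shared_app.py - a sketch, assuming one app whose tasks are routed to per-task queues.
from celery import Celery

app = Celery('shared_app', backend='redis://localhost', broker='pyamqp://')

# Each task name maps to its own queue, so a worker started with -Q add
# (or -Q sub) only ever receives the matching tasks.
app.conf.task_routes = {
    'shared_app.add': {'queue': 'add'},
    'shared_app.sub': {'queue': 'sub'},
}

@app.task
def add(x, y):
    return x + y

@app.task
def sub(x, y):
    return x - y

A producer can then call send_task('shared_app.sub', (1, 1)) and the message is routed to the sub queue, which only the worker started with -Q sub consumes.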

Related

Dash Celery setup

I have a docker-compose setup for my Dash application. I need a suggestion for, or the preferred way of, setting up my celery image.
I am using celery for the following use-cases, and these are cancellable/abortable/revokable tasks:
Upload file
Model training
Create train, test set
Case-1. Create one service as celery,
command: ["celery", "-A", "tasks", "worker", "--loglevel=INFO", "--pool=prefork", "--concurrency=3", "--statedb=/celery/worker.state"]
So, here we are using the default queue, a single worker (main) and 3 child/worker processes (i.e. it can execute 3 tasks simultaneously).
Now, if I revoke any task, will it kill the main worker or just the child worker process executing that task?
Case-2. Create three services as celery-{task_name}, i.e. celery-upload etc.,
command: ["celery", "-A", "tasks", "worker", "--loglevel=INFO", "--pool=prefork", "--concurrency=1", "--statedb=/celery/worker.state", "--queues=upload_queue", "--hostname=celery_worker_upload_queue"]
So, here we are using a custom queue, a single worker (main) and 1 child/worker process (i.e. it can execute 1 task) in its container. This way there is one service for each task.
Now, if I revoke any task, will it kill only the main worker, or just the child worker process executing that task in its container, while the rest of the celery containers stay alive?
I tried the signals below with the command task.revoke(terminate=True):
SIGKILL and SIGTERM
In this case, I observed that both @worker_process_shutdown.connect and @task_revoked.connect get fired.
Does this mean the main worker and the concerned child worker process for which the revoke command was issued (or all child processes, since the main worker is down) are down?
SIGUSR1
In this case, I observed that only @task_revoked.connect gets fired.
Does this mean the main worker is still running/alive and only the concerned child worker process for which the revoke command was issued is down?
Which case is preferred?
Is it possible to combine both cases? i.e. having a single celery service with individual workers (main), individual child worker processes and individual queues, or
having a single celery service with a single worker (main), individual/dedicated child worker processes and individual queues for the respective tasks?
One more doubt: as I understand it, celery is required for the tasks listed above; now say I have a button for cleaning a dataframe, will this too require celery?
i.e. wherever I am dealing with dataframes, should I be using celery?
Please suggest.
UPDATE-2
worker processes = child-worker-process
This is how I am using it, as below:
import signal

from celery import states
from celery.contrib.abortable import AbortableTask
from celery.exceptions import Ignore, SoftTimeLimitExceeded
from celery.result import result_from_tuple

# Start button
result = background_task_job_one.apply_async(args=(n_clicks,), queue="upload_queue")

# Cancel button
result = result_from_tuple(data, app=celery_app)
result.revoke(terminate=True, signal=signal.SIGUSR1)

# Task
@celery_app.task(bind=True, name="job_one", base=AbortableTask)
def background_task_job_one(self, n_clicks):
    msg = "Aborted"
    status = False
    try:
        msg = job(n_clicks)  # Long running task
        status = True
    except SoftTimeLimitExceeded as e:
        self.update_state(task_id=self.request.id, state=states.REVOKED)
        msg = "Aborted"
        status = True
        raise Ignore()
    finally:
        print("FINALLY")
    return status, msg
Is this a good way to handle cancellation of a running task? Also, can you elaborate on/explain this line: [In practice you should not send signals directly to worker processes.]
Just for clarification on this line: [In prefork concurrency (the default) you will always have at least two processes running - Celery worker (coordinator) and one or more Celery worker-processes (workers)]
This means
celery -A app worker -P prefork -> 1 main worker and 1 child-worker-process. Is it the same as below?
celery -A app worker -P prefork -c 1 -> 1 main worker and 1 child-worker-process
Earlier, I tried using the AbortableTask class and calling abort(); it successfully updated the state and status to ABORTED, but the task was still alive/running.
I read that to terminate the currently executing task, it is necessary to pass terminate=True.
This is working: the task stops executing, and I need to update the task state and status manually to REVOKED (otherwise it stays at the default PENDING). The only hard decision to make is whether to use SIGKILL, SIGTERM or SIGUSR1. I found that with SIGUSR1 the main worker process stays alive and only the child worker process executing that task is revoked.
Also, luckily I found this link; I can set up a single celery service with multiple dedicated child-worker-processes, each with its dedicated queue.
Case-3: Celery multi
command: ["celery", "multi", "show", "start", "default", "model", "upload", "-c", "1", "-l", "INFO", "-Q:default", "default_queue", "-Q:model", "model_queue", "-Q:upload", "upload_queue", "-A", "tasks", "-P", "prefork", "-p", "/proj/external/celery/%n.pid", "-f", "/proj/external/celery/%n%I.log", "-S", "/proj/external/celery/worker.state"]
But I am getting an error:
celery service exited code 0
command: bash -c "celery multi start default model upload -c 1 -l INFO -Q:default default_queue -Q:model model_queue -Q:upload upload_queue -A tasks -P prefork -p /proj/external/celery/%n.pid -f /proj/external/celery/%n%I.log -S /proj/external/celery/worker.state"
Here I am also getting an error:
celery | Usage: python -m celery worker [OPTIONS]
celery | Try 'python -m celery worker --help' for help.
celery | Error: No such option: -p
celery | * Child terminated with exit code 2
celery | FAILED
Some doubts: what is preferred, 1 worker vs multiple workers?
If I use multiple workers with dedicated queues, creating a docker service for each task grows the docker-compose file and the number of services. So I am trying a single celery service with multiple dedicated child-worker-processes, each with its dedicated queue, which makes it easy to abort/revoke/cancel a task.
But I am getting an error with case-3, i.e. celery multi.
Please suggest.
If you revoke a task, it may terminate the worker process that was executing the task. The Celery worker (coordinator) will continue working, as it needs to coordinate the other worker processes. If the life of the container is tied to the Celery worker, then the container will continue running.
In practice you should not send signals directly to worker processes.
In prefork concurrency (the default) you will always have at least two processes running - Celery worker (coordinator) and one or more Celery worker-processes (workers).
To answer the last question we may need more details. It would be easiest if you could run a Celery task once all dataframes are available. If that is not the case, then perhaps run individual tasks to process the dataframes. It is worth having a look at Celery workflows to see whether you can build a chunked workflow. Keep it simple: start with the assumption that you have all dataframes available at once, and build from there.
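As a rough illustration of the chunked-workflow idea (the task and helper names below are assumptions, not from the question):
# A minimal sketch, assuming a configured app `celery_app`; `clean_frame`
# is a hypothetical task that processes a single dataframe.
@celery_app.task
def clean_frame(frame_id):
    # Placeholder: load, clean and store one dataframe.
    return frame_id

def clean_all(frame_ids, chunk_size=10):
    # chunks() batches the argument tuples into groups of `chunk_size`
    # calls, so each message carries a batch instead of a single frame.
    job = clean_frame.chunks([(fid,) for fid in frame_ids], chunk_size).group()
    return job.apply_async(queue="upload_queue")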

How Flower determines Celery workers?

Celery workers are being run like this:
celery -A backend worker --broker=$REDIS_URL
Flower:
celery -A backend flower --broker=$REDIS_URL
When another worker is started, Flower detects it. But how? Is there information about the workers stored in Redis, for example?
When Flower starts, it subscribes itself to be notified of most (if not all) task and worker events (https://docs.celeryproject.org/en/stable/userguide/monitoring.html#event-reference). When you run a new Celery worker, the moment it connects to the broker Flower will receive a worker-online event. That is how it finds out there is a "new worker in town"...
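For illustration, a minimal sketch of the same event-subscription mechanism using Celery's events API (the handler and broker URL here are assumptions):
# Listen for worker events the way a monitor such as Flower does;
# assumes a Celery app pointed at the same broker.
from celery import Celery

app = Celery('backend', broker='redis://localhost:6379/0')

def on_worker_online(event):
    # Fired when a worker connects to the broker and announces itself.
    print('worker online:', event['hostname'])

with app.connection() as connection:
    receiver = app.events.Receiver(connection, handlers={
        'worker-online': on_worker_online,
        '*': lambda event: None,  # ignore all other events
    })
    receiver.capture(limit=None, timeout=None, wakeup=True)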

Using Celery queues with multiple apps

How do you use a Celery queue with the same name for multiple apps?
I have an application with N client databases, which all require Celery task processing on a specific queue M.
For each client database, I have a separate celery worker that I launch like:
celery worker -A client1 -n client1#%h -P solo -Q long
celery worker -A client2 -n client2#%h -P solo -Q long
celery worker -A client3 -n client3#%h -P solo -Q long
When I ran all the workers at once and tried to kick off a task to client1, I found it never seemed to execute. Then I killed all the workers except the first, and now the first worker receives and executes the task. It turned out that even though each worker's app used a different BROKER_URL, using the same queue name caused them to steal each other's tasks.
This surprised me, because if I don't specify -Q, meaning Celery pulls from the "default" queue, this doesn't happen.
How do I prevent this with my custom queue? Is the only solution to include a client ID in the queue name? Or is there a more "proper" solution?
For multiple applications I use different Redis databases like
redis://localhost:6379/0
redis://localhost:6379/1
etc.
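For illustration, a sketch of that setup (the app names are placeholders, not taken from the question):
# Give each client app its own Redis database as broker (and backend),
# so identically named queues live in separate databases.
from celery import Celery

client1 = Celery('client1',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/0')
client2 = Celery('client2',
                 broker='redis://localhost:6379/1',
                 backend='redis://localhost:6379/1')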

Where is a task running with multiple nodes having python Celery installed?

If I have multiple workers running on different nodes, how can I know which worker a task is assigned to?
e.g. there are two workers, 10.0.3.101 and 10.0.3.102; a Redis backend runs on 10.0.3.100; when a task is sent to the task queue on the Redis backend, a worker picks it up and executes it. Is that worker 10.0.3.101 or 10.0.3.102?
In addition, if a worker, say 10.0.3.101, is running a task and suddenly halts, how can I learn about the failure? i.e. is there any built-in fail-over mechanism inside Celery?
Thanks.
I solved the problem by searching on Google.
The knowledge is mainly from Celery documentation.
We can get the hostname of the task-executing worker from the task context, or run a command to get the worker machine's IP. The task is defined as:
import time
import subprocess

from celery import current_task

@app.task
def report():
    id = current_task.request.id
    ip = subprocess.check_output(
        "ip addr | grep eth0 | grep inet |" +
        " cut -d t -f 2 | cut -d / -f 1", shell=True)
    # decode the bytes output before splitting
    ip = ip.decode().split('\n')[0].split(' ')[-1]
    hostname = current_task.request.hostname
    current_task.backend.store_result(
        id, result={"ip": ip, "hostname": hostname}, status="READY")
    time.sleep(100)
    return {
        "ip": ip,
        "hostname": hostname
    }
If we start a worker on a machine or in a container:
celery worker -A node.tasks --hostname="visible_hostname_in_request.hostname"
Then we can use the following lines to get the worker's IP or hostname:
# python
>>> from node.tasks import report
>>> r = report.delay()
>>> r.result
As far as I know, there is no built-in fail-over mechanism in Celery, so we need to implement it ourselves; alternatively, we can use 3rd party libs like dispy ...

cannot run celery subtasks if queues are specified

I am running celery 3.0.19 with mongodb as backend and broker. I would like to use queues in sub-tasks but it does not work. Here is how to reproduce the problem from the example add task.
Start a celery worker with the command
celery -A tasks worker --loglevel=info --queue=foo
Then create a task that never gets done, like this:
from tasks import add
sub_task = add.s(queue="foo")
sub_task_async_result = sub_task.apply_async((2,2))
Note that the following task will get executed normally:
async_result = add.apply_async((2,2), queue="foo")
What am I doing wrong?
Thanks!