End Celery worker task on time limit, iteration count, or instruction from client - celery

I'm new to celery and I would appreciate a little help with a design pattern (or example code) for a worker I have yet to write.
Below is a description of the desired characteristics of the worker.
The worker will run a task that collects data from an endless source, a generator.
The worker task will run forever feeding from the generator unless it is directed to stop.
The worker task should stop gracefully on the occurrence of any one of the following triggers.
1. It exceeds an execution time limit in seconds.
2. It exceeds a number of iterations of the endless generator loop.
3. The client sends a message instructing the worker task to finish immediately.
Below is some pseudocode for how I believe I need to handle trigger scenarios 1 and 2.
What I don't know is how I send the 'finish immediately' signal from the client and how it is received and executed in the worker task.
Any advice or sample code would be appreciated.
from celery.task import task
from celery.exceptions import SoftTimeLimitExceeded

COUNTLIMIT = ...  # some value sent to the worker task by the client

@task()
def getData():
    try:
        for count, data in enumerate(endlessGeneratorThing()):
            # process data here
            if count > COUNTLIMIT:  # Handle trigger scenario 2
                clean_up_task_nicely()
                break
    except SoftTimeLimitExceeded:  # Handle trigger scenario 1
        clean_up_task_nicely()
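For trigger scenarios 1 and 2, the client could supply both limits when submitting the task. A minimal sketch, assuming getData were changed to accept the count limit as a keyword argument (the numbers are placeholders):

result = getData.apply_async(kwargs={'count_limit': 10000},  # trigger scenario 2
                             soft_time_limit=3600)           # trigger scenario 1, in seconds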

My understanding of revoke is that it only revokes a task prior to its execution. For (3), I think what you want to do is use an AbortableTask, which provides a cooperative way to end a task:
http://docs.celeryproject.org/en/latest/reference/celery.contrib.abortable.html
On the client end you are able to call abort() on the task's result; on the task end, you are able to poll is_aborted().
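A minimal sketch of that pattern applied to the task above, assuming a Celery app named app (getData, endlessGeneratorThing and clean_up_task_nicely come from the question); note that aborting relies on the result backend:

from celery.contrib.abortable import AbortableTask, AbortableAsyncResult

@app.task(bind=True, base=AbortableTask)
def getData(self):
    for count, data in enumerate(endlessGeneratorThing()):
        if self.is_aborted():            # trigger scenario 3: the client asked us to stop
            clean_up_task_nicely()
            return
        # process data here

# client side
async_result = getData.delay()
# ... later, to tell the task to finish immediately:
AbortableAsyncResult(async_result.id).abort()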

Related

Where does a Celery chain pass its arguments?

1) Celery chain.
On the doc I read this:
Here’s a simple chain, the first task executes passing its return value to the next task in the chain, and so on.
>>> from celery import chain
>>> # 2 + 2 + 4 + 8
>>> res = chain(add.s(2, 2), add.s(4), add.s(8))()
>>> res.get()
16
But where exactly is each chain item's result passed to the next item: on the Celery server side, or is it passed back to my app, which then passes it to the next chain item?
This is important to me because my results are quite big to be passing back to the app, and I want all of this messaging to stay inside the Celery server.
2) Celery group.
>>> g = group(add.s(i) for i in xrange(10))
>>> g(10).get()
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
Can I be sure that these tasks will be executed as close together as possible? Will Celery give priority to a certain group once the first task of that group has started executing?
For example, I have 100 requests and each request runs a group of tasks, and I don't want tasks from different groups to be mixed with each other. The first request to start being processed could be the last to complete, while its last tasks wait for free workers that are busy with tasks from other requests. It seems better if the tasks of a group are executed as close together as possible.
I will really appreciate if you can help me.
1. Celery Chain
Results are passed on the Celery side using a message-passing broker such as RabbitMQ. Results are stored using the result backend (a result backend is explicitly required for chord execution). You can verify this by running your Celery worker with log level INFO and observing how the tasks are invoked.
Celery maintains a dependency graph once you invoke tasks, so it knows exactly how to chain your tasks.
Also consider callbacks, where you link two different tasks:
http://docs.celeryproject.org/en/latest/userguide/canvas.html#callbacks
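As a rough illustration (assuming the same add task as in the question and a configured result backend), the chain runs entirely on the worker side; only the results your app explicitly fetches travel back to it:

>>> from celery import chain
>>> res = chain(add.s(2, 2), add.s(4), add.s(8))()  # queued and executed by the workers
>>> res.get()                 # only the final result is fetched by the app
16
>>> res.parent.get()          # intermediate results stay in the result backend
8
>>> res.parent.parent.get()
4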
2. Celery Group
When you call tasks in a group, Celery executes (invokes) them in parallel. The Celery worker will try to pick them up depending on the workload it can handle. If you invoke more tasks than your worker can handle, it is certainly possible that your first few tasks will be executed first and the worker will pick up the rest gradually.
If you have a very large number of tasks to invoke in parallel, it is better to invoke them in chunks of a certain size, as in the sketch below.
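A minimal sketch, assuming the same add task as in the question and using Celery's chunks primitive to split 100 calls into 10 pieces of 10 tasks each:

>>> res = add.chunks(zip(range(100), range(100)), 10)()
>>> res.get()   # a list of 10 lists, each holding 10 results
[[0, 2, 4, 6, 8, 10, 12, 14, 16, 18], [20, 22, 24, ...], ...]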
You can also set a priority on tasks, as mentioned in this answer.
Completion of the tasks in a group depends on how much time each task takes; Celery tries to schedule tasks as fairly as possible.

In an IPython cluster, how can I gracefully interrupt a worker?

I want to run some jobs in a cluster, but I want to be able to kill the job if it is taking too long. Can I do this gracefully from the client, and still have the worker available to do more jobs?
My scenario is that I want to investigate how different machine learning classifiers and hyperparameters affect the time to run .fit(). If the time takes too long, I just want to abandon the task and move on to the next one.
I can find the PIDs of the workers, and I can use kill() to send a signal from the client, but sending SIGINT, SIGHUP and SIGABRT all seem to ruthlessly kill the worker, not just interrupt it. I can't put any logic in the worker code because it's the atomic call to .fit() that I want to time and interrupt.
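For context, a rough sketch of the kind of timing loop described here (the classifiers and dataset are purely illustrative); the open problem is abandoning a single slow .fit() call without losing the engine:

import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=50)

for clf in (LogisticRegression(max_iter=200), SVC(kernel="rbf")):
    start = time.time()
    clf.fit(X, y)  # the atomic call to time (and ideally interrupt)
    print(type(clf).__name__, time.time() - start)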

celery signals not being received from the celery batch processing

I am using celery.contrib.batches to execute a batch of Celery tasks. I know it's experimental, but I still wanted to give it a try, and I am pretty close. While executing the individual tasks in the batch I am deliberately sending signals like backend.mark_as_started(request.id) and backend.mark_as_done(request.id, True), but the signals are not being received at the worker. Note that everything works if I get rid of batches and execute the tasks one at a time; that is, my signal handler functions do get executed.
celery.contrib.batches indeed does not send these signals. The solution is to send them yourself from inside the batch task, as sketched below.
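A rough sketch of what that can look like, assuming a Celery app named app and a hypothetical process_one() helper; each SimpleRequest in the batch exposes .id, .args and .kwargs:

from celery.contrib.batches import Batches

@app.task(base=Batches, flush_every=100, flush_interval=10)
def process_batch(requests):
    for request in requests:
        app.backend.mark_as_started(request.id)
        try:
            result = process_one(*request.args, **request.kwargs)
            app.backend.mark_as_done(request.id, result)
        except Exception as exc:
            app.backend.mark_as_failure(request.id, exc)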

How to set Akka actors to run only for a specific time period?

I have a big task, which I break down into smaller tasks and analyse. I have a basic model: master, worker and listener.
The master creates the tasks and gives them to the worker actors. Once a worker actor completes, it asks the master for another task. Once all tasks are completed, they inform the listener. They usually take less than 2 minutes to complete 1000 tasks.
Now, sometimes the time taken by some tasks is longer than by others. I want to set a timer for each task, and if a task takes too long, the worker task should be aborted by the master and the task resubmitted later as a new one. How do I implement this? I can calculate the time taken by a worker task, but how does the master actor keep tabs on the time taken by all worker actors in real time?
One way of handling this would be for each worker, on receipt of a task to work on, to set a receive timeout before changing state to process the task, e.g.:
context.setReceiveTimeout(5 minutes) // for the '5 minutes' notation - import scala.concurrent.duration._
If the ReceiveTimeout is received, the worker can abort the task (or take whatever other action you deem appropriate, e.g. kill itself, or send a notification message back to the master). Don't forget to cancel the timeout (by setting the duration to Duration.Undefined) once the task completes.

MPI Task Scheduling

I want to develop a task scheduler using MPI where there is a single master processor and there are worker/client processors. Each worker has all the data it needs to compute, but gets the index to work on from the master. After the computation the worker returns some data to the master. The problem is that some processes will be fast and some will be slow.
If I run a loop so that at each iteration the master sends and receives (blocking or non-blocking) data, then it can't proceed to the next step until it has received data from the current worker for the previous index assigned to it. The bottom line is that if a worker takes too long to compute, it becomes the limiting factor and the master can't move on to assign an index to the next worker, even if non-blocking techniques are used. Is it possible to skip a slow worker and move on to the next one?
I'm beginning to think that MPI might not be the right paradigm for this. Would Python be a nice platform for task scheduling?
This is absolutely possible using MPI_Irecv() and MPI_Test(). All the master process needs to do is post a non-blocking receive for each worker process, then in a loop test each one for incoming data. If a process is done, send it a new index, post a new non-blocking receive for it, and continue.
One MPI_Irecv for each process is one solution. This has the downside of needing to cancel unmatched MPI_Irecv requests when the work is complete.
MPI_ANY_SOURCE is an alternate path. It allows the manager process to have a single MPI_Irecv outstanding at any given time, and the "next" process to call MPI_Send will be matched with MPI_ANY_SOURCE. This has the downside of several ranks blocking in MPI_Send when there is no additional work to be done; some kind of "nothing more to do" signal needs to be worked out so the ranks can do a clean exit.
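Since the question also asks about Python, here is a rough mpi4py sketch of the first approach (one non-blocking receive per worker, polled with test()); the index list, the placeholder computation and the None "nothing more to do" sentinel are all illustrative:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:                                   # manager
    indices = list(range(100))                  # work items to hand out
    reqs, active = {}, set()
    for w in range(1, size):                    # seed every worker once
        if indices:
            comm.send(indices.pop(0), dest=w)
            reqs[w] = comm.irecv(source=w)      # one non-blocking receive per worker
            active.add(w)
        else:
            comm.send(None, dest=w)             # nothing for this worker to do
    while active:
        for w in list(active):
            done, data = reqs[w].test()         # poll; never blocks on a slow worker
            if not done:
                continue
            # ... store `data` returned by worker w ...
            if indices:
                reqs[w] = comm.irecv(source=w)  # repost the receive
                comm.send(indices.pop(0), dest=w)
            else:
                comm.send(None, dest=w)         # "nothing more to do" signal
                active.discard(w)
else:                                           # worker
    while True:
        index = comm.recv(source=0)
        if index is None:
            break
        comm.send(index * index, dest=0)        # placeholder computation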