We use Flower for Celery monitoring. But how do we set the monitoring range to, say, a week?
Currently it looks like the data reloads each time the page is refreshed.
UPDATE: image "mher/flower:0.9.5"
I have 1 celery broker and several celery workers, all communicating with RabbitMQ. In my setup, I send several tasks to my celery workers, they process all the tasks (it takes ~1 hour), and then I manually terminate my celery workers.
I want to move towards a system where, if a celery worker is 'idle' (which I define as: has had 0 active tasks for a period of timeout_seconds, which I will define beforehand), the worker is terminated programmatically. All workers will have approximately the same number of tasks to run, and will all go 'idle' around the same time.
I have code set up that lets me terminate workers, but I am not sure how to detect that a worker is 'idle' and ready for termination. I think I want to use a signal, but it doesn't look like there is one that fits my requirement.
Here where I work we have a task that does basically what you want - it automatically scales the cluster up/down depending on the "situation". The key to this process is the Celery inspect/control API, so I suggest you get familiar with it. This area is not well documented, so start with the following:
insp = celery_app.control.inspect()
active_queues = insp.active_queues()
# Note: between these two calls some nodes may shut down and disappear
# from the dictionaries, so you may need to deal with this...
active_stats = insp.active()
You can do this in a separate IPython session while your Celery cluster runs tasks, and look at what is there...
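To address the idle-detection part directly, here is a minimal sketch built on that same inspect API, assuming a polling loop is acceptable. The broker URL, the find_idle_workers name, and the default timeouts are all illustrative, not from the original answer:
import time

from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')  # hypothetical broker URL

def find_idle_workers(app, timeout_seconds=300, poll_interval=30):
    """Yield worker names that have had 0 active tasks for timeout_seconds."""
    idle_since = {}
    while True:
        # active() maps worker name -> list of currently executing tasks;
        # it returns None when no workers reply, hence the `or {}`.
        active = app.control.inspect().active() or {}
        now = time.time()
        for worker, tasks in active.items():
            if tasks:
                idle_since.pop(worker, None)   # busy again, reset its timer
            elif worker not in idle_since:
                idle_since[worker] = now       # first observed idle
            elif now - idle_since[worker] >= timeout_seconds:
                idle_since.pop(worker)
                yield worker                   # idle long enough to terminate
        time.sleep(poll_interval)
Each yielded name can then be handed to your existing termination code, e.g. app.control.broadcast('shutdown', destination=[worker]).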
I use the command below to create one worker:
celery -A proj worker -l info --concurrency=50 -Q celery,token_1 -n token_1
And in my task, I set the rate limit to 4000/m.
However, when I start running the collection, I notice that the average task rate is just around 10-20/s (with the 4000/m rate limit enabled).
Then, I removed the rate limit rule, and now the task rate goes up to around 60/s.
I am confused: my rate limit is 4000/m, which is roughly 66/s. Why does it end up at just 10-20/s? (I have already set 50 threads for the worker....)
You're misunderstanding how rate limits operate in Celery. According to the documentation on version 4.2:
The rate limits can be specified in seconds, minutes or hours by appending “/s”, “/m” or “/h” to the value. Tasks will be evenly distributed over the specified time frame.
Example: “100/m” (hundred tasks a minute). This will enforce a minimum delay of 600ms between starting two tasks on the same worker instance.
In essence, celery was adding a forced delay between your tasks. A 4000/m limit enforces a minimum delay of 15 ms (60,000 ms / 4,000) between starting two tasks; since each task was already finishing in about 16-17 ms (1/60 of a second) without the limit, that extra forced delay between task starts reduced the rate at which they would process.
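For reference, the per-task setting in question looks like this (a minimal sketch; the broker URL, task name, and body are illustrative):
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')  # hypothetical broker URL

@app.task(rate_limit='4000/m')  # worker enforces >= 15 ms between task starts
def collect(url):
    ...  # the actual collection work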
I read this in the celery documentation for Task.rate_limit:
Note that this is a per worker instance rate limit, and not a global rate limit. To enforce a global rate limit (e.g., for an API with a maximum number of requests per second), you must restrict to a given queue.
How do I put a rate limit on a celery queue?
It turns out it can't be done at the queue level for multiple workers.
It can be done at the queue level for 1 worker, or at the queue level for each worker individually.
So if you set 10 jobs/minute on 5 workers, your workers will process up to 50 jobs per minute collectively.
So to have only 10 jobs per minute you either choose one worker, or choose 5 workers with a limit of 2/minute each.
Update: how exactly to put the limit in settings/configuration:
task_annotations = {'tasks.<task_name>': {'rate_limit': '10/m'}}
or change the same for all tasks:
task_annotations = {'*': {'rate_limit': '10/m'}}
10/m means 10 tasks per minute, /s would mean per second. More details here: Task annotations setting
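For instance, a minimal sketch of where this setting can live, assuming a module-based configuration (celeryconfig.py is the conventional module name; tasks.add and the broker URL are hypothetical):
# celeryconfig.py
task_annotations = {'tasks.add': {'rate_limit': '10/m'}}

# app.py
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')  # hypothetical broker URL
app.config_from_object('celeryconfig')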
Hey, I was trying to find a way to rate-limit a queue, and I found out Celery can't do that; however, Celery can control the rate per task, see this:
http://docs.celeryproject.org/en/latest/userguide/workers.html#rate-limits
So as a workaround, maybe you can set up one task per queue (which makes sense in a lot of situations) and put the limit on the task.
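Per the workers guide linked above, the limit can also be changed at runtime through the control API (a minimal sketch; the task name and worker name are hypothetical):
# Tell every worker to cap 'tasks.fetch' at 10 tasks per minute.
app.control.rate_limit('tasks.fetch', '10/m')

# Or target specific workers only.
app.control.rate_limit('tasks.fetch', '10/m', destination=['token_1@hostname'])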
You can set this limit in the Flower > worker pane.
There is a dedicated input field for entering your limit there.
The suggested format is the same as in the documentation:
The rate limits can be specified in seconds, minutes or hours by appending “/s”, “/m” or “/h” to the value. Tasks will be evenly distributed over the specified time frame.
Example: “100/m” (hundred tasks a minute). This will enforce a minimum delay of 600ms between starting two tasks on the same worker instance.
I have a big task, which I break down into smaller tasks and analyse. I have a basic model:
Master, Worker and Listener.
The master creates the tasks and gives them to the worker actors. Once a worker actor completes, it asks the master for another task. Once all tasks are completed, they inform the listener. It usually takes less than 2 minutes to complete 1000 tasks.
Now, sometimes some tasks take more time than others. I want to set a timer for each task, and if a task takes too long, the worker's task should be aborted by the master and the task resubmitted later as a new one. How do I implement this? I can calculate the time taken by a worker task, but how does the master actor keep tabs on the time taken by all worker actors in real time?
One way of handling this would be for each worker, on receipt of a task to start on, to set a receive timeout before changing state to process the task, e.g.:
import scala.concurrent.duration._  // for the '5.minutes' notation
context.setReceiveTimeout(5.minutes)
If the timeout fires, the worker receives a ReceiveTimeout message and can abort the task (or take whatever other action you deem appropriate - e.g. kill itself, or pass a notification message back to the master). Don't forget to cancel the timeout (context.setReceiveTimeout(Duration.Undefined)) once the task is completed or the like.
I'm working on a project that uses Celery and RabbitMQ. I want to be able to control the interval at which the queue pushes tasks to the workers (celeryd).
It sounds like you're looking for this documentation on Periodic Tasks.
Essentially, you configure and run celerybeat, which fires off task executions at intervals.
Word of warning:
If it's undesirable for your task to run multiple times concurrently, I'd suggest you follow a task-locking recipe. If your workers are busy or offline, you may end up with a backlog of periodic tasks.
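As a minimal sketch of such a schedule (the broker URL, the task name tasks.poll, and the 30-second interval are all illustrative; older Celery versions spell the setting CELERYBEAT_SCHEDULE):
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')  # hypothetical broker URL

app.conf.beat_schedule = {
    'poll-every-30-seconds': {
        'task': 'tasks.poll',   # hypothetical periodic task
        'schedule': 30.0,       # run every 30 seconds
    },
}
Run celery -A proj beat alongside your workers and it will push tasks.poll onto the queue every 30 seconds.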