Set Celery worker logging level - celery

In version 4.1.0 of Celery, there was a --loglevel flag which set the log level of the Celery worker.
This worked for things like celery -A myapp worker --loglevel INFO.
But, as of version 5.0.2, this flag has been removed from the documentation.
As of right now, if I Google "Celery worker set log level" I get links to the Celery source code, and to this SO question which assumes its existence.
So how do you set the log level of a Celery worker now?

Although this is no longer in the documentation, --loglevel is still a valid parameter to worker in Celery 5.0.2.
celery --app my_app worker --loglevel INFO
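If you prefer to set it from Python instead of the command line, the same option can be passed programmatically. A minimal sketch, assuming the app instance from --app my_app lives in my_app.py:

from my_app import app  # the Celery() instance referenced by --app my_app

if __name__ == "__main__":
    # Equivalent to: celery --app my_app worker --loglevel INFO
    app.worker_main(argv=["worker", "--loglevel=INFO"])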

Related

Dash Celery setup

I have a docker-compose setup for my Dash application. I need a suggestion on the preferred way to set up my Celery image.
I am using Celery for the following use cases, and these are cancellable/abortable/revocable tasks:
Upload file
Model training
Create train, test set
Case-1. Create one service called celery,
command: ["celery", "-A", "tasks", "worker", "--loglevel=INFO", "--pool=prefork", "--concurrency=3", "--statedb=/celery/worker.state"]
So here we are using the default queue, a single (main) worker, and 3 child/worker processes (i.e. it can execute 3 tasks simultaneously).
Now, if I revoke any task, will it kill the main worker or just the child worker process executing that task?
Case-2. Create three services named celery-{task_name}, i.e. celery-upload etc.,
command: ["celery", "-A", "tasks", "worker", "--loglevel=INFO", "--pool=prefork", "--concurrency=1", "--statedb=/celery/worker.state", "--queues=upload_queue", "--hostname=celery_worker_upload_queue"]
So here we are using a custom queue, a single (main) worker, and 1 child/worker process (i.e. it can execute 1 task) in its container. This way there is one service for each task.
Now, if I revoke any task, will it kill the main worker or just the child worker process executing that task in the respective container, while the rest of the Celery containers stay alive?
I tried using the signals below with task.revoke(terminate=True):
SIGKILL and SIGTERM
In this case, I observed that the @worker_process_shutdown.connect and @task_revoked.connect handlers both get fired.
Does this mean the main worker and the child worker process for which the revoke command was issued (or all child processes, since the main worker is down) are down?
SIGUSR1
In this case, I observed that only the @task_revoked.connect handler gets fired.
Does this mean the main worker is still running/alive and only the child worker process for which the revoke command was issued is down?
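For reference, this is roughly how I connected those handlers (a simplified sketch; handler names are not the exact ones I use):

from celery.signals import task_revoked, worker_process_shutdown

@task_revoked.connect
def on_task_revoked(request=None, terminated=None, signum=None, **kwargs):
    # fires when the worker processes a revoke/terminate for a task
    print("revoked:", request.id if request else None, terminated, signum)

@worker_process_shutdown.connect
def on_worker_process_shutdown(pid=None, exitcode=None, **kwargs):
    # fires in a pool (child) process just before it exits
    print("pool process exiting:", pid, exitcode)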
Which case is preferred?
Is it possible to combine both cases? I.e. having a single Celery service with individual (main) workers, individual child worker processes, and individual queues, or
having a single Celery service with a single (main) worker, individual/dedicated child worker processes, and individual queues for the respective tasks?
One more doubt: as I understand it, Celery is required for the tasks listed above, but now say I have a button for cleaning a dataframe; will this too require Celery?
I.e. wherever I am dealing with dataframes, do I need to use Celery?
Please suggest.
UPDATE-2
Terminology: worker processes = child worker processes.
This is how I am using it:
import signal

from celery import states
from celery.contrib.abortable import AbortableTask
from celery.exceptions import Ignore, SoftTimeLimitExceeded
from celery.result import result_from_tuple

# Start button
result = background_task_job_one.apply_async(args=(n_clicks,), queue="upload_queue")

# Cancel button
result = result_from_tuple(data, app=celery_app)
result.revoke(terminate=True, signal=signal.SIGUSR1)

# Task
@celery_app.task(bind=True, name="job_one", base=AbortableTask)
def background_task_job_one(self, n_clicks):
    msg = "Aborted"
    status = False
    try:
        msg = job(n_clicks)  # Long running task
        status = True
    except SoftTimeLimitExceeded:
        self.update_state(task_id=self.request.id, state=states.REVOKED)
        msg = "Aborted"
        status = True
        raise Ignore()
    finally:
        print("FINALLY")
    return status, msg
Is this an OK way to handle cancellation of a running task? Can you elaborate on/explain this line: [In practice you should not send signals directly to worker processes.]
Just for clarification on the line [In prefork concurrency (the default) you will always have at least two processes running - Celery worker (coordinator) and one or more Celery worker-processes (workers)]:
This means
celery -A app worker -P prefork -> 1 main worker and 1 child-worker-process. Is it the same as below?
celery -A app worker -P prefork -c 1 -> 1 main worker and 1 child-worker-process
Earlier, I tried using the AbortableTask class and calling abort(). It successfully updated the state and status to ABORTED, but the task was still alive/running.
I read that to terminate a currently executing task, it is necessary to pass terminate=True.
This works: the task stops executing, and I need to update the task state and status manually to REVOKED (otherwise it defaults to PENDING). The only hard decision to make is whether to use SIGKILL, SIGTERM, or SIGUSR1. I found that with SIGUSR1 the main worker process stays alive and only the child worker process executing that task is revoked.
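For context, my understanding is that abort() is cooperative: the task body has to poll is_aborted() and return on its own, which is why the task kept running. A minimal sketch of that pattern (task and helper names are hypothetical):

from celery.contrib.abortable import AbortableTask

# celery_app and process_chunk stand in for the app and the unit of work from the snippet above
@celery_app.task(bind=True, base=AbortableTask)
def long_job(self, n_chunks):
    for i in range(n_chunks):
        if self.is_aborted():
            # abort() only flips the stored state; returning here is what actually stops the work
            return False, "Aborted"
        process_chunk(i)  # hypothetical unit of long-running work
    return True, "Done"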
Also, luckily I found this link showing I can set up a single Celery service with multiple dedicated child worker processes, each with its dedicated queue.
Case-3: Celery multi
command: ["celery", "multi", "show", "start", "default", "model", "upload", "-c", "1", "-l", "INFO", "-Q:default", "default_queue", "-Q:model", "model_queue", "-Q:upload", "upload_queue", "-A", "tasks", "-P", "prefork", "-p", "/proj/external/celery/%n.pid", "-f", "/proj/external/celery/%n%I.log", "-S", "/proj/external/celery/worker.state"]
But I am getting an error:
celery service exited code 0
command: bash -c "celery multi start default model upload -c 1 -l INFO -Q:default default_queue -Q:model model_queue -Q:upload upload_queue -A tasks -P prefork -p /proj/external/celery/%n.pid -f /proj/external/celery/%n%I.log -S /proj/external/celery/worker.state"
Here I am also getting an error:
celery | Usage: python -m celery worker [OPTIONS]
celery | Try 'python -m celery worker --help' for help.
celery | Error: No such option: -p
celery | * Child terminated with exit code 2
celery | FAILED
Some doubts: what is preferred, 1 worker or multiple workers?
With multiple workers and dedicated queues, creating a docker service for each task grows the compose file and the number of services. So I am trying a single Celery service with multiple dedicated child worker processes, each with its dedicated queue, which makes it easy to abort/revoke/cancel a task.
But I am getting the error above with Case-3, i.e. celery multi.
Please suggest.
If you revoke a task, it may terminate the worker process that was executing the task. The Celery worker will continue working as it needs to coordinate other worker processes. If the life of the container is tied to the Celery worker, then the container will continue running.
In practice you should not send signals directly to worker processes.
In prefork concurrency (the default) you will always have at least two processes running - Celery worker (coordinator) and one or more Celery worker-processes (workers).
To answer the last question we may need more details. It would be easier if you could run a Celery task when all dataframes are available. If that is not the case, then perhaps run individual tasks to process dataframes. It is worth having a look at Celery workflows and seeing if you can build a chunked workflow. Keep it simple, start with the assumption that you have all dataframes available at once, and build from there.
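For the dataframe part, a minimal sketch of the chunked-workflow idea (task body, broker URL, and file paths are all assumptions, not your setup):

from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed broker URL

@app.task
def clean_frame(path):
    # hypothetical task: load one dataframe from `path`, clean it, save the result
    ...

paths = [f"/data/frame_{i}.parquet" for i in range(100)]  # placeholder inputs
# 10 calls per dispatched task, instead of one giant task or 100 tiny ones
res = clean_frame.chunks(zip(paths), 10).group().apply_async()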

Airflow Celery Worker celery-hostname

On airflow 2.1.3, looking at the documentation for the CLI, airflow celery -h
it shows:
-H CELERY_HOSTNAME, --celery-hostname CELERY_HOSTNAME
Set the hostname of celery worker if you have multiple workers on a single machine
I am familiar with Celery and I know you can run multiple workers on the same machine. But with Airflow (and the celery executor) I don't understand how to do so.
If you do, on the same machine:
> airflow celery worker -H 'foo'
> airflow celery worker -H 'bar'
The second command will fail, complaining about the pid, so then:
> airflow celery worker -H 'bar' --pid some-other-pid-file
This will run another worker and it will sync with the first worker BUT you will get a port binding error since airflow binds the worker process to http://0.0.0.0:8793/ no matter what (unless I missed a parameter?).
It seems you are not supposed to run multiple workers per machine... Then my question is, what is the '-H' (--celery-hostname) option for? How would I use it?
The port that celery listens to is also configurable - it is used to serve logs to the webserver:
https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#worker-log-server-port
You can run multiple celery workers with different settings for that port or run them with --skip-serve-logs to not open the webserver in the first place (for example if you send logs to sentry/s3/gcs etc.).
BTW, it sounds strange to run several Celery workers on one machine, because you can utilise the machine's CPUs by increasing a single worker's parallelism. That seems much more practical and easier to manage.

Purge Celery tasks on GCP K8s

I want to purge Celery tasks. Celery is running on GCP Kubernetes in my case. Is there a way to do it from the terminal? For example via kubectl?
The solution I found was to write to a file shared by both the API and Celery containers. In this file, whenever an interruption is captured, a flag is set to true. Inside the Celery containers I keep periodically checking the contents of that file. If the flag is set to true, then I gracefully clear things up and raise an error.
Does this solve your problem? How can I properly kill a celery task in a kubernetes environment?
An alternative solution may be:
$ celery -A proj purge
or
from proj.celery import app
app.control.purge()
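If you only need to clear one specific queue rather than everything, a small sketch using the broker connection (the queue name is an assumption):

from proj.celery import app

with app.connection_for_write() as conn:
    # purge a single named queue; returns the number of messages removed
    removed = conn.default_channel.queue_purge("celery")
    print(removed)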

Why Celery queues randomly stuck on starting using Supervisord?

There are over 20 workers managed by supervisord.
My celery worker command:
celery worker myproject.server.celery -l INFO --pool=gevent --concurrency=10 --config=myproject.celeryconfig -n default_worker.%%h -Q default
The problem is: each time I deploy new code and then restart each supervisor task, a few workers get stuck on starting at random, which is confusing. You can check the Flower dashboard and find the stuck workers:
Image: flower dashboard worker status
Then you can find something even stranger in htop: ldconfig.real has started instead of the failed Celery worker:
Image: htop monitor celery worker
I appreciate any suggestion!
If you are gracefully restarting, then this is normal behaviour, because at the time of deployment you may have some active tasks. Celery will not kill the worker processes when in the graceful shutdown state. It will wait for all of them to finish, and then stop.
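If you want to confirm that a seemingly stuck worker is just draining in-flight tasks, you can inspect its active tasks. A sketch, assuming the Celery app object is importable from the module used in the command above:

from myproject.server.celery import app  # assumed import path for the Celery app

active = app.control.inspect(timeout=5).active() or {}
for worker_name, tasks in active.items():
    # workers still finishing old tasks will list them here until shutdown completes
    print(worker_name, len(tasks), "active task(s)")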

Celery Flower Broker Tab not populating with broker_api set for rabbitmq api

I'm trying to populate the Broker tab on Celery Flower but when I pass a broker_api like the following example:
python manage.py celery flower --broker_api=http://guest:guest@localhost:15672/api/
I get the following error:
state.py:108 (run) Failed to inspect the broker: 'list' object is not callable
I'm confident the credentials I'm using are correct and the RabbitMQ Management Plugin is enabled. I'm able to access the RabbitMQ monitoring page through the browser.
flower==0.6.0
RabbitMQ 3.2.1
Does anyone know how to fix this?
Try removing the trailing slash after /api:
python manage.py celery flower --broker_api=http://guest:guest@localhost:15672/api
Had the same issue on an Airflow setup with Celery 5.2.6 and Flower 1.0.0. The solution for me was to launch Flower using:
airflow celery flower --broker-api=http://guest:guest@rabbitmq:15672/api/
For non-Airflow readers, I believe the command should be:
celery flower --broker=amqp://guest:guest@rabbitmq:5672 --broker_api=http://guest:guest@rabbitmq:15672/api/
A few remarks:
The above assumes a shared Docker network. If that's not the case, every @rabbitmq should be replaced with e.g. @localhost
--broker is not needed if running under Airflow's umbrella (it's passed from the Airflow config)
A good test to verify the API works is to access http://guest:guest@localhost:15672/api/index.html locally
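If you prefer to script that check, a small sketch against the management API (host, port, and guest:guest credentials are the defaults assumed above):

import requests

resp = requests.get("http://localhost:15672/api/overview", auth=("guest", "guest"), timeout=5)
resp.raise_for_status()
# a JSON response with the broker version means the API (and the credentials) work
print(resp.json().get("rabbitmq_version"))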