Celery Flower Broker tab not populating with broker_api set for RabbitMQ API

I'm trying to populate the Broker tab in Celery Flower, but when I pass a broker_api like in the following example:
python manage.py celery flower --broker_api=http://guest:guest@localhost:15672/api/
I get the following error:
state.py:108 (run) Failed to inspect the broker: 'list' object is not callable
I'm confident the credentials I'm using are correct and the RabbitMQ Management Plugin is enabled. I'm able to access the RabbitMQ monitoring page through the browser.
flower==0.6.0
RabbitMQ 3.2.1
Does anyone know how to fix this?

Try removing the trailing slash after /api:
python manage.py celery flower --broker_api=http://guest:guest@localhost:15672/api

Had the same issue on an Airflow setup with Celery 5.2.6 and Flower 1.0.0. The solution for me was to launch Flower using:
airflow celery flower --broker-api=http://guest:guest@rabbitmq:15672/api/
For non-Airflow readers, I believe the command should be:
celery flower --broker=amqp://guest:guest@rabbitmq:5672 --broker_api=http://guest:guest@rabbitmq:15672/api/
A few remarks:
The above assumes a shared Docker network. If that's not the case, every @rabbitmq should be replaced with e.g. @localhost
--broker is not needed if running under Airflow's umbrella (it's passed from the Airflow config)
A good test to verify the API works is to access http://guest:guest@localhost:15672/api/index.html locally
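Another quick command-line check of the management API, assuming the default guest credentials and local port:
curl -u guest:guest http://localhost:15672/api/overview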

Related

Purge Celery tasks on GCP K8s

I want to purge Celery tasks. Celery is running on GCP Kubernetes in my case. Is there a way to do it from the terminal? For example via kubectl?
The solution I found was to write to a file shared by both the API and Celery containers. In this file, whenever an interruption is captured, a flag is set to true. Inside the Celery containers I keep periodically checking the contents of that file; if the flag is set to true, I gracefully clean things up and raise an error.
Does this solve your problem? See: How can I properly kill a celery task in a kubernetes environment?
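A minimal sketch of that pattern, assuming a volume mounted in both containers at the hypothetical path /shared/stop_flag and a hypothetical task name:
import os
from celery import Celery

app = Celery("proj", broker="redis://redis:6379/0")  # broker URL is a placeholder

FLAG_PATH = "/shared/stop_flag"  # hypothetical shared file; the API container writes "true" here

def stop_requested():
    # Check the shared flag written by the API container.
    try:
        with open(FLAG_PATH) as f:
            return f.read().strip() == "true"
    except FileNotFoundError:
        return False

@app.task
def long_running(items):
    for item in items:
        if stop_requested():
            # Gracefully clean things up, then abort the task.
            raise RuntimeError("Stop flag set, aborting task")
        # ... process item ...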
An alternative solution may be:
$ celery -A proj purge
or
from proj.celery import app
app.control.purge()
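If the workers run in Kubernetes pods, either approach can be driven from the terminal with kubectl exec; a sketch with placeholder pod and app names:
kubectl exec -it <celery-worker-pod> -- celery -A proj purge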

Set Celery worker logging level

In version 4.1.0 of Celery, there was a --loglevel flag which set the log level of the Celery worker.
This worked for things like celery -A myapp worker --loglevel INFO.
But, as of version 5.0.2, this flag has been removed from the documentation.
As of right now, if I Google "Celery worker set log level" I get links to the Celery source code, and to this SO question which assumes its existence.
So how do you set the log level of a Celery worker now?
Although this is no longer in the documentation, --loglevel is still a valid parameter to worker in Celery 5.0.2.
celery --app my_app worker --loglevel INFO
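If you prefer starting the worker from Python rather than the CLI, a minimal sketch, assuming a Celery release where Celery.worker_main() is available (recent 5.x releases have it) and an app module named my_app:
from my_app import app

# Equivalent to: celery --app my_app worker --loglevel INFO
app.worker_main(argv=["worker", "--loglevel=INFO"])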

Brokers for Celery Executor in Airflow

Is it possible to use the following brokers instead of Redis or RabbitMQ:
Zookeeper
IBM MQ
Kafka
Memcached
If so, how would I be able to use them?
Thanks
According to the Celery documentation on broker transport support, RabbitMQ and Redis are fully featured and qualify as stable solutions.
Of the alternatives on your list, Zookeeper can also be adopted as a broker for the Celery executor in Airflow, but only as an experimental option with some functional limitations.
Installation details for the Zookeeper broker implementation can be found here.
Using the Python package:
$ pip install "celery[zookeeper]"
You can check out all the available extensions in the source setup.py code.
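A minimal sketch of pointing a Celery app at a Zookeeper broker, assuming Zookeeper on its default port (the hostname and app name are placeholders):
from celery import Celery

# Kombu's Zookeeper transport is selected via the zookeeper:// URL scheme.
app = Celery("proj", broker="zookeeper://localhost:2181/")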
Referencing the Airflow documentation:
CeleryExecutor is one of the ways you can scale out the number of workers. For this to work, you need to setup a Celery backend
(RabbitMQ, Redis, …) and change your airflow.cfg to point the executor parameter to CeleryExecutor and provide the related Celery
settings.
Once the chosen Celery backend is prepared, adjust the appropriate settings in the airflow.cfg file; if in doubt, refer to this example.
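For reference, a hedged sketch of the airflow.cfg entries involved; hostnames and credentials are placeholders, and exact option names can differ between Airflow versions:
[core]
executor = CeleryExecutor

[celery]
broker_url = amqp://guest:guest@rabbitmq:5672/
result_backend = db+postgresql://airflow:airflow@postgres/airflow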

Airflow distributed model services

Switching from LocalExecutor to CeleryExecutor.
In this model, I have
Masternode1 - airflow webserver, airflow scheduler, rabbitmq
Masternode2 - airflow webserver, rabbitmq
Workernode1 - airflow worker
Workernode2 - airflow worker
Workernode3 - airflow worker
Question:
Where does the Flower service run for Celery? Is it required to run it on all nodes, or just on any one of them (since it's only a UI)?
Are there any other components needed to manage a production workload?
Is using Kafka as a broker realistic and available to use?
Thank you
Celery Flower is yet another (optional) service that you may want to run independently, either on a dedicated machine or sharing one machine with a few other Airflow services.
You may, for example, run the webserver and Flower on one machine, and the scheduler and a few Airflow workers each on dedicated machines.
Kafka as a broker for Celery is something people talk about quite a lot, but as far as I know there is no concrete work in Celery for it. However, considering there is interest in having Kafka support in Kombu, I would assume that the moment Kombu gets Kafka support, Celery will soon follow, as Kombu is the core Celery dependency.

Celery delay().get() doesn't return when worker is running on Windows

I set up the basic Celery example myapp.py, using Redis as broker and backend.
When I run the worker on Windows, calling delay().get() doesn't return.
Instead of specifying the result backend when initializing the app, putting it in app.conf.update() (https://github.com/celery/celery/issues/897#issuecomment-57350929) worked for me.
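A minimal sketch of that change, assuming a local Redis instance (the URLs are placeholders):
from celery import Celery

app = Celery("myapp", broker="redis://localhost:6379/0")
# Set the result backend after construction instead of in the Celery() call.
app.conf.update(result_backend="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y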