celery flower gives unknown worker

I'm using Celery with the Redis backend.
I am passing CELERY_RESULT_BACKEND with the correct Redis URL, and also the broker URL, to the Celery app config.
I start Flower by giving the path to my Celery app with -A, and I also set the --inspect_timeout=30 argument to allow for slow responses from workers. I still get "Unknown worker 'celery@...'" when clicking on the worker in the UI.
Any ideas how to get this working?

There is a Refresh button on the dashboard page. It refreshes workers by re-sending the inspect command. If you launch workers after Flower, just refresh the workers.
The Refresh button is a choice in the drop-down menu that has Shut Down selected by default. In order to refresh, you have to select a worker (or all of them) first. According to:
https://github.com/mher/flower/issues/395
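
For what it's worth, the Refresh action just re-broadcasts Celery's inspect command, so you can check from a Python shell whether your workers answer it at all. A minimal sketch (the Redis URL is an assumption; reuse your own app config):

from celery import Celery

# Assumed broker/backend URLs; use the same config you pass to your app.
app = Celery(broker="redis://localhost:6379/0", backend="redis://localhost:6379/0")

# Same mechanism Flower's Refresh uses: broadcast a ping and wait for replies,
# mirroring the --inspect_timeout=30 from the question.
replies = app.control.inspect(timeout=30).ping()
print(replies)  # e.g. {'celery@myhost': {'ok': 'pong'}} when the worker responds

If this returns None or an empty dict, the workers are not reachable over the broker, and Flower will not be able to list them either.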

Related

Celery flower Persistent data

I want to keep the tasks; that is, if my service is stopped and started again,
I can still see the previous tasks.
Apparently I have to set this up in Flower. I tried the following, but it doesn't work; has anyone had experience with this?
celery -A tasks flower --persistent=True --db=flower.db --state_save_interval=5
I also used the result backend and saved the results in MongoDB, but I have a problem displaying them: after Flower is restarted, I want to be able to see the previous tasks again.
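
As a side note, Flower's --persistent/--db flags only persist Flower's own dashboard state; the task results themselves are kept by the result backend. A minimal sketch of the backend side (the URLs are assumptions; adapt them to your setup):

from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",           # assumed broker URL
    backend="mongodb://localhost:27017/celery",  # task results stored in MongoDB
)
# Results expire after 1 day by default; disable expiry to keep them.
app.conf.result_expires = None

With results kept in MongoDB and Flower started with the --persistent=True --db=flower.db flags from above, both the dashboard state and the results should survive a restart.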

How to make celery worker find the tasks to include?

I have a FastAPI app and I use Celery for some async tasks. I also use Docker, so FastAPI runs in one container and Celery in another. Now I am moving to break the workers into different queues, and they will run in different containers. Right now I am using almost the same image for FastAPI and Celery, but for this new worker I will end up with an image way bigger than it should be, since it contains code and packages that the worker doesn't need. To get around that I now have two different Dockerfiles, one for each worker, but in both of them I have the exact same file for setting up the Celery app.
This is the Celery app I was setting up:
from celery import Celery

celery_app = Celery(
    broker=config.celery_settings.broker_url,
    backend=config.celery_settings.result_backend,
    include=[
        "src.iam.service_layer.tasks",
        "src.receipt_tracking.service_layer.tasks",
        "src.cfe_scraper.tasks",
    ],
)
And the idea is to spin off src.cfe_scraper.tasks; in that Docker image I will have neither src.iam.service_layer.tasks nor src.receipt_tracking.service_layer.tasks. But when I try to build the image, I get an error saying that those paths don't exist, which is correct in this case; yet if I simply delete the include argument, the worker won't have any tasks registered. Is there an easy way to solve this without having two modules setting up different Celery apps?
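
One common workaround is to drive the include list from the environment, so each image only lists the task modules it actually ships and a single setup module serves both Dockerfiles. A sketch (the CELERY_TASK_MODULES variable name is made up here):

import os

from celery import Celery

# Each image sets CELERY_TASK_MODULES (hypothetical name) to a comma-separated
# list of the task modules it contains.
celery_app = Celery(
    broker=os.environ["CELERY_BROKER_URL"],
    backend=os.environ["CELERY_RESULT_BACKEND"],
    include=os.environ["CELERY_TASK_MODULES"].split(","),
)

The scraper container would then run with CELERY_TASK_MODULES=src.cfe_scraper.tasks, while the full image lists all three modules.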

How to check if symfony messenger is working

I have a pod running in Kubernetes / AWS cloud. Due to limited configuration options in a custom deployment process (not my fault!!), I cannot start the Symfony Messenger worker as you usually would. What I have to do after a deployment is log into the shell and manually run
bin/console messenger:consume my_kafka_messages
Of course, as soon as the pod is automatically restarted for any reason, my worker isn't running anymore. So until we can change the company's deployment process, I have to make sure I at least get notified if the worker isn't running.
Is there any option to, e.g., run a Symfony command which checks whether the worker is running? If that were possible, I could have the system start the worker or at least send me a notification.
How about bin/console debug:messenger?
If I do that and get, e.g., this output, is that a sign that the worker is running? Or is it just the configuration of a worker which could run if it were started, and which may or may not be running currently?
$ bin/console deb:mess

Messenger
=========

events
------

The following messages can be dispatched:

--------------------------------------------------
 App\Domain\KafkaEvents\ProductPictureCollection
     handled by App\Handler\ProductPictureHandler
--------------------------------------------------
Of course I can take a crude approach and check the database, which logs the processed datasets. But it is always possible that there is no data to process for, say, 5 days; in that case I would get false alarms although everything is fine.
So directly checking whether the worker is running would be much better, but I have no idea how to do it.
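
Since debug:messenger only prints the routing/handler configuration, not live workers, one crude but direct option is a process check inside the pod, e.g. wired up as a Kubernetes liveness probe or a cron-driven alert. A sketch in Python (assuming the worker runs in the same container and is visible in the process table):

import subprocess
import sys

# Exit 0 if a messenger:consume process is running, non-zero otherwise,
# so Kubernetes or a monitoring cron job can react when the worker dies.
result = subprocess.run(["pgrep", "-f", "messenger:consume"])
sys.exit(0 if result.returncode == 0 else 1)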

Load and use a Service Worker in Karma test

We want to write a Service Worker that performs source code transformation on the loaded files. In order to test this functionality, we use Karma.
Our tests import source files, on which the source code transformation is performed. The tests only succeed if the Service Worker performs the transformation and fail when the Service Worker is not active.
Locally, we can start Karma with singleRun: false and watch for changed files to restart the tests. However, Service Workers are not active for the page that originally loaded them. Therefore, every test case succeeds but the first one.
However, for continuous integration, we need a single-run mode. So our Service Worker is not active during the test run, and the tests fail accordingly.
Also, two consecutive runs do not solve this issue, as Karma restarts the used browser (so we lose the Service Worker).
So, the question is, how to make the Service Worker available in the test run?
E.g., by preserving the browser instance used by Karma.
Calling self.clients.claim() within your service worker's activate handler signals to the browser that you'd like your service worker to take control on the initial page load in which the service worker is first registered. You can see an example of this in action in Service Worker Sample: Immediate Control.
I would recommend that in the JavaScript of your controlled page, you wait for the navigator.serviceWorker.ready promise to resolve before running your test code. Once that promise does resolve, you'll know that there's an active service worker controlling your page. The test for the <platinum-sw-register> Polymer element uses this technique.

Celery Flower Broker Tab not populating with broker_api set for rabbitmq api

I'm trying to populate the Broker tab on Celery Flower but when I pass a broker_api like the following example:
python manage.py celery flower --broker_api=http://guest:guest@localhost:15672/api/
I get the following error:
state.py:108 (run) Failed to inspect the broker: 'list' object is not callable
I'm confident the credentials I'm using are correct and the RabbitMQ Management Plugin is enabled. I'm able to access the RabbitMQ monitoring page through the browser.
flower==0.6.0
RabbitMQ 3.2.1
Does anyone know how to fix this?
Try removing the trailing slash after /api:
python manage.py celery flower --broker_api=http://guest:guest@localhost:15672/api
Had the same issue on an Airflow setup with Celery 5.2.6 and Flower 1.0.0. The solution for me was to launch Flower using:
airflow celery flower --broker-api=http://guest:guest@rabbitmq:15672/api/
For non-Airflow readers, I believe the command should be:
celery flower --broker=amqp://guest:guest@rabbitmq:5672 --broker_api=http://guest:guest@rabbitmq:15672/api/
A few remarks:
The above assumes a shared Docker network. If that's not the case, every @rabbitmq should be replaced with e.g. @localhost
--broker is not needed if running under Airflow's umbrella (it's passed from the Airflow config)
A good test to verify the API works is to access http://guest:guest@localhost:15672/api/index.html locally
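
That check can also be scripted, e.g. (a sketch; assumes the default guest/guest user and the standard management port):

import requests

# Query the RabbitMQ management API with the same credentials Flower will use.
resp = requests.get("http://localhost:15672/api/overview", auth=("guest", "guest"))
resp.raise_for_status()
print(resp.json().get("rabbitmq_version"))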