How to configure Prometheus + Celery Flower auth?

I use Celery Flower for task monitoring, and recently I decided to add Prometheus. Everything worked fine until I added the --auth option to my flower command; after that, Prometheus stopped getting data from Flower, so my guess is that auth is also blocking Prometheus. Is there a way to authenticate Prometheus against Flower without compromising security?
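One approach, as a sketch only (it assumes Flower is started with its --basic_auth option rather than --auth, that Flower exposes Prometheus metrics on /metrics at port 5555, and that proj, user/pass and the target host are placeholders), is to give Prometheus the same basic-auth credentials in its scrape config:
celery -A proj flower --basic_auth=user:pass
# prometheus.yml
scrape_configs:
  - job_name: flower
    basic_auth:
      username: user
      password: pass
    static_configs:
      - targets: ['flower:5555']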

Related

Celery Upgrade - Workers not working with RabbitMQ

Problem description
I have a working Django application using Celery along with Mongo and RMQ (3.7.17-management-alpine). The application runs on a Kubernetes cluster and works fine in general.
But when I upgrade Celery (3.1.25) and Kombu (3.0.37) to Celery (4.1.0) and Kombu (4.1.0), I face the following issue: the Celery worker pods come up but do not receive any tasks.
I have verified that RMQ receives the messages needed to run the tasks in the Celery workers. There is no error in the RMQ or Celery worker pods; in fact, the Celery pods report that they are connected to RMQ.
Strangely, when I restart the RMQ pod after the Celery worker pods come up, things become fine and I am able to run the new tasks in the Celery workers.
So I guess something changed with respect to Celery/Kombu/RMQ after the upgrade to 4.1.0; the code works fine with the older versions of Celery and Kombu.
Can someone please help me with this?
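One way to narrow this down (a diagnostic sketch; proj is a placeholder app name) is to compare the queues the workers are actually consuming from with the queues RMQ is holding the messages in, since a queue/consumer mismatch is a common symptom after a major Celery upgrade:
rabbitmqctl list_queues name messages consumers
celery -A proj inspect active_queues
celery -A proj inspect registered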

Airflow with KubernetesExecutor workers (EKS) and webserver+scheduler on EC2

I wanted to know if it's possible to set up the KubernetesExecutor on Airflow while having the webserver and scheduler running on an EC2 instance?
Meaning that tasks would run on Kubernetes pods (EKS in my case) but the base services on a regular EC2 instance.
I tried to find information about this but fell short...
The following quote is from Airflow's docs, and it's the reason I'm asking this question:
KubernetesExecutor runs as a process in the Airflow Scheduler. The scheduler itself does not necessarily need to be running on Kubernetes, but does need access to a Kubernetes cluster.
Thanks in advance!
Yes, this is entirely possible.
You just need to run your airflow scheduler and airflow webserver on EC2 and configure the EC2 instance with all the necessary access (likely via a service account, but this is your decision and deployment configuration) to be able to spin up pods on your EKS cluster.
There is nothing special about it, other than learning how to run and configure the components to talk to each other; there are no ready-to-use recipes, so you will simply have to follow the configuration parameters of Airflow and the authentication schemes that you need. A sketch of what that can look like is shown below.
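As a sketch of the setup on the EC2 instance (cluster name, region and namespace are placeholders, and the exact config section name depends on your Airflow version):
aws eks update-kubeconfig --name my-eks-cluster --region eu-west-1
export AIRFLOW__CORE__EXECUTOR=KubernetesExecutor
export AIRFLOW__KUBERNETES__NAMESPACE=airflow
airflow scheduler   # run the scheduler and webserver as separate processes/services
airflow webserver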

Airflow on k8s and Google Operators: creds verification

I have Apache Airflow on k8s.
Earlier, when Airflow was running on my local server (not k8s), I didn't have trouble with OAuth2 credentials verification: when a Google operator (based on GoogleCloudHook) started, my browser opened and redirected me to the Google auth page. It was a one-time procedure.
With Airflow on k8s my tasks run in separate pods, and there is trouble with this OAuth2 credentials verification: I can't "open a browser" inside a pod, and I don't want to do it every time a task runs.
Can I somehow disable this procedure or automate it?
Is there any solution?
In order to authenticate, you should first be using the correct operator and executor in Airflow; in your case this would be the Kubernetes Executor. When using this executor you need to set up secrets for use with k8s.
Refer to the documentation here: Kubernetes Executor Overview
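A common non-interactive alternative, sketched here with placeholder names and paths, is to use a GCP service account key instead of user OAuth: store the key as a k8s secret, mount it into the worker pods, and point the Google hooks/client libraries at it.
kubectl -n airflow create secret generic gcp-sa-key --from-file=key.json=./my-sa-key.json
# inside the worker pod, once the secret is mounted:
export GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json
This removes the browser-based consent step entirely, since the service account authenticates with the key file rather than with your user account.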

How to scale RabbitMQ across multiple Kubernetes Clusters

I have an application running in a Kubernetes cluster (Azure AKS) which is made up of a website running in one deployment, background worker processes running as scheduled tasks in Kubernetes, RabbitMQ running as another deployment, and a SQL Azure DB which is not part of Kubernetes.
I would like to achieve load balancing and failover by deploying another Kubernetes cluster in another region and placing a Traffic Manager DNS load balancer in front of the website.
The problem I see is that if the two RabbitMQ instances are in separate Kubernetes clusters, then items queued in one will not be available in the other.
Is there a way to cluster the RabbitMQ instances running in each Kubernetes cluster, or is there something besides clustering?
Or is there a common design pattern that might avoid problems from having separate queues?
I should also note that currently there is only one node running RabbitMQ in the current Kubernetes cluster, but as part of this upgrade it seems like a good idea to run multiple nodes in each cluster, which I think the current Helm charts support.
You shouldn't cluster RabbitMQ nodes across regions; your cluster will get split-brain because of network delays. To synchronise RabbitMQ queues and exchanges between clusters you can use the federation or shovel plugin, depending on your use case.
The federation plugin can be enabled on a cluster by running:
rabbitmqctl stop_app
rabbitmq-plugins enable rabbitmq_federation
rabbitmq-plugins enable rabbitmq_federation_management
rabbitmqctl start_app
More details on Federation.
For shovel:
rabbitmq-plugins enable rabbitmq_shovel
rabbitmq-plugins enable rabbitmq_shovel_management
rabbitmqctl stop_app
rabbitmqctl start_app
More details on Shovel.
A full example of how to set up federation on a RabbitMQ cluster can be found here.
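As an illustration of what the federation setup looks like once the plugin is enabled (the upstream name, URI and exchange pattern below are placeholders):
rabbitmqctl set_parameter federation-upstream other-region '{"uri":"amqp://user:pass@rabbitmq.other-cluster.example.com"}'
rabbitmqctl set_policy --apply-to exchanges federate-exchanges "^federated\." '{"federation-upstream-set":"all"}'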

How to setup a health check for a celery-sqs cluster on AWS

I'm trying to set up an ELB for my Celery cluster that uses SQS as the broker. The instance is only running Celery and not a webserver.
How can I go about setting up an ELB to health check the availability of the cluster? I've seen tutorials online suggesting a TCP check on port 5672 for a Celery cluster with RabbitMQ, but since I'm using SQS here, which works over HTTPS, I'm not quite sure how to go about this.
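One hedged option (a sketch; proj and the timeout are placeholders, and you would still need to expose the result over HTTP for the ELB to probe, e.g. via a tiny web endpoint on the instance) is to check worker liveness directly with Celery's own ping, which errors out when no worker replies:
celery -A proj inspect ping --timeout 5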