Airflow Tasks are not getting triggered - scheduler

I am scheduling the DAG and it shows in the running state, but the tasks are not getting triggered. The Airflow scheduler and web server are up and running, and I toggled the DAG to ON in the UI. Still, I can't fix the issue. I am using the CeleryExecutor and tried changing to the SequentialExecutor, but no luck.

If you are using the CeleryExecutor, you have to start the Airflow workers too:
cmd: airflow worker
In total, you need the following commands running:
airflow worker
airflow scheduler
airflow webserver
If it still doesn't work, you have probably set start_date: datetime.today(); use a static start_date in the past instead.
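For illustration, a minimal DAG with a static start_date might look like the sketch below (the DAG id, schedule, and task are placeholders, and the BashOperator import path differs on Airflow 1.x):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator  # airflow.operators.bash_operator on Airflow 1.x

with DAG(
    dag_id='my_example_dag',            # placeholder name
    start_date=datetime(2021, 1, 1),    # static date in the past, not datetime.today()
    schedule_interval='@daily',
    catchup=False,
) as dag:
    hello = BashOperator(
        task_id='hello',
        bash_command='echo "hello"',
    )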

Can I "recycle" Pods when using Kubernetes Executor on Airflow, to avoid repeating the Initialization?

Context: Running Airflow v2.2.2 with Kubernetes Executor.
I am running a process that creates a burst of quick tasks.
The tasks are short enough that the Kubernetes Pod initialization takes up the majority of runtime for many of them.
Is there a way to re-utilize pre-initialized pods for multiple tasks?
I've found a comment in an old issue that states that when running the Subdag operator, all subtasks will be run on the same pod, but I cannot find any further information. Is this true?
I have searched the following resources:
Airflow Documentation: Kubernetes
Airflow Documentation: KubernetesExecutor
Airflow Documentation: KubernetesPodOperator
StackOverflow threads: Run two jobs on same pod, Best Run Airflow on Kube
Google Search: airflow kubernetes reuse initialization
But haven't really found anything that directly addresses my problem.
I don't think this is possible in Airflow, even with the SubDagOperator, which runs a separate DAG as part of the current DAG but executes its tasks the same way as the other tasks.
To solve your problem, you can use the CeleryKubernetesExecutor, which combines the CeleryExecutor and the KubernetesExecutor. By default, tasks are queued in the Celery queue, but for heavy tasks you can choose the Kubernetes queue in order to run them in isolated pods. This way you can still use the Celery workers, which are up all the time, for the quick tasks.
kubernetes_task = BashOperator(
    task_id='kubernetes_executor_task',
    bash_command='echo "Hello from Kubernetes"',
    queue='kubernetes',
)
celery_task = BashOperator(
    task_id='celery_executor_task',
    bash_command='echo "Hello from Celery"',
)
If you are worried about the scalability/cost of the Celery workers, you can use KEDA to scale them from 0 workers up to a maximum number of workers based on the count of queued tasks.

Airflow DAGS running fine with CLI but failing in airflow UI

I am new to Airflow. I have created a DAG that triggers a shell script. I am able to run it and see the output from the CLI, but when I run it from the UI it fails, and I am not able to see any logs.

Airflow kubernetes architecture understanding

I'm trying to understand the architecture of Airflow on Kubernetes.
Using the Helm chart and the Kubernetes executor, the installation creates 3 pods: Triggerer, Webserver, and Scheduler...
When I run a DAG using the KubernetesPodOperator, it also creates 2 more pods: one with the DAG name and another one with the task name...
I want to understand the communication between the pods... So far I only know what is shown in the image:
Note: I'm using the git sync option
Thanks in advance for the help that you can give me!!
An Airflow deployment has several components that it needs to operate normally: Webserver, Database, Scheduler, Triggerer, Worker(s), Executor. You can read about them here.
Let's go over the options:
Kubernetes Executor (as you chose):
In your setup, since you are deploying on Kubernetes with the Kubernetes Executor, each task being executed is a pod. Airflow wraps every task in a pod, no matter what the task is. This gives you the isolation that Kubernetes offers, but it also brings the overhead of creating a pod for each task. Choosing the Kubernetes Executor usually fits cases where many or most of your tasks take a long time to execute; if your tasks take 5 seconds to complete, it might not be worth paying the overhead of creating a pod for each one. As for what you see as DAG -> Task1 in your diagram: the Scheduler launches the Airflow workers, the workers start the tasks in new pods, and the worker needs to monitor the execution of the task.
Celery Executor - Setting up a worker (pod) that tasks can run in. This gives you speed, since there is no need to create a pod for each task, but there is no isolation between tasks. Note that using this executor doesn't mean you can't run tasks in their own pods: you can use the KubernetesPodOperator and it will create a pod for that task.
CeleryKubernetes Executor - Enjoying both worlds. You decide which tasks will be executed by Celery and which by Kubernetes. For example, you can send small, short tasks to Celery and longer tasks to Kubernetes.
How will it look pod-wise?
Kubernetes Executor - Every task creates a pod. PythonOperator, BashOperator - all of them will be wrapped in pods (the user doesn't need to change anything in the DAG code; see the sketch after this list).
Celery Executor - Every task will be executed in a Celery worker (pod), so the pod is always Running, waiting to pick up tasks. You can still create a dedicated pod for a task if you explicitly use the KubernetesPodOperator.
CeleryKubernetes - Combining both of the above.
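To illustrate the Kubernetes Executor case, here is a minimal sketch, assuming Airflow 2.x with the kubernetes Python client installed (the DAG id, tasks, and resource values are placeholders): an ordinary BashOperator already runs in its own pod with no special code, and executor_config with a pod_override can optionally customize that pod for a single task.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from kubernetes.client import models as k8s

with DAG(
    dag_id='k8s_executor_demo',          # placeholder name
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # With KubernetesExecutor this ordinary task already runs in its own pod.
    plain_task = BashOperator(
        task_id='plain_task',
        bash_command='echo "I run in my own pod"',
    )
    # Optionally override the generated pod spec for this one task.
    heavy_task = BashOperator(
        task_id='heavy_task',
        bash_command='echo "I run in a bigger pod"',
        executor_config={
            'pod_override': k8s.V1Pod(
                spec=k8s.V1PodSpec(
                    containers=[
                        k8s.V1Container(
                            name='base',  # the task container in the generated pod is named "base"
                            resources=k8s.V1ResourceRequirements(
                                requests={'cpu': '500m', 'memory': '512Mi'},
                            ),
                        )
                    ]
                )
            )
        },
    )
    plain_task >> heavy_task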
Note again that you can use any of these executors in a Kubernetes environment. Keep in mind that all of these are just executors; Airflow has the other components mentioned earlier, so it's perfectly fine to deploy Airflow on Kubernetes (Scheduler, Webserver) but use the CeleryExecutor, in which case the user code (tasks) does not create new pods automatically.
As for Triggers, since you asked about them specifically - this is a feature added in Airflow 2.2: Deferrable Operators & Triggers. It allows tasks to defer themselves and release their worker slot while they wait.
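For example, a deferrable sensor hands its wait over to the triggerer and frees its worker slot. A minimal sketch, assuming Airflow 2.2+ with a triggerer process running (the DAG id and delay are placeholders):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.sensors.time_delta import TimeDeltaSensorAsync

with DAG(
    dag_id='deferrable_demo',            # placeholder name
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    # While waiting, this task defers to the triggerer and frees its worker slot.
    wait_10_minutes = TimeDeltaSensorAsync(
        task_id='wait_10_minutes',
        delta=timedelta(minutes=10),
    )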

Airflow scheduler fails to start tasks

My problem:
Airflow scheduler is not assigning tasks.
Background:
I have Airflow running successfully on my local machine with the SQLite database. The sample DAGs as well as my custom DAGs ran without any issues.
When I try to migrate from the SQLite database to Postgres (using this guide), the scheduler no longer seems to be assigning tasks. The DAGs get stuck in the "running" state, but no task in any DAG ever gets assigned a state.
Troubleshooting steps I've taken
The web server and the scheduler are running
The DAG is set to "ON".
After running airflow initdb, the public schema is populated with all of the airflow tables.
The user in my connection string owns the database as well as every table in the public schema.
Scheduler Log
The scheduler log keeps posting the WARNING below, but I have not been able to use it to find any useful information aside from this other post with no responses.
[2020-04-08 09:39:17,907] {dag_processing.py:556} INFO - Launched DagFileProcessorManager with pid: 44144
[2020-04-08 09:39:17,916] {settings.py:54} INFO - Configured default timezone <Timezone [UTC]>
[2020-04-08 09:39:17,927] {settings.py:253} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=44144
[2020-04-08 09:39:19,914] {dag_processing.py:663} WARNING - DagFileProcessorManager (PID=44144) exited with exit code -11 - re-launching
Environment
PostgreSQL version 12.1
Airflow v1.10.9
This is all running on a MacBook Pro (Catalina) in a conda virtual environment.
Postgres was installed using Postgres.app. Updating Postgres.app to version 2.3.3e solved the issue; PostgreSQL itself is still version 12.1, but updating the app fixed the scheduler.

Airflow 1.9 - Tasks stuck in queue

Latest Apache-Airflow install from PyPI (1.9.0)
Set up includes:
Apache-Airflow
Apache-Airflow[celery]
RabbitMQ 3.7.5
Celery 4.1.1
Postgres
I have the installation across 3 hosts.
Host #1
Airflow Webserver
Airflow Scheduler
RabbitMQ Server
Postgres Server
Host #2
Airflow Worker
Host #3
Airflow Worker
I have a simple DAG that executes a BashOperator task and runs every 1 minute. I can see the scheduler "queue" the job; however, it never gets added to a Celery/RabbitMQ queue or picked up by the workers. I have a custom RabbitMQ user and authentication seems fine. Flower, however, doesn't show any of the queues populating with data. It does see the two worker machines listening on their respective queues.
Things I've checked:
Airflow Pool configuration
Airflow environment variables
Upgrade/Downgrade Celery and RabbitMQ
Postgres permissions
RabbitMQ Permissions
DEBUG level airflow logs
I read the documentation section about jobs not running. My "start_date" is a static date before the current date.
OS: CentOS 7
I was able to figure it out, but I'm not sure why this is the answer.
Changing the "broker_url" setting to use "pyamqp" instead of "amqp" was the fix.