How can I pass data from python_callable to on_success_callback?

I have a DAG with a PythonOperator calling a function. I also have a callback function. I want to pass some information from the callable to the callback. How can I do this?
I tried using the context dictionary:
from datetime import datetime
from pprint import pprint

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

args = {"owner": "airflow",
        "start_date": datetime(2019, 11, 18)}

dag = DAG(dag_id="context_demo",
          default_args=args)

def test_callable(**context):
    print('Starting callable...')
    context['z_my_var'] = 'my_val'
    pprint(context)
    print('Callable finished.')

def test_callback(context):
    print('Starting callback...')
    pprint(context)
    print('Callback finished.')

print_exec_date = PythonOperator(
    task_id="print_exec_date",
    python_callable=test_callable,
    on_success_callback=test_callback,
    provide_context=True,
    dag=dag,
)
But the value I set in test_callable does not persist into test_callback.

In my case, I use XCom as a workaround. Here is an example from the official Airflow docs; you can read more about XCom in the Airflow documentation.
# inside a PythonOperator called 'pushing_task'
def push_function():
    return value

# inside another PythonOperator where provide_context=True
def pull_function(**context):
    value = context['task_instance'].xcom_pull(task_ids='pushing_task')
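Applied to the original example, a minimal sketch (untested; the XCom key name my_var is mine) would push from the callable and pull inside the on_success_callback via the task instance available in the callback context:
def test_callable(**context):
    print('Starting callable...')
    # Push explicitly instead of mutating the context dict.
    context['task_instance'].xcom_push(key='my_var', value='my_val')
    print('Callable finished.')

def test_callback(context):
    print('Starting callback...')
    # Pull the value pushed by the task this callback fired for.
    value = context['task_instance'].xcom_pull(
        task_ids='print_exec_date', key='my_var')
    print('Callback received: {}'.format(value))
    print('Callback finished.')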

Related

Dynamically generated apache airflow operator in V1.10.14

My team is trying to create a "sharded" operator. In its simplest form, it's just a KubernetesPodOperator with a few extra arguments { totalPartitions: number, partitionNumber: number } that the job implementation will use to partition its data source and execute some code.
We would like to auto-generate these partitioned jobs in Airflow rather than in the job implementation, something like this (in 2.3.x):
from datetime import datetime

from airflow import DAG
from airflow.decorators import task

with DAG(...) as dag:

    @task
    def sharded_job(total_shards, shard_number):
        return Operator(
            ...,
            job_id="same_job_id_for_all",
            total_shards=total_shards,
            shard_number=shard_number,
        )

    sharded_job.expand(total_shards=[5], shard_number=[0, 1, 2, 3, 4])
but have it work? Additionally, is there a nice way to package the above into a util or ShardedOperator class that can spin up total_shards number of job instances?
If not the above, does airflow have other primitives for partitioning jobs/creating a cluster of same-type jobs?
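One primitive that seems to fit is Airflow 2.3+ dynamic task mapping on classic operators via .partial()/.expand(). A minimal sketch, assuming Airflow 2.3.x, with BashOperator standing in for the KubernetesPodOperator and a hypothetical run_job command and DAG id:
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

TOTAL_SHARDS = 5

with DAG(
    dag_id="sharded_job_demo",          # hypothetical DAG id
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
) as dag:
    # Constant arguments go in partial(); the mapped argument in expand()
    # produces one task instance per shard number.
    BashOperator.partial(
        task_id="sharded_job",
        env={"TOTAL_SHARDS": str(TOTAL_SHARDS)},
    ).expand(
        bash_command=[
            f"run_job --total-shards={TOTAL_SHARDS} --shard-number={i}"
            for i in range(TOTAL_SHARDS)
        ]
    )
A "ShardedOperator" wrapper could then just be a small factory function that takes total_shards and returns this partial/expand pair.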

How to test on_failure_callback Airflow operators

I have a scenario where I want to make an integration test for operators called through on_failure_callback in an Airflow DAG.
A minimal example of this DAG is as follows:
def failure_callback(context):
    # CustomOperator in this case links to an external K8s service
    handle_failure = CustomOperator(
        task_id="handle_failure",
        timestamp=context["ts"]
    )
    handle_failure.execute(context=context)

args = {
    "catchup": False,
    "retries": 3,
    "retry_delay": timedelta(seconds=30),
    "start_date": START_DATE,
    "on_failure_callback": failure_callback,
}

with DAG("foo", schedule_interval=None, default_args=args) as dag:
    task_to_fail = SomeOperator()
My first thought for testing would be to run task_to_fail, let it fail, and validate the outcome of failure_callback with some other process; my attempt is below:
import pytest
from airflow.models import DagBag, TaskInstance
from dateutil import parser

@pytest.fixture
def foo_dag():
    dag_id = "foo"
    dag_bag = DagBag("dags")
    return dag_bag.dags[dag_id]

@pytest.mark.integration
def test_task_to_fail(foo_dag):
    execution_date = parser.parse("2000-01-01T00:00+00:00")
    task_id = "task_to_fail"
    task = foo_dag.get_task(task_id=task_id)
    task_instance = TaskInstance(task, execution_date)
    with pytest.raises(Exception):
        task_instance.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
    assert "INSERT DESIRED OUTCOME OF `failure_callback` HERE"
The issue I'm having is that failure_callback does not appear to be called when running pytest. I suspect this is due to how TaskInstance is being invoked (i.e. it is not running the on_failure_callback), but I am not sure.
My questions:
Is this the correct way to validate the behavior of this callback? If not, how should this be handled?
Upstream of the task_to_fail task, there are many expensive operations that I want to avoid running during tests. Is it possible to have a full run of a DAG, executed with pytest, starting from a particular task (in this case, task_to_fail)?
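For the first question, one possible approach (a sketch, not from the original post) is to unit-test failure_callback directly with a hand-built context and a mocked CustomOperator, so no external K8s service is needed; my_dags.foo is a hypothetical module path for the DAG file:
from unittest import mock


def test_failure_callback_runs_handle_failure():
    # Hand-built context containing only the key the callback uses.
    fake_context = {"ts": "2000-01-01T00:00:00+00:00"}

    # Patch CustomOperator where the DAG module looks it up (hypothetical path).
    with mock.patch("my_dags.foo.CustomOperator") as custom_operator:
        from my_dags.foo import failure_callback

        failure_callback(fake_context)

        custom_operator.assert_called_once_with(
            task_id="handle_failure",
            timestamp=fake_context["ts"],
        )
        custom_operator.return_value.execute.assert_called_once_with(
            context=fake_context
        )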

How do you write to Kafka using the ABRiS library in PySpark?

Has anyone been able to write to Kafka using this library using PySpark?
I've been able to successfully read using the code from the README documentation:
import logging, traceback
import requests
from pyspark.sql import Column
from pyspark.sql.column import *

jvm_gateway = spark_context._gateway.jvm
abris_avro = jvm_gateway.za.co.absa.abris.avro
naming_strategy = getattr(getattr(abris_avro.read.confluent.SchemaManager, "SchemaStorageNamingStrategies$"), "MODULE$").TOPIC_NAME()

schema_registry_config_dict = {"schema.registry.url": schema_registry_url,
                               "schema.registry.topic": topic,
                               "value.schema.id": "latest",
                               "value.schema.naming.strategy": naming_strategy}

conf_map = getattr(getattr(jvm_gateway.scala.collection.immutable.Map, "EmptyMap$"), "MODULE$")
for k, v in schema_registry_config_dict.items():
    conf_map = getattr(conf_map, "$plus")(jvm_gateway.scala.Tuple2(k, v))

deserialized_df = data_frame.select(Column(abris_avro.functions.from_confluent_avro(data_frame._jdf.col("value"), conf_map))
                                    .alias("data")).select("data.*")
However, I am struggling to extend the behaviour by writing to topics via the to_confluent_avro function.
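For reference, a sketch of what the write side might look like, mirroring the read pattern above and reusing conf_map. This is untested: it assumes to_confluent_avro is exposed under abris_avro.functions the same way from_confluent_avro is, that the read-side config is acceptable for writes, and that "broker:9092" stands in for your Kafka bootstrap servers:
# Serialize the value column through the JVM to_confluent_avro and write to Kafka.
serialized_df = data_frame.select(
    Column(abris_avro.functions.to_confluent_avro(data_frame._jdf.col("value"), conf_map))
    .alias("value")
)

(serialized_df
    .write
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("topic", topic)
    .save())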

Empty array returned when calling AlphaVantage APIs for NSE symbols tickers

I cannot get any NSE symbol data from AlphaVantage; the API always returns an empty response.
Query:
https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY&symbol=NSE:TITAN&apikey=Q134IXR7RVWU5AQL&outputsize=full&interval=1min
Response:
{}
A month back the same query was returning data, so it looks like something has changed on the AlphaVantage server side recently.
Your help is much appreciated in advance!
Try this:
import pandas as pd
import json
import requests
import datetime
from pandas import DataFrame
from datetime import datetime as dt
from alpha_vantage.timeseries import TimeSeries

stock_ticker = 'SPY'
api_key = 'Q134IXR7RVWU5AQL'
ts = TimeSeries(key=api_key, output_format="pandas")
data_daily, meta_data = ts.get_intraday(symbol=stock_ticker, interval='1min', outputsize='full')
print(data_daily)
The output for me is a pandas DataFrame of 1-minute open/high/low/close/volume bars.
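If the NSE symbol still comes back empty, a quick diagnostic (a sketch using the same query parameters as in the question; YOUR_API_KEY is a placeholder) is to hit the raw endpoint and print the full JSON, since AlphaVantage typically returns an "Error Message" or "Note" field when a symbol is unsupported or the rate limit is hit:
import requests

url = "https://www.alphavantage.co/query"
params = {
    "function": "TIME_SERIES_INTRADAY",
    "symbol": "NSE:TITAN",
    "interval": "1min",
    "outputsize": "full",
    "apikey": "YOUR_API_KEY",  # replace with your key
}
# Print the raw JSON so any error or rate-limit message is visible.
print(requests.get(url, params=params).json())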

airflow TriggerDagRunOperator how to change the execution date

I noticed that for scheduled tasks the execution date is set in the past, according to:
Airflow was developed as a solution for ETL needs. In the ETL world,
you typically summarize data. So, if I want to summarize data for
2016-02-19, I would do it at 2016-02-20 midnight GMT, which would be
right after all data for 2016-02-19 becomes available.
However, when a DAG triggers another DAG, the execution time is set to now().
Is there a way to have the triggered DAG use the same execution date as the triggering DAG? Of course, I could rewrite the template and use yesterday_ds; however, that is a tricky solution.
The following class expands on TriggerDagRunOperator to allow passing the execution date as a string that then gets converted back into a datetime. It's a bit hacky but it is the only way I found to get the job done.
from datetime import datetime
import logging

from airflow import settings
from airflow.utils.state import State
from airflow.models import DagBag
from airflow.operators.dagrun_operator import TriggerDagRunOperator, DagRunOrder


class MMTTriggerDagRunOperator(TriggerDagRunOperator):
    """
    MMT-patched for passing an explicit execution date
    (otherwise it's hard to hook the datetime.now() date).
    Use when you want to explicitly set the execution date on the target DAG
    from the controller DAG.

    Adapted from Paul Elliot's solution on the airflow-dev mailing list archives:
    http://mail-archives.apache.org/mod_mbox/airflow-dev/201711.mbox/%3cCAJuWvXgLfipPmMhkbf63puPGfi_ezj8vHYWoSHpBXysXhF_oZQ#mail.gmail.com%3e

    Parameters
    ------------------
    execution_date: str
        the custom execution date (jinja'd)

    Usage Example:
    -------------------
    my_dag_trigger_operator = MMTTriggerDagRunOperator(
        execution_date="{{execution_date}}",
        task_id='my_dag_trigger_operator',
        trigger_dag_id='my_target_dag_id',
        python_callable=lambda: random.getrandbits(1),
        params={},
        dag=my_controller_dag
    )
    """
    template_fields = ('execution_date',)

    def __init__(
        self, trigger_dag_id, python_callable, execution_date,
        *args, **kwargs
    ):
        self.execution_date = execution_date
        super(MMTTriggerDagRunOperator, self).__init__(
            trigger_dag_id=trigger_dag_id, python_callable=python_callable,
            *args, **kwargs
        )

    def execute(self, context):
        run_id_dt = datetime.strptime(self.execution_date, '%Y-%m-%d %H:%M:%S')
        dro = DagRunOrder(run_id='trig__' + run_id_dt.isoformat())
        dro = self.python_callable(context, dro)
        if dro:
            session = settings.Session()
            dbag = DagBag(settings.DAGS_FOLDER)
            trigger_dag = dbag.get_dag(self.trigger_dag_id)
            dr = trigger_dag.create_dagrun(
                run_id=dro.run_id,
                state=State.RUNNING,
                execution_date=self.execution_date,
                conf=dro.payload,
                external_trigger=True)
            logging.info("Creating DagRun {}".format(dr))
            session.add(dr)
            session.commit()
            session.close()
        else:
            logging.info("Criteria not met, moving on")
There is an issue you may run into when using this without setting execution_date=now(): your operator will throw a MySQL error if you try to start a DAG with an identical execution_date twice. This is because execution_date and dag_id are used to build the row index, and rows with identical indexes cannot be inserted.
I can't think of a reason you would ever want to run two identical DAGs with the same execution_date in production anyway, but it is something I ran into while testing, and you should not be alarmed by it. Simply clear the old job or use a different datetime.
The TriggerDagRunOperator now has an execution_date parameter to set the execution date of the triggered run.
Unfortunately, the parameter is not in the template fields.
If it is added to the template fields (or if you override the operator and change the template_fields value), it will be possible to use it like this:
my_trigger_task = TriggerDagRunOperator(task_id='my_trigger_task',
                                        trigger_dag_id="triggered_dag_id",
                                        python_callable=conditionally_trigger,
                                        execution_date='{{execution_date}}',
                                        dag=dag)
It has not been released yet but you can see the sources here:
https://github.com/apache/incubator-airflow/blob/master/airflow/operators/dagrun_operator.py
The commit that did the change was:
https://github.com/apache/incubator-airflow/commit/089c996fbd9ecb0014dbefedff232e8699ce6283#diff-41f9029188bd5e500dec9804fed26fb4
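For the override route, a minimal sketch (untested, against the 1.10.x-era operator; the class name is mine):
from airflow.operators.dagrun_operator import TriggerDagRunOperator


class TemplatedExecutionDateTriggerDagRunOperator(TriggerDagRunOperator):
    # Add execution_date to the templated fields so '{{ execution_date }}'
    # gets rendered before execute() runs.
    template_fields = tuple(TriggerDagRunOperator.template_fields) + ('execution_date',)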
I improved the MMTTriggerDagRunOperator a bit. The function checks whether the dag_run already exists; if it is found, it restarts the DAG using Airflow's clear function. This lets us create dependencies between DAGs, because being able to move the execution date to the triggered DAG opens up a whole universe of possibilities. I wonder why this is not the default behavior in Airflow.
def execute(self, context):
    run_id_dt = datetime.strptime(self.execution_date, '%Y-%m-%d %H:%M:%S')
    dro = DagRunOrder(run_id='trig__' + run_id_dt.isoformat())
    dro = self.python_callable(context, dro)
    if dro:
        session = settings.Session()
        dbag = DagBag(settings.DAGS_FOLDER)
        trigger_dag = dbag.get_dag(self.trigger_dag_id)
        if not trigger_dag.get_dagrun(self.execution_date):
            dr = trigger_dag.create_dagrun(
                run_id=dro.run_id,
                state=State.RUNNING,
                execution_date=self.execution_date,
                conf=dro.payload,
                external_trigger=True
            )
            logging.info("Creating DagRun {}".format(dr))
            session.add(dr)
            session.commit()
        else:
            trigger_dag.clear(
                start_date=self.execution_date,
                end_date=self.execution_date,
                only_failed=False,
                only_running=False,
                confirm_prompt=False,
                reset_dag_runs=True,
                include_subdags=False,
                dry_run=False
            )
            logging.info("Cleared DagRun {}".format(trigger_dag))
        session.close()
    else:
        logging.info("Criteria not met, moving on")
There is a function available in the experimental API section of airflow that allows you to trigger a dag with a specific execution date. https://github.com/apache/incubator-airflow/blob/master/airflow/api/common/experimental/trigger_dag.py
You can call this function as part of a PythonOperator to achieve the objective.
It will look like this:
from airflow.api.common.experimental.trigger_dag import trigger_dag

trigger_operator = PythonOperator(task_id='YOUR_TASK_ID',
                                  python_callable=trigger_dag,
                                  op_args=['dag_id'],
                                  op_kwargs={'execution_date': datetime.now()})