Execute a pipeline after another pipeline completes - azure-data-factory

I have a first pipeline that ingests data for multiple countries from BigQuery to Azure; it copies transformed BigQuery data into Azure.
In Data Factory, I created a folder for each country, each containing multiple pipelines: for example, a specific machine learning model for only 1 or 2 countries, a data preparation pipeline for an application for only 5 countries, etc.
I think I need this folder structure per market to keep things clear for anybody who needs to implement a pipeline and to avoid errors.
My main problem with this setup is: how can I call, for example, a machine learning pipeline in my UK folder so that it only starts after the first pipeline (the BigQuery-to-Azure copy) has completed?
I can't use the Execute Pipeline activity because my first pipeline, bigquerytoazure, runs on its own; it is the critical step that must finish before any other pipeline can run.
Is there any way to start pipelines after that pipeline completes without using the Execute Pipeline activity?
I thought about having the first pipeline create a dummy blob in storage that could work as a trigger for all the pipelines that come after it.
Thanks in advance, I hope I was clear.

A Data Factory event trigger based on blob storage. I think that's the best way.
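If you want to wire that up programmatically, here is a minimal sketch using the azure-mgmt-datafactory Python SDK, assuming hypothetical resource names and a hypothetical marker blob written by the first pipeline; it is only an illustration of the blob event trigger idea, not the asker's actual setup:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobEventsTrigger, PipelineReference, TriggerPipelineReference, TriggerResource,
)

# Hypothetical placeholders: subscription, resource group, factory, storage
# account and downstream pipeline names are not taken from the question.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

trigger = BlobEventsTrigger(
    # Fire when the first pipeline drops its "done" marker blob.
    scope=("/subscriptions/<subscription-id>/resourceGroups/<rg>"
           "/providers/Microsoft.Storage/storageAccounts/<storage-account>"),
    events=["Microsoft.Storage.BlobCreated"],
    blob_path_begins_with="/pipeline-flags/blobs/bigquerytoazure-done",
    ignore_empty_blobs=False,
    # Every country-specific pipeline that depends on the copy can be listed here.
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(reference_name="uk_ml_pipeline"))],
)

client.triggers.create_or_update(
    "<rg>", "<data-factory-name>", "AfterBigQueryCopyTrigger",
    TriggerResource(properties=trigger),
)
```

The trigger still has to be started (Triggers > Start in the UI, or the SDK's start call) before it fires.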
Another option is to use a Logic App: add a trigger that listens to the copied BigQuery table in the SQL database, and when that table is modified, execute a Data Factory pipeline. Create a workflow for the pipeline runs.
Workflow:
SQL Server trigger: when an item is modified.
Add a parallel branch.
Data Factory action: Get a pipeline run.
Reference: Automate workflows for SQL Server or Azure SQL Database by using Azure Logic Apps
Hope this helps.

Related

Perform data checks in Azure Data Factory

I have an ADF pipeline which reads data from an on-prem source and copies it to a dataset in Azure.
I want to perform some data checks:
Whether the data contains the features I need
Whether there are nulls in some features
Whether a feature is all nulls
The pipeline should fail if the conditions above are not met
Is there a way to do this in Data Factory using just activities, or maybe a data flow, without using a Batch service?
There are many approaches to this. You could do a traditional batch process running a function/code in a separate process, or you could weave together ADF activities into multiple steps: a combination of a 'Lookup Activity', possibly followed by a 'Validation Activity' and a 'Delete Activity', with your criteria and rules defined.
Azure Data Factory 'Data Flows' - https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-overview - allow you to map out data transformations in a codeless fashion as data moves through the pipeline.
A pattern with ADF Data Flows is 'Wrangling Data Flows', used to work with data and prepare it for consumption. Reference article - https://learn.microsoft.com/en-us/azure/data-factory/wrangling-overview
The Copy activity in Azure Data Factory (ADF) or Synapse Pipelines provides some basic validation checks called 'data consistency'. This can do things like fail the activity if the number of rows read from the source differs from the number of rows written to the sink, or identify the number of incompatible rows that were not copied, depending on the type of copy you are doing.
This is probably not quite at the level you want, so you could look at writing something custom, e.g. using the Stored Procedure activity, or look at Mapping Data Flows and its Assert transformation, which could do something like this. There's a useful video in the link which shows the feature.
I tried using Assert activities, but for the scope of my work this wasn't enough!
Therefore, I ended up using Python code for the data checks.
However, the Assert activity serves you better if your data check criteria are not as hard as mine.
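For reference, a minimal sketch of the kind of Python checks described in the question, assuming pandas and hypothetical column names and file path; a Custom/Batch step would run something like this and fail the pipeline when it raises:

```python
import pandas as pd

# Hypothetical required feature names; replace with your own.
REQUIRED_FEATURES = ["customer_id", "order_date", "amount"]

def check_dataset(df: pd.DataFrame) -> None:
    """Raise ValueError so the calling activity fails the pipeline run."""
    # 1. The data must contain the features we need.
    missing = [c for c in REQUIRED_FEATURES if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required features: {missing}")

    # 2. Fail hard if a feature is entirely null.
    all_null = [c for c in REQUIRED_FEATURES if df[c].isna().all()]
    if all_null:
        raise ValueError(f"Features containing only nulls: {all_null}")

    # 3. Report features that contain some nulls.
    partially_null = [c for c in REQUIRED_FEATURES if df[c].isna().any()]
    if partially_null:
        print(f"Warning: nulls found in {partially_null}")

df = pd.read_csv("copied_dataset.csv")  # hypothetical path to the copied data
check_dataset(df)
```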
You can try to create data flows and apply the Conditional Split transformation. This will help you achieve your scenario.
There is no coding needed for this; you can do it diagrammatically in an ADF or Azure Synapse data flow.
See my attached data flow diagram, which checks a few conditions such as whether the year is less than the specified year, whether data in a column is null, the date format, etc.
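If it helps to see the split logic outside the designer, here is a rough pandas sketch of the same idea, with hypothetical column names, date format, and year threshold; the data flow's Conditional Split expression would encode the same predicates:

```python
import pandas as pd

# Hypothetical columns and threshold, for illustration only.
df = pd.read_csv("raw_feed.csv")

bad_rows = (
    (pd.to_numeric(df["year"], errors="coerce") < 2015)                    # year below the specified year
    | df["customer_id"].isna()                                             # null in a required column
    | pd.to_datetime(df["order_date"], format="%Y-%m-%d",
                     errors="coerce").isna()                               # wrong date format
)

rejected = df[bad_rows]    # rows routed to an error sink for review
curated = df[~bad_rows]    # rows routed to the curated table
```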

How to manually update a pipeline in Azure Data Factory

Right now we are doing some tasks manually while the entire solution is being prepared. One of these tasks is the update process in some resource groups. Normally, we export a template from our development environment and then import that .zip template. However, if the pipeline or other objects are already present in the target ADF, they are created with a 1 appended to the name.
Then we need to rename the objects (pipelines, data flows, datasets) to implement the new solution. We were wondering if there is a way to avoid this, or another way to do it.
If both your Test_Pipeline and Test_Pipeline1 are going to perform similar types of actions, you can create a dynamic pipeline using parameters in ADF, create a metadata/config entry for the operation you need to perform, and add a Lookup activity; that would avoid creating multiple pipelines.
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity
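As a sketch of what that metadata-driven approach can look like, here is a hedged example with hypothetical config values and a hypothetical generic pipeline name, using the azure-mgmt-datafactory SDK's create_run; inside ADF the same config would usually live in a SQL table or JSON file read by a Lookup activity feeding a ForEach:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Hypothetical metadata/config: each row drives one parameterised run of a
# single generic pipeline instead of a renamed copy of the pipeline.
pipeline_config = [
    {"sourceTable": "raw.orders",    "sinkTable": "curated.orders"},
    {"sourceTable": "raw.customers", "sinkTable": "curated.customers"},
]

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

for row in pipeline_config:
    # One generic, parameterised pipeline handles every config entry.
    client.pipelines.create_run(
        "<rg>", "<data-factory-name>", "generic_copy_pipeline", parameters=row,
    )
```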

Azure Data Flow generic curation framework

I wanted to create a data curation framework using Data Flows that relies on generic data flow pipelines.
I have multiple data feeds (raw tables) to validate (between 10 and 100) and write to the sink as curated tables:
For each raw data feed, validate the expected schema (based on a parameterized file name).
For each raw data feed, provide the Data Flow Script with validation logic (some columns should not be null, some columns should have specific data types and value ranges, etc.).
Using the Python SDK, create the Data Factory and mapping data flow pipelines using the Data Flow Script prepared with the provided parameters (for schema validation).
Trigger the Python code that creates the pipelines for each feed, performs the validation, writes the issues into a Log Analytics workspace, and tears down the resources on specific schedules.
Has anyone done something like this? What is the best approach for the above please?
My overall goal is to reduce the time needed to validate/curate the data feeds, so I wanted to prepare the validation logic quickly for each feed and create Python classes or PowerShell scripts scheduled to run them on generic data pipelines at specific times of the day.
Many thanks,
CK
To validate the schema, you can have a reference dataset which has the same schema (first row) as your main dataset. Then use a “Get Metadata” activity for each dataset, configured to return each dataset's structure.
You can then use an “If Condition” activity to compare the structures of both datasets with the equals logical function.
If both datasets' structures match, your next required activity (like copying the dataset to another container) will be performed.
The script you want to run on your ingested dataset can be executed using a “Custom” activity. You again need to create the linked service and its corresponding dataset for the script you will run to validate the raw data. Please refer to: https://learn.microsoft.com/en-us/azure/batch/tutorial-run-python-batch-azure-data-factory
Scheduling the pipeline is handled by triggers in Azure Data Factory. A schedule trigger will take care of your requirement to automatically trigger your pipeline at a specific time.
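Since the question mentions the Python SDK, here is a minimal sketch of a schedule trigger built with the azure-mgmt-datafactory models; the pipeline name, parameter, and resource names are hypothetical, and `client` is a DataFactoryManagementClient constructed as in the earlier sketches:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Run once a day, starting shortly after the trigger is created and started.
recurrence = ScheduleTriggerRecurrence(
    frequency="Day",
    interval=1,
    start_time=datetime.now(timezone.utc) + timedelta(minutes=15),
    time_zone="UTC",
)

trigger = ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(reference_name="curate_raw_feed"),
        parameters={"feedName": "feed_01"},   # hypothetical feed parameter
    )],
)

client.triggers.create_or_update(
    "<rg>", "<data-factory-name>", "DailyCurationTrigger",
    TriggerResource(properties=trigger),
)
```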

Event based trigger for a sequential run of the same data factory pipeline

I would like to use an event based trigger to run a data factory pipeline.
The trigger will check a folder in a data lake for any new file and start a pipeline once a new CSV file is copied.
The pipeline will then copy the data to an intermediate table to check its consistency (multiple checks using different data flow activities) and if everything's correct, copies it into a stage table.
It is thus very important that the intermediate table will contain the data from only one single CSV file before it is checked.
I have read, though, that the event-based trigger will start in parallel as many pipeline runs as there are (simultaneously) downloaded CSV files.
Is this right? In that case, how can I force each pipeline run to wait until the previous one is done?
Thank you for your help.
There is a setting in the pipeline properties (accessible in the top-right of the editor pane) called concurrency. Set this to 1 and only one run will execute at a time; any other invocations will be queued until that one finishes.
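If the pipeline is deployed from code rather than edited in the portal, concurrency is set on the pipeline resource itself; a hedged sketch with the azure-mgmt-datafactory SDK, using a hypothetical pipeline name and whatever activities the pipeline already has:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import PipelineResource

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hypothetical: `existing_activities` stands in for the pipeline's real activity list.
existing_activities = []

# concurrency=1 queues additional invocations until the running one finishes.
pipeline = PipelineResource(activities=existing_activities, concurrency=1)
client.pipelines.create_or_update("<rg>", "<data-factory-name>", "load_csv_to_stage", pipeline)
```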

How to implement conditional branches in Azure Data Factory pipelines

I am implementing a pipeline to insert data updates from CSV files into a SQL DB. The plan is to first insert the data into a temporary SQL table for validation and transformation, and then move the processed data to the actual SQL table. I would like to branch the pipeline execution depending on the validation result. If the data is OK, it will be inserted into the target SQL table. If there are fatal failures, the insertion activity should be skipped.
I tried to find instructions/guidance but with no success so far. Any idea whether pipeline activities support conditional execution, e.g. based on some properties of the input dataset?
It is now possible with Azure Data Factory version 2.
After execution, downstream activities can depend on four possible outcomes as standard:
- On success
- On failure
- On completion
- On skip
Also, custom 'if' conditions are available for branching based on expressions.
Refer to the links below for more detail:
https://www.purplefrogsystems.com/paul/2017/09/whats-new-in-azure-data-factory-version-2-adfv2/
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-control-flow
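If the pipeline is authored through the Python SDK rather than the UI, those dependency conditions map onto `depends_on`; here is a minimal sketch with hypothetical activity and pipeline names, not the asker's actual objects:

```python
from azure.mgmt.datafactory.models import (
    ActivityDependency, ExecutePipelineActivity, PipelineReference, PipelineResource,
)

# Hypothetical setup: a validation step runs first, and the insert step only
# runs when the validation activity reports "Succeeded".
validate = ExecutePipelineActivity(
    name="ValidateStagingData",
    pipeline=PipelineReference(reference_name="validate_staging_pipeline"),
    wait_on_completion=True,
)

insert = ExecutePipelineActivity(
    name="InsertToTarget",
    pipeline=PipelineReference(reference_name="insert_to_target_pipeline"),
    depends_on=[ActivityDependency(
        activity="ValidateStagingData",
        dependency_conditions=["Succeeded"],   # could also branch on Failed, Skipped, or Completed
    )],
)

branching_pipeline = PipelineResource(activities=[validate, insert])
```

The resulting `branching_pipeline` would then be deployed with `pipelines.create_or_update`, as in the other sketches above.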
The short answer is no.
I think it's worth pointing out that ADF is just an orchestration tool for invoking other services. The current version can't do what you want because it does not have any compute of its own; it's not an SSIS data flow engine.
If you want this behaviour, you'll need to code it into the SQL DB stored procedures, with flags etc. on the processed datasets.
Then maybe have some boilerplate code with parameters passed from ADF to perform either the insert, update, or divert operation.
Handy link for calling a stored procedure with parameters from ADF: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-stored-proc-activity
Hope this helps.