I have to read 10 files from a folder in a blob container, each with a different schema (most of the schemas match across the files), and merge them into a single SQL table.
File 1: let's say there are 25 columns.
File 2: some of the columns in file 2 match the columns in file 1.
File 3:
Output: a SQL table.
How do I set up a pipeline in Azure Data Factory to merge these files into a single SQL table?
My approach:
Get Metadata activity ---> ForEach over childItems ---> Copy activity
For the mapping, I constructed a JSON that contains all the source/sink columns from these files.
You can create a JSON file that contains each source file name together with its Tabular Translator. Then use a Lookup activity to read this file's content (do not tick "First row only"). Loop over this array in a ForEach activity and pass the source file name into your dataset. Finally, create a Copy data activity and use the Tabular Translator as its mapping.
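As a rough sketch (the file names, column names, and property names below are invented for illustration), such a lookup file could hold one entry per source file, each with a copy-activity mapping in the TabularTranslator format:

[
  {
    "sourceFileName": "file1.csv",
    "tabularTranslator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "CustomerID" }, "sink": { "name": "CustomerID" } },
        { "source": { "name": "OrderDate" },  "sink": { "name": "OrderDate" } }
      ]
    }
  },
  {
    "sourceFileName": "file2.csv",
    "tabularTranslator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "CustomerID" }, "sink": { "name": "CustomerID" } },
        { "source": { "name": "Region" },     "sink": { "name": "Region" } }
      ]
    }
  }
]

Inside the ForEach, the Copy activity's Mapping field would then be set via dynamic content to something like @item().tabularTranslator, and the source dataset's file-name parameter to @item().sourceFileName (both property names are mine, matching the sample file above).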
I have an Azure Data Factory trigger that fires when a file is placed in blob storage; the trigger starts a pipeline execution and passes the file name to the data flow activity. I would like to make sure that all the column names from the header row of the file are in the sink table. There is an identity column in the sink table that should not be part of the comparison. I'm not sure how to tackle this task. I've read about the "derived column" transformation; is that the route I should take?
You can select or filter which columns end up in the sink dataset or table by using "Field mapping". You can optionally use a "Derived Column" transformation; however, the sink transformation gives you this by default, and it is set to "Auto mapping". Here you can add or remove the columns that are written to the sink.
In the example below, the column "id" can be treated as the equivalent of the identity column in your table, assuming all the files have the same columns:
Once you have modified the mapping as needed, you can confirm it in the "Inspect" tab before running.
Strategy:
Use two ADF pipelines: one to get the list of all files, and another to process each file, copying its content to a specific SQL table.
Setup:
I’ve created 4 CSV files following the pattern you need, "[CustomerID]_[TableName]_[FileID].csv", and 4 SQL tables, one for each type of file.
A_inventory_0001.csv: inventory records for customer A, to be inserted into the SQL table “A_Inventory”.
A_sales_0003.csv: sales records for customer A, to be inserted into the SQL table “A_Sales”.
B_inventory_0002.csv: inventory records for customer B, to be inserted into the SQL table “B_Inventory”.
B_sales_0004.csv: sales records for customer B, to be inserted into the SQL table “B_Sales”.
Linked Services
In Azure Data Factory, the following linked services were created using Key Vault (Key Vault is optional).
Datasets
The following datasets were created. Note we have created some parameters to allow the pipeline to specify the source file and the destination SQL table.
The dataset “AzureSQLTable” has a parameter to specify the name of the destination SQL table.
The dataset “DelimitedTextFile” has a parameter to specify the name of the source CSV file.
The dataset “DelimitedTextFiles” has no parameter because it will be used to list all files from the source folder.
Pipelines
The first pipeline, “Get Files”, will get the list of CSV files from the source folder (Get Metadata activity) and then, for each file, call the second pipeline, passing the CSV file name as a parameter.
Inside the foreach loop, there is a call to the second pipeline “Process File” passing the file name as a parameter.
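A minimal sketch of that inner call, assuming the Get Metadata activity is named "GetFileList" and the loop iterates over its childItems (activity names other than "Process File" and "pFileName" are placeholders):

{
  "name": "ForEachFile",
  "type": "ForEach",
  "dependsOn": [
    { "activity": "GetFileList", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('GetFileList').output.childItems",
      "type": "Expression"
    },
    "activities": [
      {
        "name": "Call Process File",
        "type": "ExecutePipeline",
        "typeProperties": {
          "pipeline": { "referenceName": "Process File", "type": "PipelineReference" },
          "waitOnCompletion": true,
          "parameters": {
            "pFileName": { "value": "@item().name", "type": "Expression" }
          }
        }
      }
    ]
  }
}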
The second pipeline has a parameter “pFileName” to receive the name of the file to be processed and a variable to calculate the name of the destination table based on the file name.
The first activity uses a split on the file name to extract the parts we need to compose the destination table name.
In the expression below we split the file name on the “_” separator and then use the first and second parts to compose the destination table name.
@concat(string(split(pipeline().parameters.pFileName, '_')[0]), '_', string(split(pipeline().parameters.pFileName, '_')[1]))
For example, “A_inventory_0001.csv” is split into ["A", "inventory", "0001.csv"], so the variable becomes “A_inventory”, which matches the table “A_Inventory”.
The second activity then copies the file from the source “pFileName” to the destination table “vTableName” using dynamic mapping, i.e. without adding specific column names, since these are dynamic.
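A hedged sketch of what that Copy activity could look like, assuming the dataset parameters are called "FileName" and "TableName" (the post does not name them), with no translator section so the mapping stays dynamic:

{
  "name": "Copy File To Table",
  "type": "Copy",
  "inputs": [
    {
      "referenceName": "DelimitedTextFile",
      "type": "DatasetReference",
      "parameters": {
        "FileName": { "value": "@pipeline().parameters.pFileName", "type": "Expression" }
      }
    }
  ],
  "outputs": [
    {
      "referenceName": "AzureSQLTable",
      "type": "DatasetReference",
      "parameters": {
        "TableName": { "value": "@variables('vTableName')", "type": "Expression" }
      }
    }
  ],
  "typeProperties": {
    "source": { "type": "DelimitedTextSource" },
    "sink": { "type": "AzureSqlSink" }
  }
}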
The files I used in this example and the ADF code are available here:
https://github.com/diegoeick/stack-overflow/tree/main/69340699
I hope this will resolve your issue.
In case you still need to save the CustomerID and FileID in the database tables, you can use dynamic mapping: use the available parameters (the file name) and create a JSON with the dynamic mapping in the Mapping tab of your copy activity. You can find more details here: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping#parameterize-mapping
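The linked page describes passing such a mapping as a pipeline parameter of type Object and referencing it from the Mapping tab. As a rough sketch (the column names here are invented), the parameter's value would be a TabularTranslator object:

{
  "type": "TabularTranslator",
  "mappings": [
    { "source": { "name": "ProductName" }, "sink": { "name": "ProductName" } },
    { "source": { "name": "CustomerID" },  "sink": { "name": "CustomerID" } },
    { "source": { "name": "FileID" },      "sink": { "name": "FileID" } }
  ]
}

In the copy activity's Mapping tab you would then add dynamic content such as @pipeline().parameters.mapping. Note that to actually populate CustomerID and FileID from the file name, the source side still has to supply those values (for example via the copy source's "Additional columns" setting or a value derived from the filename parameter) before they can be mapped.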
I have a simple clients table in SQL Server that contains 2 columns: client name and city. In Azure Data Factory, how can I export this table to multiple CSV files so that each file contains only the clients from a single city, with the city as the file name?
I already tried, and succeeded, to split it into different files using Lookup and ForEach, but the data remains unfiltered by city.
Any ideas?
You can use Data Flow to achieve that easily.
I made an example for you: I created a table as the source and exported it to multiple CSV files, each containing only the clients from one city, with the city as the file name.
Data Flow Source:
Data Flow sink settings: set "File name option" to "As data in column" and use auto mapping.
Check the output files and the data in them:
HTH.
You would need to follow the flow below:
LookUp activity: Query: Select distinct city from table
For each activity
Input: @activity('LookUp').output.value
a) Copy activity
i) Source: dynamic query Select * from t1 where city = '@{item().City}'
This should generate a separate file for each city, as needed; a sketch of that Copy activity is below.
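A hedged sketch of the Copy activity inside the ForEach, assuming a SQL source dataset "SqlClients" and a parameterised CSV sink dataset "CityCsv" with a "FileName" parameter (these dataset and parameter names are placeholders; the table t1 and column City come from the query above):

{
  "name": "Copy City To File",
  "type": "Copy",
  "inputs": [ { "referenceName": "SqlClients", "type": "DatasetReference" } ],
  "outputs": [
    {
      "referenceName": "CityCsv",
      "type": "DatasetReference",
      "parameters": {
        "FileName": { "value": "@concat(item().City, '.csv')", "type": "Expression" }
      }
    }
  ],
  "typeProperties": {
    "source": {
      "type": "AzureSqlSource",
      "sqlReaderQuery": {
        "value": "select * from t1 where city = '@{item().City}'",
        "type": "Expression"
      }
    },
    "sink": { "type": "DelimitedTextSink" }
  }
}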
Steps (the numbered screenshots from the original answer are omitted; only their captions survive):
The batch job can be any number of parallel executions.
Create a parameterised dataset.
Result: I have 2 different entities, so 2 files are generated (the input and output were shown in the screenshots).
Azure Data Factory V2 - Copy activity - copy data from a source with changing column names and column counts to a destination. I have to copy data from a flat file where the number of columns, and even the column names, change in each file. How do I dynamically map them in the Copy activity to load the data into the destination in Azure Data Factory V2?
Suppose my destination has 20 columns, but the source sometimes comes with 10 columns, sometimes 15, sometimes 20. If the source has fewer columns than the destination, the remaining destination columns should be set to NULL.
Use data flows in ADF. Data Flow sinks can generate the table schema on the fly if you wish. Or you can just "auto-map" any changing schema to your target. If your source schema changes often, just use "schema drift" with no schema defined in your dataset.
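For illustration, a source dataset with no schema defined might look roughly like this (the dataset, linked service, container, and folder names are placeholders):

{
  "name": "DelimitedTextDriftSource",
  "properties": {
    "type": "DelimitedText",
    "linkedServiceName": { "referenceName": "AzureBlobStorageLS", "type": "LinkedServiceReference" },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "input",
        "folderPath": "flatfiles"
      },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    },
    "schema": []
  }
}

With a dataset like this, the data flow source would have "Allow schema drift" enabled and the sink left on auto-mapping, as the answer suggests.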
I have a CSV file with 2 columns (id and name). The CSV file has over 1 million names. I'm struggling to work out how I can filter my results to copy only the rows where column 2 contains the name 'mary'.
Can anyone advise?
Add a Data Flow activity to your ADF pipeline. In that data flow, point the Source to your CSV dataset. Next, add a Filter transformation and write an expression such as name == 'mary'. Next, add a Sink. This will copy only the rows that have 'mary' as the value in the name column.
I have a lot of JSON files in Blob Storage, and what I would like to do is load the JSON files via Data Factory V2 into SQL Data Warehouse. I would like the file name in a column for each JSON file. I know how to do this in SSIS, but I am not sure how to replicate it in Data Factory.
e.g. file name "CornerShop.csv" becomes "CornerShop" in the filename column in SQL Data Warehouse.
First, please see the limitations of column mapping in the copy activity; mapping fails in the following cases:
The source data store query result does not have a column name that is specified in the input dataset "structure" section.
The sink data store (if it has a pre-defined schema) does not have a column name that is specified in the output dataset "structure" section.
There are either fewer or more columns in the "structure" of the sink dataset than specified in the mapping.
There is a duplicate mapping.
So I don't think you can do the data transfer plus the file name in one step. My idea is:
1. First use a Get Metadata activity. It should return the file paths of each file you want to copy. Use "Child Items" in the Field list.
2. On success of the Get Metadata activity, run a ForEach activity. For the ForEach activity's Items, pass the list of file paths.
3. Inside the ForEach activity's Activities, place the Copy activity. Reference the iterated item with @item() or @item().name for the blob storage source file name (see the sketch below).
4. Meanwhile, pass the file name as a parameter into a stored procedure. In the stored procedure, merge the file name into the fileName column.
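A minimal sketch of the Copy activity inside that ForEach, assuming a parameterised JSON dataset called "JsonBlobFile" with a "FileName" parameter and a SQL DW dataset called "SqlDwTable" (all of these names are made up):

{
  "name": "Copy Json File",
  "type": "Copy",
  "inputs": [
    {
      "referenceName": "JsonBlobFile",
      "type": "DatasetReference",
      "parameters": {
        "FileName": { "value": "@item().name", "type": "Expression" }
      }
    }
  ],
  "outputs": [ { "referenceName": "SqlDwTable", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "JsonSource" },
    "sink": { "type": "SqlDWSink" }
  }
}

For step 4, the same @item().name value can also be passed along to whatever stored procedure or staging logic merges the file name into the fileName column; the exact sink configuration depends on how the load into SQL Data Warehouse is done.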