Invalid data stored in Delimited Text after running ADF Data Flow - azure-data-factory

I'm trying to store data from an input to a CSV file in blob storage via an ADF data flow. The pipeline ran successfully. However, on checking the CSV file, I see some invalid data included. Here are the settings of the Delimited Text dataset and the Sink. Please let me know what I am missing.

I tested and reproduced the error.
The error is caused by the fact that the CSV files in the csv/test folder have different schemas.
Even if the pipeline runs without errors, the data written to the single output file will be invalid.
In Data Factory, when we merge several files into one, or copy data from multiple files into a single file, all the files in the folder must have the same schema.
Note: please use wildcard paths to filter the CSV files:
For example, I have two CSV files with the same schema in the container:
Source dataset preview:
Only if the source dataset preview is correct will the output file also be correct.
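If you want to confirm up front that the files really do share a schema, a small local check can help. The following is a minimal sketch, assuming the CSV files have been downloaded to a local csv/test folder (the folder pattern and function name are illustrative, not part of ADF):

    import csv
    import glob

    # Verify that every CSV in the folder shares the same header row
    # before letting ADF merge them into a single file.
    def check_schemas(folder_pattern="csv/test/*.csv"):
        reference_header = None
        for path in sorted(glob.glob(folder_pattern)):
            with open(path, newline="") as f:
                header = next(csv.reader(f), [])
            if reference_header is None:
                reference_header = header
            elif header != reference_header:
                print(f"Schema mismatch in {path}: {header} != {reference_header}")
                return False
        return True

    if __name__ == "__main__":
        print("All files share the same schema:", check_schemas())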

Related

Issue while updating copy activity in ADF

I want to update a source Excel column with a particular string.
My source contains n columns. I need to check whether the string apple exists in any one of the columns. If the value exists in any column, I need to replace apple with the string orange and output the Excel file. How can I do this in ADF?
Note: I cannot use data flows since we are using a self-hosted VM.
Excel files have a lot of limitations in ADF; for example, Excel is not supported as a sink in the copy activity or in the Data Flow sink.
You can raise a feature request for that in ADF.
So, try the above operation with a CSV and copy the result to a CSV in blob storage, which you can later convert to Excel on your local machine.
For operations like the above, Data Flow is a better option than regular activities, since Data Flow handles the transformations.
But Data Flow doesn't support self-hosted linked services.
So, as a workaround, first copy the Excel file as CSV to Blob storage using a copy activity. Create a Blob linked service for it to use in the data flow.
Now follow the process below in the Data Flow.
Source CSV from Blob:
Derived column transformation:
Give the condition for each column: case(col1=="apple", "orange", col1)
Sink :
In the Sink settings, specify Output to single file.
After the pipeline execution, a CSV will be generated in the blob container. You can convert it to Excel on your local machine.
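For reference, a minimal local sketch of the same replacement logic in Python/pandas (the file names source.csv and output.csv are placeholders) can be useful for checking the expected output before wiring up the data flow:

    import pandas as pd

    # Read the copied CSV, replace "apple" with "orange" in every column,
    # and write the result back out -- the same idea as applying
    # case(colX == "apple", "orange", colX) per column in the derived column.
    df = pd.read_csv("source.csv")
    df = df.apply(lambda col: col.where(col != "apple", "orange"))
    df.to_csv("output.csv", index=False)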

Infer schema from a .csv file appearing in Azure Blob and generate a DDL script automatically using Azure SQL

Every time a .csv file appears in blob storage, I have to create the DDL for it manually on Azure SQL. The data type is based on the values specified for each field.
The file has 400 columns, and doing this manually takes a lot of time.
Could someone please suggest how to automate this using a stored procedure or script, so that when we execute it, it creates the TABLE or DDL script based on the file in blob storage?
I am not sure if this is possible, or whether there is a better way to handle such a scenario.
I appreciate your valuable suggestions.
Many Thanks
This can be achieved in multiple ways. Since you mentioned automating it, you can use an Azure Function as well.
First, create a function that reads the CSV file from blob storage:
Read a CSV Blob file in Azure
Then add the code to generate the DDL statement:
Uploading and Importing CSV File to SQL Server
The Azure Function can be scheduled, or triggered whenever new files are added to blob storage.
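As a rough illustration of what the function body could look like, here is a minimal Python sketch that reads the CSV header plus one sample row from blob storage and emits a CREATE TABLE statement. The connection string, container, blob, and table names are placeholders, and the type inference is deliberately simplistic; treat it as a starting point rather than a finished solution:

    import csv
    import io
    from azure.storage.blob import BlobClient  # azure-storage-blob v12

    # Placeholder connection details -- replace with your own.
    CONN_STR = "<storage-connection-string>"
    CONTAINER = "input"
    BLOB_NAME = "data.csv"
    TABLE_NAME = "dbo.MyTable"

    def infer_sql_type(value: str) -> str:
        """Very rough type inference from a single sample value."""
        try:
            int(value)
            return "INT"
        except ValueError:
            pass
        try:
            float(value)
            return "FLOAT"
        except ValueError:
            return "NVARCHAR(255)"

    def build_ddl() -> str:
        # Download the CSV and read the header plus the first data row.
        blob = BlobClient.from_connection_string(
            CONN_STR, container_name=CONTAINER, blob_name=BLOB_NAME
        )
        text = blob.download_blob().readall().decode("utf-8")
        reader = csv.reader(io.StringIO(text))
        header = next(reader)
        sample = next(reader, [""] * len(header))
        columns = ",\n  ".join(
            f"[{name}] {infer_sql_type(value)}" for name, value in zip(header, sample)
        )
        return f"CREATE TABLE {TABLE_NAME} (\n  {columns}\n);"

    if __name__ == "__main__":
        print(build_ddl())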
If this is a once-a-day kind of requirement and can also be done manually, we can download the file from blob storage and use the 'Import Flat File' functionality available within SSMS, where we just specify the CSV file and it creates the schema based on the existing column values.

Load ForEach SQL into single CSV in Azure Data Factory

I have an ADF where I am executing a stored procedure in a ForEach and using Copy Data to load the output into a CSV file
On each iteration of the ForEach, the CSV is being cleared down and loaded with that iteration's data.
I require it to preserve the already loaded data and insert the output from the iteration
The CSV should have a full dataset of all iterations
How can I achieve this? I tried using the "Merge Files" option in the sink Copy behavior, but it doesn't work for SQL to CSV.
As @All About BI mentioned, the append behavior you are looking for is currently not supported.
You can raise a feature request from the ADF portal.
Alternatively, you can check the below process to append data in CSV.
In my repro, I am generating the loop items using a Set Variable activity and passing them to a ForEach activity.
Inside the ForEach activity, I use a copy data activity to execute the stored procedure in the source and copy its output to a CSV file.
In the copy data activity sink, generate the file name using the current item of the ForEach loop, so that each iteration writes its data to a different file. I also add a constant prefix to the file name so these files can be identified and deleted at the end, after merging.
File_name: @concat('1_sp_sql_data_',string(item()),'.csv')
Add another copy data activity after the ForEach activity to combine the data from all the files produced by the ForEach iterations into a single file. Here I am using a wildcard path (*) to get all files from the folder.
In the sink, set the destination file name and the copy behavior to Merge files to copy all source data into a single sink file.
After merging, the data is copied to a single file, but the intermediate files are not deleted. So the next time you run the pipeline, there is a chance the old files will be merged with the new files again.
• To avoid this, add a Delete activity to delete the files generated by the ForEach activity.
• Since I added a constant prefix when generating these files, it is easy to delete them based on the file name (deleting all files that start with "1_").
Destination file:
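The merge-then-clean-up pattern described above can also be sketched outside ADF if it helps to visualise what the wildcard copy and the Delete activity are doing. A minimal local Python equivalent (folder and file names are illustrative) might look like this:

    import glob
    import os

    # 1) Each iteration has written a file named 1_sp_sql_data_<item>.csv.
    parts = sorted(glob.glob("staging/1_sp_sql_data_*.csv"))

    # 2) Merge all "1_*" files into a single destination file,
    #    keeping the header only from the first file.
    with open("staging/final_sp_sql_data.csv", "w", newline="") as out:
        for i, path in enumerate(parts):
            with open(path, newline="") as f:
                lines = f.readlines()
            out.writelines(lines if i == 0 else lines[1:])

    # 3) Delete the intermediate "1_*" files so they are not merged
    #    again on the next pipeline run.
    for path in parts:
        os.remove(path)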
You could try this:
Load each iteration's data into a separate CSV file.
Later, union or merge them all,
as right now we don't have the ability to append rows to a CSV.

How to remove extra files when sinking CSV files to Azure Data Lake Gen2 with Azure Data Factory data flow?

I have done the data flow tutorial. The sink currently creates 4 files in Azure Data Lake Gen2.
I suppose this is related to the HDFS file system.
Is it possible to save the output without the success, committed, and started files?
What is best practice? Should they be removed after saving to Data Lake Gen2?
Are they needed in further data processing?
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-data-flow
There are a couple of options available.
You can specify the output file name in the Sink transformation settings.
Select Output to single file from the file name option dropdown and give the output file name.
You could also parameterize the output file name as required. Refer to this SO thread.
You can add a Delete activity after the data flow activity in the pipeline and delete the extra files from the folder.
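If you prefer to clean up programmatically instead of adding a Delete activity, a small script can remove just the marker files. This is a minimal sketch assuming the azure-storage-blob SDK and placeholder connection, container, and folder names; it deletes the _SUCCESS, _committed_* and _started_* files while leaving the data files in place:

    from azure.storage.blob import ContainerClient  # azure-storage-blob v12

    # Placeholder connection details -- replace with your own.
    CONN_STR = "<storage-connection-string>"
    CONTAINER = "output"
    FOLDER = "dataflow-output/"

    container = ContainerClient.from_connection_string(CONN_STR, CONTAINER)

    # Delete only the marker files left behind by the data flow sink.
    for blob in container.list_blobs(name_starts_with=FOLDER):
        name = blob.name.rsplit("/", 1)[-1]
        if name == "_SUCCESS" or name.startswith("_committed") or name.startswith("_started"):
            container.delete_blob(blob.name)
            print(f"Deleted {blob.name}")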

ADF Data Flow creates empty file

I have set up a very basic data transformation using a "Data flow". I'm taking in a CSV file, modifying one of the columns, and writing to a new CSV file in an "output" directory. I noticed that after the pipeline runs, not only does it create the output folder, but it also creates an empty file with the same name as the output folder.
Did I setup something wrong or is this empty file normal?
Screenshots: Sink Settings (Sink, Settings, Mapping, Optimize), Storage
Thank you,
Scott