Azure Data Factory copy activity failing due to column mismatch - azure-data-factory

I am performing a copy activity in ADF with a CSV file in ADLS Gen1 as the source, which is copied to SQL Server. I am getting the error below. I thoroughly checked each column; the count is matching.
Error found when processing 'Csv/Tsv Format Text' source 'opportunity.csv' with row number 224: found more columns than expected column count 136
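A common cause of this error is an unescaped delimiter or quote inside a field, so that row 224 physically contains more commas than the 136-column header. Before changing the mapping, it can help to scan the file locally for such rows; a minimal sketch (assuming the file is comma-delimited with double-quote quoting and is available locally as opportunity.csv) is:

# Minimal sketch: report rows whose parsed column count differs from the
# expected 136. Adjust delimiter/quotechar to match the dataset settings.
import csv

EXPECTED = 136

with open("opportunity.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter=",", quotechar='"')
    for line_no, row in enumerate(reader, start=1):
        if len(row) != EXPECTED:
            print(f"row {line_no}: {len(row)} columns")

If the offending rows contain embedded delimiters, the fix is usually to set the correct quote and escape characters on the source dataset rather than to change the mapping.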

Related

Azure Data Factory DataFlow Sink to Delta fails when schema changes (column data types and similar)

We have an Azure Data Factory data flow that sinks into Delta. We have the Overwrite and Allow insert options set and Vacuum = 1.
When we run the pipeline over and over with no change in the table structure, the pipeline is successful.
But when the structure of the sink table changes, e.g. data types change, the pipeline fails with the error below.
Error code: DFExecutorUserError
Failure type: User configuration issue
Details: Job failed due to reason: at Sink 'ConvertToDelta': Job aborted.
We tried setting Vacuum to 0 and back, turning Merge schema on and off, and switching between Overwrite and Truncate, but the pipeline still failed.
Can you try enabling Delta Lake's schema evolution (more information)? By default, Delta Lake has schema enforcement enabled, which means that changes to the source schema are not allowed and will result in an error.
Even with overwrite enabled, unless you specify schema evolution, overwrite will fail because by default the schema cannot be changed.
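Outside the data flow UI, the same enforcement can be reproduced with plain PySpark and delta-spark, which makes the two relevant write options easier to see. This is only a sketch under assumptions (a local Spark session with delta-spark configured, hypothetical input and table paths), not the ADF sink itself:

# Sketch only: plain delta-spark, not the ADF data flow sink.
# Paths and the changed column are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

df = spark.read.parquet("/tmp/input/blocks.parquet")
df = df.withColumn("difficulty", col("difficulty").cast("double"))  # schema change

# Without a schema option, this overwrite fails with a schema-mismatch error,
# which mirrors the DFExecutorUserError seen at the data flow sink.
(df.write.format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")   # let the new schema replace the old one
   # .option("mergeSchema", "true")     # alternative: evolve the schema instead
   .save("/tmp/delta/output"))

In the data flow sink, the corresponding toggle is the Merge schema option mentioned above.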
I created an ADLS Gen2 storage account, created input and output folders, and uploaded a parquet file into the input folder.
I created a pipeline and a data flow as below:
I have taken a Parquet file as the source.
Dataflow Source:
Dataset of Source:
Data preview of Source:
I created a derived column to change the structure of the table.
Derived column:
I updated the 'difficulty' column of the parquet file, changing its data type from long to double using the code below:
difficulty : toDouble(difficulty)
Image for reference:
I updated the 'transactions_len' column of the parquet file, changing its data type from integer to float using the code below:
transactions_len : toFloat(transactions_len)
I updated the 'number' column of the parquet file, changing its data type from long to string using the code below:
number : toString(number)
Image for reference:
Data preview of Derived column:
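For reference, the same three casts expressed in pandas (hypothetical input file name) look like this, which can be handy for checking the intended target types locally before running the data flow:

# Hypothetical local equivalent of the three derived-column casts above.
import pandas as pd

df = pd.read_parquet("input/blocks.parquet")

df["difficulty"] = df["difficulty"].astype("float64")               # long -> double
df["transactions_len"] = df["transactions_len"].astype("float32")   # integer -> float
df["number"] = df["number"].astype("string")                        # long -> string

print(df.dtypes)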
I have taken delta as sink.
Dataflow sink:
Sink settings:
Data preview of Sink:
I ran the pipeline and it executed successfully.
Image for reference:
The output was successfully stored in my storage account's output folder.
Image for reference:
The procedure worked on my machine; please recheck from your end.
The source (ingestion) was generated to Azure Blob with a specific filename given. Whenever we generated the source parquet files without specifying a filename, only a directory, the sink worked.

Azure Data Factory Copy Activity Pipeline Destination Mapping String Format Date to Sql Date column Warning

I am doing a copy activity to load data from Azure Data Factory to an on-premises SQL table.
In the copy activity column mapping, I can see a warning message because the source column is a string with a date and time value (2022-09-13 12:53:28), so I created the target SQL table column with the date data type.
When importing the mapping in the copy activity, whatever date column I map to SQL shows a warning message in ADF. Kindly advise how we can resolve it.
screenshot:
The warning just indicates that the copy activity will truncate the source column data when additional information is found in a column value. There would not be any error in this case, but there might be data loss.
In your case, since the column value is 2022-09-13 12:53:28, it will be inserted without any issue into the datetime column without truncation.
The following is a demonstration where I try to insert the following source data:
id,first_name,date
1,Wenona,2022-09-13 12:53:28
2,Erhard,2022-09-13 13:53:28
3,Imelda,2022-09-13 14:53:28
The copy activity runs successfully and inserts the data. The following is my target table data after inserting:
When I insert the following data, it is truncated to a precision of 2 digits of milliseconds, as shown below.
id,first_name,date
1,Wenona,2022-09-13 12:53:28.11111
2,Erhard,2022-09-13 13:53:28.11111
3,Imelda,2022-09-13 14:53:28.11111
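If that truncation is a concern, one option is to normalise the fractional seconds in the staging file before the copy runs, so nothing is dropped implicitly at the sink. A minimal pandas sketch (file names are placeholders; it keeps two fractional-second digits to match the result above):

# Sketch: pre-truncate timestamps in the source CSV instead of relying on
# implicit truncation during the copy. File names are placeholders.
import pandas as pd

df = pd.read_csv("source.csv", parse_dates=["date"])

# %f yields six fractional digits; dropping the last four keeps two.
df["date"] = df["date"].dt.strftime("%Y-%m-%d %H:%M:%S.%f").str[:-4]

df.to_csv("source_truncated.csv", index=False)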

Azure Data Factory schema mapping not working with SQL sink

I have a simple pipeline that loads data from a csv file to an Azure SQL db.
I have added a data flow where I have ensured all schema matches the SQL table. I have a specific field which contains numbers with leading zeros. The data type in the source - projection is set to string. The field is mapped to the SQL sink showing as string data-type. The field in SQL has nvarchar(50) data-type.
Once the pipeline is run, all the leading zeros are lost and the field appears to be treated as decimal:
Original data: 0012345
Inserted data: 12345.0
The CSV data shown in the data preview displays correctly; however, for some reason it loses its formatting during the insert.
Any ideas how I can get it to insert correctly?
I reproduced this in my lab and was able to load as expected. Please see the repro details below.
Source file (CSV file):
Sink table (SQL table):
ADF:
Connect the data flow source to the CSV source file. As my file is in text format, all the source columns in the projection are strings.
Source data preview:
Connect sink to Azure SQL database to load the data to the destination table.
Data in Azure SQL database table.
Note: You can also add a derived column before the sink to convert the value to a string, as the sink data type is a string.
Thank you very much for your response.
As per your post, the ADF data flow appears to be working correctly. I have finally discovered an issue with the transformation: I have an Azure Batch service which runs a Python script that does a basic transformation and saves the output to a CSV file.
Interestingly, when I preview the data in the dataflow, it looks as expected. However, the values stored in SQL are not.
For the sake of others having a similar issue: my existing Python script converted a 'float' column to a string type. Upon conversion, it retained one decimal place, and since all of my numbers are integers, they ended up with a trailing .0.
The solution was to convert the values to integer and then to string:
df['col_name'] = df['col_name'].astype('Int64').astype('str')
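For completeness, a small self-contained demo of the difference ('col_name' is just a placeholder, as above):

# Demo of the two conversions; 'col_name' is a placeholder column name.
import pandas as pd

df = pd.DataFrame({"col_name": [12345.0, 67.0]})

print(df["col_name"].astype("str").tolist())
# ['12345.0', '67.0']  <- the trailing .0 that ended up in SQL

print(df["col_name"].astype("Int64").astype("str").tolist())
# ['12345', '67']      <- integer values, no trailing .0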

Need recommendation in adf pipeline source properties while loading delimited text files from azure blob to snowflake

We are trying to load a delimited file located in Azure Blob which has blank data for a few columns, and we would like to get a value like NA in our target Snowflake table whenever we encounter a blank value in the source CSV file. We have been trying to provide NA against the Null value option, but it is not working. Any suggestions?
Here is a screenshot of what I have mentioned above.
I have used a data flow activity in Azure Data Factory to resolve this issue.
Source file with NULL value in “Name” column.
Now use a Derived Column transformation. In the derived column's settings, select the column name and use the iifNull({Name}, 'NA') expression.
In the data preview, the null value in the Name column is replaced with NA.
You can follow the above steps to replace null values and sink data from Blob storage to Snowflake.
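If the file is already being staged or post-processed with Python, the same substitution can be done before the copy instead of in a data flow. This is only an alternative preprocessing sketch (file names are placeholders, the column name Name is taken from the example above), not the ADF expression itself:

# Alternative preprocessing sketch: replace blanks/nulls with 'NA' in pandas.
import pandas as pd

df = pd.read_csv("source.csv")

# Treat empty strings and NaN alike, then substitute the literal 'NA'.
df["Name"] = df["Name"].replace("", pd.NA).fillna("NA")

df.to_csv("source_na_filled.csv", index=False)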

Capturing Each Skipped Record - Copy Data Activity

I'm doing a large-scale project with multiple pipelines, millions of records per pipeline. I'm trying to develop a generic skipped row capture process.
What I need to do is: for every source row skipped due to any error encountered on the attempted load, I want to capture a key column value from the row and write it to a distinct log file (or separate DB table row). This can't be summary data: for each individual row that fails, I need to capture the row key from that row so we can review/re-load later (I will add in system variable values to identify pipeline, component, time stamp, etc). Pipeline must complete with all successful rows loaded, all unsuccessful rows logged.
This is no-brainer functionality in most ETL tools; I have to be overlooking something in ADF, because I can't find a way to do this. Appreciate any/all suggestions.
You can enable Fault tolerance and choose the Skip incompatible rows option. It will skip rows that are incompatible between the source and target store during the copy, e.g. type or field mismatches or PK violations.
Then you can enable the session log and choose the Warning log level in the copy activity to log skipped rows. Finally, you can save your log file in Azure Storage or Azure Data Lake Storage Gen2.
Reference:
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-fault-tolerance
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-log
With your first copy activity, check the fault tolerance option in 'Settings' to log skipped rows.
Make sure to place your row's key column first in the mapping definition.
Get the copy activity logFilePath from the activity output into a variable.
Add another copy activity to load the skipped rows into a relational table.
Its source path will be the variable that holds logFilePath.
Set the file path type to 'Wildcard file path'.
Keep the wildcard folder path empty; the variable will be the value in 'Wildcard file name'.
Make sure that the delimited file dataset's escape character is set to quotation marks.
The OperationItem field of the log file holds your record's fields separated by commas; because we placed the row ID first in the mapping, it will appear first in OperationItem as well.
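Once the log lands in storage, pulling the skipped keys out of it is straightforward. A minimal parsing sketch (assuming the session-log columns Level and OperationItem described in the docs linked above, and a placeholder file name for the downloaded log):

# Sketch: extract the key of each skipped row from a downloaded session log.
# Column names follow the copy-activity session log format; the file name is
# a placeholder.
import csv

skipped_keys = []
with open("copyactivity-log.csv", newline="", encoding="utf-8") as f:
    for entry in csv.DictReader(f):
        if entry.get("Level") == "Warning":
            # The row key appears first because it was mapped first.
            skipped_keys.append(entry["OperationItem"].split(",")[0])

print(skipped_keys)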
Good luck!