ADF Staged Copy Not Applying Schema Mapping for XML - azure-data-factory

I'm trying to copy data between a SOAP Web Service and an Azure SQL Database. When I use the staging option of the copy activity, mappings are not applied and no data is copied. If I disable the stage and write directly to a text file, mappings are applied as expected. How can I make the mappings apply when the staging option is enabled?
Additional Information
Source: HTTP
Sink: Azure SQL Database
Direct copies between the source and sink do not work because of where they're located, so I need to stage the copy.
However, when staging the copy, the defined mappings are not being applied and the sink database table ends up with a single null row.
When using a delimited text sink without a staging step, the mappings work as expected.
However, as soon as I add a staging step, the same issue occurs with a delimited text sink.
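In JSON terms, the failing setup is essentially a single copy activity with staging enabled, roughly like the sketch below (the linked service, dataset, path, and column names are hypothetical placeholders). It is the mapping in the translator section that stops being applied once enableStaging is true.

```json
{
  "name": "CopyHttpXmlToSqlStaged",
  "type": "Copy",
  "inputs": [ { "referenceName": "HttpXmlDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "AzureSqlTable", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "XmlSource" },
    "sink": { "type": "AzureSqlSink" },
    "enableStaging": true,
    "stagingSettings": {
      "linkedServiceName": { "referenceName": "StagingBlobStorage", "type": "LinkedServiceReference" },
      "path": "stagingcontainer"
    },
    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "path": "$['Envelope']['Body']['Response']['Id']" }, "sink": { "name": "Id" } }
      ]
    }
  }
}
```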

I have reproduced the same issue. I used HTTP (XML response) as the source and Azure SQL Database as the sink, with the staged data also stored as an XML file. In that setup, several columns come through as null and only part of the data reaches the sink; the mapping defined in the Mapping tab of the copy activity is not applied.
The issue does not occur when another source format such as delimited text is used; in those cases, the mapping is applied as given in the Mapping tab.
As a workaround, you can try using two copy activities that run sequentially: one from the HTTP source to Blob storage, and a second from Blob storage to the sink.
In copy activity 1, HTTP is the source and a CSV file in Blob storage is the sink.
In the Mapping tab, I defined the corresponding mapping.
In copy activity 2, I used the same Blob storage dataset as the source and the Azure SQL Database dataset as the sink.
In copy activity 2, I tested both auto-mapping and manual mapping; both worked in my case. A rough JSON sketch of this two-activity pipeline is included below.
Final Sink Table
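For reference, here is a minimal sketch of how such a pipeline could look in JSON. The dataset names, column names, and XML paths are hypothetical placeholders; the points to note are the explicit TabularTranslator mapping on the first copy activity and the dependsOn entry that makes the second copy run only after the first succeeds. If the XML contains a repeating element, the translator would typically also need a collectionReference pointing at it.

```json
{
  "name": "HttpXmlToSqlViaBlob",
  "properties": {
    "activities": [
      {
        "name": "CopyHttpXmlToBlobCsv",
        "type": "Copy",
        "inputs": [ { "referenceName": "HttpXmlDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "BlobCsvStaging", "type": "DatasetReference" } ],
        "typeProperties": {
          "source": { "type": "XmlSource" },
          "sink": { "type": "DelimitedTextSink" },
          "translator": {
            "type": "TabularTranslator",
            "mappings": [
              { "source": { "path": "$['Envelope']['Body']['Response']['Id']" }, "sink": { "name": "Id" } },
              { "source": { "path": "$['Envelope']['Body']['Response']['Name']" }, "sink": { "name": "Name" } }
            ]
          }
        }
      },
      {
        "name": "CopyBlobCsvToSql",
        "type": "Copy",
        "dependsOn": [ { "activity": "CopyHttpXmlToBlobCsv", "dependencyConditions": [ "Succeeded" ] } ],
        "inputs": [ { "referenceName": "BlobCsvStaging", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "AzureSqlTable", "type": "DatasetReference" } ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "AzureSqlSink" }
        }
      }
    ]
  }
}
```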

Related

Issue while updating copy activity in ADF

I want to update a source Excel column with a particular string.
My source contains n columns. I need to check whether the string apple exists in any of the columns; wherever it does, I need to replace apple with orange and output the result as Excel. How can I do this in ADF?
Note: I cannot use Data Flows since we are using a self-hosted VM.
Excel has quite a few limitations in ADF; for example, it is not supported as a sink in the copy activity or in Data Flows.
You can raise a feature request for this with the ADF team.
So, perform the operation on a CSV instead and copy the result to a CSV in Blob storage, which you can later convert to Excel on your local machine.
For operations like this, a Data Flow is a better option than regular activities, since Data Flows are built for transformations.
However, Data Flows do not support linked services that use a self-hosted integration runtime.
So, as a workaround, first copy the Excel file to Blob storage as CSV using a copy activity, and create a Blob storage linked service for it to use in the Data Flow.
Then follow the process below in the Data Flow.
Source CSV from Blob:
Derived column transformation:
Give the condition for each column, e.g. case(col1=="apple", "orange", col1). To cover all n columns with a single rule, see the column pattern sketch after these steps.
Sink:
In the sink settings, specify Output to single file.
After the pipeline runs, a CSV file is generated in Blob storage, which you can then convert to Excel on your local machine.
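Since the source has n columns, instead of adding one derived column per column you could use a single column pattern in the derived column transformation. A minimal sketch, assuming every string column should be checked (the field names below mirror the pattern editor; $$ refers to the matched column's name in the name expression and to its value in the value expression):

```
Each column that matches:  type == 'string'
Column name expression:    $$
Value expression:          case($$ == 'apple', 'orange', $$)
```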

Merging data in Datalake

I'm working on a project where we need to bring data from a SQL Server database into a data lake.
I achieved that through a pipeline which ingests data from the source and loads it into the data lake in Parquet format.
My question is how to merge (upsert) new data from the data source into the existing file in that data lake.
You can use Azure Data Flows, in which you can combine the existing file with other sources and overwrite the existing file; unlike for databases, there is no direct upsert activity in ADF for files. A rough sketch of such a data flow script is shown after the reference link below.
Reference:
https://learn.microsoft.com/en-us/answers/questions/542994/azure-data-factory-merge-2-csv-files-with-differen.html
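As a rough illustration of that approach, a data flow could read the existing Parquet file and the new data as two sources, keep only the existing rows whose key is not present in the new data, union them with the new rows, and overwrite the file. The sketch below is hand-written data flow script with hypothetical stream names and a hypothetical key column Id; the script ADF generates for you will differ, and real sources would project all columns rather than just the key.

```
source(output(
        Id as string
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> ExistingLake
source(output(
        Id as string
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> NewData
ExistingLake, NewData exists(ExistingLake@Id == NewData@Id,
    negate: true,
    broadcast: 'auto') ~> UnchangedRows
UnchangedRows, NewData union(byName: true) ~> MergedRows
MergedRows sink(allowSchemaDrift: true,
    validateSchema: false) ~> MergedParquet
```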

Azure Data Factory: Implementing SCD2 on txt files

I have flat files in an ADLS source.
For the full load, we add two columns: an Insert flag and a datetime stamp.
For the change load, we need to look up against the full data: records that are present in the full data should be treated as Update, records that are not present should be treated as Insert, and then copied.
Below is the approach I tried to work out, but I'm unable to get it working.
Can anyone help me with this?
Thank you, and waiting for a quick response.
Currently, updating an existing flat file through the Azure Data Factory sink is not supported; you have to create a new flat file.
You can also use a Data Flow activity to read the full and incremental data and load the result to a new file in the sink transformation; a small expression sketch for deriving the Insert/Update flag is shown below.
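One way to derive the flag in that data flow: after a lookup (or left outer join) of the incremental file against the full file on the business key, a derived column transformation can set the flag and the timestamp along these lines (Full and CustomerId are hypothetical stream/column names):

```
FlagColumn : iif(isNull(Full@CustomerId), 'Insert', 'Update')
LoadDate   : currentTimestamp()
```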

How to remove extra files when sinking CSV files to Azure Data Lake Gen2 with Azure Data Factory data flow?

I have done the data flow tutorial. The sink currently creates four files in Azure Data Lake Gen2.
I suppose this is related to the HDFS file system.
Is it possible to save the output without the _SUCCESS, committed, and started files?
What is the best practice? Should they be removed after saving to Data Lake Gen2?
Are they needed in further data processing?
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-data-flow
There are a couple of options available.
You can specify the output file name in the sink transformation settings.
Select Output to single file from the File name option dropdown and give the output file name.
You could also parameterize the output file name as required. Refer to this SO thread.
You can also add a Delete activity after the data flow activity in the pipeline and delete the extra files from the folder (see the sketch below).
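If you take the Delete activity route, a minimal sketch of the activity JSON could look like the following, assuming an ADLS Gen2 dataset pointing at the sink folder. The activity and dataset names are placeholders, and the _* wildcard is intended to match only the _SUCCESS, _committed_* and _started_* marker files, not the data files themselves.

```json
{
  "name": "DeleteMarkerFiles",
  "type": "Delete",
  "dependsOn": [ { "activity": "RunDataFlow", "dependencyConditions": [ "Succeeded" ] } ],
  "typeProperties": {
    "dataset": { "referenceName": "SinkFolderAdlsGen2", "type": "DatasetReference" },
    "enableLogging": false,
    "storeSettings": {
      "type": "AzureBlobFSReadSettings",
      "recursive": false,
      "wildcardFileName": "_*"
    }
  }
}
```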

I am creating a copy activity in Azure Data Factory with Auto Create Table. The columns DataType and Nullable are changing

When performing a copy activity with auto create table, a few column properties such as DATA_TYPE and IS_NULLABLE are mismatched between source and target.
I tried the same thing and it’s working fine for me. You can follow the same steps as shown below:
First, you must create the source and sink databases. Follow the link to create an Azure SQL Database.
Then create the linked service for the source database in ADF.
Similarly, create the linked service for the on-premises sink database (follow the link).
Create the source dataset using the source database linked service created above.
Similarly, create the sink dataset using the sink database linked service.
Finally, create the pipeline with a copy activity using the source and sink datasets. Optionally, you can map the columns; refer to the link to learn more about mapping. A rough sketch of the copy activity JSON is included after these steps.
Below are my source and copied datasets after running the pipeline. Both are the same, as expected.
Please do follow the shared steps.
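For reference, the relevant part of such a copy activity could look roughly like this in JSON. The dataset and column names are placeholders, and the sink type depends on where the sink database lives (AzureSqlSink is shown here). The sink's tableOption set to autoCreate is what makes ADF create the target table, and the optional TabularTranslator section is where the column mapping mentioned above goes.

```json
{
  "name": "CopyWithAutoCreateTable",
  "type": "Copy",
  "inputs": [ { "referenceName": "SourceSqlTable", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SinkSqlTable", "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "AzureSqlSource" },
    "sink": {
      "type": "AzureSqlSink",
      "tableOption": "autoCreate"
    },
    "translator": {
      "type": "TabularTranslator",
      "mappings": [
        { "source": { "name": "CustomerId" },   "sink": { "name": "CustomerId" } },
        { "source": { "name": "CustomerName" }, "sink": { "name": "CustomerName" } }
      ]
    }
  }
}
```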