How to copy data from a CSV to an Azure SQL Server table? - azure-data-factory

I have a dataset based on a CSV file. This exposes data as follows:
Name,Age
John,23
I have an Azure SQL Server instance with a table named: [People]
This has columns
Name, Age
I am using the Copy Data activity and trying to copy data from the CSV dataset into the Azure SQL table.
There is no option to indicate the target table name. Instead, I have a space to input a stored procedure name.
How does this work? Where do I put the target table name in the image below?

You should definitely have a table to write to; if you don't have one, something is wrong with your setup. Make sure the table exists and that its column names match the fields in the CSV file, then follow the steps in the walkthrough linked below. There are several steps to click through, but all are pretty intuitive, so just follow the instructions step by step and you should be fine.
http://normalian.hatenablog.com/entry/2017/09/04/233320

You can insert records into the SQL database table directly, without a stored procedure, by configuring the table name on the sink dataset rather than on the Copy activity itself.
Have a look at the screenshot below, which shows the Table field within my dataset.
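For reference, a minimal sketch of what such a sink dataset's JSON might look like (the dataset and linked service names here are illustrative, and older datasets use a single tableName property instead of schema/table):

{
    "name": "PeopleSinkDataset",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": {
            "referenceName": "AzureSqlLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "schema": "dbo",
            "table": "People"
        }
    }
}

With the table set on the dataset, the Copy activity sink just references this dataset and no stored procedure is needed.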

Related

ADF Pipeline include fixed text in output

The overall aim of the pipeline is to copy from XML to Oracle.
One of the source columns is a datetime that needs formatting, so I'm using an intermediate Copy activity to copy from XML to CSV, as instructed in this answer.
From the CSV to the table it is a simple mapping, except for the need for an additional target column with a fixed value of '365Response'.
I've tried adding this as an additional column as shown below:
However, on the mapping tab, I'm not able to select the new additional column:
What did I do wrong?
Your process for adding an additional column in the Copy activity looks correct. If the additional column is not showing in the mapping, clear the mapping and import the schema again to refresh it.
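For reference, a minimal sketch of how the additional column sits in the Copy activity source JSON (the activity name and the column name SourceLabel are illustrative):

{
    "name": "CopyCsvToTable",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "DelimitedTextSource",
            "additionalColumns": [
                {
                    "name": "SourceLabel",
                    "value": "365Response"
                }
            ]
        }
    }
}

Once the schema is re-imported, the additional column can be mapped to the fixed-value target column like any other source field.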

How to set parameters in a SQL Server table from a Copy Data activity - Source: XML / Sink: SQL Server table / Mapping: XML column

I have a question; hopefully someone in the forum can help. I am able to pull data from a SOAP API call into a SQL Server table (an xml data type field, actually) via a Copy Data activity. The pipeline that runs this process is metadata driven, so how can I write other parameters into the same SQL Server table for the same run? I am using a Copy Data activity to load XML data into the SQL Server table, but in the Mapping tab I am not able to select other parameters in order to point them at other SQL table columns.
In addition, I am using a ForEach activity so that the Copy Data activity iterates over several values of one column in the SQL Server table.
I would appreciate any advice on this.
Thanks
David
Thank you for your interest; I will try to be more explicit with this image. Hopefully this clarifies things a little. Given the current scenario, how could I pass the StoreId and CustomerNumber parameters to the table Stage.XmlDataTable?
Take into account that in the mapping step I am only able to map XML data from the current API call and write it into the Stage.XmlDataTable XmlData column.
Thanks in advance David
You can add your parameters using Additional Columns in the Copy Data activity source.
When you import the schema in the mapping, you can see the additional columns added to the source.
Refer to this MS document for more details on adding additional columns during the copy.
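As a sketch, assuming StoreId and CustomerNumber arrive as pipeline parameters (or via the ForEach @item()), the Copy activity source might look roughly like this:

{
    "source": {
        "type": "XmlSource",
        "additionalColumns": [
            {
                "name": "StoreId",
                "value": {
                    "value": "@pipeline().parameters.StoreId",
                    "type": "Expression"
                }
            },
            {
                "name": "CustomerNumber",
                "value": {
                    "value": "@item().CustomerNumber",
                    "type": "Expression"
                }
            }
        ]
    }
}

After re-importing the schema on the Mapping tab, StoreId and CustomerNumber appear as source columns that can be mapped to the matching columns of Stage.XmlDataTable.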

Help required removing columns from a text file using ADF

I have a sample file like this. Using Data Factory, I need to create another text file as output with the first two columns removed. Is there a way to generate a file like the one below?
Source file:
Output file:
Core Data Factory (i.e. not including Mapping Data Flows) is not gifted with many abilities to do data transformation (which this is), however it can do some things. It can change formats (e.g. .csv to JSON), it can add some metadata columns (like $$FILENAME), and it can remove columns, simply by using the mapping in the Copy activity.
1. Add a Copy activity to your pipeline and set the source to your main file.
2. Set the sink to your target file name. It can be the same name as your original file, but I would make it different for audit trail purposes.
3. Import the schema of your file, making sure the separator in the dataset is set to semicolon (';').
4. Now press the trash can button to delete the mappings for columns 1 and 2.
5. Run your pipeline. The output file should not have the two columns.
My results:
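For reference, here is a minimal sketch of the column mapping the Copy activity JSON might end up with, assuming the file has four columns and only col3 and col4 are kept (the column names are illustrative):

{
    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            {
                "source": { "name": "col3", "type": "String" },
                "sink": { "name": "col3", "type": "String" }
            },
            {
                "source": { "name": "col4", "type": "String" },
                "sink": { "name": "col4", "type": "String" }
            }
        ]
    }
}

Columns absent from the mappings list are simply not copied, which is exactly what deleting them in the UI produces.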
You can accomplish this task by using the Select transformation in a mapping data flow in Azure Data Factory (ADF). With it you can delete any unwanted columns from your delimited text file.
I tested the same in my environment and it is working fine.
Please follow the below steps:
1. Create the Azure Data Factory using the Azure portal.
2. Upload the data at the source (e.g. a blob container).
3. Create a linked service to connect the blob storage with ADF, as shown below.
4. Create DelimitedText datasets using the above linked service for the source and sink files. In the source dataset, set the column delimiter to semicolon (;). Also, in the Schema tab, select Import schema From connection/store.
5. Create a data flow. Select the source dataset from your datasets list. Click on the + symbol and add Select from the options, as shown below.
6. In the settings, select the columns you want to delete and then click on the delete option.
7. Add the sink at the end. In the Sink tab, use the sink dataset you created earlier in step 4. In the Settings tab, for the File name option select Output to single file and give the file name in the option below.
8. Now create a pipeline and use the Data flow activity. Select the data flow you created. Click on Trigger now to run the pipeline.
Check the output file at the sink location. You can see my input and output files below.
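Under the hood, the Select transformation is recorded in the data flow script as the list of columns to keep. A minimal sketch of the data flow JSON, with dataset references omitted for brevity and illustrative names (col3 and col4 being the surviving columns), might look like:

{
    "name": "RemoveColumnsDataFlow",
    "properties": {
        "type": "MappingDataFlow",
        "typeProperties": {
            "scriptLines": [
                "source(allowSchemaDrift: true) ~> source1",
                "source1 select(mapColumn(col3, col4)) ~> Select1",
                "Select1 sink(allowSchemaDrift: true) ~> sink1"
            ]
        }
    }
}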

Azure Data Factory V2 Copy Activity - Save List of All Copied Files

I have pipelines that copy files from on-premises to different sinks, such as on-premises and SFTP.
I would like to save a list of all files that were copied in each run for reporting.
I tried using Get Metadata and ForEach, but I'm not sure how to save the output to a flat file or even a database table.
Alternatively, is it possible to find the list of objects that were copied somewhere in the Data Factory logs?
Thank you
Update:
Items: @activity('Get Metadata1').output.childItems
If you want to record the source file names, yes, we can do that. As you said, we need to use the Get Metadata and ForEach activities.
I've created a test to save the source file names of the Copy activity into a SQL table.
As we all know, we can get the file list via Child items in the Get Metadata activity.
The dataset of the Get Metadata1 activity specifies the container, which contains several files.
The list of files in the test container is as follows:
Inside the ForEach activity, we can traverse this array. I set a Copy activity named Copy-Files to copy files from source to destination.
@item().name represents each file in the test container. I key in the dynamic content @item().name to specify the file name, so the ForEach sequentially passes each file name in the test container to the Copy activity. This executes the copy task in batches, each batch handling one file, so that we can record each file name into the database table later.
Then I set another Copy activity to save the file names into a SQL table. Here I'm using Azure SQL and I've created a simple table.
-- one wide varchar column to hold each copied file name
create table dbo.File_Names (
    Copy_File_Name varchar(max)
);
As this post also said, we can use a query like select '@{item().name}' as Copy_File_Name as the source to access activity data in ADF. Note: the alias must be the same as the column name in the SQL table.
Then we can sink the file names into the SQL table.
Select the table created previously.
After I run debug, I can see all the file names are saved into the table.
If you want to add more information, you can refer to the post I mentioned previously.
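Putting the pieces together, a minimal sketch of the ForEach with both Copy activities inside might look like this (activity names and the source/sink types are illustrative and depend on your datasets):

{
    "name": "ForEach1",
    "type": "ForEach",
    "typeProperties": {
        "items": {
            "value": "@activity('Get Metadata1').output.childItems",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "Copy-Files",
                "type": "Copy",
                "typeProperties": {
                    "source": { "type": "BlobSource" },
                    "sink": { "type": "BlobSink" }
                }
            },
            {
                "name": "Record-File-Name",
                "type": "Copy",
                "dependsOn": [
                    {
                        "activity": "Copy-Files",
                        "dependencyConditions": [ "Succeeded" ]
                    }
                ],
                "typeProperties": {
                    "source": {
                        "type": "AzureSqlSource",
                        "sqlReaderQuery": "select '@{item().name}' as Copy_File_Name"
                    },
                    "sink": { "type": "AzureSqlSink" }
                }
            }
        ]
    }
}

The second Copy activity runs only after the file copy succeeds, so the table ends up listing exactly the files that were copied.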

Copy Data - How to skip Identity columns

I'm designing a Copy Data task where the Sink SQL Server table contains an Identity column. The Copy Data task always wants me to map that column when, in my opinion, it should just not include the column in the list of columns to map. Does anyone know how I can get the ADF Copy Data task to ignore Sink Identity columns?
If you are using the Copy Data tool, and in your SQL Server the ID column is set as auto-increment (identity), then it should not show up at the mapping step. Please tell us if that is not the case.
If you are creating the pipeline/dataset yourself, you can just go to the sink dataset's Schema tab and remove the ID column. Then go to the Copy activity's Mapping tab and click Import schemas again. The ID column should have disappeared.
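A minimal sketch of the resulting mapping, assuming a table where Id is the identity column and only Name and Age are mapped (the column names are illustrative):

{
    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "Name" }, "sink": { "name": "Name" } },
            { "source": { "name": "Age" }, "sink": { "name": "Age" } }
        ]
    }
}

Because Id never appears as a sink column, SQL Server generates it automatically on insert.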
Alternatively, you could include a SET IDENTITY_INSERT <table> ON statement for the given table before executing the copy step, and set it back to OFF after it completes.