Case-insensitive column names break the Data Preview mode in Data Flow of Data Factory - azure-data-factory

I have a csv file in my ADLS:
a,A,b
1,1,1
2,2,2
3,3,3
When I load this data into a delimited text Dataset in ADF with first row as header, the data preview looks correct (see picture below). The schema has the names a, A and b for the columns.
However, when I use this dataset in Mapping Data Flow, the Data Preview mode breaks. The second column name (A) is treated as a duplicate and no preview can be loaded.
All other functionality in Data Flow keeps working fine; it is only the Data Preview tab that gives an error. All subsequent transformation nodes also give this error in their Data Preview.
Moreover, if the data contains two exactly identical column names (e.g. a, a, b), the Dataset recognizes the columns as duplicates and appends a "1" and "2" to the names. It is only when the names are unequal case-sensitively but equal case-insensitively that the Dataset raises no error while Data Flow does.
Is this a known error? Is it possible to change a specific column name in the dataset before loading it into Data Flow? Or is there just something I'm missing?

I tested it and got the error in the source Data Preview:
I asked Azure support for help and they are testing it now. Please wait for my update.
Update:
I sent Azure Support the test.csv file. They tested it and replied to me. If you insist on using "first row as header", Data Factory cannot work around the error. The solution is to re-edit the csv file. Even Azure SQL Database doesn't support creating a table with duplicate column names, and column names are compared case-insensitively.
For example, this code is not supported:
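A minimal sketch of the kind of statement that fails (the table name is hypothetical):

CREATE TABLE dbo.test_csv (
    a INT,
    A INT,  -- fails under the default, case-insensitive collation: "Column names in each table must be unique."
    b INT
);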
Here's the full email message:
Hi Leon,
Good morning! Thanks for your information.
I have tested the sample file you shared with me and reproduced the issue. The data preview is fine by default when I connect to your sample file.
But I noticed during our troubleshooting session that a, A, b are the column names, so you have checked first row as header in your source connection. Please confirm that this is right and that you want to use a, A, b as column headers. If so, an error is expected because there is no text transform for "A" in the schema.
Hope you can understand that the column name doesn't influence the data transform, and it's okay to change it to make sure no errors block the data flow.
There are two tips for you to remove the block: either change the column name in your source csv directly, or click the Import schema button (in the below screenshot) on the Schema tab, where you can choose a sample file to redefine the schema, which also allows you to change the column name.
Hope this helps.
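For reference, if you take the second tip and redefine the schema from a sample file with a renamed header, the dataset's schema section in the JSON would end up looking roughly like this (the replacement name A_1 is just an example):

"schema": [
    { "name": "a", "type": "String" },
    { "name": "A_1", "type": "String" },
    { "name": "b", "type": "String" }
]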

Related

ADF Pipeline include fixed text in output

The overall aim of the pipeline is to copy from XML to Oracle.
One of the source columns is a datetime that needs formatting, so I'm using an intermediate copy activity to copy from XML to CSV, as instructed in this answer.
From the CSV to the table it is a simple mapping, except for the need for an additional target column with a fixed value of '365Response'.
I've tried adding this as an additional column as shown below:
However, on the mapping tab, I'm not able to select the new additional column:
What did I do wrong?
Your process for adding an Additional column in the copy activity looks correct. If the Additional column is not showing in the mapping, you can clear the mapping and import the schema again to refresh it.
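For reference, this is roughly how the additional column appears in the copy activity's source definition in the pipeline JSON (a minimal sketch; the source type and the column name ResponseSource are illustrative):

"source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
        {
            "name": "ResponseSource",
            "value": "365Response"
        }
    ]
}

Once the column is present in the source definition, clearing and re-importing the schema on the mapping tab should make it selectable as a source column.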

Azure Data Factory - Data Flow - Derived Column Issue

I am using Azure Data Flow - DerivedColumn to create some new columns.
Ex:
This is my source and I can preview the data.
But from DerivedColumn1 I cannot see these columns, not even in the Expression Editor.
Expression Editor:
Has something changed in ADF, or am I doing something wrong?
According to your screenshot, the column names are being read as a data row, and without a header you will get an error in the Sink column mapping. Please set "first row as header" in the Excel dataset.
If you don't check it, the column names will be treated as the first data row:
For your issue, you could try the workarounds below:
- Import the source schema in Projection, then delete the Derived Column transformation and add it again.
- Drop the data flow and create a new one. Sometimes a data flow may have bugs; refreshing the browser or just recreating the data flow will solve it.

Azure Data Factory Copy using Variable

I am copying data from a REST API to an Azure SQL database. The copy is working fine, but there is a column which isn't being returned by the API.
What I want to do is to add this column to the source. I've got a variable called symbol which I want to use as the source column. However, this isn't working:
Mapping
Any ideas?
This functionality is available using the "Additional Columns" feature of the Copy Activity.
If you navigate to the "Source" area, the bottom of the page will show you an area where you can add Additional Columns. Clicking the "New" button will let you enter a name and a value (which can be dynamic), which will be added to the output; a sketch of the resulting JSON follows the documentation link below.
Source(s):
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
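A rough sketch of how that additional column could be bound to your variable in the copy activity source JSON (assuming the pipeline variable is named symbol; the source type is illustrative):

"source": {
    "type": "RestSource",
    "additionalColumns": [
        {
            "name": "symbol",
            "value": {
                "value": "@variables('symbol')",
                "type": "Expression"
            }
        }
    ]
}

With the column added on the source side, it can be mapped to the target column like any other source column.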
Per my knowledge, the copy activity may not meet your requirements. Please see the error conditions in the link:
- Source data store query result does not have a column name that is specified in the input dataset "structure" section.
- Sink data store (if with pre-defined schema) does not have a column name that is specified in the output dataset "structure" section.
- Either fewer columns or more columns in the "structure" of the sink dataset than specified in the mapping.
- Duplicate mapping.
I think Mapping Data Flow is your choice. You could add a Derived Column transformation before the sink and create a parameter named Symbol. Then set the derived column to the value of Symbol, as sketched below.
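A minimal sketch of the relevant data flow script, assuming a string parameter named Symbol and a source stream named source1 (both names are illustrative):

parameters{
    Symbol as string
}
source1 derive(symbol = $Symbol) ~> DerivedColumn1

The sink can then map the derived symbol column to the corresponding column in the Azure SQL table.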
You can use the Copy Activity with a stored proc sink to do that. See my answer here for more info.
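In rough terms, that approach pushes the extra column into the database layer: the copy activity's SQL sink calls a stored procedure with a table-valued parameter, and the extra value is supplied as an additional procedure parameter. A hypothetical sketch (all object names are made up):

CREATE TYPE dbo.QuoteTableType AS TABLE (price DECIMAL(18, 4), quote_time DATETIME2);

CREATE PROCEDURE dbo.spInsertQuotes
    @quotes dbo.QuoteTableType READONLY,
    @symbol NVARCHAR(20)
AS
BEGIN
    -- Stamp every copied row with the symbol passed in from the pipeline.
    INSERT INTO dbo.Quotes (symbol, price, quote_time)
    SELECT @symbol, price, quote_time FROM @quotes;
END

In the copy activity sink you would then set the stored procedure name and the table type, and pass the variable through the stored procedure parameters.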

Why does my Tableau tooltip get changed when I change my data source?

I am moving from a CSV to PostgreSQL for my Tableau workbook. Both of them have the same field names and exactly the same data types. However, when I change my data source, the tooltip gets some random text which breaks the filter in the tooltip viz.
I tried replacing the csv file with the same csv file and the same thing happened, so I think this is a Tableau issue and not a database issue.
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Site>,<[federated.02mez2l0u2i0o018sk45f0skmrv7].[none:level_2:nk]>"> (This is what happens)
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Level 2>,<Site>"> (This is what I want)
The 'Level 2' field gets messed up for some reason.
If you open a Tableau Desktop file in a text editor, you'll see that it's XML. As an example, I have a file with the following line. Tableau assigns a unique id to each calculated field I create; this is the "Calculation_104990228446113793".
<column-instance column='[Calculation_104990228446113793]' derivation='None' name='[none:Calculation_104990228446113793:nk]' pivot='key' type='nominal' />
The same can be seen for data source references.
<datasource caption='my_data_source' inline='true' name='federated.0dbu8r50hqicaj1fm4f2b1r4o814' version='10.5'>
So when you swap a data source, the unique ids change, which causes the error you're seeing. I'm not sure whether your issue is a bug or not; you could report it. But this is what is happening in your case.

Header Row Insert

I am trying to insert data into BigQuery using Google Cloud Dataprep. I created a recipe and added the first row as a header row, but when I try to run it on multiple files it inserts the header rows of the other files into my BigQuery table as well.
Is anybody else facing this problem?
Welcome to StackOverflow, Andy!
I think I'm correctly understanding your problem, but I want to make sure since I'm making some assumptions:
- You have multiple files imported in Dataprep
- You created a recipe for the first file and converted row 1 to be a header
- You apply a UNION step to merge the additional files
- Your output contains the header rows from the additional files
If that's correct, the issue is that the header rows in the other files aren't being removed simply because Dataprep doesn't know what they are. In most cases, Dataprep will detect the file structure and you won't have to manually specify the header row. When that fails, however, UNION steps get a little funny like this—but you can definitely fix it in Dataprep.
Workarounds:
Apply a Recipe to Each Input File
Simply add a recipe to each file that converts the first row to a header—then instead of selecting your original file in the main recipe's UNION, select the other recipes (Dataprep will run them before merging the data).
While this takes some extra effort, it's doable for a small number of files. The advantage here is that you don't have to worry about whether your data may contain the header value—but I'd recommend using the other option if you're able to.
Use a Custom Filter Formula to Delete All Header Rows
The other option is a bit more dependent on your data, but lets you do everything in the main recipe. For example, after setting headers from the first file and applying your UNION, you would add a "Filter rows using custom formula" step (or click Filter Rows > On column values > Custom Filter...), then match using a column that wouldn't contain the header string (e.g. CustomerID == "CustomerID"); integer columns work great since you don't have to worry if the value could contain the header string. The resulting wrangle script should look something like this:
header sourcerownumber: 1
[union step goes here]
filter type: custom rowType: single row: CustomerID == 'CustomerID' action: Delete
Note: You may be tempted to do this by using $sourcerownumber, but that doesn't exist due to the union. I'm hoping that they'll eventually support it for this use case though.
These aren't the only ways you could eliminate the headers, but should provide two easy options for you.
As a pro tip, you can copy a line of the wrangle script above and paste it after clicking "New Step" in your recipe, and it'll set up the filter the same way that I did so you don't have to start from scratch. Just change the column name/value and you should be good to go.
Again, welcome to the site! If any of the assumptions above are incorrect, update your original question with the additional details and let me know in a comment, and I'll be happy to help you out further.