Talend tFileInputDelimited Advanced Setting --> "Check each row structure against schema"

Can anyone please explain the difference between having the field "Check each row structure against schema" checked and unchecked in the Advanced settings of tFileInputDelimited?
I tried reading a csv file with tFileInputDelimited and writing it with tFileOutputDelimited, with the option both checked and unchecked, but there was no difference.

I'm guessing your file is valid, meaning the structure of your rows matches what is defined in your schema; that's why you don't see a difference whether the option is checked or unchecked.
Now consider this sample file:
id;name;state
1;abraham;NY
2;jeff
3;thomas
You can see that rows 2 and 3 do not have a valid structure, yet when I run my job with Check each row structure against schema unchecked, Talend doesn't complain: it just reads all it can.
Now with Check each row structure against schema checked, I get a warning in the console saying that 2 rows have missing columns. Those rows can be captured using a Reject link on the tFileInputDelimited.
Another benefit of the Check each row structure against schema option is that you can stop job execution if you have invalid rows: just check "Die on error" in the Basic settings tab of tFileInputDelimited (but doing that prevents you from using the Reject link).
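For illustration only (this is not the code Talend generates), the check boils down to comparing each row's field count with the schema. A small Python sketch of that idea against the sample file above, with the short rows ending up where the Reject link would send them:
# Rough sketch of the row-structure check (sample.csv is the file shown above).
import csv

schema = ["id", "name", "state"]
valid, rejected = [], []

with open("sample.csv", newline="") as f:
    reader = csv.reader(f, delimiter=";")
    next(reader)                 # skip the header row
    for row in reader:
        (valid if len(row) == len(schema) else rejected).append(row)

print(valid)      # [['1', 'abraham', 'NY']]
print(rejected)   # [['2', 'jeff'], ['3', 'thomas']] -> rows the Reject link would capture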

Related

How can I edit table properties in a tabular model in Visual Studio?

I want to add new columns to two tables in a tabular model, but I ran into three issues in the process.
When I opened the table properties, I found that they contain filter-rows commands. I tried to delete the filter-rows command directly, but when I clicked Validate it said the credentials for this operation could not be validated. How can I renew the SQL statement?
When I open the designer and click Import, an error appears: Cannot import the partition query because the set of columns in the partition definition does not match those in the table definition. The following required columns are missing.
The partition only sets the datetime, so I do not understand what the error means here.
When I opened the designer from the table properties and clicked Update, I got the error: cannot save changes because the partitions' schema has been changed. Please correct the schema and try again. But the table does not have any partitions. How can I fix this?

Why does my Tableau tooltip get changed when I change my data source?

I am moving from a CSV to PostgreSQL for my Tableau workbook. Both of them have the same field names and the exact same data types. However, when I change my data source the tooltip gets random text which breaks the filter in the tooltip viz.
I tried replacing the csv file with the same csv file and the same thing happened, so I think this is a Tableau issue and not a database issue.
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Site>,<[federated.02mez2l0u2i0o018sk45f0skmrv7].[none:level_2:nk]>"> (This is what happens)
<Sheet name="Tooltip: Level 2 Site Scores" maxwidth="300" maxheight="300" filter="<Level 2>,<Site>"> (This is what I want)
The 'Level 2' field gets messed up for some reason.
If you open a Tableau Desktop file in a text editor, you'll see that it's XML. As an example, I have a file with the following line. Tableau assigns a unique id to each calculated field I create; that's the "Calculation_104990228446113793" here:
<column-instance column='[Calculation_104990228446113793]' derivation='None' name='[none:Calculation_104990228446113793:nk]' pivot='key' type='nominal' />
The same can be seen for data source references.
<datasource caption='my_data_source' inline='true' name='federated.0dbu8r50hqicaj1fm4f2b1r4o814' version='10.5'>
So when you swap a data source, the unique ids change and cause the error you're seeing. I'm not sure whether your issue is a bug or not; you could report it. But this is what is happening in your case.
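If you want to check your own workbook for these leftover generated ids, one quick way is to scan the XML directly. A minimal Python sketch, assuming a .twb file named my_workbook.twb (a .twbx would need to be unzipped first):
# Print every line of the workbook XML that still references an auto-generated
# federated id, e.g. inside a tooltip filter attribute.
import re

with open("my_workbook.twb", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        if re.search(r"federated\.[0-9a-z]+", line):
            print(lineno, line.strip())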

Case-insensitive column names break the Data Preview mode in Data Flow of Data Factory

I have a csv file in my ADLS:
a,A,b
1,1,1
2,2,2
3,3,3
When I load this data into a delimited text Dataset in ADF with "first row as header", the data preview looks correct. The schema has the names a, A and b for the columns.
However, when I use this dataset in a Mapping Data Flow, the Data Preview mode breaks: the second column name (A) is seen as a duplicate and no preview can be loaded.
All other functionality in the Data Flow keeps working fine; it is only the Data Preview tab that gives an error. All subsequent transformation nodes also give this error in their Data Preview.
Moreover, if the data contains two exactly identical column names (e.g. a, a, b), the Dataset recognizes the columns as duplicates and appends a "1" and "2" to the names. It is only when the names are case-sensitively unequal but case-insensitively equal that the Dataset raises no error while the Data Flow does.
Is this a known error? Is it possible to change a specific column name in the dataset before loading it into the Data Flow? Or is there just something I'm missing?
I tested it and got the error in the source Data Preview.
I asked Azure support for help and they are testing it now. Please wait for my update.
Update:
I sent Azure Support the test.csv file. They tested it and replied to me. If you insist on using "first row as header", Data Factory cannot resolve the error; the solution is to re-edit the csv file. Even Azure SQL Database does not support creating a table with columns of the same name, because column names are case-insensitive.
For example, a statement like the following is not supported (a minimal sketch, assuming the default case-insensitive collation):
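-- hypothetical table name; fails because column A duplicates column a
-- under a case-insensitive collation
create table test (
    a int,
    A int,
    b int
);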
Here's the full email message:
Hi Leon,
Good morning! Thanks for your information.
I have tested the sample file you shared with me and reproduced the issue. The data preview is alright by default when I connect to your sample file.
But I noticed during the troubleshooting session that a, A, b are the column names, so you have checked "first row as header" in your source connection. Please confirm this is intended and that you want to use a, A, b as column headers. If so, it will be an error because there is no text transform for "A" in the schema.
Hope you can understand that the column name doesn't influence the data transform, and it's okay to change it to make sure no errors block the data flow.
There are two tips for you to remove the block: one is to change the column name in your source csv directly, or you can click the Import schema button in the Schema tab and choose a sample file to redefine the schema, which also allows you to change the column name.
Hope this helps.
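If re-editing the file is acceptable, here is a minimal Python sketch of that idea (the local file name and the suffixing scheme are assumptions, not from the support reply): make the header unique case-insensitively before the Dataset reads it.
# Rename header fields that collide case-insensitively, e.g. a, A, b -> a, A_2, b.
import csv

with open("test.csv", newline="") as f:
    rows = list(csv.reader(f))

seen, header = {}, []
for name in rows[0]:
    key = name.lower()
    seen[key] = seen.get(key, 0) + 1
    header.append(name if seen[key] == 1 else f"{name}_{seen[key]}")
rows[0] = header

with open("test_fixed.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)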

Header Row Insert

I am trying to insert data into BigQuery using Google Cloud Dataprep. I created a recipe and set the first row as the header row, but when I run it on multiple files it also inserts the header rows into my BigQuery table.
Is anybody else facing this problem?
Welcome to StackOverflow, Andy!
I think I'm correctly understanding your problem, but I want to make sure since I'm making some assumptions:
You have multiple files imported in Dataprep
You created a recipe for the first file and converted row 1 to be a header
You apply a UNION step to merge the additional files
Your output contains the header rows for the additional files
If that's correct, the issue is that the header rows in the other files aren't being removed simply because Dataprep doesn't know what they are. In most cases, Dataprep will detect the file structure and you won't have to manually specify the header row. When that fails, however, UNION steps get a little funny like this—but you can definitely fix it in Dataprep.
Workarounds:
Apply a Recipe to Each Input File
Simply add a recipe to each file that converts the first row to a header—then instead of selecting your original file in the main recipe's UNION, select the other recipes (Dataprep will run them before merging the data).
While this takes some extra effort, it's doable for a small number of files. The advantage here is that you don't have to worry about whether your data may contain the header value—but I'd recommend using the other option if you're able to.
Use a Custom Filter Formula to Delete All Header Rows
The other option is a bit more dependent on your data, but lets you do everything in the main recipe. For example, after setting headers from the first file and applying your UNION, you would add a "Filter rows using custom formula" step (or click Filter Rows > On column values > Custom Filter...), then match using a column that wouldn't otherwise contain the header string (e.g. CustomerID == "CustomerID"); integer columns work great since you don't have to worry about whether a value could contain the header string. The resulting wrangle script should look something like this:
header sourcerownumber: 1
[union step goes here]
filter type: custom rowType: single row: CustomerID == 'CustomerID' action: Delete
Note: You may be tempted to do this by using $sourcerownumber, but that doesn't exist due to the union. I'm hoping that they'll eventually support it for this use case though.
These aren't the only ways you could eliminate the headers, but should provide two easy options for you.
As a pro-tip, you can copy a line of the wrangle script above and paste it after clicking "New Step" in your recipe and it'll set up the filter the same way that I did so you don't have to start from scratch. Just change the column name/value and you should be good to go.
Again, welcome to the site. If any of the assumptions above are incorrect, update your original question with the additional details and let me know in a comment, and I'll be happy to help you out further.

FileMaker Pro 14 History tables

In a few solutions I've worked with, I've created temp tables or history tables. Normally I script it to take a handful of fields needed from a main table and copy them over to the other table by setting a variable, then setting the field to the variable for each field in the new table / new record.
I have a situation now where I'm building a history table that needs to copy the current record as is: a snapshot where all fields from that instance of the record are copied to the history table.
Rather than setting a variable and then setting the field to the variable, I'd like some input on a quicker way to do this at the record level, without typing it out field by field. Also, if fields are added to both tables, I have to make sure my script gets updated.
I'll keep hunting around. Appreciate any help.
-Rich
Do you have a sample of copying a record from one table to another, including all fields and setting some fields?
As I suggested in comments, use the Import Records[] script step, and select the same file as the source. If you choose Arrange by: [ matching names ] in the Import Field Mapping dialog, it will automatically map all source fields to their similarly named counterparts.
Note that you must establish a found set in the source table before importing.
For "setting some fields", you can define auto-enter options and activate them during the import, or run Replace Field Contents[] immediately after the import.