Talend Cloud Studio: Issue with lookup mapping

Good day,
I’m new to Talend Cloud Studio and I’m stuck on the job I’m working on.
Context:
My company uses a contact directory with an API (Genesys). So far, we have used a .csv file, generated from a database extract, to insert/update contacts.
We want to automate this process with a Talend Job.
(A) is my Contacts database extract.
(B) is my API (fed from A), used as the lookup component.
Because there is no unique field that we can use as a proper foreign key between (A) and (B), we map most of (A)'s fields to (B)'s, so that input data can be "filtered" and redirected to two distinct outputs, "Inserted" and "Updated"; the intended split is sketched below.
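In SQL terms, the split should behave like this (a sketch only; the real matching is configured in the tMap, and contacts_extract, directory_contacts, and the field names are made-up placeholders):

-- "Inserted" output: extract rows with no matching directory contact.
SELECT a.*
FROM contacts_extract a
LEFT JOIN directory_contacts b
       ON a.first_name = b.first_name
      AND a.last_name  = b.last_name
      AND a.email      = b.email
WHERE b.email IS NULL;

-- "Updated" output: extract rows that do match an existing contact.
SELECT a.*
FROM contacts_extract a
INNER JOIN directory_contacts b
       ON a.first_name = b.first_name
      AND a.last_name  = b.last_name
      AND a.email      = b.email;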
Please find my schema attached
My issue is the following: I know for a fact that nine rows should be considered "to be inserted" and one row "to be updated", yet every row comes out as "Inserted" (i.e. a new contact in the directory) and none as "Updated".
Please find my result attached
If anyone has any clue about what’s happening or needs more details, feel free to ask!
Thanks in advance for your help.
Regards

Related

Case-insensitive column names break the Data Preview mode in Data Flow of Data Factory

I have a csv file in my ADLS:
a,A,b
1,1,1
2,2,2
3,3,3
When I load this data into a delimited text Dataset in ADF with first row as header the data preview seems correct (see picture below). The schema has the names a, A and b for columns.
However, now I want to use this dataset in a Mapping Data Flow, and here the Data Preview mode breaks. The second column name (A) is treated as a duplicate and no preview can be loaded.
All other functionality in Data Flow keeps working fine; it is only the Data Preview tab that gives an error. All subsequent transformation nodes also give this error in their Data Preview.
Moreover, if the data contains two exactly identical column names (e.g. a, a, b), the Dataset recognizes the columns as duplicates and appends a "1" and "2" to the names. It is only when the names are case-sensitively unequal but case-insensitively equal that the Dataset raises no error while Data Flow does.
Is this a known error? Is it possible to change a specific column name in the dataset before loading it into Data Flow? Or is there just something I'm missing?
I tested it and got the error in the source Data Preview.
I asked Azure support for help and they are testing now. Please wait for my update.
Update:
I sent Azure Support the test.csv file. They tested it and replied to me. If you insist on using "first row as header", Data Factory cannot resolve the error. The solution is to re-edit the csv file. Even Azure SQL Database does not support creating a table with two columns of the same name, since column names are case-insensitive.
For example, code like the following is not supported:
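-- Illustrative reconstruction: in Azure SQL Database, column names that
-- differ only by case count as duplicates, so this CREATE TABLE fails.
CREATE TABLE dbo.test (
    a int,
    A int,  -- rejected: duplicates "a" because column names are case-insensitive
    b int
);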
Here's the full email message:
Hi Leon,
Good morning! Thanks for your information.
I have tested the sample file you shared with me and reproduced the issue. The data preview is alright by default when I connect to your sample file.
But I noticed during our troubleshooting session that a, A, b are the column names, so you have checked "first row as header" in your source connection. Please confirm that this is intended and that you want to use a, A, b as column headers. If so, there will be an error because there is no text transform for "A" in the schema.
Hope you can understand that the column name doesn't influence the data transformation, and it's okay to change it to make sure no errors block the data flow.
There are two tips for you to remove the blocker: either change the column name in your source csv directly, or click the Import schema button (in the screenshot below) on the Schema tab and choose a sample file to redefine the schema, which also allows you to change the column name.
Hope this helps.

Talend tFileInputDelimited Advanced Setting --> "Check each row structure against schema"

Can anyone please explain the difference between having the option "Check each row structure against schema" checked and unchecked in the Advanced settings of tFileInputDelimited?
I tried reading a csv file with tFileInputDelimited and writing it with tFileOutputDelimited, with the option both checked and unchecked, but there was no difference.
I'm guessing your file is valid, meaning the structure of your rows matches the one defined in your schema; that's why you don't see a difference whether the option is checked or unchecked.
Now consider this sample file :
id;name;state
1;abraham;NY
2;jeff
3;thomas
You can see that rows 2 and 3 do not have a valid structure, yet when I run my job Talend doesn't complain (with Check each row structure against schema unchecked):
It just reads all it can.
Now with Check each row structure against schema checked:
I get a nice little warning in the console saying that 2 rows have missing columns. Those rows can be captured using a Reject link on the tFileInputDelimited:
Another benefit of the Check each row structure against schema option is that you can stop job execution when there are invalid rows: just check "Die on error" in the Basic settings tab of tFileInputDelimited (but doing so prevents you from using the Reject link).

How do I search through related tables in Access and have the results displayed in a list box

I have created a form that searches through a table, following the instructions of John Big Booty, as can be seen here: https://access-programmers.co.uk/for...d.php?t=188663. The search form has a list box whose query is re-run whenever a text box is changed, which lets me search fields in one table, but how do I adjust it so it also finds related records stored in a second table? The database stores widow information, and the widows are looked after by legatees.
The query has the following expression in the criteria of each field you want to search: Like "*" & [Forms]![FRM_SearchMulti]![SrchText] & "*".
Each legatee is responsible for looking after multiple widows, and any widow can be assigned to any legatee, but a widow can have only one legatee at a time.
The search form is called FRM_SearchMulti and it searches through the widow table as I type. I want it so that you can type in a legatee and it will display the widows that have been assigned to the legatee typed in the search box. I seem to have hit a roadblock and everything I try doesn't seem to work. A sketch of the kind of query I think I need is below.
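For reference, here is a sketch of the kind of row source I think the list box needs (TBL_Widows, TBL_Legatees, and the field names are placeholders for my actual tables):

SELECT w.WidowID, w.WidowName, l.LegateeName
FROM TBL_Legatees AS l
INNER JOIN TBL_Widows AS w
        ON w.LegateeID = l.LegateeID
WHERE l.LegateeName Like "*" & [Forms]![FRM_SearchMulti]![SrchText] & "*"
   OR w.WidowName Like "*" & [Forms]![FRM_SearchMulti]![SrchText] & "*";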
I have uploaded a version of the db with a lot of the information deleted for privacy reasons, so, if possible, could you describe how to do it and I will then apply it to the full database?
Any help would be greatly appreciated.
Thanks,
Dave

Ignore row if duplicate during CSV import

I was wondering if this is possible: if a row for some reason cannot be imported
(e.g. duplicate primary key, wrong input type, etc.), can it be ignored so the import moves on to the next row?
I'm getting this
ERROR: duplicate key value violates unique constraint "team_pkey"
DETAIL: Key (team)=(DEN) already exists.
CONTEXT: COPY team, line 23: "DEN,Denver,Rockets,A"
There are a lot of mistakes in the file and it's a pretty big one, so is it possible to ignore the rows that can't be inserted?
A solution that handles the duplicate-key issue is described in "To ignore duplicate keys during 'copy from' in postgresql": in short, COPY into an unconstrained temp table, then SELECT DISTINCT ON the unique field into the destination table.
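A minimal sketch of that approach, assuming the team table's columns from the error message above (the staging-table name and file path are placeholders):

BEGIN;

-- 1. Load the whole file into a staging table with no constraints,
--    so duplicate keys cannot abort the COPY.
CREATE TEMP TABLE team_staging (
    team     text,
    city     text,
    name     text,
    division text
);
COPY team_staging FROM '/path/to/teams.csv' WITH (FORMAT csv);

-- 2. Move one row per key into the real table, skipping keys
--    that already exist there.
INSERT INTO team (team, city, name, division)
SELECT DISTINCT ON (s.team) s.team, s.city, s.name, s.division
FROM team_staging AS s
WHERE NOT EXISTS (SELECT 1 FROM team t WHERE t.team = s.team)
ORDER BY s.team;

COMMIT;

Note that this only addresses the duplicate-key failures; rows with wrong input types would need to be filtered or cast in step 2 as well.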
Another way would involve using pgLoader. Unfortunately the documentation seems to have disappeared from the website, but there are several tutorial articles on the author's site. It has rich functionality to help you read data with issues, and it can do things like store rejected lines in a separate file, transform fields, and so on.
Something that may not be obvious immediately: pgLoader version 2 is written in Python, version 3 is written in Lisp. Both can be obtained from the GitHub page.

Fulltext search replacement on Google Cloud SQL

I'm trying to implement search functionality with autocomplete in a project I'm working on. So far I've managed to do this with a SELECT column1, column2 ... WHERE myColumn LIKE '%...%', but it isn't as responsive as I would like; I mean, it's just OK, and it only searches one single column. The current version of MySQL with InnoDB tables doesn't support MATCH ... AGAINST; are there any plans on upgrading the db version? Otherwise, could anyone suggest another way of achieving search + autocomplete (against a single table)? A sketch of the two query styles is below.
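For reference, the two query styles I'm comparing look roughly like this (the products table and name column are placeholders):

-- What I have today: a LIKE scan; with a leading wildcard it cannot
-- use a regular index, which is why it feels slow.
SELECT id, name
FROM products
WHERE name LIKE '%shoe%';

-- What I'd like to use instead, which needs a FULLTEXT index that my
-- MySQL version doesn't support on InnoDB tables:
-- ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT id, name
FROM products
WHERE MATCH(name) AGAINST ('shoe*' IN BOOLEAN MODE);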
Thanks!
Try www.rockitsearch.com, it has an autocomplete implementation. The only things you'll need to do are:
- create an account
- export your data