tMap from two tSetGlobalVar - Talend

I have two tSetGlobalVar in which I store two different columns.
NB_DNCL_OLD and NB_DNCL_NEW.
I am trying to connect them to a tMap component in order to have a single table as output, adding an expression for the difference between both integer variables.
I am able to connect NB_DNCL_OLD as row3 (Main) to tMap but I am not allowed to connect NB_DNCL_NEW to the same tMap.
[Screenshot: tMap connections]
New tSetGlobalVar components connected to the tMap before being connected to their sources: [screenshot]
Should I put another component between the tSetGlobalVar components and the tMap? What am I doing wrong? (I am new to Talend and have no Java knowledge.)
I have just recreated the tSetGlobalVar components and connected them to the tMap without connecting them to their sources (two tFlowToIterate components). This time I was able to connect the second one to the tMap as a lookup. But if I try to reconnect the tSetGlobalVar components to their sources, the same problem comes back: I can connect only the first one as Main and am not allowed to connect the second.
Thanks for your advice.

I got a good answer from the Talend Community that helped me solve my problem from a different perspective:
https://community.talend.com/t5/Design-and-Development/tMap-from-two-tSetGlobalVar/m-p/39258/highlight/false#M10959
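For what it's worth, the usual Talend pattern here is not to join the two flows at all, but to read the stored values back from globalMap later in the job, for example in a tJava or directly in a tMap output expression. A minimal sketch, assuming both values were stored as Integers under the keys from the question:

    // Runs after both tSetGlobalVar components (e.g. in a tJava).
    // Read the stored values back from globalMap and compute the difference.
    Integer nbOld = (Integer) globalMap.get("NB_DNCL_OLD");
    Integer nbNew = (Integer) globalMap.get("NB_DNCL_NEW");
    System.out.println("NB_DNCL difference: " + (nbNew - nbOld));

The same idea works as a tMap output expression, (Integer) globalMap.get("NB_DNCL_NEW") - (Integer) globalMap.get("NB_DNCL_OLD"), which avoids the second lookup row entirely.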


Talend 8.0.3 - Component tDBSCD (MSSQL/MySQL)

I'm currently learning the SCD methodology and tried to apply it with Talend (TOS 8.0.3 for DI), and I noticed something with the tDBSCD component. I tried using tDBSCD with two different databases, one MySQL and the other SQL Server.
The issue is that when I configure the tDBSCD component for the MySQL database I get the option "Action on table", but not when I configure it for MSSQL.
Is the "Action on table" option (for the tDBSCD component) only available for MySQL (and possibly other databases; I didn't check them all)? And if so, do you know why?
Thanks.
You can actually compare the 2 components:
MySQL: https://github.com/Talend/tdi-studio-se/tree/master/main/plugins/org.talend.designer.components.localprovider/components/tMysqlSCD
MsSQL: https://github.com/Talend/tdi-studio-se/tree/master/main/plugins/org.talend.designer.components.localprovider/components/tMSSqlSCD
You can see that MySQL has a TABLE_ACTION entry: https://github.com/Talend/tdi-studio-se/blob/master/main/plugins/org.talend.designer.components.localprovider/components/tMysqlSCD/tMysqlSCD_java.xml#L154
And the code generation also includes the tableAction snippet: https://github.com/Talend/tdi-studio-se/blob/master/main/plugins/org.talend.designer.components.localprovider/components/tMysqlSCD/tMysqlSCD_begin.javajet#L113
SQL Server doesn't have the UI element defined, nor does it include the tableAction snippet; that's the reason the option isn't visible.
If you look at the snippet you can see it only covers MySQL and Oracle.
https://github.com/Talend/tdi-studio-se/blob/master/main/plugins/org.talend.designer.components.localprovider/components/templates/_tableActionForSCD.javajet
With that being said, there's a tCreateTable component that you could use with your schema to create the tables.
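Alternatively, you can create the target table yourself before the tMSSqlSCD step, for example from a tJava or tDBRow. A rough sketch using plain JDBC; the connection details and table layout below are made up for illustration:

    // Hypothetical pre-step: make sure the SCD target table exists on SQL Server,
    // since tMSSqlSCD offers no "Action on table" option.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EnsureScdTable {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost:1433;databaseName=dwh"; // assumed server
            try (Connection con = DriverManager.getConnection(url, "user", "pass");
                 Statement st = con.createStatement()) {
                // Create-if-missing guard, SQL Server style
                st.execute(
                    "IF OBJECT_ID('dbo.customer_scd', 'U') IS NULL "
                    + "CREATE TABLE dbo.customer_scd ("
                    + "  sk INT IDENTITY PRIMARY KEY,"
                    + "  customer_id INT, name VARCHAR(100),"
                    + "  scd_start DATETIME, scd_end DATETIME, scd_active BIT)");
            }
        }
    }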

Append datasets from xlsx and database - Talend

I have three Excel files and one database connection which I need to append as a part of my flow. All four datasets in the pre-append stage have just one column.
When I try to use tUnite, I get an error on tFileInputExcel - see the screenshot. Moreover, I cannot join the database connection with tUnite.
What am I doing wrong?
I think the problem is with the tFileExist components (I think that's what they are, on the left with the "if" links coming out), because each of them is trying to start a new flow. Once you join the flows with tUnite, there can be only one start to the flow, and that goes to the first branch of the merge order.
You can move the if logic elsewhere. Another idea is to put the output from each of the Excel files into a tHashOutput (linked together), then use a tHashInput to write to your DB.

How to create a Derived Column in IIDR CDC for Kafka Topics?

We are currently working on a project to get data from an IBM i (formerly known as AS400) system to Apache Kafka (Confluent Platform) with IBM IIDR CDC.
So far everything has been working fine; everything gets replicated and appears in the topics.
Now we are trying to create a derived column in a table mapping which gives us the journal entry type from the source system (IBM i).
We would like this information in order to see whether it was an insert, update or delete operation.
Therefore we created a derived column called OPERATION as Char(2) with the expression &ENTTYP.
But unfortunately the Kafka topic doesn't show the value.
Can someone tell me what we are missing here?
Best regards,
Michael
I own the IBM IDR Kafka target, so let's see if I can help a bit.
You have two options. The recommended way to see audit information would be to use one of the audit KCOPs. For instance you might use this one:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavroformat.html#kcopauditavroformat
You'll note that the audit.jcf property in the example is set to CCID and ENTTYP, so you get both the operation type and the transaction id.
Now, if you are using derived columns, I believe you would follow this procedure: https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.mcadminguide.doc/tasks/addderivedcolumn.html
If this is not working out, open a ticket and the L2 folks will provide a deeper debug. Also, if you do end up adding one, does the actual column get created in the output, just with no value in it?
Cheers,
Shawn
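To answer that last question quickly, one way is to peek at a few records on the mapped topic and check whether an OPERATION field shows up at all. A rough sketch using the plain Kafka consumer API; the broker address and topic name are placeholders, and with Avro-encoded values you would normally use the Confluent Avro deserializer rather than reading raw strings:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TopicPeek {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            p.put("group.id", "topic-peek");
            p.put("auto.offset.reset", "earliest");
            p.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
            p.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> c = new KafkaConsumer<>(p)) {
                c.subscribe(Collections.singletonList("SOURCEDB.MYTABLE")); // placeholder topic
                ConsumerRecords<String, String> recs = c.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> r : recs) {
                    System.out.println(r.value()); // look for an OPERATION field
                }
            }
        }
    }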
Your colleagues told me how to do it:
IDR Management Console -> go to the "Filtering" tab -> find the derived column in the "Filter Columns" (Source Columns) section and mark "replicate" next to the column. Save the table mapping afterwards and see if it appears now.
Unfortunately a derived column isn't automatically selected for replication, but now I know how to select it.
You need to select the new column for replication on the filter tab:
https://www.ibm.com/docs/en/idr/11.4.0?topic=mstkul-mapping-audit-fields-journal-control-fields-kafka-targets

DBeaver will not display certain schemas correctly in the Database Navigator

I'm using DBeaver 5.2.5.201811181655 with IBM DB2/400 v7r3.
I'm trying to see a schema called WRKCERTO, but the Database Navigator will not show it. The schema is there and I have rights to it, and I'm able to run SQL scripts against its objects, such as SELECT * FROM WRKCERTO.DAILYT, and it works.
To make matters stranger, when WRKCERTO is the only schema in the filters, the contents of a schema which I cannot identify are shown under the connection as if the connection were their parent. No schema is shown as a node in the tree between the connection and Tables, Views, etc. The tables are familiar, but I cannot determine their exact schema, and as such I also cannot query any of them, because DBeaver doesn't know which schema to use.
The behavior of the Projects window is the same.
If I connect with SquirrelSQL 3.8.1 everything looks ok. I can see WRKCERTO along with all my other schemas as if nothing is different.
The screenshot below shows the issue. The schema I use most is F_CERTOB, which is visible under the connection ASP7, which currently has two schema filters: F_CERTOB and WRKCERTO. But as shown, WRKCERTO...isn't.
The connection TEST is an exact copy of ASP7, but its only filter is WRKCERTO. And as mentioned above, the items under the connection name cannot be identified.
I've gone through the DBeaver settings, but I cannot find any way to change this behavior. And this is the first time I've tried to use WRKCERTO: I tried to access it only a couple of days ago, so it seems unlikely there are bad bits of information about it floating around in my system or in DBeaver.
What information can I provide to help diagnose this issue...?
Please check the URL below; a similar issue is mentioned there with a solution.
You may also want to try it and let me know whether it works.
https://dbeaver.io/forum/viewtopic.php?f=2&t=911

Adding logs of jobs into a database with Talend

I am trying to import all the logs of the running jobs into a table in Postgres. I am using the components tLogCatcher and tStatCatcher and joining them to create a table with all the available data.
The job looks like this:
Inside the tMap, I am joining the two sources from tLogCatcher and tStatCatcher on the pid and the job name, and trying to merge the results so they are combined in one table:
However whenever the job fails I get nulls in the logcatcher output, even if there are error messages:
[statistics] connecting to socket on port 3696
[statistics] connected
2017-02-03 13:51:07|PR7710|PR7710|PR7710|6981|NASIA|Master_ETL_Job|_52dYEJUvEeaqS8phzVFskQ|0.1|Default||begin||
Exception in component tFileInputDelimited_1
java.io.FileNotFoundException: /Users/nasiantalla/Documents/keychain.csv (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at org.talend.fileprocess.TOSDelimitedReader.<init>(TOSDelimitedReader.java:88)
at org.talend.fileprocess.FileInputDelimited.<init>(FileInputDelimited.java:164)
at nasia.master_etl_job_0_1.Master_ETL_Job.tFileInputDelimited_1Process(Master_ETL_Job.java:796)
at nasia.master_etl_job_0_1.Master_ETL_Job.runJobInTOS(Master_ETL_Job.java:6073)
at nasia.master_etl_job_0_1.Master_ETL_Job.main(Master_ETL_Job.java:5879)
2017-02-03 13:51:08|PR7710|PR7710|PR7710|NASIA|Master_ETL_Job|Default|6|Java Exception|tFileInputDelimited_1|java.io.FileNotFoundException:/Users/nasiantalla/Documents/keychain.csv (No such file or directory)|1
2017-02-03 13:51:08|PR7710|PR7710|PR7710|6981|NASIA|Master_ETL_Job|_52dYEJUvEeaqS8phzVFskQ|0.1|Default||end|failure|890
[statistics] disconnected
Job Master_ETL_Job endet am 13:51 03/02/2017. [exit code=1]
And in my table the data I get are like this:
Do you see something that I might have missed? I tried all the different join types in the tMap, but it doesn't seem to work and I don't understand why.
Thanks in advance!
tStatCatcher and tLogCatcher do not work when joined with a tMap. I cannot give a definitive answer as to why, but I think it's related to the special functionality involved in 'catching' the errors and stats, and is likely a timing issue. tLogCatcher, for instance, will only catch an error, while tStatCatcher can catch statistics on every component.
I recommend writing to separate tables and joining those tables to produce reports. As a matter of fact, Talend has this functionality built in, so you don't even need to add your own tStatCatcher and tLogCatcher components to each job.
You must first create the AMC database structure, then go to File -> Edit Project Settings -> Job Settings -> Stats and Logs and choose the 'On database' option. Talend will then automatically log stats, errors and flows to the AMC database, and you can report off that database.
There are three reasons for that:
1. tLogCatcher does not provide logs if there is no tDie or tWarn, and I think this is your case.
2. tLogCatcher and tStatCatcher do not necessarily provide their data at the same time, because they are triggered by different events, so the join will not match.
3. From a functional perspective, joining the two flows does not make sense; they are fully independent.
I recommend dumping these flows into different tables; this can be achieved implicitly, without any extra component and without development - see here.