In my Talend job, the first step selects a value from a PostgreSQL database.
After this selection, I would like to use the value of this field as a parameter in an aggregation pipeline query:
How can I use it? I have tried context.LastStartOperation, LastStartOperation, and input_row.LastStartOperation, without success.
What is the correct syntax?
Put a tFlowToIterate between tDBInput and tMongoDBInput like this:
tDBInput -- Row -- tFlowToIterate -- Iterate -- tMongoDBInput
Then you can access the value like this in your tMongoDBInput:
((String)globalMap.get("rowX.LastStartOperation")), where rowX is the name of the row between tDBInput and tFlowToIterate (tFlowToIterate stores each column of the incoming row in the globalMap under the key rowName.columnName).
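For example, a minimal sketch of a pipeline stage in the tMongoDBInput query (the field name startOperation and the connection name rowX are assumptions; adjust them to your job):
"{ \"$match\": { \"startOperation\": \"" + ((String)globalMap.get("rowX.LastStartOperation")) + "\" } }"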
I want to read table names from tFileInputDelimited; how should I write the SQL query in tDBInput so that I can access the data of each of those tables? Please see the image; it shows the SQL query I have written.
I tried various ways but it's not working.
Try using this query:
"select * from "+((String)globalMap.get("row2.Table_name"))+""
I assume you're getting the right result from tFileInputDelimited; to check that, you can link:
tFileInputDelimited -> tLogRow
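Note that the globalMap key row2.Table_name implies the table names are passed through a tFlowToIterate; as a sketch (assuming the row between tFileInputDelimited and tFlowToIterate is named row2), the flow would be:
tFileInputDelimited -- row2 -- tFlowToIterate -- Iterate -- tDBInput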
Consider the image below, part of my Talend job.
I am aware of the advanced settings in Talend Studio.
I want to log the query for the CREATE_RULE_TICKET component with all of its runtime dynamic values substituted.
For example, let's say the component has the following query:
"SELECT START_DATE FROM TABLENAME WHERE CIF IN ('"+globalMap.get("cif")+"')"
The log should show me the runtime value for CIF
SELECT START_DATE FROM TABLENAME WHERE CIF IN ('HU8909','JKO98')
How do we go about it?
The component has a global variable QUERY, which returns your query after it has been constructed, so you can log it in a tJava like this:
tHiveRow -- OnComponentOk -- tJava
with the following code in the tJava:
System.out.println((String)globalMap.get("tHiveRow_1_QUERY"));
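Assuming the Hive component is named tHiveRow_1, the console output of that tJava is the query with the runtime values already substituted, e.g.:
SELECT START_DATE FROM TABLENAME WHERE CIF IN ('HU8909','JKO98')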
I have a CSV file that contains insert queries.
I want to create a job to execute each query against the DB.
How would I do this?
Use a tFileInputDelimited component to read the CSV file, configured with your field and row delimiters. Connect this component to a tFlowToIterate, and connect the tFlowToIterate to a DB component (tOracleRow, tMysqlRow, or similar, depending on your database) with an Iterate link.
In tFileInputDelimited, define the schema with a single column, e.g. Query.
tFlowToIterate will iterate over each row (insert query), store it in the globalMap as a key-value pair, and pass it to the DB component to execute.
In the DB component's query field, use ((String)globalMap.get("row3.Query")).
Hope this helps.
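As a sketch (assuming the connection between tFileInputDelimited and tFlowToIterate is named row3, matching the globalMap key above), the job layout would be:
tFileInputDelimited -- row3 -- tFlowToIterate -- Iterate -- tOracleRow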
The format is, as shown in the image:
tJDBCInput -- Main -- tAggregateRow -- Main -- tJavaRow -- Main -- tLogRow
Under the tAggregateRow basic settings I have this:
What should I write in tJava to get the value of rowcount?
If you want to get the number of rows read by tJDBCInput, Talend provides it natively, with no need for an aggregation: the row count is stored in the globalMap, and you can get it using this line of code: ((Integer)globalMap.get("tJDBCInput_1_NB_LINE"))
You can use it in a tJava component and write it to your console using:
System.out.println(((Integer)globalMap.get("tJDBCInput_1_NB_LINE")));
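A slightly fuller tJava sketch (assuming the input component is named tJDBCInput_1; note that NB_LINE is an "after" variable, so it is typically read in a tJava triggered by OnSubjobOk or OnComponentOk once tJDBCInput has finished reading):
// Read the row count produced by tJDBCInput_1 and print it with a label.
Integer rowCount = (Integer)globalMap.get("tJDBCInput_1_NB_LINE");
System.out.println("Rows read: " + rowCount);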
I am importing an excel file in Talend.
I want to select all the distinct values in column "A" and then dump that data into the database. Is it possible to do that with Talend?
If not, what alternatives are available? Any help is appreciated.
Yes you can do that easily with Talend Open Studio.
Create a new job like this one:
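In text form, the job layout would be roughly (component names assumed):
tFileInputExcel -- Main -- tAggregateRow -- Main -- tOracleOutput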
You can replace the tOracleOutput component by the component corresponding to your database.
Then parameterize the tAggregateRow component like this:
Distinct values of ColumnA will be transferred to distinctColumnA in the output schema.
You can also get the number of occurrences by adding a count of columnB in the Operations table.
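In text form, the tAggregateRow settings would be roughly (output column names assumed):
Group by: input column ColumnA -> output column distinctColumnA
Operations: output column countColumnB, function count, input column columnB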
Using tUniqRow in Talend Open Studio 6.3 works very well and you get to keep all your columns.