I'm trying to query only once and then produce two entries in the output file (their mappings for retrieving the fields are different, but they correspond to the same number of output columns).
Here is the flow I'm aiming for:
                           tMap1
                          /     \
tOracleInput -> tReplicate       tUnite -> tSort -> tOutputFile
                          \     /
                           tMap2
But it's not letting me connect tMap2 to tUnite (connecting tMap2 to another tOutputFile is okay).
Any ideas?
Thanks!
You cannot use tReplicate and tUnite in the same subjob. What you can do here is:
                                tOutputFile
                               /
tOracleInput --> tSort --> tMap --> tOutputFile
You can keep your output file in append mode, so that you get a single output file. That way you don't have to use tReplicate; instead, you can define multiple output flows in the tMap itself.
hope this helps...
I believe the feature you are looking for is tSplitRow.
It lets you split one input row into one or more output rows, out of the same stream.
Read here:
https://help.talend.com/reader/wDRBNUuxk629sNcI0dNYaA/yn7aPyanBrstCYkH_XhyIw
I have a Talend Job that currently does the following:
Input csv (tFileInputDelimited) --> tMap --> Output csv (tFileOutputDelimited)
The goal of my job is to keep a value from the tMap and use it to rename the output file.
I've tried to use a context variable and specify the row and column I want to use, but it didn't work.
I'm a beginner; I've been using Talend during an internship I started 6 years ago, so I don't know many things ^^
Thank you for your future help!
You can use a tJavaRow to capture the value from the flow and assign it to a context variable; the code will look like this:
// get the value of wanted_field from the row whose id is 40
// (input_row is tJavaRow's default name for the incoming row)
if (input_row.id == 40) context.myvar = input_row.wanted_field;
Your job will look like this:
Input csv (tFileInputDelimited) --> tJavaRow --> tMap --> Output csv (tFileOutputDelimited)
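To actually rename the output file with that value, one approach (a sketch only; this is not necessarily how the original answer intended it, and option labels may differ slightly between Talend versions) is to write to a fixed temporary file and then, on an OnSubjobOk link, rename it with a tFileCopy that builds the destination name from the context variable:
tFileCopy settings (paths are placeholders):
  File Name:             "C:/output/temp_output.csv"
  Destination directory: "C:/output"
  Rename:                checked
  Destination filename:  context.myvar + ".csv"
Renaming after the subjob finishes avoids the pitfall that the output component's File Name expression is evaluated when the component starts, before the rows (and therefore context.myvar) have been processed.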
I have a 6-row input file with a key field (positions 1 to 6) that contains a different value on every line. Based on the value in this field, the other fields (positions 7-80) need to be moved to a single row in the output.
E.G.
Input:
035MI 88122
035ST 72261
035SU 317786762
105 06616858
1601 11
1651 0000000140006PC
Output:
1 8812272261317786762 06616858 11 0000000140006PC
I need to find out how to read these in as separate rows and then write them out as a single row. I've tried using something similar to this:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=(1,6,CH,EQ,C'035MI '),
OVERLAY=(3:7,5)),
But this just moves the data into the correct position on separate rows, like this:
1 8812272261317786762
1 06616858
1 11
1 0000000140006PC
So now I think I need to do a sort in one step and a merge in another step. I would prefer to do it in one step if possible, however. I'd appreciate any help on this. Thanks.
You add a sequence-number to each record.
Then use WHEN=GROUP to copy data from one record to one or more subsequent records.
You use OUTFIL INCLUDE= to just pick up the final record.
OPTION COPY
INREC IFTHEN=(WHEN=INIT,
OVERLAY=(81:SEQNUM,1,ZD)),
IFTHEN=(WHEN=GROUP,
BEGIN=(81,1,CH,EQ,C'1'),
PUSH=(somestuff),
RECORDS=6),
IFTHEN=(WHEN=GROUP,
BEGIN=(81,1,CH,EQ,C'2'),
PUSH=(somestuff),
RECORDS=5),
IFTHEN=(WHEN=GROUP,
BEGIN=(81,1,CH,EQ,C'3'),
PUSH=(somestuff),
RECORDS=4),
IFTHEN=(WHEN=GROUP,
BEGIN=(81,1,CH,EQ,C'4'),
PUSH=(somestuff),
RECORDS=3),
IFTHEN=(WHEN=GROUP,
BEGIN=(81,1,CH,EQ,C'5'),
PUSH=(somestuff),
RECORDS=2),
OUTFIL INCLUDE=(81,1,CH,EQ,C'6'),
BUILD=(1,80)
You need to do a bit of planning. The sixth record will contain all the data, but perhaps not yet in the order that you want. Either with an IFTHEN=(WHEN=(logical expression)) on the INREC (to identify the sixth record) or with the BUILD on the OUTFIL, you can do your final formatting.
You need to change the somestuff each time; it will be receivingposition:sourceposition,length.
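For instance, reusing the receiving position 3 from the OVERLAY=(3:7,5) in the question (the receiving positions here are only illustrative; yours depend on the exact output layout you want), the first group could look like this:
  IFTHEN=(WHEN=GROUP,
          BEGIN=(81,1,CH,EQ,C'1'),
          PUSH=(3:7,5),
          RECORDS=6),
This copies positions 7-11 of the '035MI ' record into positions 3-7 of every record in that group of six, so the value is still present on the sixth record that the OUTFIL keeps. The second IFTHEN would then push the '035ST ' data to the next receiving position, for example PUSH=(8:7,5), and so on for the remaining groups.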
The DFSORT manuals are very good. There is a Getting Started for those new to the product, and everything you'll ever need is in the Application Programming Guide.
I am new to using Talend.
The first table is fed into tHashOutput1. This part works fine.
tHashOutput1 is fed into tHashOutput2 and tHashOutput3.
tMap_3 is fed from user. When I try to feed tHashInput3 into the tMap I am not allowed to do this. What is wrong here?
Please make sure that tHashInput3 has the same schema as tHashOutput1; as you can see, there is a small yellow warning on tHashInput3 that does not exist on tHashInput2. Also, there is no output from tMap3, so there is no way to see whether it works.
I have a JMeter User Defined Variable holding a comma-separated value: ${countries} = IN,US,CA,ALL.
(I was first trying to get it as a list/array: [IN,US,CA,ALL].)
I want to use the variable to test a web service: GET /${country}/info. Is this possible using a ForEach Controller or Loop Controller?
The only thing is that I want to save or read it as IN,US,..,ALL and use each value in the request path.
Thanks
The CSV should be as per the format mentioned in the image attached.
Refer to the link on how to use CSV in Jmeter: http://ivetetecedor.com/how-to-use-a-csv-file-with-jmeter/
Thread Group Settings
No. of threads: 1
Ramp-up period: 1
Loop Count: 4
Hope this will help.
CSV config is a red herring, you don't need it.
You can use a regular expression extractor to split up the variable into another variable (eg MyVar), using something like:
(.+?)[,\n]
This is trying to match each item before a , or newline. It will place the values in variables like MyVar_1, MyVar_2, etc. This is as close to an array as JMeter understands natively.
You can then loop over the matches using MyVar_matchNr and MyVar_1 to MyVar_n (you will need to use the __V() function to access the 'array' contents).
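A rough sketch of how this could be wired up (field labels as in current JMeter versions; note the regex above only captures items followed by a comma or newline, so the last item may need a trailing comma or a tweaked expression):
Regular Expression Extractor
  Apply to: JMeter Variable Name to use (countries)
  Name of created variable: MyVar
  Regular Expression: (.+?)[,\n]
  Template: $1$
  Match No.: -1   (a negative match number keeps all matches as MyVar_1 .. MyVar_n)
ForEach Controller
  Input variable prefix: MyVar
  Output variable name: country
    HTTP Request
      Path: /${country}/info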
I have searched all over, and read this post.
But it doesn't seem complete and doesn't work.
The situation: I need to get the last modified file from a directory on the local machine. I then need to pass that file into the fileinputdelimited component.
I currently have:
tFileList --> iterate --> tIterateToFlow --> tSampleRow --> tFlowToIterate --> tFileInputDelimited --> tLogRow (just to make sure it's pulling the right file)
But it doesn't work. I have configured it so that the tIterateToFlow has a column called
"FileName" with "((String)globalMap.get("CURRENT_FILE"))" as the value,
"FileDirectory" with ((String)globalMap.get("CURRENT_FILEDIRECTORY")) as value, and
"FileAndDirectory" with ((String)globalMap.get("CURRENT_FILEPATH")) as value.
The tSampleRow is limited to "1".
The tFlowToIterate is set so that
"FileNameOnly" is value of "FileName"
"FileDirectoryOnly" is "FileDirectory" and
"FilePathComplete" is "FileAndDirectory"
In the File location field of the tFileInputDelimited, I have "((String)globalMap.get("FilePathComplete"))"
When it runs I get an error saying it cannot find the file or path. If I cut out the file input component and have it send straight to the tLogRow, it shows a single blank entry.
Any ideas?
I'm not sure if you've just slightly misconfigured the job here but it seems to work fine for me.
Here's a few screenshots showing my job design:
The only thing I can think of just by looking at your post is that you might have slightly messed up the key value pair combinations in the tFlowToIterate. I tend to find that the default settings there work fine pretty much all of the time and it makes it a little more obvious what it's doing as well.
EDIT: Actually, it looks like you might be using the wrong values in your tIterateToFlow. The tFileList will throw the values for the file paths etc. into the global map, but it will prefix them with the unique component name. If you hit Ctrl+Space in the value window it should prompt you with a list of available values (these are also shown in the "Outline" tab of the Studio). It typically makes an implicit conversion to String, but for this you will need to convert explicitly, so use .toString() instead of (String).
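For example, with the default component name tFileList_1, the three tIterateToFlow values from the question would become something like this (a sketch of the advice above):
FileName:         globalMap.get("tFileList_1_CURRENT_FILE").toString()
FileDirectory:    globalMap.get("tFileList_1_CURRENT_FILEDIRECTORY").toString()
FileAndDirectory: globalMap.get("tFileList_1_CURRENT_FILEPATH").toString()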
Another way to get the last modified file is as below:
tFileList (sorted DESC by file modified date) ------> tFixedFlowInput (schema: filename, filenumber) ------> tHashOutput
Here, in tFixedFlowInput:
filename = (String)globalMap.get("tFileList_1_CURRENT_FILEDIRECTORY")+"/"+(String)globalMap.get("tFileList_1_CURRENT_FILE")
filenumber = (Integer)globalMap.get("tFileList_1_NB_FILE")
What the above accomplishes is to get the list of all files in the directory with their number/rank, where the last modified file will have filenumber = 1, the next will have 2, and so on.
Now, on OnSubjobOk of the above tFileList, you can have a tHashInput which reads from the above tHashOutput and filters only the row where filenumber == 1, which is the last modified file.
tHashInput (linked to the tHashOutput) ----> tFilterRow (filenumber == 1) ------> tLogRow
One reason why you are getting null is probably that you have used globalMap.get("CURRENT_FILEPATH") instead of globalMap.get("tFileList_1_CURRENT_FILEPATH").
A simple solution to the above problem could be as below:
tFileList (sorted ASC by file modified date) --> tIterateToFlow --> tJava (just to end the subjob).
Then on
OnSubjobOk --> tFileInputDelimited (use (String)globalMap.get("tFileList_1_CURRENT_FILE") or (String)globalMap.get("tFileList_1_CURRENT_FILEPATH") as the file name / file path)
Explanation:
Since tFileList iterates over all the files in ASC order, the globalMap will hold the latest file name after the last iteration. The list is only iterated as far as tIterateToFlow, hence after this subjob (String)globalMap.get("tFileList_1_CURRENT_FILE") will always give the last file name from the iterated list, which is the latest file in our case.
Main Flow:
Component View: