I am a beginner with Talend and I would like to compare the rows coming from different sources in order to build a historization. I made a filter, but it does not work. I would like to filter on the updates that contain a modification and reject those that have not had any change. Thank you in advance for your help; here is my job.
Dear AnyLogic community,
I am struggling to find the right approach for storing my simulation results. I have created datasets that keep track of every value I am interested in. They live in Main (see below).
My aim is to run a parameter variation experiment. In every run, I change the value of p_nDrones (see below).
After the experiment, I would like to store all the datasets in one Excel sheet.
However, when I run the parameter variation experiment and afterwards check the log of the datasets (datasets_log), the changed values do not even show up (2 is the value I set in the normal simulation).
Now my question: do I need to create another type of dataset if I want to track the values that are produced in the experiments? Why are they not stored after executing the experiment?
I would really appreciate it if someone could share the best way to set up this export of experiment results. I would like to store the whole time series for every dataset.
Thank you!
The best option would be to write the outputs to some external file at the end of each model run.
If you want to use Excel, which I personally would not advise, you can; it does have a nice excelFile.writeDataSet() function.
I would rather write the data to a text file, as you have much more control over the writing and over the file itself, it is thread-safe, and the result is usable on many more platforms than Microsoft Excel.
See my example below:
Set up parameters of type TextFile in your model that you will write the data to at the end of the run. Here I used the model's "On destroy" code to write out the data from the datasets.
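Something along these lines in Main's "On destroy" section should do it (only a sketch: I am assuming a TextFile parameter called p_resultsFile and a dataset called ds_myData, so swap in your own names):

```java
// Main -> "On destroy" (sketch): dump every point of one dataset to the text file,
// one tab-separated line per point, with the current number of drones as the first column.
// p_resultsFile (TextFile parameter) and ds_myData (dataset) are placeholder names.
for (int i = 0; i < ds_myData.size(); i++) {
    p_resultsFile.println(p_nDrones + "\t" + ds_myData.getXValue(i) + "\t" + ds_myData.getYValue(i));
}
```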
Here you can immediately see the benefit of using the text file! You can add the number of drones we are simulating (or scenario name or any other parameter) in a column, whereas with Excel this would be a pain...
Now you can pass the specific text file for the model to use by adding it on the parameter variation experiment page and providing it to the model through the parameters.
You will see that I also set up some headers for the text file in the Initial Experiment setup part, and then at the very end of the experiment, I close the text files in the After experiment section so that the text files can be used.
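The experiment-side code then only needs a few lines (again just a sketch, assuming the experiment owns a TextFile called resultsFile that gets passed to the model parameter):

```java
// Parameter variation experiment -> "Initial experiment setup" (sketch):
// write the header row once, before any run starts.
resultsFile.println("nDrones\ttime\tvalue");
```

```java
// Parameter variation experiment -> "After experiment" (sketch):
// close the file so it is flushed and can be opened by other programs.
resultsFile.close();
```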
Here is the result if you simply right-click on the text files and open them in Excel. (Excel will always have a purpose, even if it is just to open text files ;-) )
I'm looking for a way to add manual data into Grafana. I want to display the results of a simple survey consisting of questions such as "how old are you?", "how long have you worked here?" and so on. Summarizing the answers in Grafana with graphs or similar would be tremendous.
Setting up a data source for this seems unnecessary, so I'm wondering if there is a plugin or something that allows me to do this. I'm not too familiar with the JSON behind the panels, but maybe it is possible through that as well.
If anyone is wondering why I'm trying to do this in such a weird and unfitting way, it's for a school thing... :)
You can generate a graph by entering data manually. To do so:
Go to Configuration and click on Add data source.
Select TestData DB, change the name and click Save & Test.
Create a new dashboard: Add Panel -> Add query -> select TestData as the data source.
Add your data to the string input field and set the Alias (e.g. "How old are you?").
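As an illustration (and assuming the string input scenario accepts a plain comma-separated list of numbers, with one query per question), the entry for the age question might look like this, with the Alias set to the question text:

```
34, 28, 45, 31, 29, 38
```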
Learn more about TestData
You can use the CSV plugin for Grafana. I use Excel and then save the data as CSV.
Below is a snapshot of the data where we are portraying our test results.
I have an input file with this data:
GGN,IBM
BNGLR, IBM
GGN,HCL
NOIDA,HCL
BNGLR,HCL
I want the output to look like this:
IBM,GGN,BNGLR
HCL,GGN,NOIDA,BNGLR
using the DataStage tool.
Thanks in advance
You've not given us many details to work with, so I'm making a few assumptions here about the type of job you're using (server/parallel) and your DataStage version. In the job design I've taken the name of the first of your columns to be "Value" and the second to be "Key".
Here is a basic job design; notice the partitioning: [job design image]
Here is the first transformer setup. I know it's inefficient to add a second transformer just for a trim, but a limitation of the LastRowInGroup() function is that it can only accept columns as parameters, so any transformation of the column it uses must be done before it is passed into the function: [first transformer image]
Here is the second transformer setup. The stage variable order matters, and don't forget the constraint: [second transformer image]
In the second transformer, be sure to set the partitioning and constraint as detailed in the picture: [second transformer properties image]
Your output data will look like this: [output stage data image]
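If it helps to see the logic outside the tool, here is a plain-Java sketch of what the two transformers compute (trim the value, group by the Key column, concatenate the values, and emit one line per key, which is what the LastRowInGroup() check achieves); it is only an illustration of the logic, not DataStage code:

```java
import java.util.*;

public class GroupConcat {
    public static void main(String[] args) {
        // Sample rows in "Value,Key" order, as in the question
        String[] rows = {"GGN,IBM", "BNGLR, IBM", "GGN,HCL", "NOIDA,HCL", "BNGLR,HCL"};

        // Preserve first-seen key order, like grouped/sorted input in the job
        Map<String, StringBuilder> grouped = new LinkedHashMap<>();
        for (String row : rows) {
            String[] parts = row.split(",");
            String value = parts[0].trim();   // the trim done in the first transformer
            String key = parts[1].trim();
            grouped.computeIfAbsent(key, k -> new StringBuilder(k))
                   .append(",").append(value);
        }

        // One output line per key, equivalent to writing the row on LastRowInGroup()
        grouped.values().forEach(System.out::println);
    }
}
```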
Hope that helps and is clear; look through the images closely. I'm using images as they say more than words.
Regards,
Sam Gribble
#InforgeAcademy
I don't know anything about Talend but have to work with it, so could anybody help me out by letting me know how to create a mapping and transfer data from, say, a txt file to .xls?
Thanks in advance
You need to find out the format of the txt file.
Then you need to find the appropriate Talend input component and read in the data.
You should make clear what the specifications of your output format are.
For example, if you need conditional formatting or complicated dynamic formulas, you need to find a good Talend output component, such as tFileExcelSheetOutput [1].
Otherwise you could use the standard tFileOutputExcel component.
You would use tMap to map the input to the output and depending on your transformation requirements you might also need additional components.
I would recommend looking at the Talend on-demand seminars, which should be freely available from their website, to get yourself familiar with Talend Studio. Then you can consult the component reference, which is also integrated into Talend Studio.
If you need further assistance you can describe what you tried and what part exactly doesn't work.
[1] http://talendforge.org/exchange/index.php?eid=1257&product=tos&action=view&nav=1,1,1
I'm looking for a way to split job execution in Talend Studio according to the actual file row - I'd like to process file rows starting with "DEBUG" in one job branch and the other rows in another branch. Is that possible?
To do this, use a tMap component. Your job will look like this:
t*Input --row--> tMap --out1--> tFileOutput*
                      --out2--> tFileOutput*
In the tMap component, you have the input on the left and the output on the right. In your output table, select "Activate expression filter" and use the text box to define your filter; only rows that match that filter will be output from that connection. You can have as many output tables and filters as you need.
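For example, assuming the incoming flow is named row1 and it has a single String column called line (adjust to your own schema), the two expression filters could be plain Java like this:

```java
// Expression filter on out1: keep only rows whose line starts with "DEBUG"
row1.line != null && row1.line.startsWith("DEBUG")

// Expression filter on out2: everything else
row1.line == null || !row1.line.startsWith("DEBUG")
```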
Using tMap is fine, but if the number of output streams is not defined and fixed, tMap is not a good choice.
In this case, using an iterate link or tJavaFlex can help you.
Have a look at this tutorial on "how to split a file into many files regarding a key on each record", which explains how to solve this kind of task. It is unfortunately only available in French. The tutorial shows 3 different techniques to achieve this.
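If it helps, here is a rough plain-Java sketch of the idea behind those techniques (this is not a ready-to-paste tJavaFlex body, just the logic: open one writer per key on demand, route each row to its key's file, and close everything at the end; the file names and the key rule are made up for illustration):

```java
import java.io.*;
import java.util.*;

public class SplitByKey {
    public static void main(String[] args) throws IOException {
        // One writer per distinct key (e.g. DEBUG, INFO, ...), opened on demand
        Map<String, PrintWriter> writers = new HashMap<>();

        try (BufferedReader in = new BufferedReader(new FileReader("input.log"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Use the first word of the line (DEBUG, INFO, ...) as the key
                String key = line.split("\\s+", 2)[0];
                writers.computeIfAbsent(key, SplitByKey::openWriter).println(line);
            }
        } finally {
            writers.values().forEach(PrintWriter::close); // flush and release all output files
        }
    }

    private static PrintWriter openWriter(String key) {
        try {
            return new PrintWriter(new FileWriter("out_" + key + ".txt"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```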
Finally, I used the tExtractRegexFields component and simply defined a regex for matching lines. The most important thing (which I didn't know before) is that you can connect components with different types of connections. I right-clicked on the component and chose Row > Reject for the new branch of the job, as described in the question.
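For the record, the pattern itself is an ordinary Java regex with one capturing group per output column of tExtractRegexFields. A quick standalone check of the kind of pattern I mean (the exact pattern here is only an illustration, not the one from my job):

```java
// Quick check of an illustrative pattern for tExtractRegexFields
// (one capturing group = one output column).
public class RegexCheck {
    public static void main(String[] args) {
        String pattern = "^(DEBUG.*)$";
        System.out.println("DEBUG connection opened".matches(pattern)); // true  -> main flow
        System.out.println("INFO user logged in".matches(pattern));     // false -> Row > Reject branch
    }
}
```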
We can do it by using tFileOutputDelimited and tFileInputDelimited.
In the advanced settings of tFileOutputDelimited there is an option to split the output into several files; check that option.