How to add an attribute in a JSON file via tHMap? - Talend

I am a beginner with Talend and I have a problem processing a JSON file. The file has several levels and contains arrays at different levels (or depths). I just want to add an attribute to a JSON node located at a given depth via tHMap. So the input is the JSON file and the output is the same JSON file with the new attribute. I have no idea how to configure the tHMap, even though it is meant to simplify complex mappings.
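Outside Talend, the transformation itself amounts to something like this minimal Python sketch (the "orders"/"lines" structure and the "status" attribute are assumptions for illustration, not taken from the actual file):

import json

# Read the source document (the structure below is a hypothetical example)
with open("input.json", "r", encoding="utf-8") as f:
    doc = json.load(f)

# Add a new attribute to every object at a given depth -- here, every element
# of a nested "lines" array inside each entry of a top-level "orders" array
for order in doc.get("orders", []):
    for line in order.get("lines", []):
        line["status"] = "NEW"

# Write the same document back out, now carrying the extra attribute
with open("output.json", "w", encoding="utf-8") as f:
    json.dump(doc, f, indent=2)

In tHMap the idea would be the same: map the input structure to an identical output structure that also defines the extra element.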

It is difficult to answer without more information. Can you post a screen grab of your tMap? Usually it's quite simple: in the output, you add it in the cell on the left.

Related

In Power Query, when duplicating the source query should I duplicate the Transform File folder as well?

My apologies in advance if this question has already been asked; if so, I cannot find it.
So, I have this huge database divided by country, where I need to import each country's database individually and then, in Power Query, append the queries as one.
When I imported the US files, Power Query automatically generated a Transform File folder with 4 helper queries:
Then I just duplicated the query US - Sales and named it UK - Sales, pointing it to the UK sales folder:
The Transform File folder didn't duplicate, though.
Everything seems to be working just fine right now; however, I'd like to know if this could be a problem in the near future, because I still have several countries to go. Should I manually import new queries as new connections instead of just duplicating them, or does it just not matter?
Many thanks!
The Transform Files Folder group contains the code that is called to transform a list of files. It is re-usable code. You can see the Sample File, which serves as the template for the transform actions.
As long as the file used as the Sample File has the same structure as the files that you are feeding into the command, you can use any query with any list of files.
One thing you need to make sure is that the Sample File is not removed from your data source. You may want to create a new dummy file just for that purpose, make sure it won't be deleted, and then point the Sample File query to pull just that file.
The Transform Helper Queries are special queries: you may edit them, but you cannot delete them and recreate your own manually. They are automatically created by Power Query when combining a list of contents and are inherently linked to the parent query.
That said, you cannot replicate them; you must use the Combine function provided by Power Query to create the helper queries.
You may, however, avoid duplicating the queries: instead, replicate your steps in the parent query and use a table union to join the lists before combining the contents with the same helper queries.

Load multiple multi-schema delimited files from the same directory

Is there any method in Talend to load multiple multi-schema delimited files stored in the same directory?
I have tried using the tFileInputMSDelimited component before, but I was unable to link it with the tFileList component to loop through the files inside the directory.
Does anyone have an idea how to solve this problem?
To make it clearer, each file contains only one batch line but multiple header lines, each of which comes with a bunch of transaction lines, as shown in the sample data below.
The component tFileOutputMSDelimited should suit your needs.
You will need multiple flows going into it.
You can either keep the files and read them or use tHashInput/tHashOutput to get the data directly.
Then you direct all the flows to the tFileOutputMSDelimited (example with tFixedFlowInput, adapt with your flows):
In it, you can configure which flow is the parent flow containing your ID.
Then you can add the children flows and define the parent and the ID to recognize the rows in the parent flow:
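To picture the layout the answer describes, here is a minimal sketch in plain Python (not Talend) of a multi-schema file where the parent flow carries the ID and the child rows repeat it; the field names and the ";" separator are assumptions for illustration:

# Parent flow: one row per batch, carrying the ID used to link the children
parent_rows = [
    {"id": "1001", "customer": "ACME", "date": "2021-03-01"},
    {"id": "1002", "customer": "Globex", "date": "2021-03-02"},
]
# Child flow: several rows per batch, repeating the parent's ID
child_rows = [
    {"id": "1001", "item": "widget", "qty": "3"},
    {"id": "1001", "item": "bolt", "qty": "10"},
    {"id": "1002", "item": "gear", "qty": "1"},
]

with open("multischema_output.csv", "w", encoding="utf-8") as out:
    for parent in parent_rows:
        # parent schema row, then its children matched by the shared ID
        out.write(";".join([parent["id"], parent["customer"], parent["date"]]) + "\n")
        for child in child_rows:
            if child["id"] == parent["id"]:
                out.write(";".join([child["id"], child["item"], child["qty"]]) + "\n")

tFileOutputMSDelimited should produce this interleaving for you once the parent flow and the children flows are declared with that shared ID.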

Two different mappings to ONE XML output file

I'm working on a Talend job where I have an Excel file and a couple of database fields that get mapped to an XML file.
The working job looks like this:
Problem: I want to, with the same input of the Excel file and the database fields, make another mapping that outputs to the same working XML file mentioned earlier. So I will have ONE XML file with TWO different mappings. How can I achieve this?
Update
I have done this mapping:
which in the end gets exported like this:
but I'm unsure how to use this mapping in the tAdvancedFileOutputXML.
If I understood correctly, you want a single XML file containing two different XMLs (the second one appended to the first one). In the Job shown, add an OnSubjobOk link pointing to a duplicate of your document flow that has a different mapping. In the second flow, rather than using the tFileOutputXML component to write the XML file, use tAdvancedFileOutputXML with Append Source XML File checked, so it adds to the file generated by the first flow. Also make sure to configure the XML tree. Check the following link for further information: https://help.talend.com/reader/~hSvVkqNtFWjDbBHy0iO_w/h3wZegFH1_1XfusiUGtsPg
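To picture what the appended file ends up containing, here is a minimal sketch in plain Python (not the Talend component) that folds the records of a second document under the root of the first; the file and element names are assumptions for illustration only:

import xml.etree.ElementTree as ET

# File written by the first flow (hypothetical name)
tree = ET.parse("output.xml")
root = tree.getroot()

# Output of the second mapping (hypothetical name and element tag)
second = ET.parse("second_mapping.xml").getroot()
for record in second.findall("record"):
    root.append(record)  # append the second mapping's records under the same root

tree.write("output.xml", encoding="utf-8", xml_declaration=True)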
Hope this helps.

Importing data from postgres to cytoscape

I have been trying to load some GIS data from a PostGIS database into Cytoscape 3.6. I am trying to get some inDegree and outDegree values, and I have used the SIF file format.
As long as the data is written out in the following format
source_point\tinteracts with\ttarget_point
Cytoscape is happy to read it.
I am just wondering if there is any way of including my own metric for the cost of getting between source_point and target_point.
Sure! There are several ways to read text files into Cytoscape -- SIF is just one of them. I would create a file that looks like SIF, but is actually a more complete text file:
Source\tTarget\tScore
source_point\ttarget_point\t1.1
...
And then use the "File->Import Network->File", choose your source and target and leave score as an edge attribute. You can have as many attributes on each line as you want, and can even mix edge attributes, source node attributes, and target node attributes.
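For example, to produce such a file straight from PostGIS, a minimal Python sketch could look like this (the road_edges table and its source_point/target_point/cost columns are assumptions; adapt them to your schema):

import csv
import psycopg2  # PostgreSQL driver, assumed to be installed

# Connection string and table/column names are assumptions -- adapt to your setup
conn = psycopg2.connect("dbname=gis user=postgres")
cur = conn.cursor()
cur.execute("SELECT source_point, target_point, cost FROM road_edges")

with open("network.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Source", "Target", "Score"])  # header row Cytoscape will map
    writer.writerows(cur.fetchall())

cur.close()
conn.close()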
-- scooter

In DataStage, how do you extract an element together with a list of elements from an XML file

So I've spent hours trying to figure this out. I'm basically trying to read an XML document (using the Hierarchical Data stage). Then I need to output the contents of that document into a dataset with two columns.
The difficulty is that in the XML document I read from a single element and then I need to read from a list of elements; specifically, productID and subjectCode.
The output I need is this
But I'm getting the following error because DataStage doesn't want to associate a single element with a repeating list element.
I should mention that if subjectCode were a single element like productID, it works fine. Any ideas would be appreciated.
Apologies, I'm not at a computer to deliver screenshots but I recall having a similar issue and this answer is intended to give you some more options to try (if you haven't already done these!)
I believe you can set subjectCode as the "top" element and then the mapping for productID would become ../productID
Failing that, you can right-click and set the subjectCode element differently within the XML_Parser_step in order to create a repeater element there.
I believe the DataStage XML Integration Redbook covers the above and is available from IBM for free.
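Outside DataStage, the flattening described above (one output row per subjectCode, each repeating its product's productID) looks like this minimal Python sketch; the XML layout is an assumption for illustration only:

import xml.etree.ElementTree as ET

# Hypothetical XML layout: one productID plus a repeating list of subjectCode
sample = """<products>
  <product>
    <productID>P-100</productID>
    <subjectCodes>
      <subjectCode>MATH</subjectCode>
      <subjectCode>PHYS</subjectCode>
    </subjectCodes>
  </product>
</products>"""

root = ET.fromstring(sample)
for product in root.findall("product"):
    product_id = product.findtext("productID")
    # one output row per subjectCode, each repeating the parent's productID
    for code in product.findall("./subjectCodes/subjectCode"):
        print(product_id, code.text)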