How to convert a nested JSON array into multiple rows - Talend

I have a JSON array that looks like this screenshot:
Now I want to convert this array into a table format like this:
Currently I'm working with tExtractJSONFields, but I can only target one element of the nested array at a time.
My expression:
"['book'][0].['category']"
"['book'][0].['author']"
......
So I get the output for the first row, but I don't want to repeat this for 50, 100, etc. rows.
Does anybody know a way to solve that?
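(The screenshot isn't reproduced here; judging from the expressions above, the input presumably looks something like this hypothetical sample, with invented field values:)

```json
{
  "book": [
    { "category": "reference", "author": "Nigel Rees",   "price": 8.95 },
    { "category": "fiction",   "author": "Evelyn Waugh", "price": 12.99 }
  ]
}
```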

In tExtractJSONFields, select Edit schema and add all the columns you want in the output schema. Set "Read by" to "Xpath".
Now, in "Loop XPath query", write "/book".
In the "Mapping" section, you will see all the output columns listed under "Column", as defined in the schema. Under "XPath query", put the matching element for each column, like this:
if the column is Author, the XPath query would be "Author";
if the column is Price, the XPath query would be "Price"; and so on.
Hope this helps.

To complete PrettyK's answer, you could also use tExtractJSONFields with the JsonPath option from the "Read by" drop-down list.
In your input row, you would have something like "book", of type Object or String.
Then, in the component, you define all your fields (category, author, ...), and you only have to set:
the JSON field on which you will loop (here: book).
To map each field, you then just write "#.author", for example.
For more info on JsonPath, there is this good site.

Related

Iterating a JSON array in PostgreSQL and filtering based on data

[{"name"="abc","type"="charts"},{"name"="def","type"="transactions"}]
The attachment column gives me this data, but I need to iterate over it and check whether type is present and whether type = charts or transactions. Mainly, we need to filter it out. Can someone help me with this, as I am new to Postgres?
It's unclear to me what exactly you want to filter.
If you just want to see rows that contain a specific type, you can use:
select *
from the_table
where attachment @> '[{"type": "transactions"}]';
This will return all rows that include at least one element with {"type": "transactions"} in the array.
This assumes that your attachment column is defined as jsonb. If it's not, you will need to cast it.
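If you instead want to pull out the matching elements themselves rather than whole rows, one way (a sketch, assuming the column is jsonb and the table is named the_table as above) is to unnest the array with jsonb_array_elements:

```sql
-- Return one row per array element whose type is charts or transactions.
select t.*, elem
from the_table t,
     jsonb_array_elements(t.attachment) as elem
where elem->>'type' in ('charts', 'transactions');
```

If the column is text rather than jsonb, replace t.attachment with t.attachment::jsonb.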

Sequelize how to use aggregate function on Postgres JSONB column

I have created a table with a JSONB column called "data".
A sample value of that column is:
[{field_id:1, value:10},{field_id:2, value:"some string"}]
Now there are multiple rows like this.
What do I want?
I want to use an aggregate function on the "data" column such that I get:
the sum of all values where field_id = 1;
the average of all values where field_id = 1.
I have searched a lot on Google but haven't been able to find a proper solution.
Sometimes it says "field doesn't exist", and sometimes it says "from clause missing".
I tried data.value, then data -> value, and lastly data ->> value.
But nothing works.
Please let me know the solution if anyone knows one.
Thanks in advance.
Your attributes should be something like this, so you instruct it to run the function on a specific value (note the cast to numeric, since ->> returns text):
attributes: [
[sequelize.fn('sum', sequelize.literal("(data->>'value')::numeric")), 'json_sum'],
[sequelize.fn('avg', sequelize.literal("(data->>'value')::numeric")), 'json_avg']
]
Then in WHERE, you reference field_id in a similar way, using literal():
where: sequelize.literal("data->>'field_id' = '1'")
Your example also included a string for the value of "value", which of course won't aggregate. But once the basic Sequelize setup works on a clean set of data, you can extend the WHERE clause to test for numeric "value" data; there are good examples here: Postgres query to check a string is a number
Hopefully this gets you close. In my experience with Sequelize + Postgres, it helps to run the program in such a way that you see what queries it creates, like in a terminal where the output is streaming. On the way to a working statement, you'll either create objects which Sequelize doesn't like, or Sequelize will create bad queries which Postgres doesn't like. If the query looks close, take it into pgAdmin for further work, then try to reproduce your adjustments in Sequelize. Good luck!
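One caveat: if "data" really is an array of objects, as in the question's sample, the flat data->>'value' path won't match anything, because that operator only looks up a key on a single object. The raw SQL would need to unnest the array first. A sketch of the target query, assuming a hypothetical table name mytable:

```sql
-- Unnest the JSONB array, then aggregate the numeric values
-- of the elements whose field_id is 1.
select sum((elem->>'value')::numeric) as json_sum,
       avg((elem->>'value')::numeric) as json_avg
from mytable,
     jsonb_array_elements(data) as elem
where elem->>'field_id' = '1';
```

Once this runs in pgAdmin, you can work backwards to the Sequelize attributes/literal() calls that generate it.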

PostgreSql Search JSON And Match Whole Term

I have a Postgres database using JSON storage. I have a table of cameras and lenses with a single property to search against called BrandAndModel. The relevant JSON portion looks like this and is stored in a column called "data":
"BrandAndModel": "nikon nikkor 50mm f/1.4 ai-s"
I have a LIKE query running against this brand and model string, but it only returns a result if the exact sequence of characters matches. For instance, the above gets results for "nikkor 50mm" but NOT "nikon 50mm".
I'm no SQL expert and I'm not sure what I need to use to match more possible combinations.
My query looks like this
SELECT * FROM listing where data ->> 'Product' ->> 'BrandAndModel' like '%nikon 50mm%'
How could I get this query to match "nikon 50mm"?
You may use ANY with an array for multiple comparisons.
LIKE ANY(ARRAY['%nikon%ai-s%', 'nikon%50mm%', '%nikkor%50mm%'])
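Put into the full query from the question, that would look something like this (a sketch; the exact pattern list depends on which word orders you want to match):

```sql
-- Match the brand/model string against several LIKE patterns at once.
SELECT *
FROM listing
WHERE data -> 'Product' ->> 'BrandAndModel'
      LIKE ANY (ARRAY['%nikon%50mm%', '%nikkor%50mm%']);
```

For genuinely word-order-independent matching, Postgres full-text search (to_tsvector / plainto_tsquery) is usually a better fit than hand-built LIKE patterns.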

Querying on multiple LINKMAP items with OrientDB SQL

I have a class that contains a LINKMAP field called links. This class is used recursively to create arbitrary hierarchical groupings (something like the time-series example, but not with the fixed year/month/day structure).
A query like this:
select expand(links['2017'].links['07'].links['15'].links['10'].links) from data where key='AAA'
Returns the actual records contained in the last layer of "links". This works exactly as expected.
But a query like this (note the 10,11 in the second to last layer of "links"):
select expand(links['2017'].links['07'].links['15'].links['10','11'].links) from data where key='AAA'
Returns two rows of the last layer of "links" instead:
{"1000":"#23:0","1001":"#24:0","1002":"#23:1"}
{"1003":"#24:1","1004":"#23:2"}
Using unionAll or intersect (with or without UNWIND) results in this single record:
[{"1000":"#23:0","1001":"#24:0","1002":"#23:1"},{"1003":"#24:1","1004":"#23:2"}]
But nothing I've tried (including various attempts at "compound" SELECTs) will get the expand to work as it does with the original example (i.e. return the actual records represented in the last LINKMAP).
Is there a SQL syntax that will achieve this?
Note: Even this (slightly modified) example from the ODB docs does not result in a list of linked records:
select expand(records) from
(select unionAll(years['2017'].links['07'].links['15'].links['10'].links, years['2017'].links['07'].links['15'].links['11'].links) as records from data where key='AAA')
Ref: https://orientdb.com/docs/2.2/Time-series-use-case.html
I'm not sure what you want to achieve, but I think it's worth trying values():
select expand(links['2017'].links['07'].links['15'].links['10','11'].links.values()) from data where key='AAA'

Performing Oracle Text search on column with xml and html tags

I've done some online reading but can't seem to find an answer to what I'm looking for. It may be because I'm approaching this the wrong way.
I am using Oracle 10g.
I have table xyz with columns:
recID - VARCHAR type
XML - CLOB type
The XML column contains XML and HTML tags, i.e.:
<Employee><firstName>John</firstName>
<EmpDesc><![CDATA[<p>I like to live by Ghandi&#39;s motto: Be the <strong>change</strong> you want to see</p>]]></EmpDesc>
</Employee>
An index was created:
create index myIndex on xyz(xml) indextype is ctxsys.context
When I perform the query below, I don't get the result I'm expecting, because one of the words is enclosed in HTML tags.
SELECT * from xyz where contains(xml, 'be the change you want to see')>0;
*Note that the string can exist in any node, so INPATH may not be a suitable option.
Is there a way to create an index to ignore html tags so that a result is returned?
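One common approach (a sketch, not tested against this schema) is to index the column with an HTML section group, so Oracle Text parses the content as HTML and ignores the markup when tokenizing. The group name my_html_group is arbitrary:

```sql
-- Create a section group that tells Oracle Text to treat the
-- indexed content as HTML and skip the tags.
BEGIN
  ctx_ddl.create_section_group('my_html_group', 'HTML_SECTION_GROUP');
END;
/

-- Rebuild the index using that section group.
DROP INDEX myIndex;
CREATE INDEX myIndex ON xyz(xml)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('section group my_html_group');
```

With the tags ignored, a phrase such as contains(xml, 'be the change you want to see') > 0 should be able to match text that spans HTML elements.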