I have a table (refer to it as A) with one column (refer to it as c) that contains a stringified JSON array in the following format:
[
{"sys": {"type": "Link", "linkType": "Entry", "id": "27OfJChoPO894W4rA6bQ67"}},
{"sys": {"type": "Link", "linkType": "Entry", "id": "2ygvvrBSPuWw0uTW4jdDP2"}}
]
Please note that the array has variable length. The id fields refer to IDs in a second table (B). So, I need to select all fields from A, but populate c with a column from B.
I tried looking for JSON functions to help me get the ids, but I couldn't progress from an array of ids to actually populating it with the column from B. So, my current idea is to create a new table to hold the relation between A and B. What's the best way?
You can expand your array and use the elements in the JOIN condition:
SELECT *
FROM a,
     json_array_elements(c) AS elems
JOIN b ON b.id = elems -> 'sys' ->> 'id'
However, please think about normalizing your data. You shouldn't store JSON data directly if you don't need to; arrays in particular are difficult to handle. If you can save the data into appropriate tables/columns, every single action (update, search, filter and of course join) would be easier and much faster. Furthermore, you get the chance to create proper indexes.
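As the question itself suggests, a junction table is the normalized way to model this many-to-many link. A sketch of that design (table and column names are illustrative, and it assumes a has an id primary key):

```sql
-- One row per link between a and b, instead of a JSON array in a.c
CREATE TABLE a_b_link (
    a_id integer REFERENCES a (id),
    b_id text    REFERENCES b (id),
    PRIMARY KEY (a_id, b_id)
);

-- The join then needs no JSON functions at all:
SELECT a.*, b.some_column
FROM a
JOIN a_b_link l ON l.a_id = a.id
JOIN b         ON b.id   = l.b_id;
```

Both foreign-key columns can be indexed, which the JSON-array layout cannot easily offer.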
I need to join two collections: the first collection has, say, profile info, and the second has computed amounts (with dates). I want to join these collections on a common id, but I want to produce a result set of joined profile data with only the latest amount.
I can create a join with the amount data incorporated as a list. But what is the smartest way to sort this nested list and then drop the elements that are not the latest?
So for example:
profile
{"pid": 123, "name": "xyz"}
amount
{"pid": 123, "amount": 12.00, "date": date}
I can see two ways to do this: one with an unwind and one with a sort on the array. Is there any downside in speed to the unwind method?
I keep having a problem when filtering some data in PostgreSQL.
For example, I want to filter by a JSON value.
My JSON is saved in the following way:
"[{\"Brand\":\"Leebo\"},{\"Housing Color\":\"Black\"},{\"Beam Type\":\"High Beam, Low Beam\"}]"
And let's say that I want to filter by
[{\"Brand\":\"Leebo\"}]
Shouldn't I write something like this in the query?
SELECT * FROM public.products
WHERE attributes is not NULL
AND attributes::text LIKE '%{\"Brand\":\"Leebo\"}%';
I also tried
SELECT * FROM public.products WHERE attributes::jsonb #> '"[{\"Material\":\"Artificial Leather\"}]"'
but I receive no data. It only works if the column has all the data (e.g. if I give the exact data that is in the column).
Do you know how I could proceed differently?
Also, how could I search with whereIn?
You have an array in your JSONB, because those characters ([ and ]) delimit an array. If you are sure that your JSONB will always contain just this one array, you can use this:
SELECT * FROM public.products
WHERE attributes is not NULL
AND attributes -> 0 ->> 'Brand' = 'Leebo'
But if you can have several elements inside your JSONB array, then use jsonb_array_elements to expand the array; after that you can apply JSONB operators such as ->> to each element.
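A sketch of that approach, assuming attributes is a jsonb column holding an array shaped like [{"Brand": "Leebo"}, {"Housing Color": "Black"}, ...]:

```sql
SELECT p.*
FROM public.products AS p,
     jsonb_array_elements(p.attributes) AS elem  -- one row per array element
WHERE elem ->> 'Brand' = 'Leebo';
```

If several elements of one row could match, wrapping the expansion in an EXISTS subquery avoids duplicate output rows.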
I am planning to use Drools for executing DMN models. However, I am having trouble writing a condition in a DMN decision table where the input is an array of objects with a structured data type and the condition is to check whether the array contains an object with specific fields. For example:
Input to decision table is as below:
[
{
"name": "abc",
"lastname": "pqr"
},
{
"name": "xyz",
"lastname": "lmn"
},
{
"name": "pqr",
"lastname": "jkl"
}
]
Expected output: True if the above list contains an element that matches {"name": "abc", "lastname": "pqr"}, with both fields on the same element in the list.
I see that FEEL has support for list contains, but I could not find the syntax for the case where the objects in the array are not of primitive types like number or string but structures. So, I need help writing this condition in the decision table.
Thanks!
Edited description:
I am trying to achieve the following using the decision table, where details is a list of info structures. Unfortunately, as you see, I am not getting the desired output even though my input list contains the specific element I am looking for.
Input: details = [{"name": "hello", "lastname": "world"}]
Expected Output = "Hello world" based on condition match in row 1 of the decision table.
Actual Output = null
NOTE: Also, in row 2 of the decision table, I only check a condition on the name field.
Content for the DMN file can be found over here
The overall need and requirements for the decision table are not clear from this question.
Regarding the part of the question about:
True if the above list contains an element that match {"name": "abc", "lastname": "pqr"}
...
I see that FEEL has support for list contains, but I could not find a syntax where objects in array are not of primitive types like number,string etc but structures.
This can be indeed achieved with the list contains() function, described here.
Example expression
list contains(my list, {"name": "abc", "lastname": "pqr"})
where my list is the verbatim FEEL list from the original question statement.
Example run:
giving the expected output, true.
Naturally, two contexts (complex structures) are the same if all their properties and fields are equivalent.
In DMN, there are multiple ways to achieve the same result.
If I understand the real goal of your use case, I want to suggest a better approach, much easier to maintain from a design point of view.
First of all, you have a list of users as input so those are the data types:
Then, you have to structure a bit your decision:
The decision node at least one user match will go through the user list and check if there is at least one user that matches the conditions inside the matching BKM.
at least one user match can be implemented with the following FEEL expression:
some user in users satisfies matching(user)
The great benefit of this approach is that you can reason about a specific element of your list inside the matching BKM, which makes the matching decision table extremely straightforward:
I am new to PostgreSQL. I created a table with a JSON type column:
id,country_code
11767,{"country_code": [{"code": "GB01F290/00", "new": 1}, {"code": "DE08F290/00", "new": 1}, {"code": "GB02F290/00", "new": 1}]}
11768,{"country_code": [{"code": "GB01F290/20", "new": 1}, {"code": "GB20F290/23", "new": 1}]}
list = ["GB01F290/00", "GB21F290/41"]
How can I select the rows in which any country_code -> code value matches any element of the list?
There is probably a way to create a jsonpath query to do this, but you would need some way to transform your ["GB01F290/00", "GB21F290/41"] into the correct jsonpath. I'm not very good at jsonpath, so I won't go into that.
Another way to do this would be to use the @> containment operator with the ANY(...) construct. But that takes a PostgreSQL array of jsonb documents as its right-hand side, and each document needs to have a specific structure to match the structure of the documents you are querying. One way to express that array of jsonb would be:
'{"{\"country_code\": [{\"code\": \"GB01F290/00\"}]}","{\"country_code\": [{\"code\": \"GB21F290/41\"}]}"}'::jsonb[]
Or another way, with less obnoxious quoting/escaping would be:
ARRAY['{"country_code": [{"code": "GB01F290/00"}]}'::jsonb, '{"country_code": [{"code": "GB21F290/41"}]}']
A way to obtain that value given your input would be with this query:
select array_agg(jsonb_build_object(
    'country_code',
    jsonb_build_array(jsonb_build_object('code', x))
)) from
jsonb_array_elements('["GB01F290/00", "GB21F290/41"]') as t(x)
But there might be better ways of doing that, for example in Python, where your input list originates.
Then the query would be:
select * from thetable where country_code @> ANY($1::jsonb[])
Where $1 holds the value given in the first block, the result of the expression given in the second block, or the result of the query given in the third block. You could also combine the queries into one by putting the aggregation query into the final one as a subquery, but that might inhibit the use of indexes.
Note that the column country_code needs to be of type jsonb, not json, for this to work. But that is what it should be anyway.
It would probably be better if you chose a different way to store your data in the first place. An array of objects where each object has a unique name (the value of "code", here) is an antipattern, and should instead be an object of objects, with the unique name being the key. And having objects which just have one key at the top level, which is the same as the name of the column, is another antipattern. And what is the point of "new":1 if it is always present (or is that just an artifact of the example you chose)? Does it convey any meaning? And if you remove all of that stuff, you are left with just a list of strings. Why use jsonb in the first place for that?
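Following that advice, one possible simplified design (table and column names are illustrative) stores just the list of code strings in a plain array column:

```sql
-- Instead of a JSON wrapper, store the codes directly:
CREATE TABLE thetable (
    id    integer PRIMARY KEY,
    codes text[]  -- just the list of code strings
);

-- "contains any element of the list" becomes a simple array-overlap test,
-- which a GIN index on codes can accelerate:
SELECT * FROM thetable
WHERE codes && ARRAY['GB01F290/00', 'GB21F290/41'];
```

This keeps the query trivial and indexable, at the cost of dropping the "new" flag; if that flag actually carries meaning, a two-column child table (id, code, new) would preserve it.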
The JSON column type accepts non-valid JSON
eg
[1,2,3] can be inserted without the enclosing {}
Is there any difference between JSON and string?
To be clear, [1,2,3] is valid JSON, as zerkms has stated in the comments. But to answer the primary question (is there any difference between JSON and string?):
The answer is yes. A whole new set of query operators, functions, etc. applies to json or jsonb columns that does not apply to text (or related) columns.
For example, while with text columns you would need to use regular expressions and related string functions to parse the string (or a custom function), with json or jsonb, there exists a separate set of query operators that works within the structured nature of JSON.
From the Postgres doc, given the following JSON:
{
"guid": "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
"name": "Angela Barton",
"is_active": true,
"company": "Magnafone",
"address": "178 Howard Place, Gulf, Washington, 702",
"registered": "2009-11-07T08:53:22 +08:00",
"latitude": 19.793713,
"longitude": 86.513373,
"tags": [
"enim",
"aliquip",
"qui"
]
}
The doc then says:
We store these documents in a table named api, in a jsonb column named
jdoc. If a GIN index is created on this column, queries like the
following can make use of the index:
-- Find documents in which the key "company" has value "Magnafone"
SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company": "Magnafone"}';
This allows you to query jsonb (or json) fields very differently than if they were simply text or related fields.
Here is some Postgres doc that provides some of those query operators and functions.
Basically, if you have JSON data that you want to treat as JSON data, then a column is best specified as json or jsonb (which one you choose depends on whether you want to store it as plain text or binary, respectively).
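A small illustration of the difference, using the doc's api/jdoc naming with made-up data:

```sql
-- jsonb column: structured operators work, and containment can use a GIN index
CREATE TABLE api (jdoc jsonb);
INSERT INTO api VALUES ('{"company": "Magnafone", "tags": ["enim", "aliquip"]}');

SELECT jdoc -> 'tags' ->> 0 FROM api;                    -- first tag, as text
SELECT * FROM api WHERE jdoc @> '{"company": "Magnafone"}';

-- With a plain text column, none of these operators exist;
-- you would be back to LIKE / regexp matching on the raw string.
```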
The above data can be stored as text, but the JSON data types have the advantage that you can apply JSON rules in those columns. There are several JSON-specific functions which cannot be used on text fields.
Refer to this link for an overview of the JSON functions and operators.