PostgreSQL: using a JSON sub-element

How do I select only the values named "A" from a PostgreSQL table?
  Id  | Column
------+----------------------------------------------------------------------
 1001 | {"results":[{"name":"A","value":"7.8"}, {"name":"B","value":"0.5"}]}
 1002 | {"results":[{"name":"B","value":"5.4"}, {"name":"D","value":"4.5"}]}
 1003 | {"results":[{"name":"D","value":"4.8"}, {"name":"A","value":"6.7"}]}
The results should be:
  ID  | Name | Value
------+------+-------
 1001 | A    | 7.8
 1003 | A    | 6.7

You can use a JSON path query to access such an element:
select id,
       'A' as name,
       jsonb_path_query_first("column", '$.results[*] ? (@.name == "A").value') #>> '{}' as value
from the_table;
This assumes that column (which is a horrible name) is defined as jsonb (which it should be). If it's not, you need to cast it: "column"::jsonb
jsonb_path_query_first returns a jsonb value, and there is no straightforward way to convert that to a proper text value (as e.g. ->> does). The #>> '{}' is a little hack to convert a scalar jsonb value to text.
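As a standalone illustration of the #>> '{}' trick (no table needed):

```sql
-- A scalar jsonb string keeps its double quotes when cast to text...
select '"7.8"'::jsonb::text;    -- "7.8"

-- ...but extracting along an empty path unwraps it to plain text:
select '"7.8"'::jsonb #>> '{}'; -- 7.8
```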

Depending on the column type, you can use json_to_recordset (if the column is json) or jsonb_to_recordset (if it is jsonb).
JSONB sample
select t.id,
       x.name,
       x.value
from test t
  cross join jsonb_to_recordset(("column"::jsonb) -> 'results') as x(name text, value text)
where x.name = 'A';
JSON sample
select t.id,
       x.name,
       x.value
from test t
  cross join json_to_recordset(("column"::json) -> 'results') as x(name text, value text)
where x.name = 'A';
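For reference, a minimal setup to try either query (the table name test is taken from the queries above; the column names come from the question):

```sql
-- "column" must be double-quoted because it is a reserved word
create table test (id int, "column" jsonb);

insert into test values
  (1001, '{"results":[{"name":"A","value":"7.8"}, {"name":"B","value":"0.5"}]}'),
  (1002, '{"results":[{"name":"B","value":"5.4"}, {"name":"D","value":"4.5"}]}'),
  (1003, '{"results":[{"name":"D","value":"4.8"}, {"name":"A","value":"6.7"}]}');
```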

Related

How can I compare json field with a string value in postgresql?

I have a field payload in a PostgreSQL table, of type json. This value has a nested field subcategory, which is a string. Below is the output of this value:
=> select payload->'subcategory' from "Merchant";
?column?
-------------------------------
"Food"
null
"AUTOMOTIVE"
null
"MEDICAL"
null
null
"CLUB"
"Petrol Stations"
However, I can't use this field in the where clause. The query below returns 0 rows, even though the output above shows rows whose value is CLUB. What is the right way to use a json field in a where clause?
=> select count(*) from "Merchant" where ("payload"->'subcategory')::text = 'CLUB';
count
-------
0
Figured out what's wrong: I need to use ->> in the where clause, like "payload"->>'subcategory'.
That's because ->> extracts the value as text, while -> returns it as json (so the comparison was against the quoted value "CLUB").
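The difference is easy to see on a literal (standalone, no table needed):

```sql
select '{"subcategory": "CLUB"}'::json ->  'subcategory';  -- json value; as text it is "CLUB", quotes included
select '{"subcategory": "CLUB"}'::json ->> 'subcategory';  -- plain text: CLUB
```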
An alternative solution is to use the jsonb containment operator @> (the column is json, so it needs a cast):
select count(*)
from "Merchant"
where payload::jsonb @> '{"subcategory": "CLUB"}';

Update table with newly added column containing data from the same table old column, but modified (flattened) jsonb

So I've come across the issue of having to migrate data from one column to a "clone" of itself with a different jsonb schema: I need to parse the json from
["keynamed": [...{"type": "type_info", "value": "value_in_here"}]] into a plain object with key:value pairs, dictionary-like: {"type_info": "value_in_here", ...}
So far I've tried subqueries with json functions, plus a CASE to map "type" to "type_info" and then jsonb_build_object(), but this takes data from the whole table, and I need it to run per row inside an update. Is there anything simpler than doing N subqueries? The closest I've come is:
select jsonb_object_agg(t.k, t.v)::jsonb as _json
from (
  select jsonb_build_object(type_, _value) as _json
  from (
    select _value,
           CASE _type
             ...
           END type_
    from (
      select (datasets ->> 'type') as _type,
             datasets -> 'value' as _value
      from (
        select jsonb_array_elements("values" -> 'keynamed') as datasets
        from "table"
      ) s
    ) s
  ) s
) s,
jsonb_each(_json) as t(k, v);
But I have no idea how to make it row-specific and apply it in a simple update like:
UPDATE "table"
SET new_field = (subquery with parsed dict in json)
Any ideas/tips on how to solve this with plain SQL, without any external support?
The expected output of the table would be:
id | old_value                                                          | new_value
---+--------------------------------------------------------------------+-------------------------------------
 1 | ["keynamed": [...{"type": "type_info", "value": "value_in_here"}]] | {"type_info": "value_in_here", ...}
According to the Postgres documentation on UPDATE, you can update one table using values selected from another by joining it in a FROM clause.
Sample:
UPDATE accounts SET contact_first_name = first_name,
contact_last_name = last_name
FROM salesmen WHERE salesmen.id = accounts.sales_id;
If I understand correctly, the query below may help you, but I can't test it because I don't have sample data, so it may still contain syntax errors.
update "table" t
set new_value = tmp._json
from (
  select id,
         jsonb_object_agg(t.k, t.v)::jsonb as _json
  from (
    select id,
           jsonb_build_object(type_, _value) as _json
    from (
      select id,
             _value,
             CASE _type
               ...
             END type_
      from (
        select id,
               (datasets ->> 'type') as _type,
               datasets -> 'value' as _value
        from (
          select id,
                 jsonb_array_elements("values" -> 'keynamed') as datasets
          from "table"
        ) s
      ) s
    ) s
  ) s,
  jsonb_each(_json) as t(k, v)
  group by id
) tmp
where tmp.id = t.id;
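If no renaming through the elided CASE mapping is actually needed, the whole pipeline can be collapsed into a single jsonb_object_agg over the unnested array. A sketch, using the placeholder table/column names from the question (quoted because they are reserved words):

```sql
-- build {"<type>": <value>, ...} per row and write it back in one pass
update "table" t
set new_value = sub._json
from (
  select id,
         jsonb_object_agg(el ->> 'type', el -> 'value') as _json
  from "table",
       jsonb_array_elements("values" -> 'keynamed') as el
  group by id
) sub
where sub.id = t.id;
```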

Extract multiple values from JSONB in Postgres

I have a table that looks like this:
id | attrs
---+-----------------------------------------------------------------------------------
 1 | {"a":{"kind":"kind_1", "value":"val_1"}, "b":{"kind":"kind_2", "value":"val_2"}}
 2 | {"c":{"kind":"kind_3", "value":"val_1"}}
 3 | {"a":{"kind":"kind_1", "value":"val_1"}, "d":{"kind":"kind_4", "value":"val_4"}, .....
I would like to extract all the unique values, so the output would be:
val_1
val_2
val_4
...
I tried to use the jsonb_each function for this, but without any luck.
You can use a JSON Path query:
select distinct v.item #>> '{}'
from the_table t
cross join jsonb_array_elements(jsonb_path_query_array(t.attrs, '$.**.value')) as v(item);
The v.item #>> '{}' is a trick to convert a scalar JSON value to text (a plain cast would keep the double quotes).
Alternatively you can use jsonb_each() twice:
select distinct v.value
from the_table t
cross join jsonb_each(t.attrs) as i(key, item)
cross join jsonb_each_text(i.item) as v(key, value)
where v.key = 'value';

Postgresql order by case when {someCase} then json type column

I need to order the result of a select in several different ways.
It works when the sort key is a regular column of the table TenderItem.
But it does NOT work when the key comes from the json column TenderItem.ItemInfo, e.g.:
select * from "TenderItem" order by "ItemInfo" ->> 'Name'; -- working in simple select
with sortingParams (columnName, isAsc) AS (VALUES ('ItemId', true))
select *
FROM "TenderItem" i, sortingParams
WHERE i."TenderId" = 1
AND i."ItemInfo" ->> 'Name' like '%Transcend%'
ORDER BY
case
WHEN columnName like '%ItemId%' THEN i."ItemId" -- works
WHEN columnName like '%ABCSegment%' THEN i."ItemInfo" ->> 'ABCSegment' -- fails
end desc;
On the line marked "fails" I get the message "ERROR: CASE types bigint and text cannot be matched".
It's not clear how you'd sort ItemId against the ABCSegment value (unless that value points to an item id), since they are not all text values. And even if they were all text, strings like '12345' would sort lexically, so '100' would come before '99'. You probably want separate sort conditions, which gives more flexibility in ordering:
with sortingParams (columnName, isAsc) AS (VALUES ('ItemId', true))
select *
FROM "TenderItem" i, sortingParams
WHERE i."TenderId" = 1
AND i."ItemInfo" ->> 'Name' like '%Transcend%'
ORDER BY
case
WHEN columnName like '%ItemId%' THEN i."ItemId"::bigint end asc nulls last --puts things with an itemID ahead of those without, or could use nulls first
--if two items have same item id, then sort by segment
, case
WHEN columnName like '%ABCSegment%' THEN i."ItemInfo" ->> 'ABCSegment'
end desc;
Note that each sort condition must yield the same datatype for every row being evaluated! This is what causes the error you described, where the CASE expression yields a bigint for ItemId and a text value for ItemInfo ->> 'ABCSegment'.
ItemId is BIGINT and i."ItemInfo" ->> 'ABCSegment' is text which are incompatible types to do sorting on.
Try casting the value explicitly to BIGINT, i.e.
..WHEN columnName like '%ABCSegment%' THEN (i."ItemInfo" ->> 'ABCSegment')::BIGINT
or make i."ItemId" a text if the above fails due to invalid bigint values.
i."ItemId"::TEXT

Processing record type from a jsonb_each query

I store some data as JSON.
I want to flatten the data using jsonb_each.
The new column type is RECORD, but I don't know how to extract values from it.
SELECT T FROM (
  SELECT json_each_text(skills::json -> 'prizes')
  FROM users
) AS T;
The output is
jsonb_each
---------------------------------
(compliance,2)
(incentives,3)
(compliance,0)
(legal,3)
(legal,2)
(international-contributions,3)
The type is RECORD.
pg_typeof
-----------
record
I want to aggregate with a GROUP BY, but I cannot figure out how to extract the first element (the string) and the second element (the value) from the record.
Here is a workaround I have found: JSON -> ROW -> JSON -> (string, integer), and then aggregate. But I am wondering if there is a shortcut that skips the ROW -> JSON conversion.
SELECT U.key, AVG(U.value::int)
FROM (
  SELECT row_to_json(T) -> 's' ->> 'key' AS key,
         row_to_json(T) -> 's' ->> 'value' AS value
  FROM (
    SELECT jsonb_each(skills::jsonb -> 'prizes') AS s
    FROM users
  ) AS T
) AS U
GROUP BY key;
Thanks a lot, @Arnaud, this seems like a not-very-common problem. I wasn't sure about the json data structure after applying the row_to_json function, so I needed to validate it via:
SELECT row_to_json(T) FROM
(SELECT jsonb_each((data->'app_metadata'->>'results')::jsonb)
FROM temp) AS T;
And once I knew the key structure, I could replicate your approach:
SELECT row_to_json(T) -> 'jsonb_each' ->> 'key' AS key,
       row_to_json(T) -> 'jsonb_each' ->> 'value' AS value
FROM (
  SELECT jsonb_each((data -> 'app_metadata' ->> 'results')::jsonb)
  FROM temp
) AS T;
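For what it's worth, the ROW -> JSON detour can be skipped entirely by calling jsonb_each_text as a table function in the FROM clause, which yields real key/value columns directly. A sketch against the users/skills schema from the question:

```sql
-- each prize entry becomes a (key, value) row; value arrives as text, ready to cast
SELECT t.key, AVG(t.value::int)
FROM users u
  CROSS JOIN LATERAL jsonb_each_text(u.skills::jsonb -> 'prizes') AS t(key, value)
GROUP BY t.key;
```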