How do I select only a specific key's value from a jsonb column in Postgres?

I have a jsonb column which has data as below.
[
{"key": "unit_type", "value": "Tablet", "display_name": "Unit Type"},
{"key": "pack_type", "value": "Packet", "display_name": "Pack Type"},
{"key": "units_in_pack", "value": "60", "display_name": "Units in Pack"},
{"key": "item_unit", "value": "", "display_name": "Item unit"},
{"key": "item_size", "value": "1", "display_name": "Item Size"},
{"key": "details", "value": "", "display_name": "Details"},
{"key": "slug", "value": "otc7087", "display_name": "Slug"}
]
I want to get the value field from the array element whose key is slug, so that a plain select over the table returns it as a column. For the row above, select name, slug, price from table should return med1, otc7087, 100. I can extract all the keys or all the values, but I can't work out how to pick one particular value within the same select query.
Or, more simply: how do I select just the slugs from the table? That would answer it.

Your JSON is actually quite well structured, so just try jsonb_to_recordset.
For example:
select * from jsonb_to_recordset('[
{"key": "unit_type", "value": "Tablet", "display_name": "Unit Type"},
{"key": "pack_type", "value": "Packet", "display_name": "Pack Type"},
{"key": "units_in_pack", "value": "60", "display_name": "Units in Pack"},
{"key": "item_unit", "value": "", "display_name": "Item unit"},
{"key": "item_size", "value": "1", "display_name": "Item Size"},
{"key": "details", "value": "", "display_name": "Details"},
{"key": "slug", "value": "otc7087", "display_name": "Slug"}
]'::jsonb) as x(key text, value text, display_name text);
It converts the jsonb array into a table with key, value, and display_name as columns, and you can then run any kind of query over it; it works for extracting keys as well. With the approach @Craig Ringer suggested you can't treat the data as table-like, so complex predicates such as NOT IN, !=, range queries, or ILIKE will be really difficult to write and might be less performant.
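Applied to an actual table, that approach might look like this (table and column names here are placeholders):
-- unpack each row's jsonb array into (key, value, display_name) rows, keep only the slug entry
SELECT r.value AS slug
FROM products p
CROSS JOIN LATERAL jsonb_to_recordset(p.attributes)
     AS r(key text, value text, display_name text)
WHERE r.key = 'slug';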

You seem to want to search all elements of a json array for an object with a particular value for a given key, then return the value of another key if matched.
Something like this will do the trick:
WITH my_table(jsonblob) AS (VALUES('[
{"key": "unit_type", "value": "Tablet", "display_name": "Unit Type"},
{"key": "pack_type", "value": "Packet", "display_name": "Pack Type"},
{"key": "units_in_pack", "value": "60", "display_name": "Units in Pack"},
{"key": "item_unit", "value": "", "display_name": "Item unit"},
{"key": "item_size", "value": "1", "display_name": "Item Size"},
{"key": "details", "value": "", "display_name": "Details"},
{"key": "slug", "value": "otc7087", "display_name": "Slug"}
]'::jsonb))
SELECT elem ->> 'value'
FROM my_table
CROSS JOIN LATERAL jsonb_array_elements(jsonblob) elem
WHERE (elem ->> 'key') = 'slug';
That is: select from the table, unpack the array into a lateral join, filter the joined rows for the desired object by matching the json key key against the value slug, and return the json key value in the select clause for the matching row.
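On PostgreSQL 12 and later, a jsonpath expression can do the same single-value lookup without the lateral join; a sketch (not part of the original answer; the same my_table/jsonblob names are assumed):
SELECT jsonb_path_query_first(
         jsonblob,
         '$[*] ? (@.key == "slug").value'
       ) #>> '{}' AS slug  -- #>> '{}' unwraps the jsonb string to text
FROM my_table;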
If you want multiple different values from the same json object you need multiple joins, one per desired value.
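For example, a sketch that pulls both slug and item_size in one row (same my_table CTE assumed):
SELECT slug_elem ->> 'value' AS slug,
       size_elem ->> 'value' AS item_size
FROM my_table
CROSS JOIN LATERAL jsonb_array_elements(jsonblob) slug_elem
CROSS JOIN LATERAL jsonb_array_elements(jsonblob) size_elem
WHERE slug_elem ->> 'key' = 'slug'
  AND size_elem ->> 'key' = 'item_size';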
This is a pretty ugly way to store variable key/value format data. I'd suggest storage like:
{"unit_type": {"value": "Tablet", "display_name": "Unit Type"}, ...}
where you can actually look up the keys.
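With that layout a lookup becomes a direct path expression instead of an array scan, e.g. (same jsonblob column name assumed):
SELECT jsonblob #>> '{slug,value}' AS slug
FROM my_table;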

Related

Coalesce value bound to object's key into parent's value

I have a PostgreSQL 12.x database. There is a column data in a table typename that contains JSON. The actual JSON data is not fixed to a particular structure; these are some examples:
{"emt": {"key": " ", "source": "INPUT"}, "id": 1, "fields": {}}
{"emt": {"key": "Stack Overflow", "source": "INPUT"}, "id": 2, "fields": {}}
{"emt": {"key": "https://www.domain.tld/index.html", "source": "INPUT"}, "description": {"key": "JSONB datatype", "source": "INPUT"}, "overlay": {"id": 5, "source": "bOv"}, "fields": {"id": 1, "description": "Themed", "recs ": "1"}}
Basically, what I'm trying to come up with is a (database migration) script that will find any object with the keys key and source, take the actual value of key, and assign it in place of the object it was originally bound to. For instance:
{"emt": " ", "id": 1, "fields": {}}
{"emt": "Stack Overflow", "id": 2, "fields": {}}
{"emt": "https://www.domain.tld/index.html", "description": "JSONB datatype", "overlay": {"id": 5, "source": "bOv"}, "fields": {"id": 1, "description": "Themed", "recs ": "1"}}
I started finding the rows that contained "source": "INPUT" by using:
select * from typename
where jsonb_path_exists(data, '$.** ? (@.type() == "string" && @ like_regex "INPUT")');
...but then I'm not sure how to update the returned subset or loop through it :/
It took me a while but here is the update statement:
update typename
set data = jsonb_set(data, '{emt}', jsonb_extract_path(data, 'emt', 'key')::jsonb, false)
where jsonb_typeof(data -> 'emt') = 'object'
and jsonb_path_exists(data, '$.emt.key ? (@.type() == "string")')
and jsonb_path_exists(data, '$.emt.source ? (@.type() == "string" && @ like_regex "INPUT")');
There are probably better ways to implement that where clause, but that one works ;)
One downside is that I had to figure out how many keys were involved and write one update statement per key; e.g. in the original example there were two keys, emt and description, so it needed two update statements.
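For instance, the companion statement for the description key follows the same pattern, with only the path changed:
update typename
set data = jsonb_set(data, '{description}', jsonb_extract_path(data, 'description', 'key'), false)
where jsonb_typeof(data -> 'description') = 'object'
and jsonb_path_exists(data, '$.description.key ? (@.type() == "string")')
and jsonb_path_exists(data, '$.description.source ? (@.type() == "string" && @ like_regex "INPUT")');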

JSONB Postgres map object to list of objects

Let's say I have a following table in Postgres 14.
CREATE TABLE test(data jsonb);
Now I would insert the following JSON into the table.
INSERT INTO test
values ('{"STATUS1": {"value1": "1", "value2": "2"},
"STATUS2": {"value1": "11", "value2": "22"}}');
The question that I have is: how could I transform this data using SQL, and possibly Postgres JSONB functions, in a query so that the SELECT result would be the following JSON array:
[
{"key": "STATUS1", "value1": "1", "value2": "2"},
{"key": "STATUS2", "value1": "11", "value2": "22"}
]
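One way to get there is to unpack the object with jsonb_each and rebuild the array with jsonb_agg; a sketch (note that element order in the aggregated array is not guaranteed):
SELECT jsonb_agg(jsonb_build_object('key', kv.key) || kv.value) AS result
FROM test,
     jsonb_each(data) AS kv(key, value);  -- merge {"key": ...} with each entry's object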

PostgreSQL nested jsonb update value of complex key/value pairs

Starting out with JSONB data type and I'm hoping someone can help me out.
I have a table (properties) with two columns (id as primary key and data as jsonb).
The data structure is:
{
"ProductType": "ABC",
"ProductName": "XYZ",
"attributes": [
{
"name": "Color",
"type": "STRING",
"value": "Silver"
},
{
"name": "Case",
"type": "STRING",
"value": "Shells"
},
...
]
}
I would like to update the value of a specific attributes element, selected by name, for a row with a given id. For example, for the element with "name"="Case", change the value to "Glass", so it ends up like:
{
"ProductType": "ABC",
"ProductName": "XYZ",
"attributes": [
{
"name": "Color",
"type": "STRING",
"value": "Silver"
},
{
"name": "Case",
"type": "STRING",
"value": "Glass"
},
...
]
}
Is this possible with this structure using SQL?
I have created the table structure if any of you would like to give it a shot:
dbfiddle
Use the jsonb concatenation operator, ||, to replace keys on the fly:
WITH properties (id, data) AS (
values
(1, '{"ProductType": "ABC","ProductName": "XYZ","attributes": [{"name": "Color","type": "STRING","value": "Silver"},{"name": "Case","type": "STRING","value": "Shells"}]}'::jsonb),
(2, '{"ProductType": "ABC","ProductName": "XYZ","attributes": [{"name": "Color","type": "STRING","value": "Red"},{"name": "Case","type": "STRING","value": "Shells"}]}'::jsonb)
)
SELECT id,
       data ||
       jsonb_build_object(
           'attributes',
           jsonb_agg(
               case
                   when attribs->>'name' = 'Case' then attribs || '{"value": "Glass"}'::jsonb
                   else attribs
               end
           )
       ) as data
FROM properties m
CROSS JOIN LATERAL jsonb_array_elements(data->'attributes') as a(attribs)
GROUP BY id, data;
Updated fiddle
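To persist the change instead of just selecting it, an UPDATE along these lines should work (a sketch against the same properties table, targeting one row by id):
UPDATE properties p
SET data = jsonb_set(
      p.data,
      '{attributes}',
      (SELECT jsonb_agg(
                case
                  when attribs->>'name' = 'Case' then attribs || '{"value": "Glass"}'::jsonb
                  else attribs
                end)
       FROM jsonb_array_elements(p.data->'attributes') AS a(attribs)))
WHERE p.id = 1;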

Select all values inside different arrays inside an array

I have a document that looks like this:
"userName": "sample name",
"values": [
{
"values": [
{
"brand": "SOLIGNUM CLEAR",
"name": "Solignum Colourless AZ",
"price": "569",
"qip": "30.00",
"sku": "1L",
"unit": "Piece"
}
]
},
{
"values": [
{
"brand": "FirePRO",
"name": "FirePRO",
"price": "419.75",
"qip": "30.00",
"sku": "1L",
"unit": "Cartons"
},
{
"brand": "SOLIGNUM AEROSOL",
"name": "Solignum Colourless AZ Aerosol",
"price": "397",
"qip": "30.00",
"sku": "500ML",
"unit": "Piece"
}
]
}
]
}
My query looks like this:
SELECT orders.unit, orders.sku, orders.name, orders.srp, TONUMBER(orders.price) AS price, orders.qip as quantity
FROM jdi stoCallLog
UNNEST stoCallLog.`values`[0].`values` AS orders
The query result contains only the values from the first inner array.
I have tried changing the unnest block into this:
UNNEST stoCallLog.`values`[1].`values` AS orders
which selects only the values from the second array.
Also like this:
UNNEST stoCallLog.`values`.`values` AS orders
which doesn't seem to be possible; it returns nothing.
I need a way to select all of the values at once. Is there any way to do it?
Solved by modifying the UNNEST block to:
UNNEST `values` as rawOrders
UNNEST rawOrders.`values` as orders
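Putting that together with the original select list, the full query becomes:
SELECT orders.unit, orders.sku, orders.name, orders.srp, TONUMBER(orders.price) AS price, orders.qip AS quantity
FROM jdi stoCallLog
UNNEST stoCallLog.`values` AS rawOrders
UNNEST rawOrders.`values` AS orders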

Parsing Really Messy Nested JSON Strings

I have a series of deeply nested JSON strings in a PySpark dataframe column. I need to explode and filter based on the contents of these strings and would like to add them as columns. I've tried defining the StructTypes, but each time it continues to return an empty DF.
I tried using json_tuple to parse them, but there are no common keys to rejoin the dataframes and the row numbers don't match up; I think it might have to do with some null fields.
The sub field can be nullable.
Sample JSON
{
"TIME": "datatime",
"SID": "yjhrtr",
"ID": {
"Source": "Person",
"AuthIFO": {
"Prov": "Abc",
"IOI": "123",
"DETAILS": {
"Id": "12345",
"SId": "ABCDE"
}
}
},
"Content": {
"User1": "AB878A",
"UserInfo": "False",
"D": "ghgf64G",
"T": "yjuyjtyfrZ6",
"Tname": "WE ARE THE WORLD",
"ST": null,
"TID": "BPV 1431: 1",
"src": "test",
"OT": "test2",
"OA": "test3",
"OP": "test34
},
"Test": false
}