PostgreSQL - GROUP_CONCAT for JSONB column

I am trying to find a way to concatenate JSONB values in Postgres.
For example, I have these two rows:
INSERT INTO "testConcat" ("id", "json_data", "groupID")
VALUES (1, '{"name": "JSON_name1", "value": "Toto"}', 5);
INSERT INTO "testConcat" ("id", "json_data", "groupID")
VALUES (2, '{"name": "JSON_name2"}', 5);
I would like to do something like:
SELECT GROUP_CONCAT(json_data)
FROM testConcat
GROUP BY groupID
and as a result obtain something like:
[{"name": "JSON_name1", "value": "Toto"}, {"name": "JSON_name2"}]
I tried creating an aggregate function, but when the same key is present in several of the JSON documents, they are merged and only the last value is preserved:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
    SFUNC = jsonb_concat,
    STYPE = jsonb,
    INITCOND = '{}'
);
When I use this function like this:
SELECT jsonb_merge(json_data)
FROM testConcat
GROUP BY groupID
the result is:
{"name": "JSON_name2", "value": "Toto"}
This is not what I want, because
{"name": "JSON_name1"}
is missing. The function preserves only the distinct keys and merges the duplicates, keeping the last value.
Thanks for any help.

If there is always only a single key/value pair in the JSON document, you can do this without a custom aggregate function:
SELECT groupid, jsonb_object_agg(k, v ORDER BY id)
FROM testconcat, jsonb_each(json_data) AS x(k, v)
GROUP BY groupid;
The "last" value is defined by the ordering on the id column.
The custom aggregate function might be faster though.
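Note that if the goal is simply an array containing the original objects, as in the expected output above, the built-in jsonb_agg already produces that without any custom aggregate. A minimal sketch against the sample data:
SELECT "groupID", jsonb_agg(json_data ORDER BY id)
FROM "testConcat"
GROUP BY "groupID";
-- => [{"name": "JSON_name1", "value": "Toto"}, {"name": "JSON_name2"}]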

Finally I found a solution; even if it is not the best, it seems to work.
I created the aggregate function as previously described, with a small modification:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
    SFUNC = jsonb_concat,
    STYPE = jsonb,
    INITCOND = '[]'
);
I just replaced:
INITCOND = '{}'
with
INITCOND = '[]'
and then used it as before:
SELECT jsonb_merge(json_data)
FROM testConcat
GROUP BY groupID
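With the two sample rows this now returns the desired result, because concatenating a jsonb object onto an array appends it as an element rather than merging keys:
[{"name": "JSON_name1", "value": "Toto"}, {"name": "JSON_name2"}]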

Related

Why is the column display order changed in the json_agg function compared to that of a temp table - PostgreSQL

I am creating a temp table in a PostgreSQL function/procedure.
CREATE TEMP TABLE tbl_temp_class(id SERIAL PRIMARY KEY, batch_id INT, class_id INT, class_name VARCHAR);
Later I dynamically add columns to this table using dynamic SQL;
l_column_counter is incremented within a FOR loop until n:
l_sql_query := CONCAT('ALTER TABLE tbl_temp_class ADD column ', 'col', '_', l_column_counter, ' varchar default('''');');
EXECUTE l_sql_query;
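For context, a minimal sketch of what the surrounding loop might look like (the DO wrapper and the upper bound of 5 for n are assumptions, not shown in the original):
DO $$
DECLARE
    l_sql_query TEXT;
BEGIN
    -- assumes tbl_temp_class from above already exists in this session
    FOR l_column_counter IN 1..5 LOOP  -- n assumed to be 5 here
        l_sql_query := CONCAT('ALTER TABLE tbl_temp_class ADD column ', 'col', '_', l_column_counter, ' varchar default('''');');
        EXECUTE l_sql_query;
    END LOOP;
END $$;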
At the end I want the tbl_temp_class result as a JSON array, hence I am doing the below:
select json_agg(ut)
from (
    select *
    from tbl_temp_class
    order by id
) ut;
I expect the result for the above query to be
[
  {
    "id": 1,
    "batch_id": 1,
    "class_id": 1,
    "class_name": "Maths",
    "col_1": "",
    "col_2": "",
    "col_3": "",
    "col_4": "",
    "col_5": ""
  },
  {
    "id": 2,
    "batch_id": 1,
    "class_id": 2,
    "class_name": "History",
    "col_1": "",
    "col_2": "",
    "col_3": "",
    "col_4": "",
    "col_5": ""
  }
]
However, the result I am getting is as below; the column display order is scrambled.
Any idea how to fix this? Is this because the JSON is generated out of a temp table?
I need the column display order in the final JSON array to be the same as the column display order in the temp table.
[
  {
    "id": 1,
    "col_1": "",
    "col_2": "",
    "col_3": "",
    "col_4": "",
    "col_5": "",
    "class_id": 1,
    "batch_id": 1,
    "class_name": "Maths"
  },
  {
    "id": 2,
    "col_1": "",
    "col_2": "",
    "col_3": "",
    "col_4": "",
    "col_5": "",
    "class_id": 2,
    "batch_id": 1,
    "class_name": "History"
  }
]
Did you try an ORDER BY in the aggregation?
SELECT json_agg(ut ORDER BY id) -- this is where you want to use the ORDER BY
FROM (
    SELECT *
    FROM tbl_temp_class
    ORDER BY id
) ut;
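Note that the ORDER BY only controls the order of the rows inside the array, not the order of the keys. As for key order: json_agg preserves the column order of the row type, while jsonb re-sorts object keys into its own canonical order (shorter keys first, then bytewise), which is exactly the pattern in the scrambled output above, so a jsonb conversion somewhere in the pipeline is the likely cause. A small sketch showing the difference:
SELECT json_agg(t)  AS json_version,  -- keys keep the column order
       jsonb_agg(t) AS jsonb_version  -- keys re-sorted: "id" before "class_name"
FROM (VALUES ('Maths', 1)) AS t(class_name, id);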

Deleting a jsonb array item by name

I have the following table:
CREATE TABLE country (
    id INTEGER NOT NULL PRIMARY KEY,
    name VARCHAR(50),
    extra_info JSONB
);
INSERT INTO country(id, extra_info)
VALUES (1, '{ "name" : "France", "population" : "65000000", "flag_colours": ["red", "blue", "white"]}');
INSERT INTO country(id, extra_info)
VALUES (2, '{ "name": "Spain", "population" : "47000000", "borders": ["Portugal", "France"] }');
and I can add an element to the array like this:
UPDATE country SET extra_info = jsonb_set(extra_info, '{flag_colours,999999999}', '"green"', true);
and update an element like this:
UPDATE country SET extra_info = jsonb_set(extra_info, '{flag_colours,0}', '"yellow"');
I now would like to delete an array item with a known index or name.
How would I delete a flag_colours element by index or by name?
Update
Delete by index:
UPDATE country SET extra_info = extra_info #- '{flag_colours,-1}'
How can I delete by name?
As arrays do not offer direct access to items in a straightforward way, we can approach this differently through unnesting -> filtering elements -> stitching things back together. I have formulated a code example with ordered comments to help.
CREATE TABLE new_country AS
-- 3. Return a new object (for immutability) that contains the new desired set of colours;
--    COALESCE to the empty array keeps rows without a flag_colours key unchanged
--    (create_missing is FALSE, so jsonb_set leaves them as they are)
SELECT id, name,
       jsonb_set(extra_info, '{flag_colours}', COALESCE(new_colors, '[]'::jsonb), FALSE) AS new_extra_info
FROM country
-- 2. Use a lateral join to apply the filtering to every row
LEFT JOIN LATERAL (
    -- 1. Unnest the elements of the current row's JSON array as text (to enable
    --    filtering), then form a new jsonb array from the elements that remain
    SELECT jsonb_agg(colors) AS new_colors
    FROM jsonb_array_elements_text(country.extra_info -> 'flag_colours') AS colors
    WHERE colors <> 'red'
) lat ON TRUE;
If you would like to update only the affected column without recreating the main table, you can:
UPDATE country
SET extra_info = new_extra_info
FROM new_country
WHERE country.id = new_country.id;
I have broken this down into two queries to improve readability; however, you can also use a subquery instead of creating a new table (new_country).
With the subquery, it should look like:
UPDATE country
SET extra_info = new_extra_info
FROM (
    SELECT id, name,
           jsonb_set(extra_info, '{flag_colours}', COALESCE(new_colors, '[]'::jsonb), FALSE) AS new_extra_info
    FROM country
    -- 2. Use a lateral join to apply the filtering to every row
    LEFT JOIN LATERAL (
        -- 1. Unnest the elements of the current row's JSON array as text, then
        --    form a new jsonb array from the elements that remain after filtering
        SELECT jsonb_agg(colors) AS new_colors
        FROM jsonb_array_elements_text(country.extra_info -> 'flag_colours') AS colors
        WHERE colors <> 'red'
    ) lat ON TRUE
) new_country
WHERE country.id = new_country.id;
Additionally, you may filter rows via (as of PostgreSQL 9.4):
SELECT *
FROM country
WHERE (extra_info -> 'flag_colours') ? 'red';
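If this filter runs often, an expression GIN index can support the ? operator here (a sketch; the index name is arbitrary):
CREATE INDEX country_flag_colours_idx
    ON country USING gin ((extra_info -> 'flag_colours'));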
Actually, PostgreSQL 12 allows doing it without JOIN LATERAL:
SELECT jsonb_path_query_array(j #> '{flag_colours}', '$[*] ? (@ != "red")'),
       jsonb_set(j, '{flag_colours}', jsonb_path_query_array(j #> '{flag_colours}', '$[*] ? (@ != "red")'))
FROM (SELECT '{ "name" : "France", "population" : "65000000",
              "flag_colours": ["red", "blue", "white"]}'::jsonb AS j
     ) AS j
WHERE j @? '$.flag_colours[*] ? (@ == "red")';
jsonb_path_query_array | jsonb_set
------------------------+---------------------------------------------------------------------------------
["blue", "white"] | {"name": "France", "population": "65000000", "flag_colours": ["blue", "white"]}
(1 row)
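Applied to the country table from the question, the same idea as an UPDATE might look like this (a sketch along the same lines, not from the original answer):
UPDATE country
SET extra_info = jsonb_set(extra_info, '{flag_colours}',
        jsonb_path_query_array(extra_info -> 'flag_colours', '$[*] ? (@ != "red")'))
WHERE extra_info @? '$.flag_colours[*] ? (@ == "red")';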

Upserting a postgres jsonb based on multiple properties in jsonb field

I am trying to upsert into a table with a jsonb field, based on multiple JSON properties in the jsonb field, using the query below:
insert into testtable(data) values('{
"key": "Key",
"id": "350B79AD",
"value": "Custom"
}')
On conflict(data ->>'key',data ->>'id')
do update set data =data || '{"value":"Custom"}'
WHERE data ->> 'key' ='Key' and data ->> 'appid'='350B79AD'
The above query throws the error below:
ERROR: syntax error at or near "->>"
LINE 8: On conflict(data ->>'key',data ->>'id')
Am I missing something obvious here?
I suppose you want the combination of the id and key values to be unique in the table. Then you need a unique index for them:
create unique index on testtable ( (data->>'key'), (data->>'id') );
You also need extra parentheses around each expression in the ON CONFLICT clause:
on conflict ( (data->>'key'), (data->>'id') )
and you need to qualify the jsonb column name (data) with the table name (testtable) wherever it appears after DO UPDATE SET or in the WHERE clause, as testtable.data. So, convert your statement to:
insert into testtable(data) values('{
"key": "Key",
"id": "350B79AD",
"value": "Custom1"
}')
on conflict( (data->>'key'), (data->>'id') )
do update set data = testtable.data || '{"value":"Custom2"}'
where testtable.data ->> 'key' ='Key' and testtable.data ->> 'id'='350B79AD';
Btw, data ->> 'appid' = '350B79AD' was converted to data ->> 'id' = '350B79AD' (appid -> id).
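To see the upsert in action, run the corrected statement twice against a fresh table (a sketch; the table definition is assumed, only the jsonb column matters):
create table testtable (data jsonb);
create unique index on testtable ( (data->>'key'), (data->>'id') );
-- first run inserts the row; the second run conflicts on (key, id),
-- so the DO UPDATE branch rewrites "value" instead of adding a row
select data from testtable;
-- => {"id": "350B79AD", "key": "Key", "value": "Custom2"}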

How to filter data from PostgreSQL which has a jsonb nested field in the array field of jsonb?

I have a table with a jsonb column, and the documents look like this (simplified):
{
  "a": 1,
  "rg": [
    {
      "rti": 2
    }
  ]
}
I want to filter all the rows which have an 'rg' field whose array contains at least one 'rti' field.
My current solution is:
log->>'rg' ilike '%rti%'
Is there another approach? Probably a faster solution exists.
Another approach would be applying jsonb_each to the jsonb object and then jsonb_array_elements_text to the value extracted by jsonb_each:
select id, js_value2
from
(
    select (js).value as js_value, jsonb_array_elements_text((js).value) as js_value2, id
    from
    (
        select jsonb_each(log) as js, id
        from tab
    ) q
    where (js).key = 'rg'
) q2
where js_value2 like '%rti%';
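On PostgreSQL 12 or later, a jsonpath existence test expresses the same condition more directly (a sketch reusing the log column and the tab table name from above):
select *
from tab
where log @? '$.rg[*].rti';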

Postgres: concatenate JSONB values across rows?

I'm getting to grips with the JSONB functionality in Postgres >= 9.5 (and loving it) but have hit a stumbling block. I've read about the ability to concatenate JSON fields, so '{"a":1}' || '{"b":2}' creates {"a":1,"b":2}, but I'd like to do this across the same field in multiple rows, e.g.:
select row_concat_??(data) from table where field = 'value'
I've discovered the jsonb_object_agg function, which sounds like what I want, except that the docs show it taking multiple arguments and I only have one.
Any ideas how I would do this? jsonb_agg creates an array successfully, so it feels like I'm really close.
After some digging around with custom aggregates in Postgres, I have the following:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
    SFUNC = jsonb_concat,
    STYPE = jsonb,
    INITCOND = '{}'
);
which is then usable as:
SELECT group_id, jsonb_merge(data) FROM table GROUP BY group_id
Use jsonb_each():
with data(js) as (
    values
        ('{"a": 1}'::jsonb),
        ('{"b": 2}')
)
select jsonb_object_agg(key, value)
from data
cross join lateral jsonb_each(js);
jsonb_object_agg
------------------
{"a": 1, "b": 2}
(1 row)