My database contains a table with a column of type jsonb, and I want to update part of this data using PostgreSQL functions/operators. Given we have this:
{
"A":[
{"index":"1"},
{"index":"2"}
],
"B":[
{"index":"3"},
{"index":"4"}
]
}
Let's say we want to add a key with an empty array to the objects in the "A" array, in order to have:
{
"A":[
{"index":"1", "myArray":[]},
{"index":"2", "myArray":[]}
],
"B":[
{"index":"3"},
{"index":"4"}
]
}
How can I proceed?
I've already tried things like this without success:
UPDATE myTable SET myColumn = (myColumn::jsonb)->>'A' || '{"myArray":[]}'
UPDATE myTable SET myColumn = (
SELECT jsonb_agg(jsonb_set(
element,
array['A'],
to_jsonb(((element ->> 'A')::jsonb || '{"myArray":[]}')::jsonb)
))
FROM jsonb_array_elements(myColumn::jsonb) element
)::json
UPDATE myTable SET myColumn = (
SELECT jsonb_each((element ->> 'A')::jsonb) || '{"myArray":[]}'::jsonb
FROM jsonb_array_elements(myColumn::jsonb) element
)::json
Obviously, all of these attempts have been big failures; I have difficulty understanding how PostgreSQL's JSON functions work.
Can somebody help?
The approach with jsonb_array_elements and jsonb_set was the right idea, but somehow you nested them the wrong way round:
UPDATE myTable SET myColumn = jsonb_set(myColumn, '{A}', (
SELECT jsonb_agg( element || '{"myArray":[]}' )
FROM jsonb_array_elements(myColumn -> 'A') element
));
Btw, if your column already has the jsonb data type, you shouldn't need any casts.
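For reference, the whole fix as a minimal, self-contained sketch you can run against a scratch table (myTable/myColumn are the placeholder names from the question):

CREATE TABLE myTable (myColumn jsonb);
INSERT INTO myTable VALUES
    ('{"A": [{"index": "1"}, {"index": "2"}], "B": [{"index": "3"}, {"index": "4"}]}');

-- Rebuild the "A" array with "myArray": [] merged into every element,
-- then write the new array back into the document with jsonb_set
UPDATE myTable SET myColumn = jsonb_set(myColumn, '{A}', (
    SELECT jsonb_agg(element || '{"myArray":[]}')
    FROM jsonb_array_elements(myColumn -> 'A') element
));

SELECT myColumn FROM myTable;
-- {"A": [{"index": "1", "myArray": []}, {"index": "2", "myArray": []}], "B": [{"index": "3"}, {"index": "4"}]}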
So I've come across an issue where I have to migrate data from one column to a "clone" of itself with a different jsonb schema -> I need to parse the json from
["keynamed": [...{"type": "type_info", "value": "value_in_here"}]]
into a plain key:value object - dictionary-like, e.g. {"type_info": "value_in_here", ...}
So far I've tried subqueries with json functions plus a CASE expression to map "type" to "type_info", and then jsonb_build_object(), but this takes data from the whole table, and I need it to run per row as part of the update. Is there anything simpler than doing N subqueries? The closest I've come is:
select
    jsonb_object_agg(t.k, t.v)::jsonb as _json
from
    (
        select
            jsonb_build_object(type_, _value) as _json
        from
            (
                select
                    _value,
                    CASE _type
                        ...
                    END type_
                from
                    (
                        select
                            (datasets ->> 'type') as _type,
                            datasets -> 'value' as _value
                        from
                            (
                                select
                                    jsonb_array_elements("values" -> 'keynamed') as datasets
                                from
                                    "table"
                            ) s
                    ) s
            ) s
    ) s,
    jsonb_each(_json) as t(k, v);
But I have no idea how to make it row-specific and apply it in a simple update like:
UPDATE "table"
SET new_field = (subquery with parsed dict in json)
Any ideas/tips on how to solve this with plain SQL in PostgreSQL, without any external support?
The expected output of the table would be:
id | old_value                                                           | new_value
---+---------------------------------------------------------------------+-------------------------------------
 1 | ["keynamed": [...{"type": "type_info", "value": "value_in_here"}]] | {"type_info": "value_in_here", ...}
According to the Postgres documentation, you can UPDATE one table FROM another using a join pattern (see the UPDATE documentation).
Sample:
UPDATE accounts SET contact_first_name = first_name,
contact_last_name = last_name
FROM salesmen WHERE salesmen.id = accounts.sales_id;
If I understand correctly, the query below can help you, but I can't test it because I don't have sample data, so I don't know whether it has syntax errors or not.
update "table" t
set new_value = tmp._json
from (
    select
        id,
        jsonb_object_agg(t.k, t.v)::jsonb as _json
    from
        (
            select
                id,
                jsonb_build_object(type_, _value) as _json
            from
                (
                    select
                        id,
                        _value,
                        CASE _type
                            ...
                        END type_
                    from
                        (
                            select
                                id,
                                (datasets ->> 'type') as _type,
                                datasets -> 'value' as _value
                            from
                                (
                                    select
                                        id,
                                        jsonb_array_elements("values" -> 'keynamed') as datasets
                                    from
                                        "table"
                                ) s
                        ) s
                ) s
        ) s,
        jsonb_each(_json) as t(k, v)
    group by id
) tmp
where tmp.id = t.id;
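If it helps to see the pattern without the question's intermediate layers, here is the same idea as a minimal, runnable sketch (the demo table, ids, and keys are made up for illustration):

create table demo (id int primary key, old_value jsonb, new_value jsonb);
insert into demo values
    (1, '{"keynamed": [{"type": "a", "value": "x"}, {"type": "b", "value": "y"}]}', null);

-- Per row: expand the array, turn each {"type": ..., "value": ...} pair into
-- one key/value of a flat object, and join the result back on id
update demo d
set new_value = tmp._json
from (
    select id, jsonb_object_agg(e ->> 'type', e -> 'value') as _json
    from demo, jsonb_array_elements(old_value -> 'keynamed') as e
    group by id
) tmp
where tmp.id = d.id;

-- new_value for id 1 is now {"a": "x", "b": "y"}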
I have a table with a field called "data" which is of JSONB type. The content of "data" is an object with one of the fields called "associated_emails", which is an array of strings.
I need to update the existing table so that the content of "associated_emails" is all lower-case. How can I achieve that? This is my attempt so far (it triggers the error ERROR: cannot extract elements from a scalar):
update mytable my
set
"data" = safe_jsonb_set(
my."data",
'{associated_emails}',
to_jsonb(
lower(
(
SELECT array_agg(x) FROM jsonb_array_elements_text(
coalesce(
my."data"->'associated_emails',
'{}'::jsonb
)
) t(x)
)::text[]::text
)::text[]
)
)
where
my.mytype = 'something';
You would want to use JSONB_SET and UPDATE the column with something like the following:
UPDATE jsonb_test
SET data = JSONB_SET(data, '{associated_emails}',
    LOWER(data ->> 'associated_emails')::JSONB);
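If you'd rather lowercase each element individually instead of lowercasing the array's whole text representation, here is a sketch that expands and re-aggregates the array; the id primary key used for the join is an assumption, since the question doesn't show the table definition:

UPDATE mytable my
SET "data" = jsonb_set(my."data", '{associated_emails}', sub.lowered)
FROM (
    -- Expand the array to text elements, lowercase each one, re-aggregate
    SELECT id, jsonb_agg(lower(e)) AS lowered
    FROM mytable, jsonb_array_elements_text("data" -> 'associated_emails') AS e
    WHERE mytype = 'something'
    GROUP BY id
) sub
WHERE sub.id = my.id AND my.mytype = 'something';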
I've got a JSON column in Postgres, that contains data in the following structure
{
"listings": [{
"id": "KTyneMdrAhAEKyC9Aylf",
"active": true
},
{
"id": "ZcjK9M4tuwhWWdK8WcfX",
"active": false
}
]
}
I need to do a few things, all of which I am unsure how to do:
Add a new object to the listings array
Remove an object from the listings array based on its id
Update an object in the listings array based on its id
I am also using Sequelize, if there are any built-in functions to do this (I can't see anything obvious).
Insert (jsonb_insert()):
UPDATE mytable
SET mydata = jsonb_insert(mydata, '{listings, 0}', '{"id":"foo", "active":true}');
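jsonb_insert() inserts before the element at the given index by default. To append at the end instead, you can address the last element with index -1 and set the optional fourth argument, insert_after, to true:

-- Append after the last element of the array instead of prepending
UPDATE mytable
SET mydata = jsonb_insert(mydata, '{listings, -1}', '{"id":"bar", "active":true}', true);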
Update (expand array, change value with jsonb_set(), reaggregate):
UPDATE mytable
SET mydata = jsonb_set(mydata, '{listings}', s.a)
FROM
(
SELECT
jsonb_agg(
CASE WHEN elems ->> 'id' = 'foo' THEN jsonb_set(elems, '{active}', 'false')
ELSE elems
END
) as a
FROM
mytable,
jsonb_array_elements(mydata -> 'listings') AS elems
) s;
Delete (expand array, filter relevant elements, reaggregate):
UPDATE mytable
SET mydata = jsonb_set(mydata, '{listings}', s.a)
FROM
(
SELECT
jsonb_agg(elems) as a
FROM
mytable,
jsonb_array_elements(mydata -> 'listings') AS elems
WHERE elems ->> 'id' != 'foo'
) s;
Updating and deleting can be done in one line like the insert, but in that case you have to address the array element by index instead of by a value. If you want to change an array element identified by a value, you need to read it first, which is only possible by expanding the array as above.
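For illustration, the one-line index-based variants of those two operations (assuming the element you care about happens to sit at index 0):

-- Update by index: flip "active" on the first array element
UPDATE mytable
SET mydata = jsonb_set(mydata, '{listings, 0, active}', 'false');

-- Delete by index: remove the first array element with the #- operator
UPDATE mytable
SET mydata = mydata #- '{listings, 0}';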
I have a table with a jsonb column, and the documents look like this (simplified):
{
"a": 1,
"rg": [
{
"rti": 2
}
]
}
I want to filter all the rows which have an 'rg' field with at least one 'rti' field in the array.
My current solution is:
log ->> 'rg' ilike '%rti%'
Is there another approach? A faster solution probably exists.
Another approach would be applying jsonb_each to the jsonb object and then jsonb_array_elements_text to the value extracted by jsonb_each:
select id, js_value2
from
(
select (js).value as js_value, jsonb_array_elements_text((js).value) as js_value2,id
from
(
select jsonb_each(log) as js, id
from tab
) q
where (js).key = 'rg'
) q2
where js_value2 like '%rti%';
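For completeness, on PostgreSQL 12 or later the same filter can be written with a JSON path predicate, which avoids expanding anything (tab, log, and id are the names from the question):

-- True when any element of the top-level "rg" array has an "rti" key
select id
from tab
where jsonb_path_exists(log, '$.rg[*].rti');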
I'm getting to grips with the JSONB functionality in Postgres >= 9.5 (and loving it) but have hit a stumbling block. I've read about the ability to concatenate JSON fields, so '{"a":1}' || '{"b":2}' creates {"a":1,"b":2}, but I'd like to do this across the same field in multiple rows. e.g.:
select row_concat_??(data) from table where field = 'value'
I've discovered the jsonb_object_agg function which sounds like it would be what I want, except that the docs show it taking multiple arguments, and I only have one.
Any ideas how I would do this? jsonb_agg creates an array successfully so it feels like I'm really close.
After some digging around with custom aggregates in Postgres, I have the following:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
    SFUNC = jsonb_concat,
    STYPE = jsonb,
    INITCOND = '{}'
);
Which is then usable as:
SELECT group_id, jsonb_merge(data) FROM table GROUP BY group_id
Use jsonb_each():
with data(js) as (
values
('{"a": 1}'::jsonb),
('{"b": 2}')
)
select jsonb_object_agg(key, value)
from data
cross join lateral jsonb_each(js);
jsonb_object_agg
------------------
{"a": 1, "b": 2}
(1 row)
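One caveat with this approach: when the same key appears in several rows, jsonb_object_agg keeps the value from the last input row, so if the merge order matters you should make it explicit. A sketch; the priority column is an assumption added purely to drive the ordering:

with data(priority, js) as (
values
    (1, '{"a": 1}'::jsonb),
    (2, '{"a": 2, "b": 3}')
)
-- ORDER BY inside the aggregate fixes which row "wins" on duplicate keys
select jsonb_object_agg(key, value order by priority)
from data
cross join lateral jsonb_each(js);
 jsonb_object_agg
------------------
 {"a": 2, "b": 3}
(1 row)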