PostgreSQL: How to increment a value in a jsonb field by 1

In PostgreSQL 9.6, there is a table with a jsonb column named data, which contains a count field.
How can I increase data->>'count' by 1 in a single SQL statement, like $inc in MongoDB?

This is ugly but works. I'm just figuring this out now by reading the documentation, so there may very well be a better way of doing this.
Let's start with a simple table:
create table table1 (data jsonb);
Insert some JSON:
insert into table1 (data) values ('{"name": "example", "count": 0}');
Now, we want to update the value of the count key in the data column. Assuming that you have pg 9.5 or later, you can use the || concatenation operator to merge two jsonb objects, like this:
sandbox=# select data || '{"count": 1}' as data from table1;
data
---------------------------------
{"name": "example", "count": 1}
So we know how to update a JSON key. But in the above example I'm using a static value in the replacement, while we actually want "one more than the current value of count". We can use the concatenation operator with strings to build the necessary JSON:
sandbox=# select '{"count": ' || ((data->>'count')::int + 1) || '}' as count from table1;
count
--------------
{"count": 1}
Putting that together:
sandbox=# update table1 set data = data || ('{"count": ' || ((data->>'count')::int + 1) || '}')::jsonb ;
UPDATE 1
Which gets us:
sandbox=# select * from table1;
data
---------------------------------
{"name": "example", "count": 1}

Related

Search for string in jsonb values - PostgreSQL

For simplicity, a row of the table looks like this:
key: "z06khw1bwi886r18k1m7d66bi67yqlns",
reference_keys: {
    "KEY": "1x6t4y",
    "CODE": "IT137-521e9204-ABC-TESTE",
    "NAME": "A"
},
I have a jsonb object like this one {"KEY": "1x6t4y", "CODE": "IT137-521e9204-ABC-TESTE", "NAME": "A"} and I want to search for a query in the values of any key. If my query is something like '521e9204', I want it to return the rows whose reference_keys contains '521e9204' in any value. Basically, the keys don't matter for this scenario.
Note: the reference_keys column, and thus the jsonb object, is always flat (one level deep, with no nested objects).
I have tried a query like this:
SELECT * FROM table
LEFT JOIN jsonb_each_text(table.reference_keys) AS j(k, value) ON true
WHERE j.value LIKE '%521e9204%'
The problem is that it duplicates rows, one for every key in the json, which messes up the returned items.
I have also thought of doing something like this:
SELECT DISTINCT jsonb_object_keys(reference_keys) from table;
and then use a query like:
SELECT * FROM table
WHERE reference_keys->>'CODE' like '%521e9204%'
It seems like this would work but I really don't want to rely on this solution.
You can rewrite your JOIN to an EXISTS condition to avoid the duplicates:
SELECT t.*
FROM the_table t
WHERE EXISTS (SELECT *
              FROM jsonb_each_text(t.reference_keys) AS j(k, value)
              WHERE j.value LIKE '%521e9204%');
If you are using Postgres 12 or later, you can also use a JSON path query:
where jsonb_path_exists(reference_keys, 'strict $.** ? (@ like_regex "521e9204")')
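For context, here is that predicate in a full query; a sketch reusing the table alias from the EXISTS version above:
SELECT t.*
FROM the_table t
WHERE jsonb_path_exists(t.reference_keys,
                        'strict $.** ? (@ like_regex "521e9204")');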

How to filter rows in PostgreSQL by a field nested inside an array in a jsonb column?

I have a table with a jsonb column whose documents look like this (simplified):
{
"a": 1,
"rg": [
{
"rti": 2
}
]
}
I want to filter all the rows that have an 'rg' field where at least one element of the array contains an 'rti' field.
My current solution is
log->>'rg' ilike '%rti%'
Is there another approach? A faster solution probably exists.
Another approach would be to apply jsonb_each to the jsonb object and then jsonb_array_elements_text to the value extracted by jsonb_each:
select id, js_value2
from
(
    select (js).value as js_value,
           jsonb_array_elements_text((js).value) as js_value2,
           id
    from
    (
        select jsonb_each(log) as js, id
        from tab
    ) q
    where (js).key = 'rg'
) q2
where js_value2 like '%rti%';
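On Postgres 12 or later, a JSON path query can express the same check without unnesting at all; a sketch, assuming the column is named log and the table tab as above:
select id
from tab
where jsonb_path_exists(log, '$.rg[*].rti');
In the default lax mode this returns true as soon as any element of rg has an rti key, and simply yields false when rg is missing.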

postgres - syntax for updating a jsonb array

I'm struggling to find the right syntax for updating an array in a jsonb column in Postgres 9.6.6.
Given a column "comments" with this example content:
[
{
"Comment": "A",
"LastModified": "1527579949"
},
{
"Comment": "B",
"LastModified": "1528579949"
},
{
"Comment": "C",
"LastModified": "1529579949"
}
]
I want to append Z to each comment (giving AZ, BZ, CZ).
I know I need to use something like jsonb_set(comments, '{"Comment"}', ...)
Any hints on finishing this off?
Thanks.
Try:
UPDATE elbat
   SET comments = array_to_json(ARRAY(SELECT jsonb_set(x.original_comment,
                                                       '{Comment}',
                                                       concat('"',
                                                              x.original_comment->>'Comment',
                                                              'Z"')::jsonb)
                                        FROM (SELECT jsonb_array_elements(elbat.comments) original_comment) x))::jsonb;
It uses jsonb_array_elements() to get the array elements as a set, applies the change to them using jsonb_set(), and transforms the result back to an array and then to json with array_to_json().
But that's an awful lot of work. Maybe there is a more elegant solution that I didn't find. But since your JSON seems to have a fixed schema anyway, I'd recommend a redesign to do it the relational way: a simple table for the comments plus a linking table for the objects the comments are on. The change would have been very easy in such a model; see the sketch below.
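For illustration, a minimal sketch of such a relational layout; the table and column names here are hypothetical:
-- one row per comment; object_id links the comment to whatever it belongs to
CREATE TABLE comment (
    comment_id    bigserial PRIMARY KEY,
    object_id     integer NOT NULL,
    comment       text NOT NULL,
    last_modified timestamptz NOT NULL DEFAULT now()
);
-- the change from the question becomes a one-liner:
UPDATE comment SET comment = comment || 'Z';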
Find a query returning the expected result:
select jsonb_agg(value || jsonb_build_object('Comment', value->>'Comment' || 'Z'))
from my_table
cross join jsonb_array_elements(comments);
jsonb_agg
-----------------------------------------------------------------------------------------------------------------------------------------------------
[{"Comment": "AZ", "LastModified": "1527579949"}, {"Comment": "BZ", "LastModified": "1528579949"}, {"Comment": "CZ", "LastModified": "1529579949"}]
(1 row)
Create a simple SQL function based on the above query:
create or replace function update_comments(jsonb)
returns jsonb language sql as $$
select jsonb_agg(value || jsonb_build_object('Comment', value->>'Comment' || 'Z'))
from jsonb_array_elements($1)
$$;
Use the function:
update my_table
set comments = update_comments(comments);

Postgres jsonb search in array with greater operator (with jsonb_array_elements)

I tried searching for a solution but didn't find anything for my case...
Here is the database declaration (simplified):
CREATE TABLE documents (
document_id int4 NOT NULL GENERATED BY DEFAULT AS IDENTITY,
data_block jsonb NULL
);
And here is an example of the inserts:
INSERT INTO documents (document_id, data_block)
VALUES(878979,
       '{"COMMONS": {"DATE": {"value": "2017-03-11"}},
         "PAYABLE_INVOICE_LINES": [
             {"AMOUNT": {"value": 52408.53}},
             {"AMOUNT": {"value": 654.23}}
         ]}');
INSERT INTO documents (document_id, data_block)
VALUES(977656,
       '{"COMMONS": {"DATE": {"value": "2018-03-11"}},
         "PAYABLE_INVOICE_LINES": [
             {"AMOUNT": {"value": 555.10}}
         ]}');
I want to find all documents where at least one of the PAYABLE_INVOICE_LINES has a value greater than 1000.00.
My query is
select *
from documents d
cross join lateral jsonb_array_elements(d.data_block -> 'PAYABLE_INVOICE_LINES') as pil
where (pil->'AMOUNT'->>'value')::decimal >= 1000
But, as I want to limit the result to 50 documents, I have to group on document_id and limit to 50.
With millions of documents, this query is very expensive: 10 seconds with 1 million rows.
Do you have any ideas for better performance?
Thanks
Instead of cross join lateral use where exists:
select *
from documents d
where exists (
    select 1
    from jsonb_array_elements(d.data_block -> 'PAYABLE_INVOICE_LINES') as pil
    where (pil->'AMOUNT'->>'value')::decimal >= 1000)
limit 50;
Update
And yet another method, more complex but also much more efficient.
Create a function that returns the max value from your JSONB data (it must be declared IMMUTABLE so it can be used in an index), like this:
create function fn_get_max_PAYABLE_INVOICE_LINES_value(jsonb)
returns decimal language sql immutable as $$
    select max((pil->'AMOUNT'->>'value')::decimal)
    from jsonb_array_elements($1 -> 'PAYABLE_INVOICE_LINES') as pil
$$;
Create an index on this function:
create index idx_max_PAYABLE_INVOICE_LINES_value
on documents(fn_get_max_PAYABLE_INVOICE_LINES_value(data_block));
Use function in your query:
select *
from documents d
where fn_get_max_PAYABLE_INVOICE_LINES_value(data_block) > 1000
limit 50;
In this case the index will be used and the query will be much faster on large amounts of data.
PS: Usually LIMIT only makes sense in combination with ORDER BY.
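On Postgres 12 or later you could also phrase the filter as a JSON path predicate; a sketch (note that GIN indexes only accelerate equality-style clauses in path queries, so for this range condition the function-based index above remains the better option):
select *
from documents d
where d.data_block @? '$.PAYABLE_INVOICE_LINES[*] ? (@.AMOUNT.value >= 1000)'
limit 50;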
Grouping and limiting is easy enough:
select document_id
from documents d
cross join lateral
    jsonb_array_elements(d.data_block -> 'PAYABLE_INVOICE_LINES') as pil
where (pil->'AMOUNT'->>'value')::decimal >= 1000
group by document_id
limit 50
If you query this more often, you could store the document ids and invoice lines in a separate table. When you're adding, modifying or deleting documents, you'd have to keep the separate table up to date too, but querying a regular table is much faster than querying JSON columns; see the sketch below.
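A sketch of what that separate table could look like; the names are hypothetical, it assumes document_id is the primary key of documents, and the application has to keep it in sync:
-- one row per invoice line
CREATE TABLE invoice_lines (
    document_id int NOT NULL REFERENCES documents (document_id),
    amount      numeric NOT NULL
);
CREATE INDEX ON invoice_lines (amount);
-- the search becomes a plain indexed query
SELECT DISTINCT document_id
FROM invoice_lines
WHERE amount >= 1000
LIMIT 50;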

Postgres: concatenate JSONB values across rows?

I'm getting to grips with the JSONB functionality in Postgres >= 9.5 (and loving it) but have hit a stumbling block. I've read about the ability to concatenate JSON fields, so '{"a":1}' || '{"b":2}' creates {"a":1,"b":2}, but I'd like to do this across the same field in multiple rows, e.g.:
select row_concat_??(data) from table where field = 'value'
I've discovered the jsonb_object_agg function, which sounds like what I want, except that the docs show it taking multiple arguments and I only have one.
Any ideas how I would do this? jsonb_agg creates an array successfully so it feels like I'm really close.
After some digging around with custom aggregates in Postgres, I have the following:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
    SFUNC = jsonb_concat,
    STYPE = jsonb,
    INITCOND = '{}'
);
Which is then usable as:
SELECT group_id, jsonb_merge(data) FROM table GROUP BY group_id
Use jsonb_each():
with data(js) as (
    values
        ('{"a": 1}'::jsonb),
        ('{"b": 2}')
)
select jsonb_object_agg(key, value)
from data
cross join lateral jsonb_each(js);
jsonb_object_agg
------------------
{"a": 1, "b": 2}
(1 row)
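Applied to the shape of the original question, that looks like the sketch below; the_table stands in for the real table name (table itself is a reserved word). Note that with jsonb_object_agg() a duplicate key keeps the value from the last row aggregated:
select jsonb_object_agg(key, value) as data
from the_table
cross join lateral jsonb_each(data)
where field = 'value';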