I would like to design a query that combines two jsonb values with an unknown number/set of keys in PostgreSQL in a controlled manner. The jsonb operator || almost exactly fits my purpose, but for one particular key I would like the two values concatenated and separated by a comma, rather than having the second jsonb's value override the first's. For example:
'{"a":"foo", "b":"one", "special":"comma"}'::jsonb || '{"a":"bar", "special":"separated"}'::jsonb → '{"a":"bar", "b":"one", "special":"comma,separated"}'
My current query is the following:
INSERT INTO table AS t (col1, col2, col3_jsonb)
VALUES ('first', 'second', '{"a":"bar", "special":"separated"}'::jsonb)
ON CONFLICT ON CONSTRAINT unique_entries DO UPDATE
SET col3_jsonb = t.col3_jsonb || excluded.col3_jsonb
RETURNING id;
which results in a col3_jsonb value whose special key is set to separated rather than the desired comma,separated. I understand that this is the concatenation operator working as documented, but I am not sure how to treat one key of the jsonb differently, other than perhaps trying to pull the special values out with WITH clauses elsewhere in the query. Any insights or tips would be hugely appreciated!
with t(a,b) as (values(
'{"a":"foo", "b":"one", "special":"comma"}'::jsonb,
'{"a":"bar", "special":"separated"}'::jsonb))
select
a || jsonb_set(b, '{special}',
to_jsonb(concat_ws(',', nullif(a->>'special', ''), nullif(b->>'special', ''))))
from t;
┌────────────────────────────────────────────────────────┐
│ ?column? │
├────────────────────────────────────────────────────────┤
│ {"a": "bar", "b": "one", "special": "comma,separated"} │
└────────────────────────────────────────────────────────┘
The nullif() and concat_ws() functions are needed for cases where one or both "special" values are missing/null/empty.
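Plugged back into the original upsert, that technique would look something like this (a sketch using the table, constraint, and column names from the question, untested against a real schema):
INSERT INTO table AS t (col1, col2, col3_jsonb)
VALUES ('first', 'second', '{"a":"bar", "special":"separated"}'::jsonb)
ON CONFLICT ON CONSTRAINT unique_entries DO UPDATE
SET col3_jsonb = t.col3_jsonb || jsonb_set(excluded.col3_jsonb, '{special}',
      to_jsonb(concat_ws(',',
        nullif(t.col3_jsonb->>'special', ''),
        nullif(excluded.col3_jsonb->>'special', ''))))
RETURNING id;
(If neither side has a "special" key at all, this sets it to an empty string; guard with a CASE if that matters.)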
You can use jsonb_each on the two values followed by jsonb_object_agg to put them back into an object:
…
SET col3_jsonb = (
  SELECT jsonb_object_agg(
           key,
           COALESCE(to_jsonb((old.value->>0) || ',' || (new.value->>0)), new.value, old.value)
         )
  FROM jsonb_each(example.old_obj) old
  FULL OUTER JOIN jsonb_each(example.new_obj) new USING (key)
)
Casting the arbitrary JSON value to a concatenable string required a trick (the ->> 0 above, which extracts a jsonb scalar as text). If you know that all your object properties have string values, you can simplify by using jsonb_each_text instead:
SELECT jsonb_object_agg(
         key,
         COALESCE(old.value || ',' || new.value, new.value, old.value)
       )
FROM jsonb_each_text(example.old_obj) old
FULL OUTER JOIN jsonb_each_text(example.new_obj) new USING (key)
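If only the "special" key should be comma-merged while every other key keeps the usual last-value-wins behavior of ||, a CASE inside the aggregate can restrict the concatenation. A sketch along the same lines, reusing the demo names above:
SELECT jsonb_object_agg(
         key,
         CASE
           WHEN key = 'special'
             THEN COALESCE(old.value || ',' || new.value, new.value, old.value)
           ELSE COALESCE(new.value, old.value)  -- second object wins for all other keys
         END
       )
FROM jsonb_each_text(example.old_obj) old
FULL OUTER JOIN jsonb_each_text(example.new_obj) new USING (key)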
Related
We are using PostgREST to automatically generate a REST API for a Postgres database. Our primary keys have an external representation that's different from how we store them internally. For simplicity's sake let's pretend the ids are stored as integers but we represent them as hexadecimal strings outwardly.
It's simple enough to get PostgREST to convert to the external representation for read operations:
CREATE DOMAIN hexid AS bigint;
CREATE TABLE fruits (
fruit_id hexid PRIMARY KEY,
name text
);
CREATE OR REPLACE VIEW api_fruits AS
SELECT to_hex(fruit_id) as fruit_id, name FROM fruits;
INSERT INTO fruits(fruit_id, name) VALUES('51955', 'avocado');
PostgREST generates the expected representation when we GET api_fruits:
[
{
"fruit_id": "caf3",
"name": "avocado"
}
]
But that's about as far as we get with this solution. It's a one-way transformation, so we won't be able to POST/PATCH records this way. The way PostgREST works is to transform such requests into equivalent INSERT and UPDATE statements. But this view, with its custom formatting, is not updatable. This is what would happen if we tried:
ERROR: cannot insert into column "fruit_id" of view "api_fruits"
DETAIL: View columns that are not columns of their base relation are not updatable.
STATEMENT: WITH pgrst_source AS (WITH pgrst_payload AS (SELECT $1::json AS json_data), pgrst_body AS ( SELECT CASE WHEN json_typeof(json_data) = 'array' THEN json_data ELSE json_build_array(json_data) END AS val FROM pgrst_payload) INSERT INTO "api_x"."api_fruits"("fruit_id", "name") SELECT "fruit_id", "name" FROM json_populate_recordset (null::"api_x"."api_fruits", (SELECT val FROM pgrst_body)) _ RETURNING "api_x"."api_fruits".*) SELECT '' AS total_result_set, pg_catalog.count(_postgrest_t) AS page_total, CASE WHEN pg_catalog.count(_postgrest_t) = 1 THEN coalesce((
WITH data AS (SELECT row_to_json(_) AS row FROM pgrst_source AS _ LIMIT 1)
SELECT array_agg(json_data.key || '=eq.' || json_data.value)
FROM data CROSS JOIN json_each_text(data.row) AS json_data
WHERE json_data.key IN ('')
), array[]::text[]) ELSE array[]::text[] END AS header, '' AS body, nullif(current_setting('response.headers', true), '') AS response_headers, nullif(current_setting('response.status', true), '') AS response_status FROM (SELECT * FROM pgrst_source) _postgrest_t
We can't INSERT into "View columns that are not columns of their base relation".
The obvious workaround is to serve fruit_id as a straight column, just an integer. With some pre- and postprocessing at the nginx level we can hex-encode it there (and hex-decode incoming ids). I'm wondering if we can do better than that, though. For large API operations, re-encoding the JSON will use a lot of memory and CPU time, and it seems so unnecessary.
It would have been great to be able to use a custom CREATE CAST to take the incoming hexadecimal strings and turn them back into integers, something like this:
CREATE CAST (json AS hexid) WITH FUNCTION json_to_hexid AS ASSIGNMENT;
But alas custom casts are ignored on CREATE DOMAIN types. And we can't make a true custom column type because our cloud Postgres host (Google Cloud SQL) doesn't allow custom extensions.
It feels like some combination of INSTEAD OF triggers or rules could work. But when using query parameters to filter results (e.g. selecting a fruit by id), I don't think there's an appropriate trigger to use. INSTEAD OF doesn't work for a straight SELECT, does it?
For example I've tested doing something like this to take care of INSERT and allow POST with PostgREST. It works:
CREATE OR REPLACE FUNCTION api_fruits_insert()
RETURNS trigger AS
$$
BEGIN
  -- decode the incoming hex string back to a bigint, e.g. 'caf3' -> 51955
  INSERT INTO fruits(fruit_id, name)
  VALUES (('x' || lpad(NEW.fruit_id, 16, '0'))::bit(64)::bigint::hexid, NEW.name);
  RETURN NEW;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER api_fruits_insert
INSTEAD OF INSERT
ON api_fruits
FOR EACH ROW
EXECUTE PROCEDURE api_fruits_insert();
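For reference, the two directions of the conversion used above (to_hex() in the view, the bit(64) trick in the trigger) round-trip like this:
SELECT to_hex(51955);                                    -- 'caf3'
SELECT ('x' || lpad('caf3', 16, '0'))::bit(64)::bigint;  -- 51955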
The trouble is in the WHERE clause. Let's PATCH api_fruits?fruit_id=in.(7b,caf3) with {"name": "pear"}. This works out of the box since the name column is updatable but look at the query:
WITH pgrst_source AS (WITH pgrst_payload AS (SELECT $1::json AS json_data), pgrst_body AS ( SELECT CASE WHEN json_typeof(json_data) = 'array' THEN json_data ELSE json_build_array(json_data) END AS val FROM pgrst_payload) UPDATE "api_x"."api_fruits" SET "name" = _."name" FROM (SELECT * FROM json_populate_recordset (null::"api_x"."api_fruits" , (SELECT val FROM pgrst_body) )) _ WHERE "api_x"."api_fruits"."fruit_id" = ANY ($2) RETURNING 1) SELECT '' AS total_result_set, pg_catalog.count(_postgrest_t) AS page_total, array[]::text[] AS header, '' AS body, nullif(current_setting('response.headers', true), '') AS response_headers, nullif(current_setting('response.status', true), '') AS response_status FROM (SELECT * FROM pgrst_source) _postgrest_t
DETAIL: parameters: $1 = '{
"name": "pear"
}', $2 = '{7b,caf3}'
So we have essentially UPDATE api_fruits SET name='pear' WHERE fruit_id IN ('7b', 'caf3');. Surprisingly this works, but it's a full table scan, since Postgres has to evaluate to_hex(fruit_id) for each row while looking for matches. The same happens if we try to GET a record by fruit_id. How would we rewrite the WHERE clauses?
It really feels like some combination of just the right Postgres and PostgREST features should be able to get us to a point where it's all happening in Postgres without nginx's help and without excessive complexity. Any ideas?
I'm using PostgreSQL 11. I have a jsonb value which represents a row of a table; it looks like
{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin#ae.com",...,"thirdpartyauthenticationkey":{}}
Is there any method by which I could gather all the "values" of the jsonb into a string that is separated by ',' and without the keys?
The string I want to obtain with the jsonb above is like
(test, Root, 0, superadmin#ae.com, ..., {})
I need to keep the order of those values the same as the order of their keys in the jsonb. Could I do that with PostgreSQL?
You can use the jsonb_populate_record function (assuming your json data matches the users table). This forces the text value to follow the column order of your users table:
Schema (PostgreSQL v13)
CREATE TABLE users (
userid text,
rolename text,
loginerror int,
email text,
thirdpartyauthenticationkey json
);
Query #1
WITH d(js) AS (
VALUES
('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
SELECT jsonb_populate_record(null::users, js),
jsonb_populate_record(null::users, js)::text AS record_as_text,
pg_typeof(jsonb_populate_record(null::users, js)::text)
FROM d
;
 jsonb_populate_record              | record_as_text                     | pg_typeof
------------------------------------+------------------------------------+-----------
 (test,Root,0,superadmin#ae.com,{}) | (test,Root,0,superadmin#ae.com,{}) | text
 (other,User,324,nope#ae.com,{})    | (other,User,324,nope#ae.com,{})    | text
Note that if you're building this string only to insert it back into PostgreSQL, you don't need the text step at all, since the result of jsonb_populate_record will match your table:
Query #2
WITH d(js) AS (
VALUES
('{"userid":"test", "rolename":"Root", "loginerror":0, "email":"superadmin#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb),
('{"userid":"other", "rolename":"User", "loginerror":324, "email":"nope#ae.com", "thirdpartyauthenticationkey":{}}'::jsonb)
)
INSERT INTO users
SELECT (jsonb_populate_record(null::users, js)).*
FROM d;
Query #3
SELECT * FROM users;
 userid | rolename | loginerror | email             | thirdpartyauthenticationkey
--------+----------+------------+-------------------+-----------------------------
 test   | Root     | 0          | superadmin#ae.com | {}
 other  | User     | 324        | nope#ae.com       | {}
You can use jsonb_each_text() to get a set of text representations of the elements, string_agg() to aggregate them into a comma-separated string, and concat() to put that in parentheses.
SELECT concat('(', string_agg(value, ', '), ')')
FROM jsonb_each_text('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin#ae.com","thirdpartyauthenticationkey":{}}'::jsonb) jet (key, value);
You didn't provide the DDL and DML of the table the JSON may reside in (if it resides in one at all; that isn't clear from your question). The demonstration above therefore only uses the JSON you showed as a scalar. If you do have a table, you need to CROSS JOIN LATERAL and GROUP BY some key, as in the sketch below.
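A minimal sketch, assuming a hypothetical table users_json with a key column id and the JSON stored in a jsonb column js:
SELECT u.id,
       concat('(', string_agg(jet.value, ', '), ')') AS row_values
FROM users_json u
CROSS JOIN LATERAL jsonb_each_text(u.js) jet (key, value)
GROUP BY u.id;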
Edit:
If you need to be sure the order is retained and you don't have it defined in a table's structure as @Marth's answer assumes, then you can of course extract every value manually in the order you need them.
SELECT concat('(',
concat_ws(', ',
j->>'userid',
j->>'rolename',
j->>'loginerror',
j->>'email',
j->>'thirdpartyauthenticationkey'),
')')
FROM (VALUES ('{"userid":"test","rolename":"Root","loginerror":0,"email":"superadmin#ae.com","thirdpartyauthenticationkey":{}}'::jsonb)) v (j);
In my Postgres database, one of the table columns has the jsonb datatype. In that column I am storing a JSON array. Now I want to remove or modify a specific JSON object inside the array.
My JSON array looks like
[
{
"ModuleId": 1,
"ModuleName": "XYZ"
},
{
"ModuleId": 2,
"ModuleName": "ABC"
}
]
Now, I want to perform two operations:
How can I remove the JSON object from the above array having ModuleId as 1?
How can I modify the JSON object i.e. change the ModuleName as 'CBA' whose ModuleId is 1?
Is there a way to perform these operations directly on the JSON array?
Note: Postgres version is 12.0
Both problems require unnesting and aggregating back the (modified) JSON elements. For both problems I would create a function to make that easier to use.
create function remove_element(p_value jsonb, p_to_remove jsonb)
returns jsonb
as
$$
select jsonb_agg(t.element order by t.idx)
from jsonb_array_elements(p_value) with ordinality as t(element, idx)
where not t.element @> p_to_remove;
$$
language sql
immutable;
The function can be used like this, e.g. in an UPDATE statement:
update the_table
set the_column = remove_element(the_column, '{"ModuleId": 1}')
where ...
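A quick check of the function against the sample array from the question:
SELECT remove_element(
         '[{"ModuleId": 1, "ModuleName": "XYZ"},
           {"ModuleId": 2, "ModuleName": "ABC"}]'::jsonb,
         '{"ModuleId": 1}'::jsonb);
-- returns: [{"ModuleId": 2, "ModuleName": "ABC"}]
-- note: jsonb_agg over zero rows yields NULL, not '[]', if every element is removed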
For the second problem a similar function comes in handy.
create function change_value(p_value jsonb, p_what jsonb, p_new jsonb)
returns jsonb
as
$$
select jsonb_agg(
case
when t.element @> p_what then t.element || p_new
else t.element
end order by t.idx)
from jsonb_array_elements(p_value) with ordinality as t(element, idx);
$$
language sql
immutable;
The || operator will overwrite an existing key, so this effectively replaces the old name with the new name.
You can use it like this:
update the_table
set the_column = change_value(the_column, '{"ModuleId": 1}', '{"ModuleName": "CBA"}')
where ...;
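And a quick check of the second function against the same sample array:
SELECT change_value(
         '[{"ModuleId": 1, "ModuleName": "XYZ"},
           {"ModuleId": 2, "ModuleName": "ABC"}]'::jsonb,
         '{"ModuleId": 1}'::jsonb,
         '{"ModuleName": "CBA"}'::jsonb);
-- returns: [{"ModuleId": 1, "ModuleName": "CBA"}, {"ModuleId": 2, "ModuleName": "ABC"}]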
I think passing the JSON values is a bit more flexible than hardcoding the keys, which would make the use of the functions very limited. The first function could also be used to remove array elements by comparing multiple keys.
If you don't want to create the functions, replace each function call with the SELECT from the corresponding function body.
For both cases, consider using a subquery with WITH ORDINALITY to determine the index of the element whose ModuleId key equals 1.
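Both statements below assume a table shaped like this (hypothetical names, with the array in a jsonb column jsdata):
CREATE TABLE tab (jsdata jsonb);
INSERT INTO tab VALUES
('[{"ModuleId": 1, "ModuleName": "XYZ"}, {"ModuleId": 2, "ModuleName": "ABC"}]');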
For the first case, use the #- operator:
WITH s AS
(
SELECT ('{'||idx-1||'}')::text[] AS path, jsdata
FROM tab
CROSS JOIN jsonb_array_elements(jsdata)
WITH ORDINALITY arr(j,idx)
WHERE j->>'ModuleId'='1'
)
UPDATE tab
SET jsdata = s.jsdata #- path
FROM s
and for the second case, use the jsonb_set() function with the path coming from the subquery (with a multi-row table, you would also join s back to tab on its primary key in both statements):
WITH s AS
(
SELECT ('{'||idx-1||',ModuleName}')::text[] AS path
FROM tab
CROSS JOIN jsonb_array_elements(jsdata)
WITH ORDINALITY arr(j,idx)
WHERE j->>'ModuleId'='1'
)
UPDATE tab
SET jsdata = jsonb_set(jsdata,s.path,'"CBA"',false)
FROM s
I'm getting to grips with the JSONB functionality in Postgres >= 9.5 (and loving it) but have hit a stumbling block. I've read about the ability to concatenate JSON fields, so '{"a":1}' || '{"b":2}' creates {"a":1,"b":2}, but I'd like to do this across the same field in multiple rows. e.g.:
select row_concat_??(data) from table where field = 'value'
I've discovered the jsonb_object_agg function which sounds like it would be what I want, except that the docs show it taking multiple arguments, and I only have one.
Any ideas how I would do this? jsonb_agg creates an array successfully so it feels like I'm really close.
After some digging around with custom aggregates in Postgres, I have the following:
DROP AGGREGATE IF EXISTS jsonb_merge(jsonb);
CREATE AGGREGATE jsonb_merge(jsonb) (
    SFUNC = jsonb_concat,
    STYPE = jsonb,
    INITCOND = '{}'
);
Which is then usable as:
SELECT group_id, jsonb_merge(data) FROM table GROUP BY group_id
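Since || is last-key-wins, the merged result depends on the order in which rows are fed to the aggregate. An ORDER BY inside the aggregate call makes that deterministic; a sketch, assuming a hypothetical ordering column id:
SELECT group_id, jsonb_merge(data ORDER BY id) FROM table GROUP BY group_id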
Use jsonb_each():
with data(js) as (
values
('{"a": 1}'::jsonb),
('{"b": 2}')
)
select jsonb_object_agg(key, value)
from data
cross join lateral jsonb_each(js);
jsonb_object_agg
------------------
{"a": 1, "b": 2}
(1 row)
I have the following CASE statement that I need to build dynamically:
case
    when cola between '2001-01-01' and '2001-01-05' then 'G1'
    when cola between '2001-01-10' and '2001-01-15' then 'G2'
    when cola between '2001-01-20' and '2001-01-25' then 'G3'
    when cola between '2001-02-01' and '2001-02-05' then 'G4'
    when cola between '2001-02-10' and '2001-02-15' then 'G5'
    else ''
end
Note: I want to build the CASE statement dynamically because the dates and names are passed in as parameters and may change.
Declare
dates varchar = '2001-01-01to2001-01-05,2001-01-10to2001-01-15,
2001-01-20to2001-01-25,2001-02-01to2001-02-05,
2001-02-10to2001-02-15';
names varchar = 'G1,G2,G3,G4,G5';
The values in the variables may change as per the requirements, so the CASE statement should be built dynamically, without using a loop.
You may not need any function for this, just join to a mapping data-set:
with cola_map(low, high, value) as (
values(date '2001-01-01', date '2001-01-05', 'G1'),
('2001-01-10', '2001-01-15', 'G2'),
('2001-01-20', '2001-01-25', 'G3'),
('2001-02-01', '2001-02-05', 'G4'),
('2001-02-10', '2001-02-15', 'G5')
-- you can include as many rows, as you want
)
select table_name.*,
coalesce(cola_map.value, '') -- else branch from case expression
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
If your date ranges could collide, you can use DISTINCT ON or GROUP BY to avoid row duplication; see the sketch below.
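For example, keeping only the first matching range per row could look like this sketch, replacing the final SELECT of the query above (and assuming table_name has a primary key id):
select distinct on (table_name.id)
       table_name.*,
       coalesce(cola_map.value, '')
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
order by table_name.id, cola_map.low;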
Note: you could use a simple sub-select too; I used a CTE because it's more readable.
Edit: passing this data as a single parameter can be achieved with a multi-dimensional array (or an array of row values, but that requires a distinct, predefined composite type).
Passing arrays as parameters can depend on the actual client (& driver) you use, but in general, you can use the array's input representation:
-- sql
with cola_map(low, high, value) as (
  select a.arr[i][1]::date, a.arr[i][2]::date, a.arr[i][3]
  from (select ?::text[][] as arr) a
  cross join generate_subscripts(a.arr, 1) i
)
select table_name.*,
coalesce(cola_map.value, '') -- else branch from case expression
from table_name
left join cola_map on table_name.cola between cola_map.low and cola_map.high
// client pseudo code
query = db.prepare(sql);
query.bind(1, "{{2001-01-10,2001-01-15,G2},{2001-01-20,2001-01-25,G3}}");
query.execute();
Passing each chunk of data separately is also possible with some clients (or with some abstractions), but this highly depends on the driver/ORM/etc. you use.