PostgreSQL transform value from jsonb column to other column

I have a PostgreSQL database v10 with the following data:
CREATE TABLE test (
id INT,
custom_fields jsonb not null default '{}'::jsonb,
guest_profile_id character varying(100)
);
INSERT INTO test (id, custom_fields) VALUES (1, '[{"protelSurname": "Smith", "servicio_tags": ["protel-info"], "protelUniqueID": "[{\"ID\":\"Test1-ID\",\"Type\":\"21\",\"ID_Context\":\"GHA\"},{\"ID\":\"4842148\",\"Type\":\"1\",\"ID_Context\":\"protelIO\"}]", "protelGivenName": "Seth"}, {"value": "Test", "display_name": "Traces", "servicio_tags": ["trace"]}, {...}]');
INSERT INTO test (id, custom_fields) VALUES (2, '[{"protelSurname": "Smith", "servicio_tags": ["protel-info"], "protelUniqueID": "[{\"ID\":\"Test2-ID\",\"Type\":\"21\",\"ID_Context\":\"GHA\"},{\"ID\":\"4842148\",\"Type\":\"1\",\"ID_Context\":\"protelIO\"}]", "protelGivenName": "Seth"}, {"value": "Test2", "display_name": "Traces", "servicio_tags": ["trace"]}, {...}]');
INSERT INTO test (id, custom_fields) VALUES (3, '[{"value": "Test3-ID", "display_name": "Test", "servicio_tags": ["person-name"]}, {...}]');
INSERT INTO test (id, custom_fields) VALUES (4, '[{"value": "Test4-ID", "display_name": "Test", "servicio_tags": ["profile-id"]}, {...}]');
There are way more records in the real table.
Goal: I want to transfer the TestX-ID values into the guest_profile_id column in the same row, and only those values, not the other JSONB objects or values.
My try:
do $$
declare
colvar varchar;
begin
select x ->> 'ID' from (select jsonb_array_elements(f) from (
select (field ->>'protelUniqueID')::jsonb f
FROM guest_group gg,
lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["protel-info"]}'::jsonb
) d(f)) dd(x)
where x->>'ID_Context'='protelIO'
into colvar;
raise notice 'colvar: %', colvar;
end
$$;
execute format('UPDATE guest_group SET guest_profile_id = %s', colvar);
My Result: It only takes Test1-ID and stores it in all rows in the guest_profile_id column.
My Problem: I want to store each TestX-ID in the custom_fields column into the guest_profile_id column in the same row.
My assumption: I need to add a loop to this query. If the query above does not find any value, the loop should try the next query, e.g.:
SELECT field ->>'value'
FROM guest_group gg
cross join lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["profile-id"]}'::jsonb
And then the next:
SELECT field ->>'value'
FROM guest_group gg
cross join lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["person-name"]}'::jsonb
When all TestX-ID values are copied into the guest_profile_id column in the same row, the goal is reached.
How can I put all this together? Thanks a lot for the help.

I want to store each TestX-ID in the custom_fields column into the guest_profile_id column in the same row.
No need for PL/pgSQL, loops or dynamic SQL. Just use a single query of the form
UPDATE guest_group
SET guest_profile_id = (/* complex expression */);
In your case, with that complex expression it amounts to
UPDATE guest_group
SET guest_profile_id = (
SELECT x ->> 'ID'
FROM jsonb_array_elements(custom_fields) AS field,
jsonb_array_elements((field ->> 'protelUniqueID')::jsonb) AS dd(x)
WHERE value @> '{"servicio_tags": ["protel-info"]}'::jsonb
AND x->>'ID_Context' = 'protelIO'
);
If the query above does not find any value, it should try the next query
You can use the COALESCE function for that, or add some OR conditions to your query, or even use a UNION. Alternatively, add a WHERE guest_profile_id IS NULL to the update statement to exclude those rows that already have a value, and do multiple successive updates.
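For example, the last variant (multiple successive updates, each skipping rows that already have a value) could look like the sketch below. It simply reuses the three subqueries from the question against the same guest_group table, with a LIMIT 1 as a safeguard in case a row matches more than one element; the COALESCE variant is shown in the related question further down.
-- First pass: rows tagged protel-info.
UPDATE guest_group
SET guest_profile_id = (
    SELECT x ->> 'ID'
    FROM jsonb_array_elements(custom_fields) AS field,
         jsonb_array_elements((field ->> 'protelUniqueID')::jsonb) AS dd(x)
    WHERE value @> '{"servicio_tags": ["protel-info"]}'::jsonb
      AND x ->> 'ID_Context' = 'protelIO'
    LIMIT 1
);
-- Second pass: only rows that are still NULL, tagged profile-id.
UPDATE guest_group
SET guest_profile_id = (
    SELECT field ->> 'value'
    FROM jsonb_array_elements(custom_fields) AS field
    WHERE value @> '{"servicio_tags": ["profile-id"]}'::jsonb
    LIMIT 1
)
WHERE guest_profile_id IS NULL;
-- Third pass: only rows that are still NULL, tagged person-name.
UPDATE guest_group
SET guest_profile_id = (
    SELECT field ->> 'value'
    FROM jsonb_array_elements(custom_fields) AS field
    WHERE value @> '{"servicio_tags": ["person-name"]}'::jsonb
    LIMIT 1
)
WHERE guest_profile_id IS NULL;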

Related

Query matching property in another table given a comma-separated string in JSONB

I would like to look up a property in another table B, where the source is part of a comma-separated string inside a JSONB column of table A.
create table option
(
optionid bigint not null primary key,
attributevalues jsonb default '{}'::jsonb
);
create table district
(
districtid bigint not null primary key,
uid varchar(11) not null,
name varchar(230) not null unique
);
INSERT into option values (1, '{"value": "N8UXIAycxy3,uVwyu3R4nZG,fuja8k8PCFO,y0eUmlYp7ey", "attribute": {"id": "K54wAf6EX0s"}}'::jsonb);
INSERT INTO district (districtid, uid, name) VALUES
(1, 'N8UXIAycxy3', 'district1'),
(2, 'uVwyu3R4nZG', 'district2'),
(3, 'fuja8k8PCFO', 'district3'),
(4, 'y0eUmlYp7ey', 'district4');
I can get all the items split by , but how do I "join" to look up the name (e.g. N8UXIAycxy3 --> district1)?
I tried to "join" in a traditional sense but this will not work as the district_uid is not accessible for the query as such:
SELECT UNNEST(STRING_TO_ARRAY(o.attributevalues #>> '{"K54wAf6EX0s", "value"}', ',')) AS district_uid
FROM option o
JOIN district d on district_uid = d.uid;
I would like to have the query result: district1,district2,district3,district4. Is this possible or do I need a loop?
DB Fiddle
You need to convert the comma-separated string, i.e. attributevalues->>'value', to an array:
select name
from option
cross join unnest(string_to_array(attributevalues->>'value', ',')) as district_uid
join district on uid = district_uid
DB fiddle.
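If the goal is the single string district1,district2,district3,district4 rather than one row per name, the same query can be wrapped in string_agg. A minimal sketch; the ORDER BY inside the aggregate is an assumption about the desired ordering:
select o.optionid,
       string_agg(d.name, ',' order by d.name) as district_names
from option o
cross join unnest(string_to_array(o.attributevalues ->> 'value', ',')) as district_uid
join district d on d.uid = district_uid
group by o.optionid;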

PostgreSQL using COALESCE or other Conditional Expressions to SET field

I have a PostgreSQL v10 DB with the following values:
CREATE TABLE test (
id INT,
custom_fields jsonb not null default '{}'::jsonb,
guest_profile_id character varying(100)
);
INSERT INTO test (id, custom_fields) VALUES (1, '[{"protelSurname": "Smith", "servicio_tags": ["protel-info"], "protelUniqueID": "[{\"ID\":\"Test1-ID\",\"Type\":\"21\",\"ID_Context\":\"GHA\"},{\"ID\":\"4842148\",\"Type\":\"1\",\"ID_Context\":\"protelIO\"}]", "protelGivenName": "Seth"}, {"value": "Test", "display_name": "Traces", "servicio_tags": ["trace"]}, {...}]');
INSERT INTO test (id, custom_fields) VALUES (2, '[{"protelSurname": "Smith", "servicio_tags": ["protel-info"], "protelUniqueID": "[{\"ID\":\"Test2-ID\",\"Type\":\"21\",\"ID_Context\":\"GHA\"},{\"ID\":\"4842148\",\"Type\":\"1\",\"ID_Context\":\"protelIO\"}]", "protelGivenName": "Seth"}, {"value": "Test2", "display_name": "Traces", "servicio_tags": ["trace"]}, {...}]');
INSERT INTO test (id, custom_fields) VALUES (3, '[{"value": "Test3-ID", "display_name": "Test", "servicio_tags": ["profile-id"]}, {...}]');
INSERT INTO test (id, custom_fields) VALUES (4, '[{"value": "Test4-ID", "display_name": "Test", "servicio_tags": ["person-name"]}, {...}]');
I have a query which works and saves values from the custom_fields column to the guest_profile_id column in the same row:
UPDATE guest_group
SET guest_profile_id = (
SELECT x ->> 'ID'
FROM jsonb_array_elements(custom_fields) AS field,
jsonb_array_elements((field ->> 'protelUniqueID') :: jsonb) AS dd(x)
WHERE value @> '{"servicio_tags": ["protel-info"]}'::jsonb
AND x->>'ID_Context' = 'protelIO'
);
But this only works for the first two rows. Therefore I want to use the following query snippets to copy Test3-ID in row 3 and Test4-ID in row 4 to the guest_profile_id column.
1.
SELECT field ->>'value'
cross join lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["profile-id"]}'::jsonb
2.
SELECT field ->>'value'
cross join lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["person-name"]}'::jsonb
My problem: I do not know how to use COALESCE or other conditional expressions to chain those small queries. It should be possible: if the first query returns NULL for a row, COALESCE should ignore that value and fall through to the next query snippet.
Desired result: all the TestX-ID values from the table above are copied to the guest_profile_id column in the same row.
My try:
UPDATE test
SET guest_profile_id = COALESCE((
SELECT x ->> 'ID'
FROM jsonb_array_elements(custom_fields) AS field,
jsonb_array_elements((field ->> 'protelUniqueID') :: jsonb) AS dd(x)
WHERE value @> '{"servicio_tags": ["protel-info"]}'::jsonb
AND x->>'ID_Context' = 'protelIO'),(
SELECT field ->>'value'
cross join lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["profile-id"]}'::jsonb),(
SELECT field ->>'value'
cross join lateral jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["person-name"]}'::jsonb));
Gives me:
ERROR: syntax error at or near "cross"
LINE 9: cross join lateral jsonb_array_elements(custom_fields) ...
Thanks a lot for the help!
Replacing the dangling cross join lateral in the second and third snippets with a proper FROM clause (the cause of the syntax error), adding a LIMIT 1 to each subquery, and wrapping each SELECT in its own pair of parentheses inside the COALESCE did the job:
guest_profile_id = COALESCE((first_select_query), ((second_select_query)), ((…)))
UPDATE test
SET guest_profile_id = COALESCE((
SELECT x ->> 'ID'
FROM jsonb_array_elements(custom_fields) AS field,
jsonb_array_elements((field ->> 'protelUniqueID') :: jsonb) AS dd(x)
WHERE value @> '{"servicio_tags": ["protel-info"]}'::jsonb
AND x->>'ID_Context' = 'protelIO' LIMIT 1),((
SELECT field ->>'value'
FROM jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["profile-id"]}'::jsonb LIMIT 1)), ((
SELECT field ->>'value'
FROM jsonb_array_elements(custom_fields) AS field
WHERE value @> '{"servicio_tags": ["person-name"]}'::jsonb LIMIT 1
)));
Here is the link to a fiddle: Query which works

Copy rows into same table, but change value of one field

I have a list of values:
(56957,85697,56325,45698,21367,56397,14758,39656)
and a 'template' row in a table.
I want to do this:
for value in valuelist:
{
insert into table1 (field1, field2, field3, field4)
select value1, value2, value3, (value)
from table1
where ID = (ID of template row)
}
I know how I would do this in code, like c# for instance, but I'm not sure how to 'loop' this while passing in a new value to the insert statement. (i know that code makes no sense, just trying to convey what I'm trying to accomplish.
There is no need to loop here; SQL is a set-based language, and you apply your operations to entire sets of data at once rather than looping through row by row.
insert statements can come from either an explicit list of values or from the result of a regular select statement, for example:
insert into table1(col1, col2)
select col3
,col4
from table2;
There is nothing stopping you from selecting your data from the same table you are inserting into, which will duplicate all your data:
insert into table1(col1, col2)
select col1
,col2
from table1;
If you want to edit one of these column values (say, by incrementing the value currently held), you simply apply this logic to your select statement and make sure the resulting dataset matches your target table in number of columns and data types:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1;
Optionally, if you only want to do this for a subset of those values, just add a standard where clause:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1
where col1 = <your value>;
Now if this isn't enough for you to work it out by yourself, you can join your dataset to your values list to get a version of the data to be inserted for each value in that list. Because you want each row to join to each value, you can use a cross join:
declare @v table(value int);
insert into @v values(56957),(85697),(56325),(45698),(21367),(56397),(14758),(39656);
insert into table1(col1, col2, value)
select t.col1
,t.col2
,v.value
from table1 as t
cross join @v as v;
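To keep only the template row from the question's pseudocode (rather than duplicating every row), a where clause can be added to the same statement. A sketch only, reusing the @v table variable from above; the field1 to field4 and ID column names come from the question, and @template_id is a hypothetical placeholder:
declare @template_id int = 1;  -- hypothetical placeholder: set to the ID of your template row

insert into table1(field1, field2, field3, field4)
select t.field1
      ,t.field2
      ,t.field3
      ,v.value               -- the new value for field4, one row per list entry
from table1 as t
cross join @v as v
where t.ID = @template_id;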

Select value from an enumerated list in PostgreSQL

I want to select from an enumeration that is not in the database.
E.g. SELECT id FROM my_table returns values like 1, 2, 3
I want to display 1 -> 'chocolate', 2 -> 'coconut', 3 -> 'pizza', etc. SELECT CASE works but is too verbose and hard to keep an overview of for many values. I am thinking of something like
SELECT id, array['chocolate','coconut','pizza'][id] FROM my_table
But I couldn't get it to work with arrays. Is there an easy solution, i.e. a simple query, not a PL/pgSQL script or something like that?
with food (fid, name) as (
values
(1, 'chocolate'),
(2, 'coconut'),
(3, 'pizza')
)
select t.id, f.name
from my_table t
join food f on f.fid = t.id;
or without a CTE (but using the same idea):
select t.id, f.name
from my_table t
join (
values
(1, 'chocolate'),
(2, 'coconut'),
(3, 'pizza')
) f (fid, name) on f.fid = t.id;
This is the correct syntax:
SELECT id, (array['chocolate','coconut','pizza'])[id] FROM my_table
But you should create a reference table with those values.
What about creating another table that enumerates all the cases, and doing a join?
CREATE TABLE table_case
(
case_id bigserial NOT NULL,
case_name character varying,
CONSTRAINT table_case_pkey PRIMARY KEY (case_id)
)
WITH (
OIDS=FALSE
);
and when you select from your table:
SELECT id, case_name FROM my_table
inner join table_case on case_id = my_table.id;

How to insert default values in SQL table?

I have a table like this:
create table table1 (field1 int,
field2 int default 5557,
field3 int default 1337,
field4 int default 1337)
I want to insert a row which has the default values for field2 and field4.
I've tried insert into table1 values (5,null,10,null) but it doesn't work and ISNULL(field2,default) doesn't work either.
How can I tell the database to use the default value for the column when I insert a row?
Best practice is to list your columns so you're independent of table changes (new columns, column order, etc.)
insert into table1 (field1, field3) values (5,10)
However, if you don't want to do this, use the DEFAULT keyword
insert into table1 values (5, DEFAULT, 10, DEFAULT)
Just don't include the columns that you want to use the default value for in your insert statement. For instance:
INSERT INTO table1 (field1, field3) VALUES (5, 10);
...will take the default values for field2 and field4, and assign 5 to field1 and 10 to field3.
This works if all the columns have associated defaults and one does not want to specify the column names:
insert into your_table
default values
Try it like this
INSERT INTO table1 (field1, field3) VALUES (5,10)
Then field2 and field4 should have default values.
I had a case where I had a very simple table, and I basically just wanted an extra row with just the default values. Not sure if there is a prettier way of doing it, but here's one way:
This sets every column in the new row to its default value:
INSERT INTO your_table VALUES ()
Note: This is extra useful for MySQL where INSERT INTO your_table DEFAULT VALUES does not work.
If your columns should not contain NULL values, you need to define the columns as NOT NULL as well; otherwise an explicitly passed-in NULL will be stored instead of the default, without producing an error.
If you don't pass in any value to these fields (which requires you to specify the fields that you do want to use), the defaults will be used:
INSERT INTO
table1 (field1, field3)
VALUES (5,10)
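To make the point about explicit NULLs concrete, here is a small sketch, assuming the table1 definition from the question:
-- Omitting field2 and field4 entirely: their defaults (5557 and 1337) are used.
INSERT INTO table1 (field1, field3) VALUES (5, 10);
-- Passing NULL explicitly: NULL is stored and the default is NOT applied,
-- and no error is raised unless the column is declared NOT NULL.
INSERT INTO table1 (field1, field2, field3, field4) VALUES (5, NULL, 10, NULL);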
You can write it this way:
GO
ALTER TABLE Table_name ADD
column_name decimal(18, 2) NOT NULL CONSTRAINT Constant_name DEFAULT 0
GO
ALTER TABLE Table_name SET (LOCK_ESCALATION = TABLE)
GO
COMMIT
To insert the default values you should simply omit them, like this:
Insert into Table (Field2) values(5)
All other fields will have NULL or their default values, if defined.
CREATE TABLE #dum (id int identity(1,1) primary key, def int NOT NULL default(5), name varchar(25))
-- this works
INSERT #dum (def, name) VALUES (DEFAULT, 'jeff')
SELECT * FROM #dum;
DECLARE @some int
-- this *doesn't* work and I think it should
INSERT #dum (def, name)
VALUES (ISNULL(@some, DEFAULT), 'george')
SELECT * FROM #dum;
CREATE PROC SP_EMPLOYEE --By Using TYPE parameter and CASE in Stored procedure
(@TYPE INT)
AS
BEGIN
IF @TYPE=1
BEGIN
SELECT DESIGID,DESIGNAME FROM GP_DESIGNATION
END
IF @TYPE=2
BEGIN
SELECT ID,NAME,DESIGNAME,
case D.ISACTIVE when 'Y' then 'ISACTIVE' when 'N' then 'INACTIVE' else 'not' end as ACTIVE
FROM GP_EMPLOYEEDETAILS ED
JOIN GP_DESIGNATION D ON ED.DESIGNATION=D.DESIGID
END
END