PostgreSQL not returning records just inserted

I am trying to insert (clone) some records in a table and need to get both the source ids and the ids that were generated. This simplified example demonstrates my issue: after the new records are created, referencing their ids in a SELECT produces no results, even though the records do get created and a subsequent SELECT on the table shows them. It feels like the insert and the select are happening in different transaction scopes.
CREATE TABLE tbl_value (
    id int4 NOT NULL GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    some_id INTEGER NOT NULL,
    value VARCHAR NOT NULL
);
INSERT INTO tbl_value (some_id, value)
VALUES (1000, 'value 1'), (1000, 'value 2'), (1000, 'value 3');
with
outer_input as
(
    select id, some_id, value from tbl_value where id in (1, 2)
),
inner_insert as
(
    INSERT INTO tbl_value(some_id, value)
    select 2000, value from outer_input
    returning id
)
select * from tbl_value v inner join inner_insert i on v.id = i.id;
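This is documented behavior, not a transaction issue: all sub-statements of a WITH query execute with the same snapshot, so the outer SELECT cannot see rows inserted by a CTE in the same statement, and RETURNING is the only way to pass the new rows out. A minimal sketch of the fix, returning the needed columns from the CTE itself:
with
outer_input as
(
    select id, some_id, value from tbl_value where id in (1, 2)
),
inner_insert as
(
    INSERT INTO tbl_value(some_id, value)
    select 2000, value from outer_input
    returning id, some_id, value
)
select * from inner_insert;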


How to pull out records based on array of values

Suppose the following structure:
CREATE SCHEMA IF NOT EXISTS my_schema;
CREATE TABLE IF NOT EXISTS my_schema.user (
    id SERIAL PRIMARY KEY,
    tag_id BIGINT NOT NULL
);
CREATE TABLE IF NOT EXISTS my_schema.conversation (
    id SERIAL PRIMARY KEY,
    user_ids BIGINT[] NOT NULL
);
INSERT INTO my_schema.user VALUES
    (1, 55555),
    (2, 77777);
INSERT INTO my_schema.conversation VALUES
    (1, '{1,2}');
I can pull out the my_schema.conversation records if I know the my_schema.user.id values:
SELECT *
FROM my_schema.conversation
WHERE user_ids @> '{1}';
The above works, but I need to use my_schema.user.tag_id instead of my_schema.user.id:
How can I do this?
You would have to join the two tables on the array values:
SELECT *
FROM my_schema.user u
JOIN my_schema.conversation c
  ON u.id = ANY(c.user_ids)
WHERE u.tag_id = 55555;
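If you only want the conversation rows rather than the joined user/conversation pairs, the same idea can be phrased with EXISTS; a sketch against the tables above:
SELECT c.*
FROM my_schema.conversation c
WHERE EXISTS (
    SELECT 1
    FROM my_schema.user u
    WHERE u.id = ANY(c.user_ids)
      AND u.tag_id = 55555
);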

Postgresql: 'upserting' into two tables using the same id with a unique constraint

I have two tables, one containing all the hot columns and one the static ones. The static table has a unique constraint. When the conflict on the unique constraint triggers, only the hot columns in the other table should be updated, using the id from the static table.
For better clarity some code:
CREATE TABLE tag (
    id bigserial PRIMARY KEY
  , key text
  , value text
  -- UNIQUE (key, value) -- ?
);
CREATE TABLE tag_hotcolumns (
    id bigserial PRIMARY KEY
  , hot text
  , stuff text
);
with s as (
    select id, "key", "value"
    from tag
    where key = 'key1' and value = 'value1'
), i as (
    insert into tag ("key", "value")
    select 'key1', 'value1'
    where not exists (select 1 from s)
    returning id
)
select id from i
union all
select id from s;
This block works fine, but I can't get the returned id into the insert statement for tag_hotcolumns...
I tried:
insert into tag_attributes (with s as (
    select id, "key", "value"
    from tag
    where key = 'key1' and value = 'value1'
), i as (
    insert into tag ("key", "value")
    select 'key1', 'value1'
    where not exists (select 1 from s)
    returning id
)
select id, 'hot1', 'stuff1'
from i
union all
select id
from s);
And that gives me:
"WITH clause containing a data-modifying statement must be at the top level
LINE 5: ), i as ("
Any help would be greatly appreciated :)
dwhitemv on Stack Exchange helped me solve this. You can find the solution here:
https://dbfiddle.uk/?rdbms=postgres_13&fiddle=f72cae495e6eed579d904a5c7b48f05b
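The error means the data-modifying CTEs have to be attached to the top-level statement, i.e. to the INSERT into the hot-columns table rather than to a subquery. A sketch of that shape (my reconstruction, not the linked fiddle verbatim, and it assumes tag_hotcolumns.id should reuse the tag id; under concurrent writers this select-or-insert pattern can still race, where INSERT ... ON CONFLICT is more robust):
with s as (
    select id
    from tag
    where key = 'key1' and value = 'value1'
), i as (
    insert into tag ("key", "value")
    select 'key1', 'value1'
    where not exists (select 1 from s)
    returning id
)
insert into tag_hotcolumns (id, hot, stuff)
select id, 'hot1', 'stuff1' from i
union all
select id, 'hot1', 'stuff1' from s;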

Copying records in a table with self referencing ids

I have a table whose records can reference another row in the same table, so there is a parent-child relationship between rows in the same table.
What I am trying to achieve is to create the same data for another user so that they can see and manage their own version of this structure through the web UI, where these rows are displayed as a tree.
The problem is that when I bulk insert this data changing only the user_id, I lose the relation between rows, because the parent_id values still point at the original records; they need to be updated to the newly generated ids as well.
Here is what I tried (it did not work):
- Iterate over main_table
- Copy the static values of each row
- Do another insert into a temp table to hold the old and new ids
- Update the old parent_ids with the new ids after the loop ends
My attempt at doing this (the last step is not included here; a sketch of it follows the function):
create or replace function test_x()
returns void as
$BODY$
declare
    r RECORD;
    userId int8;
    rowPK int8;
begin
    userId := 1;  -- target user id
    create table if not exists id_map (old_id int8, new_id int8);
    create table if not exists temp_table as select * from main_table;
    for r in select * from temp_table
    loop
        -- copy the row for the new user, keeping the old parent_id for now
        insert into main_table (id, user_id, code, description, parent_id)
        values (nextval('hibernate_sequence'), userId, r.code, r.description, r.parent_id)
        returning id into rowPK;
        -- remember the old -> new id mapping
        insert into id_map (old_id, new_id) values (r.id, rowPK);
    end loop;
end
$BODY$
language plpgsql;
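For reference, the missing last step would be an update driven by id_map once the loop has finished; a sketch, assuming userId = 1 as in the function above:
-- repoint the copied rows from the old parent ids to the new ones
update main_table mt
set parent_id = m.new_id
from id_map m
where mt.user_id = 1
  and mt.parent_id = m.old_id;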
My PostgreSQL version is 9.6.14.
DDL below for testing.
create table main_table (
    id bigserial not null,
    user_id int8 not null,
    code varchar(3) not null,
    description varchar(100) not null,
    parent_id int8 null,
    constraint mycompkey unique (user_id, code, parent_id),
    constraint mypk primary key (id),
    constraint myfk foreign key (parent_id) references main_table(id)
);
insert into main_table (id, user_id, code, description, parent_id)
values(0, 0, '01', 'Root row', null);
insert into main_table (id, user_id, code, description, parent_id)
values(1, 0, '001', 'Child row 1', 0);
insert into main_table (id, user_id, code, description, parent_id)
values(2, 0, '002', 'Child row 2', 0);
insert into main_table (id, user_id, code, description, parent_id)
values(3, 0, '002', 'Grand child row 1', 2);
How to write a procedure to accomplish this?
Thanks in advance.
It appears your task is copying all data for a given user to another user while maintaining the hierarchical relationship within the new rows. The following accomplishes that.
It begins by creating a new copy of the existing rows with the new user_id, still pointing at the old parent_id values. Those old references are used in the next (update) step.
The CTE starts with the new rows that have a parent_id and joins them to their old parent rows. From there it joins each old parent row to the corresponding new parent row using code and description. At that point we have the new id along with the new parent id, and the update simply applies those values. The CTE only needs those two columns for the update, but I've left the intermediate columns in so you can trace through if you wish.
create or replace function copy_user_data_to_user(
    source_user_id bigint
  , target_user_id bigint
)
returns void
language plpgsql
as $$
begin
    insert into main_table (user_id, code, description, parent_id)
    select target_user_id, code, description, parent_id
    from main_table
    where user_id = source_user_id;

    with n_list as (
        select mt.id, mt.code, mt.description, mt.parent_id
             , mtp.id p_id, mtp.code p_code, mtp.description p_des
             , mtc.id c_id, mtc.code c_code, mtc.description c_description
        from main_table mt
        join main_table mtp on mtp.id = mt.parent_id
        join main_table mtc on (mtc.user_id = target_user_id
                            and mtc.code = mtp.code
                            and mtc.description = mtp.description)
        where mt.parent_id is not null
          and mt.user_id = target_user_id
    )
    update main_table mt
    set parent_id = n_list.c_id
    from n_list
    where mt.id = n_list.id;

    return;
end;
$$;
-- test
select * from copy_user_data_to_user(0,1);
select * from main_table;
You can also create a copy of a table with CREATE TABLE ... AS:
CREATE TABLE new_table AS SELECT * FROM myset;
The new table's columns will match those of myset. You can list specific columns instead of *, but every column you name must exist in the source, otherwise you will get errors.

Insert a value from one table into another table as a foreign key

I have two tables, cinema and theater.
Table cinema, columns: id, name, is_active
Table theater, columns: id, cinema_id
I'm inserting into the DB in sequence: first into cinema and then into theater. theater.cinema_id is a foreign key that references cinema.id. After the insert into cinema, I insert data into theater, but I need the value of the cinema's id to put into cinema_id.
I was thinking about RETURNING id INTO cinema_id and then saving that into theater, but I really don't know how I can do something like this.
Any thoughts? Is there a better way to do something like this?
You have a few options.
The first is the lastval() function, which returns the most recently generated sequence value in the current session:
insert into cinema(name, is_active) values ('Cinema One', true);
insert into theater(cinema_id) values (lastval());
Alternatively you can pass the sequence name to the currval() function:
insert into theater(cinema_id)
values (currval(pg_get_serial_sequence('cinema', 'id')));
Alternatively you can chain the two statements using a CTE and the returning clause:
with new_cinema as (
    insert into cinema (name, is_active)
    values ('Cinema One', true)
    returning id
)
insert into theater (cinema_id)
select id
from new_cinema;
In all of these statements I assume theater.id is also a generated value.
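If you are doing this inside PL/pgSQL, the RETURNING id INTO idea from the question works directly. A minimal sketch in a DO block (the variable name new_cinema_id is mine):
do $$
declare
    new_cinema_id bigint;
begin
    -- create the cinema and capture its generated id
    insert into cinema (name, is_active)
    values ('Cinema One', true)
    returning id into new_cinema_id;

    -- use the captured id as the foreign key
    insert into theater (cinema_id)
    values (new_cinema_id);
end
$$;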
Another option is to pull the most recent id straight from the source table:
INSERT INTO tableB (columnA)
SELECT columnA
FROM tableA
ORDER BY columnA DESC
LIMIT 1;

How to load data as nested JSONB from non-JSONB postgres tables

I'm trying to construct an object for use from my postgres backend. The tables in question look something like this:
We have some Things that essentially act as rows of a matrix, where the columns are Field_Columns; Field_Values are the filled-in cells.
Create Table Platform_User (
serial id PRIMARY KEY
)
Create Table Things (
serial id PRIMARY KEY,
INTEGER user_id REFERENCES Platform_User(id)
)
Create Table Field_Columns (
serial id PRIMARY KEY,
TEXT name,
)
Create Table Field_Values (
INTEGER field_column_id REFERENCES Field_Columns(id),
INTEGER thing_id REFERENCES Things(id)
TEXT content,
PRIMARY_KEY(field_column_id, thing_id)
)
This would be simple if I were trying to load just the Field_Values for a single Thing as JSON, which would look like this:
SELECT JSONB_OBJECT(
    ARRAY(
        SELECT name
        FROM Field_Columns
        ORDER BY Field_Columns.id
    ),
    ARRAY(
        SELECT Field_Values.content
        FROM Field_Columns
        LEFT JOIN Field_Values ON Field_Values.field_column_id = Field_Columns.id
            AND Field_Values.thing_id = Things.id
        ORDER BY Field_Columns.id
    )
)
FROM Things
WHERE Things.id = $1
However, I'd like to construct the JSON object to look like this when returned; I want an object of all the Fields:Field_Values objects for the Things that a user owns:
{
  "14": {
    "first field": "asdf",
    "other field": ""
  },
  "25": {
    "first field": "qwer",
    "other field": "dfgdsfg"
  },
  "43": {
    "first field": "",
    "other field": ""
  }
}
My efforts to construct this query look like this, but I'm running into the problem that the JSONB object function doesn't want to construct an object where the value of a field is itself an object:
SELECT (
    JSONB_OBJECT(
        ARRAY(
            SELECT Things.id::TEXT
            FROM Things
            WHERE Things.user_id = $2
            ORDER BY Things.id
        ),
        ARRAY(
            SELECT JSONB_OBJECT(
                ARRAY(
                    SELECT name
                    FROM Field_Columns
                    ORDER BY Field_Columns.id),
                ARRAY(
                    SELECT Field_Values.content
                    FROM Field_Columns
                    LEFT JOIN Field_Values ON Field_Values.field_column_id = Field_Columns.id
                        AND Field_Values.thing_id = Things.id
                    ORDER BY Field_Columns.id)
            )
            FROM Things
            WHERE Things.user_id = $2
            ORDER BY Things.id
        )
    )
) AS thing_fields
The specific error I get is: function jsonb_object(text[], jsonb[]) does not exist. Is there a way to do this that doesn't involve copious text conversions and nonsense like that? Or will I just need to abandon trying to shape my data in the query and do it in my code instead?
Your DDL scripts are syntactically incorrect so I created these for you:
create table platform_users (
id int8 PRIMARY KEY
);
create table things (
id int8 PRIMARY KEY,
user_id int8 REFERENCES platform_users(id)
);
create table field_columns (
id int8 PRIMARY KEY,
name text
);
create table field_values (
field_column_id int8 REFERENCES field_columns(id),
thing_id int8 REFERENCES things(id),
content text,
PRIMARY KEY(field_column_id, thing_id)
);
I also created some scripts to populate the db:
insert into platform_users(id) values (1);
insert into platform_users(id) values (2);
insert into platform_users(id) values (3);
insert into platform_users(id) values (4);
insert into platform_users(id) values (5);
insert into things(id, user_id) values(1, 1);
insert into things(id, user_id) values(2, 1);
insert into things(id, user_id) values(3, 2);
insert into things(id, user_id) values(4, 2);
insert into field_columns(id, name) values(1, 'col1');
insert into field_columns(id, name) values(2, 'col2');
insert into field_values(field_column_id, thing_id, content) values(1, 1, 'thing1 val1');
insert into field_values(field_column_id, thing_id, content) values(2, 1, 'thing1 val2');
insert into field_values(field_column_id, thing_id, content) values(1, 2, 'thing2 val1');
insert into field_values(field_column_id, thing_id, content) values(2, 2, 'thing2 val2');
Please include such scripts the next time you ask for help, and make sure that your scripts are correct. This will reduce the work needed to answer your question.
You can get your jsonb value by aggregating the key/value pairs with jsonb_object_agg:
select
    t.id,
    jsonb_object_agg(fc.name, fv.content)
from things t
join field_values fv on fv.thing_id = t.id
join field_columns fc on fv.field_column_id = fc.id
group by 1;
The results look like this:
 id | jsonb_object_agg
----+------------------------------------------------
  1 | {"col1": "thing1 val1", "col2": "thing1 val2"}
  2 | {"col1": "thing2 val1", "col2": "thing2 val2"}