Make a duplicate row in PostgreSQL

I am writing a migration script to migrate a database. I have to duplicate a row by incrementing its primary key, keeping in mind that different databases can have any number of different columns in the table, so I can't write out each and every column in the query. If I simply copy the row, I get a duplicate key error.
Query: INSERT INTO table_name SELECT * FROM table_name WHERE id=255;
ERROR: duplicate key value violates unique constraint "table_name_pkey"
DETAIL: Key (id)=(255) already exists.
Here, it's good that I don't have to mention all the column names; I can select every column with *. But at the same time I get the duplicate key error.
What's the solution to this problem? Any help would be appreciated. Thanks in advance.

If you are willing to type all column names, you may write
INSERT INTO table_name (
      pri_key
    , col2
    , col3
)
SELECT
      (SELECT MAX(pri_key) + 1 FROM table_name)
    , col2
    , col3
FROM table_name
WHERE pri_key = 255;
Another option (without typing all the columns, as long as you know the primary key column) is to CREATE a temp table, update it, and re-insert within a transaction:
BEGIN;
CREATE TEMP TABLE temp_tab ON COMMIT DROP AS SELECT * FROM table_name WHERE id=255;
UPDATE temp_tab SET pri_key_col = (SELECT MAX(pri_key_col) + 1 FROM table_name);
INSERT INTO table_name SELECT * FROM temp_tab;
COMMIT;

This is just a DO block, but you could create a function that takes things like the table name, etc., as parameters.
Setup:
CREATE TABLE public.t1 (a TEXT, b TEXT, c TEXT, id SERIAL PRIMARY KEY, e TEXT, f TEXT);
INSERT INTO public.t1 (e) VALUES ('x'), ('y'), ('z');
Code to duplicate values without the primary key column:
DO $$
DECLARE
    _table_schema   TEXT := 'public';
    _table_name     TEXT := 't1';
    _pk_column_name TEXT := 'id';
    _columns        TEXT;
BEGIN
    SELECT STRING_AGG(column_name, ',')
    INTO _columns
    FROM information_schema.columns
    WHERE table_name = _table_name
      AND table_schema = _table_schema
      AND column_name <> _pk_column_name;

    EXECUTE FORMAT('INSERT INTO %1$s.%2$s (%3$s) SELECT %3$s FROM %1$s.%2$s', _table_schema, _table_name, _columns);
END $$;
The query it creates and runs is: INSERT INTO public.t1 (a,b,c,e,f) SELECT a,b,c,e,f FROM public.t1. It selects all the columns apart from the PK one. You could put this code in a function and use it for any table you wanted, or just use it like this and edit it for whatever table you need; a sketch of the function version follows below.
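One possible way to do that, as a rough sketch: the function name duplicate_rows and its parameter names are my own, and the quote_ident/%I quoting and the ORDER BY ordinal_position are precautions I've added rather than something the original DO block used.
CREATE OR REPLACE FUNCTION duplicate_rows(
    _table_schema   TEXT,
    _table_name     TEXT,
    _pk_column_name TEXT
) RETURNS void AS $$
DECLARE
    _columns TEXT;
BEGIN
    -- collect every column except the primary key, quoted for safety
    SELECT STRING_AGG(quote_ident(column_name), ',' ORDER BY ordinal_position)
    INTO _columns
    FROM information_schema.columns
    WHERE table_name = _table_name
      AND table_schema = _table_schema
      AND column_name <> _pk_column_name;

    EXECUTE FORMAT('INSERT INTO %1$I.%2$I (%3$s) SELECT %3$s FROM %1$I.%2$I', _table_schema, _table_name, _columns);
END;
$$ LANGUAGE plpgsql;
-- e.g. SELECT duplicate_rows('public', 't1', 'id');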

Related

I'm getting 'column "my_column" contains null values' when adding a composite primary key

Is it not supposed to delete null values before altering the table? I'm confused...
My query looks roughly like this:
BEGIN;
DELETE FROM my_table
WHERE my_column IS NULL;
ALTER TABLE my_table DROP CONSTRAINT my_table_pk;
ALTER TABLE my_table ADD PRIMARY KEY (id, my_column);
-- this is to repopulate the data afterwards
INSERT INTO my_table (name, other_table_id, my_column)
SELECT
    ya.name,
    ot.id,
    my_column
FROM other_table ot
LEFT JOIN yet_another ya ON ya.id = ot."fileId"
WHERE NOT EXISTS (
    SELECT 1
    FROM my_table mt
    WHERE ot.id = mt.other_table_id AND ot.my_column = mt.my_column
) AND my_column IS NOT NULL;
COMMIT;
Sorry for the naming.
There are two possible explanations:
1. A concurrent session inserted a new row with a NULL value between the start of the DELETE and the start of the ALTER TABLE. To avoid that, lock the table in SHARE mode before you DELETE (see the sketch after this list).
2. There is a row where id has a NULL value.
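A minimal sketch of that locking approach, assuming the same table, column, and constraint names as in the question:
BEGIN;
-- SHARE mode blocks concurrent INSERT/UPDATE/DELETE but still allows reads,
-- so no new NULL rows can appear between the DELETE and the ALTER TABLE
LOCK TABLE my_table IN SHARE MODE;
DELETE FROM my_table WHERE my_column IS NULL;
ALTER TABLE my_table DROP CONSTRAINT my_table_pk;
ALTER TABLE my_table ADD PRIMARY KEY (id, my_column);
COMMIT;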

PostgreSQL: insert current sequence value into another field with a condition

The issue:
I need to do something like this:
drop table if exists tt_t;
create temp table tt_t(id serial primary key, main_id int, external_id int);
insert into tt_t(main_id, external_id)
select currval(pg_get_serial_sequence('tt_t', 'id')), 1
where not exists (select from tt_t where external_id = 1);
but execution raises an error
SQL Error [55000]: ERROR: currval of sequence "tt_t_id_seq" is not yet defined in this session
Solution:
There is a way to solve this with an anonymous code block:
do
$$
begin
if not exists(select from tt_t where external_id = 1)
then
insert into tt_t(external_id, main_id)
values(1, currval(pg_get_serial_sequence('tt_t', 'id')));
end if;
end;
$$
;
But anonymous blocks have some restrictions, e.g. Dapper parameters not working with PostgreSQL through an Npgsql connection (is postgres anonymous function parameterization supported?).
How do I fix it without an anonymous code block (UPD: and without any DDL changes)?
Probable solution:
insert into tt_t(id, main_id, external_id)
select nextval(pg_get_serial_sequence('tt_t', 'id')), currval(pg_get_serial_sequence('tt_t', 'id')), 1
where not exists (select from tt_t where external_id = 1);
A shorter version has been proposed to me:
insert into tt_t(id, main_id, external_id)
select nextval(pg_get_serial_sequence('tt_t', 'id')), lastval(), 1
where not exists (select from tt_t where external_id = 1);
but I'm not sure whether nextval will be evaluated first.
What about using a default value:
drop table if exists tt_t;
create temp table tt_t(id serial primary key, main_id int default lastval(), external_id int);
insert into tt_t(external_id)
select 1
where not exists (select * from tt_t where external_id = 1);
In theory it shouldn't be possible that another nextval() is called between the one for the id and the lastval(). However I am not 100% sure if there are some corner cases that I don't know of.
The following works as well (even if one or more of the external_id values already exist).
insert into tt_t(external_id)
select *
from (values (1),(2),(3)) x (external_id)
where not exists (select *
from tt_t
where external_id = x.external_id);

How to stop the "insert or update on table ...violates foreign key constraint"?

How to construct an INSERT statement so that it would not generate the error "insert or update on table ... violates foreign key constraint" in case if the foreign key value does not exist in the reference table?
I just need no record created in this case, and a success response.
Thank you
Use a query as the source for the INSERT statement:
insert into the_table (id, some_data, some_fk_column)
select *
from (
  values (42, 'foobar', 100)
) as x(id, some_data, some_fk_column)
where exists (select *
              from referenced_table rt
              where rt.primary_key_column = x.some_fk_column);
This can also be extended to a multi-row insert:
insert into the_table (id, some_data, some_fk_column)
select *
from (
  values
    (42, 'foobar', 100),
    (24, 'barfoo', 101)
) as x(id, some_data, some_fk_column)
where exists (select *
              from referenced_table rt
              where rt.primary_key_column = x.some_fk_column);
You didn't show us your table definitions so I had to make up the table and column names. You will have to translate that to your names.
You could create a function with plpgsql, which inserts a row and catches the exception:
CREATE FUNCTION customInsert(int, varchar) RETURNS VOID
AS $$
BEGIN
    INSERT INTO foo VALUES ($1, $2);
EXCEPTION
    WHEN foreign_key_violation THEN
        NULL; -- do nothing, silently skip the row
END;
$$ LANGUAGE plpgsql;
You can then call this function by this:
SELECT customInsert(1,'hello');
This function tries to insert the given parameters into the table foo and catches the foreign_key_violation error if it occurs.
Of course you can generalise the function further, to be able to insert into more than one table, but your question sounded like this was only needed for one specific table.
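For example, a rough sketch of a slightly more general variant where the target table is passed in as a parameter; the name custom_insert and the EXECUTE ... USING approach are my own assumptions, not part of the original answer:
CREATE FUNCTION custom_insert(_table regclass, _id int, _val varchar) RETURNS VOID
AS $$
BEGIN
    -- dynamic SQL so the same function works for any two-column table;
    -- the values are passed safely via USING
    EXECUTE format('INSERT INTO %s VALUES ($1, $2)', _table) USING _id, _val;
EXCEPTION
    WHEN foreign_key_violation THEN
        NULL; -- do nothing, just skip the row
END;
$$ LANGUAGE plpgsql;
-- e.g. SELECT custom_insert('foo', 1, 'hello');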

IF EXISTS doesn't seem to work for a table DROP if the table already exists

I was getting this error each and every time I tried to execute a DROP TABLE if it already exists.
Step 1: Created a Table
CREATE TABLE Work_Tables.dbo.Drop_Table_Test (RowID INT IDENTITY(1,1), Data VARCHAR(50))
INSERT INTO Work_Tables.dbo.Drop_Table_Test
SELECT 'Test' UNION
SELECT 'Test1' UNION
SELECT 'Test2' UNION
SELECT 'Test3'
Step 2: Wrote an IF EXISTS block to check if the table exists.
IF EXISTS (SELECT 1 FROM Work_Tables.dbo.SysObjects WHERE NAME LIKE 'Drop_Table_Test' AND XType = 'U')
BEGIN
PRINT 'IN'
DROP TABLE Work_Tables.dbo.Drop_Table_Test
END
CREATE TABLE Work_Tables.dbo.Drop_Table_Test (RowID INT IDENTITY(1,1), Data VARCHAR(50), NAME VARCHAR(20), PreCheck INT)
INSERT INTO Work_Tables.dbo.Drop_Table_Test (Data, Name, PreCheck)
SELECT 'Test','SRK',1 UNION
SELECT 'Test1','Daya',2 UNION
SELECT 'Test2','Dinesh',3 UNION
SELECT 'Test3','Suresh',4
On running the Step 2 code, it's obvious that the table has to be dropped and recreated with the same name, but it didn't even enter the BEGIN/END block.
I feel it's because I added a few more columns on the second try, but I'm still not clear why that should be a problem, since we are only dropping the table.
You cannot drop and create the same table in the same batch in SQL Server.
Break your code up into separate batches so the table can be dropped before you try to recreate it. Add GO after the END of your BEGIN / END block.
IF EXISTS (SELECT 1 FROM Work_Tables.dbo.SysObjects WHERE NAME LIKE 'Drop_Table_Test' AND XType = 'U')
BEGIN
PRINT 'IN'
DROP TABLE Work_Tables.dbo.Drop_Table_Test
END
GO --Add this...
....
Straight from Microsoft's Documentation:
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected error may occur.
You can try to use this syntax:
IF OBJECT_ID('dbo.Drop_Table_Test', 'U') IS NOT NULL
DROP TABLE dbo.Drop_Table_Test;
IF EXISTS will drop the table only when your table Drop_Table_Test does not contain any rows. If it contains data, it will not drop the table.

Update a column of an inserted row with its generated id in a single query

Say I have a table, created as follows:
CREATE TABLE test_table (id serial, unique_id varchar(50) primary key, name varchar(50));
test_table
----------
id | unique_id | name
In that table, I would like to update the unique_id field with the newly inserted id concatenated with the inserted name in a single go.
Usually this is accomplished by two queries. (PHP way)
$q = "INSERT INTO test_table (unique_id, name) VALUES ('uid', 'abc') RETURNING id || name AS unique_id;";
$r = pg_query($dbconn, $q);
$row = pg_fetch_array($r);
$q1 = "UPDATE test_table SET unique_id = '".$row['unique_id']."' WHERE unique_id = 'uid'";
$r1 = pg_query($dbconn, $q1);
Is there any way to do the above in a single query?
You have several options here. You could create an AFTER trigger which uses the generated ID for a direct update of the same row:
CREATE TRIGGER test_table_insert AFTER INSERT ON test_table FOR EACH ROW EXECUTE PROCEDURE test_table_insert();
And in your function you update the value:
CREATE FUNCTION test_table_insert() RETURNS TRIGGER AS $$
BEGIN
    UPDATE test_table SET unique_id = NEW.id::text || NEW.name WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
You need to create the function before the trigger.
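Putting those two together, a quick check might look like this; the resulting unique_id value is my expectation from the trigger logic above, not output taken from the answer:
INSERT INTO test_table (unique_id, name) VALUES ('uid', 'abc');
SELECT id, unique_id, name FROM test_table;
-- if the generated id is 1, unique_id should now be '1abc'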
Another option would be to do it directly in the insert:
INSERT INTO test_table (id, unique_id, name) VALUES (nextval('test_table_id_seq'), currval('test_table_id_seq')::text || 'abc', 'abc') RETURNING id;
But as a_horse_with_no_name pointed out, I think you may have a problem in your database design.