PostgreSQL: duplicate pkey error when inserting a new record into a restored database's table

I used the pg_dump and psql commands to back up my production DB and restore it onto my development server.
Now when I try to simply insert a new record to one of my tables I get the following error message:
ERROR: duplicate key value violates unique constraint "communication_methods_pkey"
DETAIL: Key (id)=(13) already exists.
How come the id is already in use? Do I need to update something to get the id increment counter back on track?

It sounds like the sequences used to generate the primary key for each table are not set to the correct values. It is interesting that pg_dump did not include a sequence setval at the end of it (I believe it is supposed to).
Postgres recommends the following process to correct sequences: https://wiki.postgresql.org/wiki/Fixing_Sequences
Essentially, it takes you through identifying all your sequences and creating a SQL script that sets each one so the next generated id is 1 more than the largest id already in its table.
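For a single table, the resulting fix is one setval call per sequence. A minimal sketch, assuming the table from the error message above (communication_methods, with a serial column id in the default schema):
-- point the sequence behind communication_methods.id at the current maximum,
-- so the next generated id is max(id) + 1 (COALESCE covers an empty table)
SELECT setval(
  pg_get_serial_sequence('communication_methods', 'id'),
  COALESCE(max(id), 1))
FROM communication_methods;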

Related

What happens to existing data with psql dbname < pg_dump_file [duplicate]

This question already has an answer here:
will pg_restore overwrite the existing tables?
I have a database on AWS RDS, and I use a pg_dump from a local version of the database, then psql dbname < pg_dump_file with the proper arguments for the remote upload, to populate the database.
I'd like to know what is expected to happen if that RDS DB already contains data. More specifically:
Data present in the local dump, but absent in rds
Data present on rds, but absent in the local data
Data present in both but that have been modified
My current understanding:
New data will be added and be present in both after upload
Data in rds should be unaffected?
The data from the pg_dump will be present in both (assuming the same pk, but different fields otherwise)
Is that about correct? I've been reading this, but it's a little thin on how the restore is actually performed, so I'm having a harder time figuring that out. Thanks.
EDIT: following @wildplasser's comment, looking at the pg_dump file it appears that the following happens:
CREATE TABLE [....]
ALTER TABLE [setting table owner]
ALTER SEQUENCE [....]
For each table in the db. Then, again one table at a time:
COPY [tablename] (list of cols) FROM stdin;
[data to be copied]
Finally, more ALTER statements to set constraints, foreign keys, etc.
So I guess the ultimate answer is "it depends". One could, I suppose, remove the CREATE TABLE [...], ALTER TABLE and ALTER SEQUENCE statements if those objects are already created as they should be. I am not positive yet what happens if one tries CREATE TABLE on an existing table (an error is thrown, perhaps?).
Then I guess the COPY statements would overwrite whatever already exists. Or perhaps throw an error. I'll have to test that. I'll write up an answer once I figure it out.
So the answer is a bit dull. It turns out that even if one removes the initial statements before the COPY, if the table has a primary key (and thus a uniqueness constraint), it won't work:
ERROR: duplicate key value violates unique constraint
So one gets shut down pretty quickly there. One would, I guess, have to rewrite the dump as a list of UPDATE statements instead (sketched below), but at that point one might as well write a script to do so. I'm unsure whether pg_dump is all that useful in that case.
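As a rough sketch of that "rewrite as updates" idea, one could restore the data section into a staging table and merge it with an upsert. The table clients(id, name) and the file path here are hypothetical, not taken from the original dump:
-- staging table with the same shape as the real one
CREATE TEMP TABLE clients_staging (LIKE clients INCLUDING DEFAULTS);
-- load the dumped rows (server-side COPY shown; from a client use \copy in psql)
COPY clients_staging (id, name) FROM '/tmp/clients.copy';
-- rows only in the dump are inserted; rows present in both take the dump's values
INSERT INTO clients (id, name)
SELECT id, name FROM clients_staging
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;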

Is it possible to get another field of the row I'm trying to duplicate in PSQL or MyBatis?

I have a table 'client', which has 3 columns - id, siebel_id, phone_number.
The phone_number column has a unique constraint. If I save a new client with an existing number, I get the error ERROR: duplicate key value violates unique constraint "phone_number_unique".
Is it possible to make PSQL or MyBatis show the 'siebel_id' of the record where the phone number is already saved?
I mean to get a message like
'ERROR: duplicate key value violates unique constraint "phone_number_unique"
Detail: Key (phone_number)=(+79991234567) already exists on siebel_id...'
No, it's not possible to tweak the internal message that the PostgreSQL database engine returns accompanying an error. Well... unless you recompiled the whole PostgreSQL engine from scratch, and I would assume this is off the table.
However, you can easily search for the offending row using SQL, as in:
select siebel_id from client where phone_number = '+79991234567';
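If the lookup has to happen in a single round trip on the database side, one option (an addition of mine, not something MyBatis does for you) is to catch the violation in a PL/pgSQL block and re-raise with the looked-up value; the siebel_id inserted here is hypothetical:
DO $$
DECLARE
  existing text;
BEGIN
  INSERT INTO client (siebel_id, phone_number)
  VALUES ('SBL-NEW', '+79991234567');
EXCEPTION WHEN unique_violation THEN
  -- the failed insert is rolled back; look up the row that owns the number
  SELECT siebel_id INTO existing FROM client WHERE phone_number = '+79991234567';
  RAISE EXCEPTION 'phone_number +79991234567 already belongs to siebel_id %', existing;
END $$;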

Imported data, duplicate key value violates unique constraint

I am migrating data from MSSQL.
I created the database in PostgreSQL via an Npgsql-generated migration. I moved the data across, and now when the code tries to insert a value I am getting
'duplicate key value violates unique constraint'
Npgsql tries to insert a row with Id 1... however, the table already has Ids over a thousand.
Npgsql.EntityFrameworkCore.PostgreSQL is 2.2.3 (latest)
In my context builder, I have
modelBuilder.ForNpgsqlUseIdentityColumns();
In which direction should I dig to resolve such an issue?
The code runs fine if the database is empty and doesn't have any imported data
Thank you
The values inserted during the migration contained the primary key value, so the sequence behind the column wasn't incremented and stays at 1. A normal insert - without specifying the PK value - calls the sequence, gets 1, which already exists in the table.
To fix it, you can bump the sequence to the current max value.
SELECT setval(
pg_get_serial_sequence('myschema.mytable','mycolumn'),
max(mycolumn))
FROM myschema.mytable;
If you already know the sequence name, you can shorten it to
SELECT setval('my_sequence_name', max(mycolumn))
FROM myschema.mytable;
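To confirm the sequence was actually bumped (same placeholder names as above), compare the sequence's stored value with the table's maximum:
SELECT last_value, is_called FROM my_sequence_name;  -- should show max(mycolumn) and true
SELECT max(mycolumn) FROM myschema.mytable;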

How to ensure validity of foreign keys in Postgres

Using Postgres 10.6
The issue:
Some data in my tables violates the foreign key constraints (not sure how). The constraints are ON DELETE CASCADE ON UPDATE CASCADE
On a pg_dump of the database, those foreign keys are dropped (due to being in an invalid state?)
A pg_restore is done into a blank database, which no longer has the foreign keys
The new database has all its primary keys updated to valid keys not used in a second database. Tables which had invalid data do not have their foreign keys updated, due to the now missing constraint.
A pg_dump of the new database is done, then the database is deleted
On a pg_restore into a second database which has the foreign key constraints, the data gets imported in an invalid state, and corrupts the new database.
What I want to do is this: every few hours (or once a day, depending on how long the query would take), verify that all the data in all the tables which have foreign keys is valid.
I have read about ALTER TABLE ... VALIDATE CONSTRAINT ..., but this wouldn't fix my issue, as the constraints are not currently marked as NOT VALID. I know I could do statements like:
DELETE FROM a WHERE a.b_id NOT IN (SELECT b.id FROM b);
However, I have 144 tables with foreign keys, so this would be rather tedious. I would also maybe not want to immediately delete the data, but rather log the issue and inform the user about a correction which will happen.
Of course, I'd like to know how the original corruption occurred, and prevent that; however at the moment I'm just trying to prevent it from spreading.
Example table:
CREATE TABLE dependencies (
...
from_task int references tasks(id) ON DELETE CASCADE ON UPDATE CASCADE NOT NULL,
to_task int references tasks(id) ON DELETE CASCADE ON UPDATE CASCADE NOT NULL,
...
);
Dependencies will end up with values for to_task and from_task which do not exist in the tasks table (see image)
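A direct check for that example, using only the columns shown above, would list the dependency rows whose tasks are missing:
SELECT d.*
FROM dependencies d
WHERE NOT EXISTS (SELECT 1 FROM tasks t WHERE t.id = d.from_task)
   OR NOT EXISTS (SELECT 1 FROM tasks t WHERE t.id = d.to_task);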
Note:
Have tried EXPLAIN ANALYZE; nothing odd
pg_tablespace has just two records: pg_default and pg_global
relforcerowsecurity and relispartition are both 'false' on both tables
Arguments to pg_dump (from c++ call) arguments << "--file=" + fileName << "--username=" + connection.userName() << databaseName << "--format=c"
This is either an index (or table) corruption problem, or the constraint has been created as NOT VALID to defer the validity check until later.
pg_dump will never silently "drop" a constraint — perhaps there was an error while restoring the dump that you didn't notice.
The proper fix is to clean up the data that violate the constraint and re-create it.
If it is a data corruption problem, check your hardware.
There is no need to regularly check for data corruption; PostgreSQL is not in the habit of corrupting data by itself.
The best test would be to take a pg_dump regularly and see if restoring the dump causes any errors.
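If a periodic check is still wanted, the 144 per-table queries don't have to be written by hand; they can be generated from the system catalogs. A sketch, limited to single-column foreign keys for simplicity (run the statements it prints and look for non-zero orphan counts):
SELECT format(
  'SELECT %L AS fk, count(*) AS orphans FROM %s c WHERE c.%I IS NOT NULL AND NOT EXISTS (SELECT 1 FROM %s p WHERE p.%I = c.%I);',
  con.conname, con.conrelid::regclass, att.attname,
  con.confrelid::regclass, refatt.attname, att.attname)
FROM pg_constraint con
JOIN pg_attribute att ON att.attrelid = con.conrelid AND att.attnum = con.conkey[1]
JOIN pg_attribute refatt ON refatt.attrelid = con.confrelid AND refatt.attnum = con.confkey[1]
WHERE con.contype = 'f' AND array_length(con.conkey, 1) = 1;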

Postgresql auto-increment id's after restoring?

I imported all tables from MySQL to PostgreSQL but now I have problems with id's.
The way I converted my MySQL DB was simple: I exported the DB and copied all the "INSERTS" with edited syntax. The import was successful, because I can see all the data is correct.
SQLSTATE[23505]: Unique violation: 7 ERROR: duplicate key value violates unique constraint "elements_pkey"
DETAIL: Key (id)=(1) already exists.
Is there any way to fix issues with id's?
It works after resetting the sequence.
SELECT setval('my_sequence_name', (SELECT max(id) FROM my_table));
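For reference, the two-argument setval marks the sequence as called, so the next generated value is max(id) + 1; a plain insert that omits the id will then work (the column name here is hypothetical):
INSERT INTO my_table (name) VALUES ('new element');  -- gets id = max(id) + 1 from the sequence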