[Issue resolved. See Answer below.]
I have just encountered a series of “duplicate key value violates unique constraint” errors with a system that has been working well for months. And I cannot determine why they occurred.
Here is the error:
org.springframework.dao.DuplicateKeyException: PreparedStatementCallback;
SQL [
INSERT INTO transaction_item
(transaction_group_id, transaction_type, start_time, end_time) VALUES
(?, ?::transaction_type_enum, ?, ?)
];
ERROR: duplicate key value violates unique constraint "transaction_item_pkey"
Detail: Key (transaction_id)=(67109) already exists.;
Here is the definition of the relevant SEQUENCE and TABLE:
CREATE SEQUENCE transaction_id_seq AS bigint;
CREATE TABLE transaction_item (
transaction_id bigint PRIMARY KEY DEFAULT NEXTVAL('transaction_id_seq'),
transaction_group_id bigint NOT NULL,
transaction_type transaction_type_enum NOT NULL,
start_time timestamp NOT NULL,
end_time timestamp NOT NULL
);
And here is the only SQL statement used for inserting to that table:
INSERT INTO transaction_item
(transaction_group_id, transaction_type, start_time, end_time) VALUES
(:transaction_group_id, :transaction_type::transaction_type_enum, :start_time, :end_time)
As you can see, I’m not explicitly trying to set the value of transaction_id. I’ve defined a default value for the column, which fetches a value from the SEQUENCE.
I have been under the impression that this approach is safe, even in high-concurrency situations. A SEQUENCE should never return the same value twice, right?
I’d really appreciate some help to understand why this has occurred, and how to fix it. Thank you!
I found the cause of this issue.
A few months ago (during development of this system) an issue was discovered that made it necessary to purge all existing test data from the database. I did this using DELETE FROM statements for all TABLES and ALTER ... RESTART statements for all SEQUENCES. These statements were added to the Liquibase configuration, to be executed during startup of the new code. From inspecting the logs from that time, it appears that an instance of the system was still running when the migration ran, and this happened: the new instance deleted all data from the TRANSACTION_ITEM table, the still-running instance then added more data to that table, and only then did the new instance restart the SEQUENCE used for inserting those records. So yesterday, when I received the duplicate key violations, it was because the SEQUENCE had finally reached the ID values of the TRANSACTION_ITEM records that were added by the still-running instance back when the DB purge and migration occurred.
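For anyone hitting something similar, here is a minimal two-session sketch of that failure mode (the table and sequence names are from my schema above; the enum label and the ID value are illustrative):
-- Session A: new instance, running the purge migration
DELETE FROM transaction_item;
-- Session B: stale instance, still serving traffic
INSERT INTO transaction_item (transaction_group_id, transaction_type, start_time, end_time)
VALUES (1, 'SOME_TYPE'::transaction_type_enum, now(), now());  -- draws e.g. 67109 from the sequence
-- Session A: migration continues and rewinds the sequence
ALTER SEQUENCE transaction_id_seq RESTART;
-- Months later: the sequence eventually hands out 67109 again
INSERT INTO transaction_item (transaction_group_id, transaction_type, start_time, end_time)
VALUES (1, 'SOME_TYPE'::transaction_type_enum, now(), now());  -- ERROR: duplicate key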
Long story, but it all makes sense now. Thanks to those who commented on this issue.
Related
I start multiple programs that all more or less simultaneously do
CREATE TABLE IF NOT EXISTS log (...);
Sometimes this works perfectly. But most of the time, one or more of the programs crash with the error:
23505: duplicate key value violates unique constraint "pg_class_relname_nsp_index".
Can somebody explain to me how the actual Christmas tree CREATE TABLE IF NOT EXISTS is giving me an error message about the table already existing? Isn't that, like, the entire point of this command?? What is going on here? More to the point, how do I get it to actually work correctly?
After this command, there's also a couple of CREATE INDEX IF NOT EXISTS commands. These occasionally fail in a similar way too. But most of the time, it's the CREATE TABLE statement that fails.
You can reproduce this with 2 parallel sessions:
First session:
begin;
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
Notice that the first session has not committed yet, so the table does not really exist yet.
Second session:
begin;
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
The second session will now block, because the name "log" is reserved by the first session; it is not yet known whether the transaction that reserved it will commit or roll back.
Then, when you commit the first session, the second will fail:
ERROR: duplicate key value violates unique constraint "pg_class_relname_nsp_index"
DETAIL: Key (relname, relnamespace)=(log_id_seq, 2200) already exists.
To avoid this, make sure the check for the table's existence is done only after a common advisory lock has been taken:
begin;
select pg_advisory_xact_lock(12345);
-- any bigint value, but has to be the same for all parallel sessions
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
commit;
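Since pg_advisory_xact_lock is transaction-scoped, the lock is released automatically at commit or rollback, so there is nothing to unlock manually. If you'd rather not hard-code a number, one option (my suggestion, not part of the original answer) is to derive the key from a string with hashtext:
begin;
select pg_advisory_xact_lock(hashtext('create-log-table'));  -- same string => same int key in every session
create table if not exists log(id bigint generated always as identity, t timestamp with time zone, message text not null);
commit;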
I have a question. I know this has been posted many times, but I didn't find an answer to my problem. I have a table with a column "id" that I want to be a unique number, as normal. The column's type is serial, and the next value after each insert comes from a sequence, so everything seems to be all right, but it still sometimes shows this error, and I don't know why. The documentation says the sequence is foolproof and always works. If I add a UNIQUE constraint to that column, will it help? I have worked with Postgres many times before, but this error is showing up for me for the first time. I did everything as usual and never had this problem. Can you help me find an answer that can be used in the future for all tables that will be created? Let's say we have something easy like this:
CREATE TABLE comments
(
id serial NOT NULL,
some_column text NOT NULL,
CONSTRAINT id_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE comments OWNER TO postgres;
If i add:
ALTER TABLE comments ADD CONSTRAINT id_id_key UNIQUE(id)
Will it be enough, or is there something else that should be done?
This article explains that your sequence might be out of sync and that you have to manually bring it back in sync.
An excerpt from the article in case the URL changes:
If you get this message when trying to insert data into a PostgreSQL
database:
ERROR: duplicate key violates unique constraint
That likely means that the primary key sequence in the table you're
working with has somehow become out of sync, likely because of a mass
import process (or something along those lines). Call it a "bug by
design", but it seems that you have to manually reset the primary
key sequence after restoring from a dump file. At any rate, to see if
your values are out of sync, run these two commands:
SELECT MAX(the_primary_key) FROM the_table;
SELECT nextval('the_primary_key_sequence');
If the first value is higher than the second value, your sequence is
out of sync. Back up your PG database (just in case), then run this command:
SELECT setval('the_primary_key_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
That will set the sequence to the next available value that's higher
than any existing primary key in the sequence.
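Applied to the comments table from the question, and assuming the default name Postgres gives a serial column's sequence (comments_id_seq), that would be:
SELECT MAX(id) FROM comments;
SELECT nextval('comments_id_seq');  -- note: this call itself consumes one value
-- If MAX(id) was the higher of the two, resync:
SELECT setval('comments_id_seq', (SELECT MAX(id) FROM comments) + 1);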
Intro
I also encountered this problem, and the solution proposed by @adamo was basically right. However, I had to invest a lot of time in the details, which is why I am now writing a new answer to save that time for others.
Case
My case was as follows: there was a table that was filled with data by an app. Then a new entry had to be inserted manually via SQL. After that, the sequence was out of sync and no more records could be inserted via the app.
Solution
As mentioned in the answer from @adamo, the sequence must be synchronized manually. For this purpose the name of the sequence is needed. In Postgres, the name of the sequence can be determined with the function PG_GET_SERIAL_SEQUENCE. Most examples use lower-case table names; in my case the tables were created by an ORM middleware (like Hibernate or Entity Framework Core) and their names all started with a capital letter.
In an e-mail from 2004 (link) I got the right hint.
(Let's assume for all examples that Foo is the table's name and Foo_id the related column.)
Command to get the sequence name:
SELECT PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id');
So, the table name must be in double quotes, surrounded by single quotes.
1. Validate that the sequence is out of sync
SELECT CURRVAL(PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id')) AS "Current Value", MAX("Foo_id") AS "Max Value" FROM "Foo";
When the Current Value is less than the Max Value, your sequence is out of sync.
2. Correction
SELECT SETVAL((SELECT PG_GET_SERIAL_SEQUENCE('"Foo"', 'Foo_id')), (SELECT (MAX("Foo_id") + 1) FROM "Foo"), FALSE);
The third parameter FALSE tells SETVAL that the next NEXTVAL call should return exactly this value (MAX + 1), rather than the value after it.
Replace table_name with the actual name of your table.
1. Get the current highest id in the table, and note it for the next step:
SELECT MAX(id) FROM table_name;
2. Get the next value of the id sequence according to PostgreSQL, and check whether it is higher than the max id from step 1:
SELECT nextval('"table_name_id_seq"');
3. If it is not higher, update the sequence:
SELECT setval('"table_name_id_seq"', (SELECT MAX(id) FROM table_name)+1);
The primary key is already protecting you from inserting duplicate values, as you're experiencing when you get that error. Adding another unique constraint isn't necessary to do that.
The "duplicate key" error is telling you that the work was not done because it would produce a duplicate key, not that it discovered a duplicate key already commited to the table.
For future searchs, use ON CONFLICT DO NOTHING.
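For example, against the comments table from the question (the values are made up):
INSERT INTO comments (id, some_column)
VALUES (1, 'might already exist')
ON CONFLICT (id) DO NOTHING;  -- silently skips the row instead of raising an error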
Reference: https://www.calazan.com/how-to-reset-the-primary-key-sequence-in-postgresql-with-django/
I had the same problem. Try this:
python manage.py sqlsequencereset app_label
E.g.:
python manage.py sqlsequencereset auth
Note that sqlsequencereset takes a Django app label (not a table name) and only prints the reset SQL; you still have to run that SQL against the database. Run it with your production settings (if you have them), and you need the Postgres client installed on the server.
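One way to apply the printed statements directly is to pipe them into dbshell:
python manage.py sqlsequencereset auth | python manage.py dbshell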
From http://www.postgresql.org/docs/current/interactive/datatype.html
Note: Prior to PostgreSQL 7.3, serial implied UNIQUE. This is no longer automatic. If you wish a serial column to be in a unique constraint or a primary key, it must now be specified, same as with any other data type.
In my case the CREATE TABLE script was:
CREATE TABLE public."Survey_symptom_binds"
(
id integer NOT NULL DEFAULT nextval('"Survey_symptom_binds_id_seq"'::regclass),
survey_id integer,
"order" smallint,
symptom_id integer,
CONSTRAINT "Survey_symptom_binds_pkey" PRIMARY KEY (id)
)
So I ran:
SELECT nextval('"Survey_symptom_binds_id_seq"'::regclass),
       MAX(id)
FROM public."Survey_symptom_binds";
The nextval result was less than MAX(id)!
To fix the problem:
SELECT setval('"Survey_symptom_binds_id_seq"', (SELECT MAX(id) FROM public."Survey_symptom_binds")+1);
Good luck, everyone!
I had the same problem, and it was because of the type of my relations. I had a property table related to both states and cities. At first I had a OneToOne relation from property to states, and the same for cities, and I got the same "duplicate key violates unique constraint" error. That meant I could only have one property related to one state and one city, which doesn't make sense, because a city can have multiple properties. So the problem was the relation: it should be ManyToOne - many properties to one city.
If your tables were created by an ORM middleware (like Hibernate or Entity Framework Core), the table names may start with a capital letter and must be double-quoted. Note also that CURRVAL only works after NEXTVAL has been called for that sequence in the current session.
SELECT setval('"Table_name_Id_seq"', (SELECT MAX("Id") FROM "Table_name") + 1)
WHERE
NOT EXISTS (
SELECT *
FROM (SELECT CURRVAL(PG_GET_SERIAL_SEQUENCE('"Table_name"', 'Id')) AS seq, MAX("Id") AS max_id
FROM "Table_name") AS seq_table
WHERE seq > max_id
)
Try this one-liner; it's just a suggestion to enhance @adamo's code (thanks a lot, adamo):
SELECT setval('tableName_columnName_seq', (SELECT MAX(columnName) FROM tableName));
For a programmatic solution in Django: based on Paolo Melchiorre's answer, I wrote this function to be called before any .save():
from django.db import connection

def setSqlCursor(db_table):
    # Resync the table's id sequence with MAX(id) before inserting
    sql = """SELECT pg_catalog.setval(pg_get_serial_sequence('""" + db_table + """', 'id'), MAX(id)) FROM """ + db_table + """;"""
    with connection.cursor() as cursor:
        cursor.execute(sql)
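Hypothetical usage (the Comment model is mine, purely for illustration):
from myapp.models import Comment  # hypothetical model

comment = Comment(some_column='hello')
setSqlCursor(Comment._meta.db_table)  # resync the sequence first
comment.save()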
I had a similar problem, but I solved it by removing all the foreign keys in my PostgreSQL database.
I am migrating data from MSSQL.
I created the database in PostgreSQL via npgsql generated migration. I moved the data across and now when the code tries to insert a value I am getting
'duplicate key value violates unique constraint'
Npgsql tries to insert a row with Id 1; however, the table already has Ids over a thousand.
Npgsql.EntityFrameworkCore.PostgreSQL is 2.2.3 (latest)
In my context builder, I have
modelBuilder.ForNpgsqlUseIdentityColumns();
In which direction should I dig to resolve such an issue?
The code runs fine if the database is empty and doesn't have any imported data
Thank you
The values inserted during the migration contained the primary key value, so the sequence behind the column was never incremented and stayed at 1. A normal insert - without specifying the PK value - calls the sequence and gets 1, which already exists in the table.
To fix it, you can bump the sequence to the current max value.
SELECT setval(
pg_get_serial_sequence('myschema.mytable','mycolumn'),
max(mycolumn))
FROM myschema.mytable;
If you already know the sequence name, you can shorten it to
SELECT setval('my_sequence_name', max(mycolumn))
FROM myschema.mytable;
I am puzzled by a weird Postgres problem I encounter in the trivial database shown below: If I first insert a tag and explicitly specify its ID and then try to insert another tag without passing an ID, then this second insert fails. If I try a third time (again without ID), the insert succeeds.
DROP DATABASE IF EXISTS mydb;
CREATE DATABASE mydb;
\c mydb
DROP SCHEMA public;
CREATE SCHEMA core;
CREATE TABLE core.tag
(
id serial PRIMARY KEY,
title text NOT NULL
);
-- this works: all columns specified explicitly
INSERT INTO core.tag(id, title) VALUES (1, 'known tag');
-- omitting the tag ID fails with
-- ERROR: duplicate key value violates unique constraint "tag_pkey"
-- DETAIL: Key (id)=(1) already exists.
INSERT INTO core.tag(title) VALUES ('unknown tag');
-- this works again ?!?
INSERT INTO core.tag(title) VALUES ('unknown tag');
The issue only seems to occur on a freshly created database, and once it happens, it does not seem to happen again. I have never come across anything like this - so far I have just inserted data with or without explicit IDs and, AFAICS, nothing ever failed like this...
Does anyone have an idea what's going on here ?!?
Environment: PostgreSQL 9.1.3 on Mac OSX 10.7.5
Of course this fails.
What happens?
When you create the table, a sequence is also created that generates the values for the ID column. The sequence starts with 1 but it is only used if you do not specify a value for the ID column.
Now when you run
INSERT INTO core.tag(id, title) VALUES (1, 'known tag');
you bypass Postgres' automatic assignment of the ID value; the sequence "stays" at 1.
Now when you run
INSERT INTO core.tag(title) VALUES ('unknown tag');
Postgres takes the next value from the sequence - which is 1. But that value already exists, so the insert fails. Having consumed that value, the sequence will hand out 2 next, so the subsequent insert without an explicit ID gets 2 and succeeds.
The solution is to either never include the ID column in your inserts, or - if you do - to request the ID from the sequence:
INSERT INTO core.tag(id, title) VALUES (nextval('core.tag_id_seq'), 'known tag');
When a serial column is created, it is automatically associated with a sequence named <table_name>_<column_name>_seq, created in the same schema as the table. That's the name used in the statement above, schema-qualified because the table lives in the core schema.
More details about how the serial "data type" works are in the manual: http://www.postgresql.org/docs/current/static/datatype-numeric.html#DATATYPE-SERIAL
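Alternatively (my addition, not part of the original answer), you can keep inserting explicit IDs and simply resync the sequence afterwards:
-- After inserting with explicit IDs, bump the sequence to the table's current max:
SELECT setval('core.tag_id_seq', (SELECT MAX(id) FROM core.tag));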