I have a Postgres 10.6 table with a serial ID column.
When I attempt to insert into it:
INSERT INTO table (col1, col2) VALUES ('foo', 'bar');
excluding the ID column from the column list, I get:
ERROR: duplicate key value violates unique constraint "customer_invoice_pkey"
Detail: Key (id)=(1234) already exists.
Subsequent runs of the query increment the ID in the error message (1235, 1236, etc.).
How can this be happening?
Having a serial column does not prevent you from inserting rows with an explicit value for id. The sequence value is only a default value that is used when id is not specified in the INSERT statement.
So there must have been some “rogue” inserts of that kind. From PostgreSQL v11 on, you can use identity columns (GENERATED ALWAYS AS IDENTITY) to make overriding the sequence value harder.
You could use the setval function to set the sequence to a value higher than the maximum id in the table to work around the problem.
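For example, something like the following resynchronizes the sequence (a sketch; the table name `customer_invoice` is inferred from the constraint name in the error message, so adjust it to your actual table):

```sql
-- Set the sequence to the current maximum id so the next
-- generated default value does not collide with existing rows.
-- pg_get_serial_sequence looks up the sequence owned by the column.
SELECT setval(
    pg_get_serial_sequence('customer_invoice', 'id'),
    (SELECT max(id) FROM customer_invoice)
);
```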
Postgres 12:
CREATE TABLE l_table (
id INT generated always as identity,
w_id int NOT null references w_table(id),
primary key (w_id, id)
)PARTITION BY LIST (w_id);
CREATE table l1 PARTITION OF l_table FOR VALUES IN (1);
insert into l1 (w_id) values (1);
I'm getting:
ERROR: null value in column "id" violates not-null constraint
If I replace INT GENERATED ALWAYS AS IDENTITY with SERIAL it works. This is odd, since in another table GENERATED ALWAYS AS IDENTITY works fine without an explicit value. Using DEFAULT as the value does not work either.
GENERATED ALWAYS AS IDENTITY is supposed to be the SQL-standard replacement for SERIAL; it's even the recommended one. What am I missing here?
Thanks.
What am I missing here?
You're trying to insert into the partition l1 directly, instead of into the partitioned parent l_table. This bypasses the identity column defined on the parent table, tries to insert the default NULL, and fails the NOT NULL constraint that every identity column carries. If you instead do
insert into l_table (w_id) values (1);
it will work and route the inserted row into the right partition.
Using default as value does not work either.
Apparently that is quite hard to do. How to DEFAULT Partitioned Identity Column? over at dba.SE discusses some workarounds.
I have a table with 2 columns:
channels TEXT
rowid INTEGER PRIMARY KEY
I included an index on channels
CREATE UNIQUE INDEX channels_index on mytable (lower(channels))
so that VisitToronto will be a conflict with visittoronto
All works well and the conflict fires.
ERROR: duplicate key value violates unique constraint "channels_index"
DETAIL: Key (lower(channels))=(visittoronto) already exists.
I cannot figure out the syntax to trap this conflict. ON CONFLICT (channels) doesn't work; ON CONFLICT ON CONSTRAINT channels_index doesn't work either.
The closest I've got is:
ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
Any direction would be appreciated.
TIA
Use the index expression, i.e. lower(channels):
insert into my_table (channels) values
('VisitToronto');
insert into my_table (channels)
values ('visittoronto')
on conflict (lower(channels)) do
update set channels = excluded.channels;
select *
from my_table;
id | channels
----+--------------
1 | visittoronto
(1 row)
You are not able to use a constraint because the index is built on an expression. In this case Postgres cannot create a constraint from the index:
alter table my_table add constraint channels_unique unique using index channels_index;
ERROR: index "channels_index" contains expressions
LINE 1: alter table my_table add constraint channels_unique unique u...
^
DETAIL: Cannot create a primary key or unique constraint using such an index.
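If you want a real constraint so that ON CONFLICT ON CONSTRAINT becomes usable, one workaround is to store the lowercased value in a stored generated column and put the unique constraint on that. A sketch, assuming Postgres 12 or later (the table and constraint names here are hypothetical):

```sql
-- A stored generated column holds the lowercased text,
-- and a plain unique constraint (not an expression index) covers it.
CREATE TABLE my_table2 (
    id             serial PRIMARY KEY,
    channels       text,
    channels_lower text GENERATED ALWAYS AS (lower(channels)) STORED,
    CONSTRAINT channels_unique UNIQUE (channels_lower)
);

INSERT INTO my_table2 (channels) VALUES ('VisitToronto');

-- The conflict target can now name the constraint directly.
INSERT INTO my_table2 (channels) VALUES ('visittoronto')
ON CONFLICT ON CONSTRAINT channels_unique
DO UPDATE SET channels = excluded.channels;
```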
I have a table being populated during an ETL routine a column at a time.
The mandatory columns (which are foreign keys) are set first and at once, so the initial state of the table is:
key | fkey | a
-------|--------|-------
1 | 1 | null
After processing the A values, I insert them using SQLAlchemy with the PostgreSQL dialect for a simple upsert:
upsert = sqlalchemy.sql.text("""
INSERT INTO table
(key, a)
VALUES (:key, :a)
ON CONFLICT (key) DO UPDATE SET
a = EXCLUDED.a
""")
But this fails because it apparently tried to insert the fkey value as null.
psycopg2.IntegrityError: null value in column "fkey" violates not-null constraint
DETAIL: Failing row contains (1, null, 0).
Is the syntax really correct? Why is it failing? Does SQLAlchemy play any part in this error, or is it passing the SQL through correctly?
My suspicion is that the constraint checks happen before the CONFLICT resolution triggers, so although it would actually work because fkey is guaranteed to be not null before and won't be overwritten, the constraint check only looks at the tentative insertion and the table constraints.
This is a current documented limitation of PostgreSQL, an area where it breaks the spec.
Currently, only UNIQUE, PRIMARY KEY, REFERENCES (foreign key), and EXCLUDE constraints are affected by this setting. NOT NULL and CHECK constraints are always checked immediately when a row is inserted or modified (not at the end of the statement). Uniqueness and exclusion constraints that have not been declared DEFERRABLE are also checked immediately.
You can't defer the NOT NULL constraint, and it seems you understand the default behavior, seen here.
CREATE TABLE foo ( a int NOT NULL, b int UNIQUE, c int );
INSERT INTO foo (a,b,c) VALUES (1,2,3);
INSERT INTO foo (b,c) VALUES (2,3);
ERROR: null value in column "a" violates not-null constraint
DETAIL: Failing row contains (null, 2, 3).
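The practical consequence for the upsert above: the tentative row itself must satisfy NOT NULL, so the INSERT has to supply fkey even though the UPDATE branch never touches it. A sketch (`:fkey` is a placeholder bind parameter, assuming the caller has the value at hand):

```sql
-- Supply fkey in the tentative row so the NOT NULL check passes;
-- on conflict, only column a is actually updated.
INSERT INTO table (key, fkey, a)
VALUES (:key, :fkey, :a)
ON CONFLICT (key) DO UPDATE SET
    a = EXCLUDED.a;
```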
I have a table
CREATE TABLE users (
id BIGSERIAL NOT NULL PRIMARY KEY,
created_at TIMESTAMP DEFAULT NOW()
);
First I run
INSERT INTO users (id) VALUES (1);
After I run
INSERT INTO users (created_at) VALUES ('2016-11-10T09:37:59+00:00');
And I get
ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(1) already exists.
Why isn't the id sequence incremented when I insert the id myself?
That is because the DEFAULT clause only gets evaluated if you either omit the column from the INSERT column list or insert the special value DEFAULT.
In your first INSERT, the DEFAULT clause is not evaluated, so the sequence is not increased. Your second INSERT uses the DEFAULT clause, the sequence is increased and returns the value 1, which collides with the value explicitly given in the previous INSERT.
Don't mix INSERTs that rely on automatic value creation from the sequence with INSERTs that specify the column explicitly. Or if you have to, you should make sure that the values cannot collide, e.g. by using even numbers for automatically generated values and odd numbers for explicit INSERTs.
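The even/odd split could be sketched like this (a hypothetical setup, not from the original schema):

```sql
-- Sequence that yields only even numbers for automatic ids;
-- explicit inserts use odd ids and can never collide with it.
CREATE SEQUENCE users_id_seq2 START 2 INCREMENT 2;

CREATE TABLE users2 (
    id         bigint NOT NULL DEFAULT nextval('users_id_seq2') PRIMARY KEY,
    created_at timestamp DEFAULT now()
);

INSERT INTO users2 (id) VALUES (1);              -- explicit odd id
INSERT INTO users2 (created_at) VALUES (now());  -- gets id 2 from the sequence
```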
I create table so:
CREATE TABLE mytable(
name CHARACTER VARYING CONSTRAINT exact_11char CHECK( CHAR_LENGTH(name) = 11 ) ,
age INTEGER
)
Then add id PRIMARY KEY column
ALTER TABLE mytable ADD COLUMN id BIGSERIAL PRIMARY KEY
Then, when I try to insert data into the name column whose character length isn't 11, the CHECK constraint raises an error.
That's fine, but the id column's sequence is also incremented on each failed attempt.
How can I prevent failed (constraint-violating) inserts from incrementing the auto-incremented column?
PostgreSQL version: 9.2
Sequence operations are non-transactional, so there is no simple way in PostgreSQL to stop the sequence from incrementing when the corresponding insert fails.
See the link below on creating gapless sequences.
Gapless Sequences for Primary Keys
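The linked approach boils down to a counter table updated in the same transaction as the insert, so a rolled-back insert also rolls the counter back. A minimal sketch (hypothetical names; note the row lock serializes concurrent inserts, which is the usual price of gapless ids):

```sql
-- Counter table: one row per gapless sequence.
CREATE TABLE id_counter (name text PRIMARY KEY, value bigint NOT NULL);
INSERT INTO id_counter VALUES ('mytable_id', 0);

-- Take the next value and insert in one transaction; if the
-- CHECK constraint on name fails, the counter update rolls back too.
BEGIN;
UPDATE id_counter SET value = value + 1
WHERE name = 'mytable_id';
INSERT INTO mytable (id, name, age)
SELECT value, 'elevenchars', 30 FROM id_counter
WHERE name = 'mytable_id';
COMMIT;
```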