Conditional unique constraint not updating correctly - postgresql

I need to enforce uniqueness on a column, but only when another column is true. For example:
create temporary table test(id serial primary key, property character varying(50), value boolean);
insert into test(property, value) values ('a', false);
insert into test(property, value) values ('a', true);
insert into test(property, value) values ('a', false);
And I enforce the uniqueness with a conditional index:
create unique index on test(property) where value = true;
So far so good. The problem arises when I try to change which row has its value set to true. It works if I do:
update test
set value = new_value
from (select id, id = 3 as new_value from test where property = 'a') new_test
where test.id = new_test.id;
But it doesn't when I do:
update test
set value = new_value
from (select id, id = 1 as new_value from test where property = 'a') new_test
where test.id = new_test.id;
And I get:
ERROR: duplicate key value violates unique constraint "test_property_idx"
SQL state: 23505
DETAIL: Key (property)=(a) already exists.
Basically, it works only if the row being set to true has a bigger primary key than the row that is currently true. Any idea how to circumvent this?
Of course I could do:
update test set value = false where property='a';
update test set value = true where property = 'a' and id = 1;
However, I'm running these queries from node and it is preferable to run only one query.
I'm using Postgres 9.5

Your problem is that an UPDATE statement cannot have an ORDER BY clause in PostgreSQL (some other RDBMSs allow it), so you cannot control the order in which rows are updated, and the unique index is checked after each individual row change rather than at the end of the statement.
The usual solution is to make the constraint deferrable. But you are using a partial unique index, and indexes cannot be declared deferrable.
Use an exclusion constraint instead: exclusion constraints are a generalization of unique constraints, they can be partial, and they can be deferred.
ALTER TABLE test
ADD EXCLUDE (property WITH =) WHERE (value = true)
DEFERRABLE INITIALLY DEFERRED;
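With the constraint deferred, the uniqueness check is postponed until commit, so the single-statement UPDATE from the question should no longer fail mid-statement. A sketch, assuming the table and data from the question:

```sql
-- Inside a transaction, the deferred EXCLUDE constraint is not checked
-- row by row; both rows can change before the check runs.
BEGIN;
UPDATE test
SET value = new_value
FROM (SELECT id, id = 1 AS new_value FROM test WHERE property = 'a') new_test
WHERE test.id = new_test.id;
COMMIT;  -- the exclusion constraint is verified here, when only one row is true
```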

Related

Postgres ON CONFLICT DO UPDATE doesn't trigger if other constraints fail

Consider the following SQL:
CREATE TABLE external_item (
id SERIAL PRIMARY KEY,
external_id TEXT UNIQUE,
enabled BOOLEAN NOT NULL CHECK (enabled = false OR external_id IS NOT NULL)
);
INSERT INTO external_item (id, enabled, external_id)
VALUES (1, true, 'ext_id_1');
INSERT INTO external_item (id, enabled)
VALUES (1, true)
ON CONFLICT (id)
DO UPDATE
SET enabled = excluded.enabled;
--> ERROR: new row for relation "external_item" violates check constraint "external_item_check"
--> DETAIL: Failing row contains (1, null, t).
The query fails because the inserted row doesn't pass the check constraint. Everything works fine if you omit the CHECK from the table definition. Is there some way to set constraint priority, or something similar, so that Postgres would fall back to the ON CONFLICT DO UPDATE branch before asserting the other checks?

Want Not Null and Default to False if Supplied value is NULL

I want to achieve this:
column bool is not null
when the supplied value is null, it is filled in with the default value false
I thought this would do it:
create table public.testnull
(
xid integer not null, bool boolean not null default false
)
Testing it gives an error:
insert into public.testnull values(2, null)
ERROR: null value in column "bool" violates not-null constraint
DETAIL: Failing row contains (2, null).
SQL state: 23502
This definition will run, but it won't use the default. Please don't tell me to use a trigger.
CREATE TABLE public.testnull
(
xid integer NOT NULL, bool boolean DEFAULT false
)
You need to use the DEFAULT keyword instead of NULL in your INSERT statement.
From the docs:
DEFAULT: The corresponding column will be filled with its default value. An identity column will be filled with a new value generated by the associated sequence. For a generated column, specifying this is permitted but merely specifies the normal behavior of computing the column from its generation expression.
Also, always explicitly specify column names when using INSERT.
Speaking from decades of experience: unless you're using an ORM, it's impossible to keep your CREATE TABLE definitions and INSERT statements in sync; eventually you'll add a new column or alter an existing column somewhere the INSERT statements aren't expecting, and everything will break.
INSERT INTO table ( xid, bool ) VALUES ( 2, DEFAULT )
Please don't tell me to use trigger.
However, if you want to change the NULL into DEFAULT or FALSE in a statement like this: INSERT INTO table ( xid, bool ) VALUES ( 2, NULL ) then you have to use a TRIGGER. There's no real way around that.
(You could use a VIEW with a custom INSERT handler, of course, but that's the same thing as creating a trigger).
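For completeness, a minimal sketch of the trigger approach (function and trigger names are illustrative): rewrite a supplied NULL to the default before the NOT NULL check fires.

```sql
-- BEFORE trigger: replace a NULL bool with the default value false
CREATE FUNCTION testnull_default_bool() RETURNS trigger AS $$
BEGIN
    IF NEW.bool IS NULL THEN
        NEW.bool := false;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_testnull_default_bool
BEFORE INSERT OR UPDATE ON public.testnull
FOR EACH ROW EXECUTE PROCEDURE testnull_default_bool();
```

After this, INSERT INTO public.testnull VALUES (2, NULL) stores false instead of raising the not-null violation.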

How to convert an existing column to a foreign key that allows null values - PostgreSQL

I have a Postgres database with table t and column fk_c. I want to convert the column to a foreign key that references c_id in table lookup_c and allows null values. How can I do this?
ALTER TABLE public.t ADD CONSTRAINT "fk_t_c" FOREIGN KEY ("fk_c" ) REFERENCES "public"."lookup_c" ("c_id");
does not work because there are rows with null values in column fk_c, and I get:
ERROR: insert or update on table "t" violates foreign key constraint "fk_t_c"
DETAIL: Key (fk_c)=() is not present in table "lookup_c".
The error message indicates that you have empty strings in that column, not null values.
You need to set those values to null before creating the foreign key:
update t
set fk_c = null
where trim(fk_c) = '';
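After the cleanup, the original ALTER TABLE succeeds; foreign keys permit NULL values by default, so rows where fk_c IS NULL are simply not checked against lookup_c:

```sql
-- NULLs in fk_c are allowed by the foreign key; only non-null values
-- must exist in lookup_c.c_id
ALTER TABLE public.t
  ADD CONSTRAINT "fk_t_c" FOREIGN KEY ("fk_c")
  REFERENCES "public"."lookup_c" ("c_id");
```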

Postgres Insert if not exists, Update if exists on non-unique column?

I am currently using ON CONFLICT SET to update if there's a duplicate value on a unique column.
INSERT INTO "table" (col1, col2) VALUES ('v1', 'v2')
ON CONFLICT (col1)
DO UPDATE SET
col2 = 'v3'
From the example, col1 is a unique field. But how do I do this if col1 is not unique?
I tried without the unique constraint now I'm getting:
Invalid column reference: 7 ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
By definition you cannot have a conflict on a non-unique column. But since you don't want duplicates, just make the column unique:
alter table "table" add constraint col1_uk unique(col1);
I ended up using 2 successive queries: (1) try the update first; if no rows were updated, (2) try the insert.
In PHP:
// Try the update first
$return = execute("UPDATE table SET ... WHERE ... col1 = 'unique_value' ...");
// If the update affected no rows, insert
if (!$return) {
    execute("
        INSERT INTO table (...)
        SELECT ...values...
        WHERE NOT EXISTS (SELECT 1 FROM table WHERE ... col1 = 'unique_value' ...)
    ");
}
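The two round trips can also be collapsed into one statement with a data-modifying CTE. A hedged sketch, using the column values from the question; note that without a unique constraint this is still not safe against concurrent inserts of the same value:

```sql
-- Try the UPDATE in a writable CTE; INSERT only if it matched no rows
WITH updated AS (
    UPDATE "table"
    SET col2 = 'v3'
    WHERE col1 = 'v1'
    RETURNING 1
)
INSERT INTO "table" (col1, col2)
SELECT 'v1', 'v2'
WHERE NOT EXISTS (SELECT 1 FROM updated);
```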

PostgreSQL partial unique index and upsert

I'm trying to do an upsert into a table that has partial unique indexes:
create table test (
p text not null,
q text,
r text,
txt text,
unique(p,q,r)
);
create unique index test_p_idx on test(p) where q is null and r is null;
create unique index test_pq_idx on test(p, q) where r IS NULL;
create unique index test_pr_idx on test(p, r) where q is NULL;
In plain terms, p is not null and only one of q or r can be null.
Duplicate inserts throw constraint violations as expected
insert into test(p,q,r,txt) values ('p',null,null,'a'); -- violates test_p_idx
insert into test(p,q,r,txt) values ('p','q',null,'b'); -- violates test_pq_idx
insert into test(p,q,r,txt) values ('p',null, 'r','c'); -- violates test_pr_idx
However, when I try to use the unique constraint for an upsert:
insert into test as u (p,q,r,txt) values ('p',null,'r','d')
on conflict (p, q, r) do update
set txt = excluded.txt
it still throws the constraint violation
ERROR: duplicate key value violates unique constraint "test_pr_idx"
DETAIL: Key (p, r)=(p, r) already exists.
But I'd expect the on conflict clause to catch it and do the update.
What am I doing wrong? Should I be using an index_predicate?
index_predicate
Used to allow inference of partial unique indexes. Any
indexes that satisfy the predicate (which need not actually be partial
indexes) can be inferred. Follows CREATE INDEX format.
https://www.postgresql.org/docs/9.5/static/sql-insert.html
I don't think it's possible to use multiple partial indexes as a conflict target. You should try to achieve the desired behaviour using a single index. The only way I can see is to use a unique index on expressions:
drop table if exists test;
create table test (
p text not null,
q text,
r text,
txt text
);
create unique index test_unique_idx on test (p, coalesce(q, ''), coalesce(r, ''));
Now all three tests (executed twice) violate the same index:
insert into test(p,q,r,txt) values ('p',null,null,'a'); -- violates test_unique_idx
insert into test(p,q,r,txt) values ('p','q',null,'b'); -- violates test_unique_idx
insert into test(p,q,r,txt) values ('p',null, 'r','c'); -- violates test_unique_idx
In the insert command you should pass the expressions used in the index definition:
insert into test as u (p,q,r,txt)
values ('p',null,'r','d')
on conflict (p, coalesce(q, ''), coalesce(r, '')) do update
set txt = excluded.txt;