Postgres constraint for one "active" case per department

I have a situation where a department has many cases, but only 1 case can be active.
In the cases table, there are department_id: bigint references departments.id and active: boolean columns. How can I set up a unique constraint so that a department could have 3 cases, but only 1 can ever be marked as active: TRUE?

You have 2 simple possibilities:
1. Make the boolean allow only two values, true and null, and then set a unique constraint on (department_id, active).
2. If you can't have only true and null, set another column (via a trigger or a generated column) to a true/null value based on the true/false/null column.
Alternatively, add a unique index over an expression:
create unique index "only one active per department" on cases
(department_id, (case active when true then true else null end));
Live example: https://onecompiler.com/postgresql/3y7fe9hye
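A quick sketch of what this expression index enforces, assuming the three-column cases table used in the demo below (error text paraphrased from typical Postgres output):
insert into cases values (1, true, 'a');  -- ok
insert into cases values (1, false, 'b'); -- ok, not active
insert into cases values (1, true, 'c');
-- fails: duplicate key value violates unique constraint "only one active per department"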
Example of a generated (calculated) column, for alternative 2:
select version();
create table cases(
  department_id integer,
  active boolean,
  value text
);
insert into cases values
(1, true, 'a'),
(1, false, 'b'),
(1, false, 'c'),
(2, true, 'd'),
(2, false, 'e'),
(2, null, 'f');
-- select * from cases;
alter table cases
add column active_true boolean GENERATED ALWAYS
AS (case active when true then true else null end) stored;
alter table cases
add unique (department_id, active_true);
-- ok:
insert into cases (department_id, active, value) values (1,false, 'g');
select * from cases;
-- fails:
insert into cases (department_id, active, value) values (1,true , 'h');
select * from cases;
This outputs:
version
-----------------------------------------------------------------------------------------------------------------------------
PostgreSQL 14.3 (Debian 14.3-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
(1 row)
CREATE TABLE
INSERT 0 6
ALTER TABLE
ALTER TABLE
INSERT 0 1
 department_id | active | value | active_true
---------------+--------+-------+-------------
             1 | t      | a     | t
             1 | f      | b     |
             1 | f      | c     |
             2 | t      | d     | t
             2 | f      | e     |
             2 |        | f     |
             1 | f      | g     |
(7 rows)
psql:commands.sql:32: ERROR: duplicate key value violates unique constraint "cases_department_id_active_true_key"
DETAIL: Key (department_id, active_true)=(1, t) already exists.
Live example: https://onecompiler.com/postgresql/3y7exvrvq

You can use a partial unique index:
create unique index on cases (department_id)
where active;
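A minimal sketch of the partial index in action; the index name below is the one Postgres auto-generates, and the error text is paraphrased from typical output:
insert into cases (department_id, active, value) values (3, true, 'x');  -- ok
insert into cases (department_id, active, value) values (3, false, 'y'); -- ok, not active
insert into cases (department_id, active, value) values (3, true, 'z');
-- fails: duplicate key value violates unique constraint "cases_department_id_idx"
Only rows where active is true enter the index, so any number of inactive (or null) cases per department remain allowed.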

Related

PostgreSQL: Validate specific value exists once in the column

I'm trying to enforce uniqueness so that for the same value in the APP column only one row with a specific value in the STATUS column can exist.
Example:
Each app can be "true" only once.
APP cannot be a PRIMARY KEY.
The DB should restrict adding an additional row with (app1, true) and permit (app1, false).
ID | APP  | STATUS
---+------+--------
 1 | app1 | false
 2 | app1 | true
 3 | app1 | false
 4 | app2 | false
 5 | app2 | false
 6 | app2 | false
 7 | app2 | true
Maybe a constraint or an index can help here?
Per @lemon's suggestion in the comments you could use a trigger, or as an alternative:
create table unique_test (
  id integer,
  app varchar,
  status boolean,
  unique (app, status)
);
insert into unique_test values (1, 'app1', null);
insert into unique_test values (2, 'app2', null);
insert into unique_test values (3, 'app1', 'true');
insert into unique_test values (4, 'app1', null);
insert into unique_test values (5, 'app1', 'true');
ERROR: duplicate key value violates unique constraint
"unique_test_app_status_key"
DETAIL: Key (app, status)=(app1, t) already exists.
insert into unique_test values (5, 'app1', null);
insert into unique_test values (6, 'app2', 'true');
select * from unique_test ;
 id | app  | status
----+------+--------
  1 | app1 | NULL
  2 | app2 | NULL
  3 | app1 | t
  4 | app1 | NULL
  5 | app1 | NULL
  6 | app2 | t
This takes advantage of the fact that UNIQUE sees NULLs as distinct from each other. FYI, in the upcoming Postgres 15 this behavior can be modified so that it is not so; see Unique and NULLs.
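For reference, a sketch of the Postgres 15 syntax that flips this behavior (assuming Postgres 15+; note that with it, two NULL statuses for the same app would also collide, so the null-based trick above would stop working):
create table unique_test_v15 (
  id integer,
  app varchar,
  status boolean,
  unique nulls not distinct (app, status)
);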

Why is my equals not working on 2 identical Strings

I'm using PostgreSQL 13 and I'm trying to fetch data from a table based on one of its columns.
Said table is defined as follows:
create table my_table (
  my_table_id int8 not null,
  value varchar(255) not null,
  another_table_id int8 not null,
  primary key (my_table_id)
);
create index my_table__lower_value__idx
  ON my_table USING btree (lower((value)::text));
Now, I'm running both queries:
first, to select a row with a where clause based on a value defined in another table (column my_table_id);
second, to select the same row in the same table based on a value defined in this table (column value).
The second query does not return any row.
See below:
db > select * from my_table where my_table_id = 1001;
 my_table_id | value  | another_table_id
-------------+--------+------------------
           1 | value1 |             1001
(1 row)
db > select * from my_table where lower(value) = lower('value1');
 my_table_id | value | another_table_id
-------------+-------+------------------
(0 rows)
Mind you, if I run this query with some other values, it works:
db > select * from my_table where my_table_id = 1002;
 my_table_id | value  | another_table_id
-------------+--------+------------------
           2 | value2 |             1002
(1 row)
db > select * from my_table where lower(value) = lower('value2');
 my_table_id | value  | another_table_id
-------------+--------+------------------
           2 | value2 |             1002
(1 row)
Why this difference?
What I've tried so far:
using select * from my_table where value in (select value from my_table where another_table_id = 1001); does not work
using lower() on each side of the equality: still not working in the first case
using the LIKE keyword: it works fine in both cases
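Not from the thread, but one more diagnostic along the same lines: equality can use the expression index, while LIKE typically cannot (without text_pattern_ops), so a row visible to LIKE but not to = hints at a corrupted index. A sketch of what to check and try:
explain select * from my_table where lower(value) = lower('value1');
-- if the plan shows an Index Scan on my_table__lower_value__idx,
-- rebuilding that index is one thing to try:
reindex index my_table__lower_value__idx;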

postgres: temporary column default that is unique and not nullable, without relying on a sequence?

Hi, I want to add a unique, non-nullable column to a table. It already has data. I would therefore like to instantly populate the new column with unique values, e.g. 'ABC123', 'ABC124', 'ABC125', etc. The data will eventually be wiped and replaced with proper data, so I don't want to introduce a sequence just to populate the default value.
Is it possible to generate a default value for the existing rows, based on something like row_number()? I realise the use case is ridiculous, but is it possible to achieve... if so, how?
...
foo text not null unique default 'ABC' || row_number() -- or something similar?
...
Can generate_series be applied?
select 'ABC' || generate_series(123,130)::text;
ABC123
ABC124
ABC125
ABC126
ABC127
ABC128
ABC129
ABC130
Variant 2: add the column UNIQUE and NOT NULL
begin;
alter table test_table add column foo text not null default 'ABC';
with s as (
  select id, (row_number() over (order by id))::text t
  from test_table
)
update test_table
set foo = foo || s.t
from s
where test_table.id = s.id;
alter table test_table add CONSTRAINT unique_foo1 UNIQUE (foo);
commit;
Results:
select * from test_table;
id | foo
----+------
1 | ABC1
2 | ABC2
3 | ABC3
4 | ABC4
5 | ABC5
6 | ABC6
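If unique-but-arbitrary values are acceptable, here is one more sketch (my addition, assuming PostgreSQL 13+, where gen_random_uuid() is built in): a volatile default is evaluated once per existing row when the column is added, so no sequence or follow-up UPDATE is needed:
alter table test_table
  add column foo text not null unique
  default 'ABC' || gen_random_uuid();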

Why is there a difference on UPDATE query results when a `UNIQUE INDEX` is involved?

I stumbled into Why would I get a duplicate key error when updating a row? so I tried a few things on https://extendsclass.com/postgresql-online.html.
Given the following schema:
create table scientist (id integer PRIMARY KEY, firstname varchar(100), lastname varchar(100));
insert into scientist (id, firstname, lastname) values (1, 'albert', 'einstein');
insert into scientist (id, firstname, lastname) values (2, 'isaac', 'newton');
insert into scientist (id, firstname, lastname) values (3, 'marie', 'curie');
select * from scientist;
CREATE UNIQUE INDEX fl_idx ON scientist(firstname, lastname);
when I run this query:
UPDATE scientist AS c SET
  firstname = new_values.F,
  lastname = new_values.L
FROM (
  SELECT * FROM
  UNNEST(
    ARRAY[1, 1]::numeric[],
    ARRAY['one', 'v']::text[],
    ARRAY['three', 'f']::text[]
  ) AS T(I, F, L)
) AS new_values
WHERE c.id = new_values.I
RETURNING c.id, c.firstname, c.lastname;
I get back:
id firstname lastname
1 one three
whereas if I don't create the index (CREATE UNIQUE INDEX fl_idx ON scientist(firstname, lastname);) I get:
id firstname lastname
1 v f
So I am not sure why the UNIQUE INDEX affects the result, and why there isn't a "duplicate key value violates unique constraint" exception (similar to what happens in the SO question I mentioned above, since id is a PRIMARY KEY) when I change my UNNEST to:
UNNEST(
ARRAY[1, 1]::numeric[],
ARRAY['one', 'one']::text[],
ARRAY['three', 'three']::text[]
)
The Postgres version I ran the above queries on was:
PostgreSQL 11.11 (Debian 11.11-0+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
From the documentation on UPDATE:
When using FROM you should ensure that the join produces at most one output row for each row to be modified. In other words, a target row shouldn't join to more than one row from the other table(s). If it does, then only one of the join rows will be used to update the target row, but which one will be used is not readily predictable.
In your case you have two matches for 1, so the choice is completely dependent on the order in which the rows are read.
Here is an example where the index is present in both runs and the results differ:
db<>fiddle demo 1
db<>fiddle demo 2
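Not part of the original answer, but a sketch of one way to make such an update deterministic: collapse the duplicates before joining, e.g. with DISTINCT ON and an explicit tie-breaker:
UPDATE scientist AS c SET
  firstname = new_values.F,
  lastname = new_values.L
FROM (
  SELECT DISTINCT ON (I) I, F, L
  FROM UNNEST(
    ARRAY[1, 1]::numeric[],
    ARRAY['one', 'v']::text[],
    ARRAY['three', 'f']::text[]
  ) AS T(I, F, L)
  ORDER BY I, F  -- the tie-breaker decides which duplicate wins
) AS new_values
WHERE c.id = new_values.I
RETURNING c.id, c.firstname, c.lastname;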
Do you know why I don't get the "duplicate key value violates unique constraint" error?
There is no duplicate key on either column id or the pair firstname/lastname after the update is performed.
Scenario 1:
+-----+------------+----------+
| id | firstname | lastname |
+-----+------------+----------+
| 2 | isaac | newton |
| 3 | marie | curie |
| 1 | v | f |
+-----+------------+----------+
Scenario 2:
+-----+------------+----------+
| id | firstname | lastname |
+-----+------------+----------+
| 2 | isaac | newton |
| 3 | marie | curie |
| 1 | one | three |
+-----+------------+----------+
EDIT:
Using "UPSERT" and trying to insert/update row twice:
INSERT INTO scientist (id,firstname, lastname)
VALUES (1, 'one', 'three'), (1, 'v', 'f')
ON CONFLICT (id)
DO UPDATE SET firstname = excluded.firstname;
-- ERROR: ON CONFLICT DO UPDATE command cannot affect row a second time

unique index violation during update

I have run into a unique index violation in a bigger db. The original problem occurs in a stored pl/pgsql function.
I have simplified everything to show my problem. I can reproduce it in a rather simple table:
CREATE TABLE public.test
(
  id integer NOT NULL DEFAULT nextval('test_id_seq'::regclass),
  pos integer,
  text text,
  CONSTRAINT text_pkey PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE public.test
  OWNER TO root;
GRANT ALL ON TABLE public.test TO root;
I define a unique index on 'pos':
CREATE UNIQUE INDEX test_idx_pos
ON public.test
USING btree
(pos);
Before the UPDATE the data in the table looks like this:
testdb=# SELECT * FROM test;
id | pos | text
----+-----+----------
2 | 1 | testpos1
3 | 2 | testpos2
1 | 5 | testpos4
4 | 4 | testpos3
(4 rows)
Now I want to decrement all 'pos' values by 1 that are bigger than 2, and I get an error (translated from German):
testdb=# UPDATE test SET pos = pos - 1 WHERE pos > 2;
ERROR: duplicate key value violates unique constraint "test_idx_pos"
DETAIL: Key (pos)=(4) already exists.
If the UPDATE had run complete the table would look like this and be unique again:
testdb=# SELECT * FROM test;
id | pos | text
----+-----+----------
2 | 1 | testpos1
3 | 2 | testpos2
1 | 4 | testpos4
4 | 3 | testpos3
(4 rows)
How can I avoid such a situation? I learned that stored pl/pgsql functions are embedded into transactions, so I thought this problem shouldn't appear.
Unique indexes are evaluated per row, not per statement (which differs from, e.g., Oracle's implementation).
The solution to this problem is to use a unique constraint, which can be deferred and thus is evaluated at the end of the transaction.
So instead of the unique index, define a constraint:
alter table test add constraint test_idx_pos unique (pos)
  deferrable initially deferred;
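With the deferred constraint, the per-row check goes away and the statement from the question succeeds as a whole; uniqueness is verified only at commit. A quick sketch (the explicit transaction is shown for clarity):
begin;
UPDATE test SET pos = pos - 1 WHERE pos > 2;  -- no error now
commit;  -- uniqueness of pos is checked here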