PostgreSQL unique constraint for any integer from two columns (or from array)

How to guarantee a uniqueness of any integer from the two columns / array?
Example: I create a table and insert one row in it:
CREATE TABLE mytest(a integer NOT NULL, b integer NOT NULL);
INSERT INTO mytest values (1,2);
What UNIQUE INDEX should I create to disallow adding any of the following values?
INSERT INTO mytest values (1,3); -- fails because 1 is already there
INSERT INTO mytest values (3,1); -- fails because 1 is already there
INSERT INTO mytest values (2,3); -- fails because 2 is already there
INSERT INTO mytest values (3,2); -- fails because 2 is already there
I can have array of two elements instead of two columns if it helps somehow.
Surely I can invent some workaround; the following come to mind:
create a separate table for all numbers, with a unique index on it, and add values to both tables in one transaction. If a number is not unique, it won't be added to the second table and the transaction fails;
add two rows instead of one, with an additional field for the id of the pair.
But I want to have one table and I need one row with two elements in it. Is that possible?

You can use an exclusion constraint on the table, together with the intarray extension, to quickly search for overlapping arrays:
CREATE EXTENSION intarray;
CREATE TABLE test (
    a int[],
    EXCLUDE USING gist (a gist__int_ops WITH &&)
);
INSERT INTO test values('{1,2}');
INSERT INTO test values('{2,3}');
>> ERROR: conflicting key value violates exclusion constraint "test_a_excl"
>> DETAIL: Key (a)=({2,3}) conflicts with existing key (a)=({1,2}).
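If you would rather keep the two integer columns from the original table, the same idea should also work with the exclusion constraint built over an array expression instead of an array column. A sketch (untested; exclusion constraints accept parenthesized expressions):
CREATE TABLE mytest (
    a integer NOT NULL,
    b integer NOT NULL,
    EXCLUDE USING gist ((ARRAY[a, b]) gist__int_ops WITH &&)
);
INSERT INTO mytest VALUES (1, 2); -- ok
INSERT INTO mytest VALUES (3, 1); -- rejected: 1 overlaps {1,2}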

Unique partial constraint in Postgres?

I have a table and want to update some of the rows like this:
CREATE TABLE abc(i INTEGER, deleted BOOLEAN);
CREATE UNIQUE INDEX myidx ON abc(i) WHERE NOT deleted;
INSERT INTO abc VALUES (4, false), (5, false); -- deleted must be false, or the partial index ignores the rows
UPDATE abc SET i = i - 1;
This works because of the order in which the UPDATE happens to process the rows. But when the UPDATE is attempted in the other direction, it fails:
UPDATE abc SET i = i + 1;
ERROR: 23505: duplicate key value violates unique constraint "myidx"
DETAIL: Key (i)=(4) already exists.
SCHEMA NAME: public
TABLE NAME: abc
CONSTRAINT NAME: myidx
LOCATION: _bt_check_unique, nbtinsert.c:534
Time: 0.472 ms
The reason for the error is that, in the middle of the update, two rows would momentarily have the value i = 4, even though at the end of the update all rows would have unique values again.
So I thought of changing the index into a deferred constraint, but according to the docs this is not possible, because my index is partial (it only enforces uniqueness on some rows):
A uniqueness restriction covering only some rows cannot be written as a unique constraint, but it is possible to enforce such a restriction by creating a unique partial index.
The docs say to use a partial index, but partial indexes can't be deferred, so I'm back to the original problem.
So far my workaround would be to set i = NULL whenever I mark deleted = true, so the row is no longer considered a duplicate by my constraint.
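For illustration, that workaround amounts to something like:
UPDATE abc SET deleted = true, i = NULL WHERE i = 4; -- the row drops out of the partial index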
Is there a better solution to this? Maybe a way to make the UPDATE always proceed in the direction I want?
Please note:
I cannot DELETE the row; that's why the deleted column is there. The actual delete is done only after some human validation happens.
Update:
The reason I'm bulk-updating that unique column is that this table contains a sequence number used in the UI for sorting the records (the users drag and drop the records as they wish). They can also delete records, so I shift the sequence numbers of the elements occurring after the one that was deleted.
The actual columns look more like this (name TEXT, description TEXT, ..., sequence NUMBER).
That sequence column is what I called i in the simplified case. So say I have 3 records with (name, sequence):
("Laptop", 1)
("Mobile", 2)
("Desktop", 3)
And if the user deletes the middle one, I want to end up with:
("Laptop", 1)
("Desktop", 2) // <--- updated here

Unique Identifier in multiple schemas

As the title suggests, I want to have a unique ID as a primary key, but across multiple schemas. I know about UUIDs, but they're just too costly.
Is there any way to achieve this with a serial?
You can create a global sequence and use that in your table instead of the automatic sequence that a serial column creates.
create schema global;
create schema s1;
create schema s2;
create sequence global.unique_id;
create table s1.t1
(
    id integer default nextval('global.unique_id') primary key
);
create table s2.t1
(
    id integer default nextval('global.unique_id') primary key
);
The difference from a serial column is that the sequence unique_id doesn't "know" it is used by the id columns. A serial's sequence is automatically dropped when the corresponding column (or table) is dropped, which is not what you want for a shared, global sequence.
There is one drawback, however: you can't prevent duplicate values across the two tables if ids are inserted manually. If you want to make sure the sequence is always used, you can create a trigger that always fetches the value from the sequence.
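A sketch of such a trigger for s1.t1 (names are illustrative; on PostgreSQL versions before 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION):
CREATE FUNCTION global.force_unique_id() RETURNS trigger AS $$
BEGIN
    -- override whatever id the client supplied with the next global value
    NEW.id := nextval('global.unique_id');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_force_id
    BEFORE INSERT ON s1.t1
    FOR EACH ROW EXECUTE FUNCTION global.force_unique_id();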

postgresql: how to create an exclude constraint on an integer array to prevent array value overlap?

When I add an exclude constraint on a table to prevent two rows from having the same value in an int[] column, I get this error message:
data type integer[] has no default operator class for access method "gist"
I have a table like this:
CREATE TABLE x (
    id SERIAL PRIMARY KEY,
    ref_id INT REFERENCES x,
    purchase_ids INT[],
    EXCLUDE USING GIST (purchase_ids WITH &&) WHERE (ref_id IS NULL)
);
My colleague figured it out by creating an extension:
CREATE EXTENSION IF NOT EXISTS intarray;
After creating this extension, I can add the exclude constraint to prevent two rows from sharing an element in the array.
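As a quick check of the expected behavior (assuming ref_id is left NULL, so the partial constraint applies):
INSERT INTO x (purchase_ids) VALUES ('{10,11}');
INSERT INTO x (purchase_ids) VALUES ('{11,12}'); -- should be rejected: 11 overlaps an existing array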

How to apply sequence order in trigger?

In Informix, I need to update two tables when the trigger is executed; let's say Table_A and Table_B. In Table_A, there is an int8 (long data type) column as primary key. When a new record is inserted, this primary key column retrieves its value from a sequence. This is the code:
sequence_A.nextVal
In Table_B, there is a foreign key column that references the primary key in Table_A. In order to make the primary key and the foreign key tally, I use sequence_A.currVal to insert the value into this foreign key column.
I did try the code below, but Informix gives me a syntax error:
create trigger The_Trigger
insert on The_Table referencing new as n
for each row
(
insert into Table_A(...) value(sequence_A.nextVal, ...)
insert into Table_B(...) value(sequence_A.currVal, ...)
)
If I separate the insert statements into two different triggers, it works. Thus I was thinking of creating two triggers on The_Table, say Trigger_A and Trigger_B. How can I ensure that Trigger_A executes first and only then Trigger_B? Can I specify the execution order of triggers? Can this be done, and how?
In your first attempt, you omitted the comma between the two INSERT statements, and the keyword is VALUES, not VALUE:
CREATE TRIGGER The_Trigger
INSERT ON The_Table REFERENCING NEW AS n
FOR EACH ROW
(
INSERT INTO Table_A(...) VALUES(sequence_A.nextVal, ...),
INSERT INTO Table_B(...) VALUES(sequence_A.currVal, ...)
)
With those two changes, I believe you would have sequential execution.
Given a sufficiently recent version of Informix, you can have multiple triggers for a single event on a single table. The sequence of execution is the sequence in which the triggers are defined.

How can I enforce a constraint only if a column is not null in Postgresql?

I would like to enforce a constraint only if a column is not null. I can't seem to find a way to do this in the documentation.
create table mytable(
    table_identifier_a INTEGER,
    table_identifier_b INTEGER,
    table_value1 INTEGER,
    ...)
Due to the nature of the data, I will have identifier b and a value when the row is created. Only after we receive additional data will I be able to populate identifier a. At that point I would like to ensure a unique key on (identifier_a, value1), but only if identifier_a exists.
Hopefully that makes sense. Anyone have any ideas?
Ummm. Unique constraints don't prevent multiple NULL values.
CREATE TABLE mytable (
    table_identifier_a INTEGER NULL,
    table_identifier_b INTEGER NOT NULL,
    table_value1 INTEGER NOT NULL,
    UNIQUE (table_identifier_a, table_identifier_b)
);
Note that we can insert multiple NULLs into it, even when identifier_b matches:
test=# INSERT INTO mytable values(NULL, 1, 2);
INSERT 0 1
test=# INSERT INTO mytable values(NULL, 1, 2);
INSERT 0 1
test=# select * from mytable;
 table_identifier_a | table_identifier_b | table_value1
--------------------+--------------------+--------------
                    |                  1 |            2
                    |                  1 |            2
(2 rows)
But we can't create duplicate (a,b) pairs:
test=# update mytable set table_identifier_a = 3;
ERROR: duplicate key value violates unique constraint "mytable_table_identifier_a_key"
Of course, you do have an issue: your table has no primary key. You probably have a data model problem. But you didn't provide enough details to fix that.
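That said, the requirement as literally stated (uniqueness enforced only when identifier_a is present) can be written directly as a partial unique index, the same device used in an earlier question above (index name illustrative):
CREATE UNIQUE INDEX mytable_a_value_idx
    ON mytable (table_identifier_a, table_value1)
    WHERE table_identifier_a IS NOT NULL;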
If it is feasible to complete the entire operation within one transaction, it is possible to change the time at which Postgres evaluates the constraint, i.e.:
BEGIN;
SET CONSTRAINTS <...> DEFERRED;
<SOME INSERT/UPDATE/DELETE>
COMMIT;
In this case, the constraint is evaluated at commit. See:
Postgres 7.4 Doc - Set constraints or Postgres 8.3 Doc
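Note that SET CONSTRAINTS only affects constraints that were declared deferrable, so the constraint has to be created accordingly; for example (constraint name illustrative):
ALTER TABLE mytable
    ADD CONSTRAINT mytable_a_b_key UNIQUE (table_identifier_a, table_identifier_b)
    DEFERRABLE INITIALLY IMMEDIATE;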
Actually, I would probably break this out into two tables, since you're modeling two different kinds of things: the first is the initial version, which is only partial, and the second is the complete thing. Once the information needed to bring the first kind up to the second arrives, move the row from one table to the other.
You could handle this using a trigger instead of a constraint.
If I were you, I'd split the table into two tables, and possibly create a view which combines them as needed.