ConstraintViolationException while running an update using JPQL - postgresql

I ran into an issue with this update query:
UPDATE RankingPosition rp SET rp.position = rp.position + 1 WHERE rp.ranking = :ranking AND rp.position >= :position
I have specified a unique constraint on the position column. The query violates the unique constraint.
I am wondering what I can do to get around this. The final state after the UPDATE clearly would not violate the unique constraint; unfortunately, it looks like the constraint is checked after each individual row update, not after the whole update statement has executed.
Is there any way I can overcome this issue?
I already verified that the query works when I remove the unique constraint, and yes, the positions are unique afterwards.
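In PostgreSQL, this per-row checking can be avoided by making the constraint deferrable, so it is checked at the end of the statement or transaction instead of after every row. A minimal sketch, assuming the entity maps to a ranking_position table (the table, column, and constraint names here are guesses):

ALTER TABLE ranking_position
    DROP CONSTRAINT IF EXISTS uq_ranking_position,
    ADD CONSTRAINT uq_ranking_position
        UNIQUE (ranking_id, position) DEFERRABLE INITIALLY DEFERRED;

With INITIALLY DEFERRED, the uniqueness check runs at commit time, after the whole UPDATE has finished.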

Related

Is it possible to access current column data on conflict

I want the following behaviour when inserting data (conflict on id):
if there is no row with the same id in the db, do INSERT
if there is a row with the same id in the db and it is newer (updated_at field), do NOT UPDATE
if there is a row with the same id in the db and it is older (updated_at field), do UPDATE
I'm using Ecto for this and want to rely on constraints, but I cannot find an option to do so in the documentation. Pseudocode for the constraint could look like:
CHECK: NULL(current.updated_at) or incoming.updated_at > current.updated_at
Is such behaviour possible in Postgres?
PostgreSQL does not support CHECK constraints that reference table
data other than the new or updated row being checked. While a CHECK
constraint that violates this rule may appear to work in simple tests,
it cannot guarantee that the database will not reach a state in which
the constraint condition is false (due to subsequent changes of the
other row(s) involved). This would cause a database dump and reload to
fail. The reload could fail even when the complete database state is
consistent with the constraint, due to rows not being loaded in an
order that will satisfy the constraint. If possible, use UNIQUE,
EXCLUDE, or FOREIGN KEY constraints to express cross-row and
cross-table restrictions.
If what you desire is a one-time check against other rows at row
insertion, rather than a continuously-maintained consistency
guarantee, a custom trigger can be used to implement that. (This
approach avoids the dump/reload problem because pg_dump does not
reinstall triggers until after reloading data, so that the check will
not be enforced during a dump/reload.)
That should be simple using the WHERE clause of ON CONFLICT ... DO UPDATE:
INSERT INTO mytable (id, entry) VALUES (42, '2021-05-29 12:00:00')
ON CONFLICT (id)
DO UPDATE SET entry = EXCLUDED.entry
WHERE mytable.entry < EXCLUDED.entry;
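Adapted to the schema described in the question, with an updated_at column (the table and column names below are assumptions), the same pattern looks like this:

-- Hypothetical table matching the question's description
CREATE TABLE models (
    id         integer PRIMARY KEY,
    payload    text,
    updated_at timestamptz NOT NULL  -- assumed NOT NULL, so the NULL case never arises
);

INSERT INTO models (id, payload, updated_at)
VALUES (42, 'new data', now())
ON CONFLICT (id) DO UPDATE
SET payload    = EXCLUDED.payload,
    updated_at = EXCLUDED.updated_at
WHERE models.updated_at < EXCLUDED.updated_at;  -- skip the update if the stored row is newer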

Unique partial constraint in Postgres?

I have a table and want to update some of the rows like this:
CREATE TABLE abc(i INTEGER, deleted BOOLEAN);
CREATE UNIQUE INDEX myidx ON abc(i) WHERE NOT deleted;
INSERT INTO abc VALUES (4, false), (5, false);
UPDATE abc SET i = i - 1;
This works ok because of the order in which the UPDATE processes the rows, but when the UPDATE is attempted like this, it fails:
UPDATE abc SET i = i + 1;
ERROR: 23505: duplicate key value violates unique constraint "myidx"
DETAIL: Key (i)=(4) already exists.
SCHEMA NAME: public
TABLE NAME: abc
CONSTRAINT NAME: myidx
LOCATION: _bt_check_unique, nbtinsert.c:534
Time: 0.472 ms
The reason for the error is that, in the middle of the update, two rows would have had the value i = 4, even though at the end of the update all rows would have had unique values.
So I thought of changing the index into a deferrable constraint, but according to the docs this is not possible, as my index is partial (it only enforces uniqueness on some rows):
A uniqueness restriction covering only some rows cannot be written as a unique constraint, but it is possible to enforce such a restriction by creating a unique partial index.
The docs say to use partial indexes, but those can't be deferred, so I go back to the original problem.
So far my solution would be to set i = NULL whenever I mark deleted = true, so the row is no longer considered a duplicate by my index.
Is there a better solution to this? Maybe a way to make the UPDATE go always in the direction I want?
Please note:
I cannot DELETE the row, that's why the deleted column is there. The actual delete is done after some human validation happens.
Update:
The reason I'm bulk-updating that unique column is that this table contains a sequence used in the UI for sorting the records (users drag and drop the records as they wish). They can also delete records, in which case I shift the sequence of the elements occurring after the deleted one.
The actual columns look more like this (name TEXT, description TEXT, ..., sequence NUMBER).
That sequence column is what I called i in the simplified case. So say I have 3 records with (name, sequence):
("Laptop", 1)
("Mobile", 2)
("Desktop", 3)
And if the user deletes the middle one, I want to end up with:
("Laptop", 1)
("Desktop", 2) // <--- updated here

Does adding a new constraint in PostgreSQL check the rows added before?

Let's suppose I have a table called Clients(ID, Name, Phone) which has several rows in it, some of which have an empty (NULL) Phone column.
If I decide to add a NOT NULL constraint on the Phone column, will PostgreSQL check the rows that are already in the table, or will the constraint only apply to rows added after it is declared?
I think the documentation is pretty clear:
SET/DROP NOT NULL
These forms change whether a column is marked to allow null values or
to reject null values. You can only use SET NOT NULL when the column
contains no null values.
So, using this form, you cannot add such a constraint without checking the previous values.
If you use ADD table_constraint, then you can do the same thing using a CHECK constraint:
ADD table_constraint [ NOT VALID ]
This form adds a new constraint to a table using the same syntax as
CREATE TABLE, plus the option NOT VALID, which is currently only
allowed for foreign key and CHECK constraints. If the constraint is
marked NOT VALID, the potentially-lengthy initial check to verify that
all rows in the table satisfy the constraint is skipped. The
constraint will still be enforced against subsequent inserts or
updates (that is, they'll fail unless there is a matching row in the
referenced table, in the case of foreign keys; and they'll fail unless
the new row matches the specified check constraints). But the database
will not assume that the constraint holds for all rows in the table,
until it is validated by using the VALIDATE CONSTRAINT option.
So, you cannot skip the check of existing rows when adding a NOT NULL constraint with ALTER TABLE, but you can do essentially the same thing using CHECK: you bypass the checking with NOT VALID. Otherwise, the checking takes place.
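A sketch of both variants against the Clients table from the question (the constraint name is made up):

-- SET NOT NULL validates existing rows immediately and fails if any Phone IS NULL:
ALTER TABLE Clients ALTER COLUMN Phone SET NOT NULL;

-- A CHECK constraint added NOT VALID skips the scan of existing rows;
-- it is enforced only for rows inserted or updated from now on:
ALTER TABLE Clients
    ADD CONSTRAINT clients_phone_not_null CHECK (Phone IS NOT NULL) NOT VALID;

-- Later, once the old rows have been cleaned up:
ALTER TABLE Clients VALIDATE CONSTRAINT clients_phone_not_null;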

PostgreSQL multi-column unique constraint causes errors

I am new to PostgreSQL and have a question about multi-column unique constraints.
I got this error when I tried to add rows to the table:
ERROR: duplicate key value violates unique constraint "i_rb_on"
DETAIL: Key (a_fk, b_fk)=(296, 16) already exists.
I used this code (short version):
INSERT INTO rb_on (a_fk, b_fk)
SELECT a.pk, b.pk
FROM A, B
WHERE NOT EXISTS (SELECT * FROM rb_on WHERE a_fk = a.pk AND b_fk = b.pk);
i_rb_on is a unique constraint on the columns (a_fk, b_fk).
It seems that my WHERE NOT EXISTS doesn't provide protection against the duplicate key error for this kind of unique key.
UPDATE:
INSERT INTO tabA (mark_done, log_time, tabB_fk, tabC_fk)
SELECT FALSE, '2003-09-02 04:05:06', tabB.pk, tabC.pk FROM tabB, tabC, tabD, tabE, tabF
WHERE (tabC.sf_id='SUMMER' AND tabC.sf_status IN(0,1)
AND tabE.inventory_status=0)
AND tabF.tabD_fk=tabD.pk
AND tabD.tabE_fk=tabE.pk
AND tabE.tabB_fk=tabB.pk
AND tabF.tabC_fk=tabC.pk
AND NOT EXISTS (SELECT *
FROM tabA
WHERE tabB_fk=tabB.pk AND tabC_fk=tabC.pk);
The unique index on tabA:
CREATE UNIQUE INDEX i_tabA
ON tabA
USING btree
(tabB_fk, tabC_fk);
Only one row (of the many candidates) should be inserted into tabA.
Your WHERE NOT EXISTS never provides proper protection against a unique violation; it only seems to most of the time. The NOT EXISTS check can run concurrently with another insert, so the same row can still be inserted multiple times, and all but one of the inserts will cause a unique violation.
For that reason it's often better to just run the insert and let the violation happen if the row already exists.
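On PostgreSQL 9.5 and later, that advice can be followed without any explicit error handling; a sketch against the rb_on table from the question:

-- ON CONFLICT DO NOTHING silently skips every row that would violate the
-- unique index on (a_fk, b_fk), including duplicates produced by this
-- statement itself, which a NOT EXISTS subquery cannot see.
INSERT INTO rb_on (a_fk, b_fk)
SELECT a.pk, b.pk
FROM A, B
ON CONFLICT (a_fk, b_fk) DO NOTHING;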
I can't help you with the exact problem described unless you show the data (as SQL CREATE TABLE and INSERTs) and the real query.
BTW, please don't use old-style A, B joins. Use A INNER JOIN B ON (...). It makes it easier to tell which join conditions are supposed to apply to which parts of the query, and harder to forget a join condition. It looks like you may have done exactly that here: you appear to be inserting a Cartesian product. I suspect it's just an editing mistake in the query.
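For reference, here is the query from the update rewritten with explicit joins (the conditions are unchanged, just moved into ON clauses), which makes it easier to check that every table is linked:

INSERT INTO tabA (mark_done, log_time, tabB_fk, tabC_fk)
SELECT FALSE, '2003-09-02 04:05:06', tabB.pk, tabC.pk
FROM tabF
INNER JOIN tabD ON tabF.tabD_fk = tabD.pk
INNER JOIN tabE ON tabD.tabE_fk = tabE.pk
INNER JOIN tabB ON tabE.tabB_fk = tabB.pk
INNER JOIN tabC ON tabF.tabC_fk = tabC.pk
WHERE tabC.sf_id = 'SUMMER'
  AND tabC.sf_status IN (0, 1)
  AND tabE.inventory_status = 0
  AND NOT EXISTS (SELECT 1 FROM tabA
                  WHERE tabB_fk = tabB.pk AND tabC_fk = tabC.pk);

Note that even with explicit joins, the same (tabB.pk, tabC.pk) pair can appear more than once if several tabF rows link the same pair, and the NOT EXISTS subquery cannot see rows inserted earlier by the same statement, so the unique index can still be violated.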
I added LIMIT 1 to the end: ...WHERE tabB_fk=tabB.pk AND tabC_fk=tabC.pk) LIMIT 1;
and it did the trick.
I created a function with LIMIT 1 and ...EXCEPTION WHEN unique_violation THEN ... and it also worked.
But when LIMIT 1 and NOT EXISTS are used together, I think the unique_violation error handling is not necessary.
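The function described would have roughly this shape (the function and parameter names are hypothetical); the EXCEPTION handler swallows the duplicate-key error:

CREATE OR REPLACE FUNCTION insert_taba(p_b integer, p_c integer) RETURNS void AS $$
BEGIN
    INSERT INTO tabA (mark_done, log_time, tabB_fk, tabC_fk)
    VALUES (FALSE, now(), p_b, p_c);
EXCEPTION WHEN unique_violation THEN
    NULL;  -- the (tabB_fk, tabC_fk) pair already exists; ignore it
END;
$$ LANGUAGE plpgsql;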

Prevent dupe records in SQL Server

So I have this stored procedure to insert a message into my database. I wanted to prevent users from posting duplicate messages within a short period of time, whether by accident or on purpose (either a laggy connection or a spammer).
This is what the insert statement looks like:
IF NOT EXISTS (SELECT * FROM tblMessages
               WHERE message = @message AND ip = @ip
                 AND datediff(minute, timestamp, getdate()) < 10)
BEGIN
    INSERT INTO tblMessages (ip, message, votes, latitude, longitude, location, timestamp, flags, deleted, username, parentId)
    VALUES (@ip, @message, 0, @latitude, @longitude, @location, GETDATE(), 0, 0, @username, @parentId)
END
You can see I check to see if the same user has posted the same message within 10 minutes, and if not, I post it. I still saw one dupe come through yesterday. When I checked the timestamp of both messages in the database, they were exactly the same, down to the second, so I'm guessing when this 'exists' check ran on each insert, both came back empty, so they both inserted fine (at basically the same exact time).
What's a way I can prevent this from happening correctly?
I reckon you need a trigger
A unique constraint/index isn't clever enough to deal with the 10 minute gap between posts for a given message and ip.
CREATE TRIGGER TRG_tblMessages_I ON tblMessages FOR INSERT
AS
SET NOCOUNT ON;
IF EXISTS (SELECT *
           FROM tblMessages M
           JOIN INSERTED I ON M.message = I.message AND M.ip = I.ip
           WHERE
               datediff(minute, M.timestamp, I.timestamp) < 10)
BEGIN
    RAISERROR ('blah', 16, 1)
    ROLLBACK TRAN
END
Edit: you need an extra condition to ignore the row you have just inserted (e.g. using a surrogate key), as in the sketch below.
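Putting that edit together, a sketch of the corrected trigger, assuming tblMessages has a surrogate key column (id here is a hypothetical name):

CREATE TRIGGER TRG_tblMessages_I ON tblMessages FOR INSERT
AS
SET NOCOUNT ON;
IF EXISTS (SELECT *
           FROM tblMessages M
           JOIN INSERTED I ON M.message = I.message AND M.ip = I.ip
           WHERE M.id <> I.id  -- skip the row that was just inserted
             AND datediff(minute, M.timestamp, I.timestamp) < 10)
BEGIN
    RAISERROR ('Duplicate message posted within 10 minutes', 16, 1)
    ROLLBACK TRAN
END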
Actually, Derek Kromm isn't far off; essentially you do want a unique constraint, you just want a range for one of the columns.
You can express this as a filtered index which enforces uniqueness on the columns you want, but with a filter to match timestamps within a 10-minute range.
CREATE UNIQUE NONCLUSTERED INDEX IX_UNC_tblMessages
ON tblMessages (message, ip, timestamp)
WHERE datediff(minute, timestamp, getdate()) < 10;
On the difference between a unique constraint and a filtered index which maintains uniqueness (MSDN):
There are no significant differences between creating a UNIQUE
constraint and creating a unique index independent of a constraint.
Data validation occurs in the same manner and the query optimizer does
not differentiate between a unique index created by a constraint or
manually created. However, you should create a UNIQUE or PRIMARY KEY
constraint on the column when data integrity is the objective. By
doing this the objective of the index will be clear.
The only aspect of this I'm not sure about is the use of getdate(). I'm not sure what effect that will have on the index and performance; this you will want to test for yourself.
Add a unique constraint to the table to absolutely prevent it from happening
ALTER TABLE tblMessages ADD CONSTRAINT uq_tblMessages UNIQUE (message,ip,timestamp)
I think the easiest way is to use a trigger to check the sender and body of the message against existing records in the table.
Or, as Derek said, you can use the constraint, but with a different column list:
ALTER TABLE tblMessages ADD CONSTRAINT uq_tblMessages UNIQUE (message,ip,username, parentId)
but the constraint will raise an exception (and you will need to handle it).
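A sketch of that handling in T-SQL (THROW requires SQL Server 2012 or later); error 2627 is a unique constraint violation and 2601 a duplicate key in a unique index:

BEGIN TRY
    INSERT INTO tblMessages (ip, message, votes, latitude, longitude, location, timestamp, flags, deleted, username, parentId)
    VALUES (@ip, @message, 0, @latitude, @longitude, @location, GETDATE(), 0, 0, @username, @parentId)
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() NOT IN (2601, 2627)
        THROW;  -- re-raise anything that is not a duplicate-key error
    -- otherwise: silently ignore the duplicate post
END CATCH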