T-SQL MERGE not warning about constraint violation

I have an INSTEAD OF INSERT, UPDATE trigger,
and inside it I am using MERGE to upsert (update/insert) the rows.
There is an FK constraint on one of the columns, so if I try to insert something that violates this constraint, it should raise an error.
It does raise an error if I try it directly with an INSERT statement, but not when the insert happens in the WHEN NOT MATCHED part of the MERGE statement.
Why?
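For reference, a minimal sketch of the setup being described, with a hypothetical table, trigger, and columns (not the asker's actual schema):
-- dbo.target is assumed to have an FK on fk_col; the INSTEAD OF trigger
-- upserts the rows from the inserted pseudo-table via MERGE.
CREATE TRIGGER trg_target_upsert ON dbo.target
INSTEAD OF INSERT, UPDATE
AS
BEGIN
    MERGE dbo.target AS t
    USING inserted AS s ON t.id = s.id
    WHEN MATCHED THEN
        UPDATE SET t.fk_col = s.fk_col
    WHEN NOT MATCHED THEN
        INSERT (id, fk_col) VALUES (s.id, s.fk_col);
END;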

How to fix "there is no unique or exclusion constraint matching the ON CONFLICT specification" without a constraint defined in the table

My query:
INSERT INTO report_order_control_measure (cm_id, report_type, sort_order, temporal_start_date, temporal_end_date)
VALUES (2, 'WAR', 220, NULL, NULL)
ON CONFLICT (cm_id, report_type, sort_order, temporal_start_date, temporal_end_date) DO NOTHING;
Output:
ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
SQL state: 42P10
There is no constraint defined on the table, but I would like the insert to be treated as a conflict when all of the inserted column values match an existing row.
Any help to make it work without changing the table definition?
Use the INSERT INTO ... WHERE NOT EXISTS syntax:
INSERT INTO report_order_control_measure (cm_id, report_type, sort_order, temporal_start_date, temporal_end_date)
SELECT 2, 'WAR', 220, NULL, NULL
WHERE NOT EXISTS (
    SELECT 1
    FROM report_order_control_measure
    WHERE cm_id = 2
      AND report_type = 'WAR'
      AND sort_order = 220
      AND temporal_start_date IS NULL
      AND temporal_end_date IS NULL
);
This is only safe in setups where there are no simultaneous/concurrent INSERTs of the same values.
The only 100% safe way to insert without duplicates in all scenarios is to add a unique constraint.
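As a sketch, that constraint-based alternative could look like this (the constraint name here is made up). One caveat: NULLs are never equal to each other for uniqueness purposes, so two rows with the same cm_id/report_type/sort_order and NULL temporal dates would both still be accepted under a plain UNIQUE constraint:
-- Made-up constraint name; covers the same five columns as the query above.
ALTER TABLE report_order_control_measure
    ADD CONSTRAINT report_ocm_uniq
    UNIQUE (cm_id, report_type, sort_order, temporal_start_date, temporal_end_date);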

INSERT INTO ... ON CONFLICT DO UPDATE, why does it fail? (PostgreSQL, pgAdmin)

I am trying to insert the contents of a new table into a big existing table, updating multiple rows.
Here is my query:
INSERT INTO site_settings (set_id, set_sit_id, set_setting_name, set_setting_type)
SELECT set_id, set_sit_id,
       replace(TempTable2.stp_device_pool_filter, '${siteShortName}', TempTable2.sit_short_name),
       set_setting_type
FROM TempTable2
WHERE set_setting_type = 'DEVICE_POOL'
ON CONFLICT (set_id) DO UPDATE
SET set_sit_id = excluded.set_sit_id,
    set_setting_name = excluded.set_setting_name,
    set_setting_type = excluded.set_setting_type;
It returns this message:
duplicate key value violates unique constraint "unique_site_setting"
DETAIL: Key (set_sit_id, set_setting_name, set_setting_type)=(13, SBA123-rr, DEVICE_POOL) already exists.
However, I have used a similar query to update a much more complicated table, and it worked.
I don't know what the problem is.
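For reference, the error names unique_site_setting (set_sit_id, set_setting_name, set_setting_type), which is a different constraint from the ON CONFLICT (set_id) target, and ON CONFLICT only arbitrates conflicts on the one target it names. A minimal hypothetical illustration of that behavior:
-- Hypothetical table with two unique constraints: the primary key and t_v_uniq.
CREATE TABLE t (id int PRIMARY KEY, v text, CONSTRAINT t_v_uniq UNIQUE (v));
INSERT INTO t VALUES (1, 'a');
-- New id but duplicate v: ON CONFLICT (id) does not cover t_v_uniq,
-- so this still fails with a duplicate key error on t_v_uniq.
INSERT INTO t VALUES (2, 'a')
ON CONFLICT (id) DO UPDATE SET v = excluded.v;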

Trigger sometimes fails with duplicate key error

I'm using a PostgreSQL RDS instance in AWS. Basically, there is a query that inserts data into a first table, let's call it table. The data there can have duplicates in some fields (except for the primary key obviously).
Then there is the trigger that updates another table, infotable, allowing no duplicates.
The trigger:
CREATE TRIGGER insert_infotable AFTER INSERT ON table
FOR EACH ROW EXECUTE PROCEDURE insert_infotable();
The relevant part of the trigger function looks like this:
CREATE OR REPLACE FUNCTION insert_infotable() RETURNS trigger AS $insert_infotable$
BEGIN
    --some irrelevant code
    IF NOT EXISTS (SELECT * FROM infotable WHERE col1 = NEW.col1 AND col2 = NEW.col2) THEN
        INSERT INTO infotable(col1, col2, col3, col4, col5, col6) VALUES (--some values--);
    END IF;
    RETURN NEW;
END;
$insert_infotable$ LANGUAGE plpgsql;
The table infotable has a UNIQUE constraint on the columns col1 and col2.
In general all is working fine, but rarely, about once in 1,000 inserts, the trigger raises the error 'duplicate key value violates unique constraint "unique_col1_and_col2"' for table infotable. This shouldn't happen, since the trigger function has the IF NOT EXISTS check.
The first question is: what might be the cause of this? The only thing I can think of is a race where two users submit the same info simultaneously; both fire the trigger, but one of them updates the second table first, and the second user gets the duplicate error. Because of that, their whole insert query fails, including the insert into the main table.
If that's the case, what can I do about it? Is using a lock on insert a good idea for a table that is supposed to have 100+ users inserting data simultaneously?
And if yes, what type of lock should I use, and which table should I lock: the main table, or the second one that the trigger modifies? (And should I take the lock with my main insert statement or inside the trigger function?)
Yes, this is a race condition. Two such triggers running concurrently won't see each other's modifications, because the transactions are not yet committed.
Since you have a unique constraint on infotable, you can simply use
INSERT INTO infotable ...
ON CONFLICT (col1, col2) DO NOTHING;
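A sketch of the trigger function rewritten along those lines; the inserted values are placeholders (NEW.col1 ... NEW.col6 here), since they were elided in the question:
CREATE OR REPLACE FUNCTION insert_infotable() RETURNS trigger AS $insert_infotable$
BEGIN
    -- The unique constraint on (col1, col2) arbitrates the conflict, so
    -- concurrent inserts of the same pair are skipped instead of failing.
    INSERT INTO infotable(col1, col2, col3, col4, col5, col6)
    VALUES (NEW.col1, NEW.col2, NEW.col3, NEW.col4, NEW.col5, NEW.col6)
    ON CONFLICT (col1, col2) DO NOTHING;
    RETURN NEW;
END;
$insert_infotable$ LANGUAGE plpgsql;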

How to set Ignore Duplicate Key in Postgresql while table creation itself

I am creating a table in PostgreSQL 9.5 where id is the primary key. While inserting rows into the table, if anyone tries to insert a duplicate id, I want it to be ignored instead of raising an exception. Is there any way to set this at table creation time so that duplicate entries get ignored?
There are many techniques to resolve the duplicate-insertion issue when writing the insertion query, e.g. using ON CONFLICT DO NOTHING, or using a WHERE EXISTS clause, etc. But I want to handle this on the table-creation side, so that the person writing the insertion query doesn't need to bother with it at all.
Creating a RULE is one possible solution. Are there other possible solutions? Maybe something like this:
`CREATE TABLE dbo.foo (bar int PRIMARY KEY WITH (FILLFACTOR=90, IGNORE_DUP_KEY = ON))`
Although this exact statement (which is SQL Server syntax) doesn't work on PostgreSQL 9.5 on my machine.
Add a BEFORE INSERT trigger, or a rule ON INSERT ... DO INSTEAD; otherwise it has to be handled by the inserting query. Both solutions will require more resources on each insert.
An alternative is to use a function with arguments for the insert that checks for duplicates, so end users call the function instead of a plain INSERT statement.
Note that a WHERE EXISTS sub-query is not atomic, by the way, so you can still get an exception after the check...
On 9.5, ON CONFLICT DO NOTHING is still the best solution.
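As a sketch of the rule option mentioned above, on a hypothetical foo/bar table (with the same caveat: the EXISTS check is not atomic under concurrency):
CREATE TABLE foo (bar int PRIMARY KEY);
-- Made-up rule name: silently drop inserts whose key already exists.
CREATE RULE ignore_dup_bar AS
    ON INSERT TO foo
    WHERE EXISTS (SELECT 1 FROM foo WHERE bar = NEW.bar)
    DO INSTEAD NOTHING;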

Is it possible to catch a foreign key violation in postgres

I'm trying to insert data into a table which has a foreign key constraint. If there is a constraint violation in a row that I'm inserting, I want to chuck that data away.
The issue is that postgres returns an error every time I violate the constraint. Is it possible for me to have some statement in my insert statement like 'ON FOREIGN KEY CONSTRAINT DO NOTHING'?
EDIT:
This is the query that I'm trying to do, where info is a dict:
cursor.execute("INSERT INTO event (case_number_id, date, \
session, location, event_type, worker, result) VALUES \
(%(id_number)s, %(date)s, %(session)s, \
%(location)s, %(event_type)s, %(worker)s, %(result)s) ON CONFLICT DO NOTHING", info)
It errors out when there is a foreign key violation.
If you're only inserting a single row at a time, you can create a savepoint before the insert and roll back to it when the insert fails (or release it when the insert succeeds).
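A sketch of that savepoint pattern as issued by the client, using the question's event table with made-up values; which of the two commented branches runs depends on whether the INSERT raised a foreign key violation:
BEGIN;
SAVEPOINT before_insert;
INSERT INTO event (case_number_id, date, session, location, event_type, worker, result)
VALUES (42, '2020-01-01', 'AM', 'room 1', 'hearing', 'jsmith', 'ok');  -- made-up values
-- on failure: ROLLBACK TO SAVEPOINT before_insert;
-- on success: RELEASE SAVEPOINT before_insert;
COMMIT;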
For Postgres 9.5 or later, you can use INSERT ... ON CONFLICT DO NOTHING which does what it says. You can also write ON CONFLICT DO UPDATE SET column = value..., which will automagically convert your insert into an update of the row you are conflicting with (this functionality is sometimes called "upsert").
This does not work because OP is dealing with a foreign key constraint rather than a unique constraint. In that case, you can most easily use the savepoint method I described earlier, but for multiple rows it may prove tedious. If you need to insert multiple rows at once, it should be reasonably performant to split them into multiple insert statements, provided you are not working in autocommit mode, all inserts occur in one transaction, and you are not inserting a very large number of rows.
Sometimes, you really do need multiple inserts in a single statement, because the round-trip overhead of talking to your database plus the cost of having savepoints on every insert is simply too high. In this case, there are a number of imperfect approaches. Probably the least bad is to build a nested query which selects your data and joins it against the other table, something like this:
INSERT INTO table_A (column_A, column_B, column_C)
SELECT A_rows.*
FROM (VALUES (...)) AS A_rows(column_A, column_B, column_C)
JOIN table_B ON A_rows.column_B = table_B.column_B;