CASCADE DELETE just once - postgresql

I have a PostgreSQL database on which I want to do a few cascading deletes. However, the tables aren't set up with the ON DELETE CASCADE rule. Is there any way I can perform a delete and tell PostgreSQL to cascade it just this once? Something equivalent to
DELETE FROM some_table CASCADE;
The answers to this older question make it seem like no such solution exists, but I figured I'd ask this question explicitly just to be sure.

No. To do it just once you simply write the delete statements for the child tables yourself, then delete from the parent:
DELETE FROM some_child_table WHERE some_fk_field IN (SELECT some_id FROM some_table);
DELETE FROM some_table;
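If the child tables have children of their own, the same approach works bottom-up; a sketch with hypothetical three-level tables (parent, child, grandchild) and a selective WHERE:
BEGIN;
-- delete bottom-up so no foreign key is violated along the way
DELETE FROM grandchild
WHERE child_id IN (SELECT id FROM child
                   WHERE parent_id IN (SELECT id FROM parent WHERE status = 'obsolete'));
DELETE FROM child
WHERE parent_id IN (SELECT id FROM parent WHERE status = 'obsolete');
DELETE FROM parent WHERE status = 'obsolete';
COMMIT;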

If you really want DELETE FROM some_table CASCADE; in the sense of "remove all rows from some_table and everything that references them", you can use TRUNCATE, which always supports CASCADE:
TRUNCATE some_table CASCADE;
USE WITH CARE: this deletes all rows of all tables that have a foreign key to some_table, plus everything that references those tables, and so on. Proceed with extreme caution. And if you want a selective delete with a WHERE clause, TRUNCATE is not good enough.
Handily this is transactional (i.e. it can be rolled back), although it is not fully isolated from other concurrent transactions and has several other caveats. Read the docs for details.
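Since it is transactional, you can preview the blast radius before committing; a sketch reusing the hypothetical some_table:
BEGIN;
TRUNCATE some_table CASCADE;            -- also truncates every table referencing it
SELECT count(*) FROM some_child_table;  -- verify the fallout: 0 rows
ROLLBACK;                               -- nothing was lost; use COMMIT to apply it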

I wrote a (recursive) function to delete any row based on its primary key. I wrote this because I did not want to create my constraints as "on delete cascade". I wanted to be able to delete complex sets of data (as a DBA) but not allow my programmers to be able to cascade delete without thinking through all of the repercussions.
I'm still testing out this function, so there may be bugs in it -- but please don't try it if your DB has multi-column primary (and thus foreign) keys. Also, the keys all have to be able to be represented in string form, but it could be written in a way that doesn't have that restriction. I use this function VERY SPARINGLY anyway; I value my data too much to enable the cascading constraints on everything.
Basically this function is passed the schema, table name, and primary key value (in string form). It starts by finding any foreign keys on that table and checks whether referencing data exists; if it does, it recursively calls itself on the found data. It uses an array of data already marked for deletion to prevent infinite loops. Please test it out and let me know how it works for you. Note: it's a little slow.
I call it like so:
select delete_cascade('public','my_table','1');
create or replace function delete_cascade(p_schema varchar, p_table varchar, p_key varchar, p_recursion varchar[] default null)
returns integer as $$
declare
    rx record;
    rd record;
    v_sql varchar;
    v_recursion_key varchar;
    recnum integer;
    v_primary_key varchar;
    v_rows integer;
begin
    recnum := 0;
    -- find the primary key column of the target table
    select ccu.column_name into v_primary_key
    from information_schema.table_constraints tc
    join information_schema.constraint_column_usage ccu
      on ccu.constraint_name = tc.constraint_name
     and ccu.constraint_schema = tc.constraint_schema
     and tc.constraint_type = 'PRIMARY KEY'
     and tc.table_name = p_table
     and tc.table_schema = p_schema;
    -- find every foreign key that references this table
    for rx in (
        select kcu.table_name as foreign_table_name,
               kcu.column_name as foreign_column_name,
               kcu.table_schema as foreign_table_schema,
               kcu2.column_name as foreign_table_primary_key
          from information_schema.constraint_column_usage ccu
          join information_schema.table_constraints tc
            on tc.constraint_name = ccu.constraint_name
           and tc.constraint_catalog = ccu.constraint_catalog
           and tc.constraint_schema = ccu.constraint_schema -- the original compared ccu to itself here, a no-op
          join information_schema.key_column_usage kcu
            on kcu.constraint_name = ccu.constraint_name
           and kcu.constraint_catalog = ccu.constraint_catalog
           and kcu.constraint_schema = ccu.constraint_schema
          join information_schema.table_constraints tc2
            on tc2.table_name = kcu.table_name
           and tc2.table_schema = kcu.table_schema
          join information_schema.key_column_usage kcu2
            on kcu2.constraint_name = tc2.constraint_name
           and kcu2.constraint_catalog = tc2.constraint_catalog
           and kcu2.constraint_schema = tc2.constraint_schema
         where ccu.table_name = p_table
           and ccu.table_schema = p_schema
           and tc.constraint_type = 'FOREIGN KEY'
           and tc2.constraint_type = 'PRIMARY KEY'
    )
    loop
        -- found a foreign key; select (and lock) any rows that reference p_key
        v_sql := 'select '||rx.foreign_table_primary_key||' as key from '
               ||rx.foreign_table_schema||'.'||rx.foreign_table_name
               ||' where '||rx.foreign_column_name||'='||quote_literal(p_key)||' for update';
        --raise notice '%', v_sql;
        for rd in execute v_sql
        loop
            v_recursion_key := rx.foreign_table_schema||'.'||rx.foreign_table_name||'.'||rx.foreign_column_name||'='||rd.key;
            if (v_recursion_key = any (p_recursion)) then
                null; --raise notice 'Avoiding infinite loop';
            else
                --raise notice 'Recursing to %,%', rx.foreign_table_name, rd.key;
                recnum := recnum + delete_cascade(rx.foreign_table_schema::varchar, rx.foreign_table_name::varchar, rd.key::varchar, p_recursion||v_recursion_key);
            end if;
        end loop;
    end loop;
    begin
        -- actually delete the original record
        v_sql := 'delete from '||p_schema||'.'||p_table||' where '||v_primary_key||'='||quote_literal(p_key);
        execute v_sql;
        get diagnostics v_rows = row_count;
        --raise notice 'Deleting %.% %=%', p_schema, p_table, v_primary_key, p_key;
        recnum := recnum + v_rows;
    exception when others then
        recnum := 0;
    end;
    return recnum;
end;
$$
language plpgsql;

If I understand correctly, you should be able to do what you want by dropping the foreign key constraint, adding a new one (which will cascade), doing your stuff, and recreating the restricting foreign key constraint.
For example:
testing=# create table a (id integer primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "a_pkey" for table "a"
CREATE TABLE
testing=# create table b (id integer references a);
CREATE TABLE
-- put some data in the table
testing=# insert into a values(1);
INSERT 0 1
testing=# insert into a values(2);
INSERT 0 1
testing=# insert into b values(2);
INSERT 0 1
testing=# insert into b values(1);
INSERT 0 1
-- restricting works
testing=# delete from a where id=1;
ERROR: update or delete on table "a" violates foreign key constraint "b_id_fkey" on table "b"
DETAIL: Key (id)=(1) is still referenced from table "b".
-- find the name of the constraint
testing=# \d b;
Table "public.b"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
Foreign-key constraints:
"b_id_fkey" FOREIGN KEY (id) REFERENCES a(id)
-- drop the constraint
testing=# alter table b drop constraint b_id_fkey;
ALTER TABLE
-- create a cascading one
testing=# alter table b add FOREIGN KEY (id) references a(id) on delete cascade;
ALTER TABLE
testing=# delete from a where id=1;
DELETE 1
testing=# select * from a;
id
----
2
(1 row)
testing=# select * from b;
id
----
2
(1 row)
-- it works, do your stuff.
-- [stuff]
-- recreate the previous state
testing=# \d b;
Table "public.b"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
Foreign-key constraints:
"b_id_fkey" FOREIGN KEY (id) REFERENCES a(id) ON DELETE CASCADE
testing=# alter table b drop constraint b_id_fkey;
ALTER TABLE
testing=# alter table b add FOREIGN KEY (id) references a(id) on delete restrict;
ALTER TABLE
Of course, you should abstract stuff like that into a procedure, for the sake of your mental health.
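A minimal sketch of such a procedure, assuming a single-column foreign key and entirely hypothetical names; it swaps the constraint to CASCADE, runs one delete, and restores a RESTRICT constraint (since PostgreSQL DDL is transactional, wrapping the call in a transaction rolls back the swap too if the delete fails):
CREATE OR REPLACE FUNCTION delete_with_cascade_once(
    p_child      regclass, -- table holding the FK, e.g. 'b'
    p_constraint name,     -- constraint name, e.g. 'b_id_fkey'
    p_col        name,     -- FK column, e.g. 'id'
    p_parent     regclass, -- referenced table, e.g. 'a'
    p_parent_col name,     -- referenced column, e.g. 'id'
    p_delete_sql text)     -- the delete to run while CASCADE is active
RETURNS void AS $$
BEGIN
    EXECUTE format('ALTER TABLE %s DROP CONSTRAINT %I', p_child, p_constraint);
    EXECUTE format('ALTER TABLE %s ADD CONSTRAINT %I FOREIGN KEY (%I) REFERENCES %s (%I) ON DELETE CASCADE',
                   p_child, p_constraint, p_col, p_parent, p_parent_col);
    EXECUTE p_delete_sql;
    EXECUTE format('ALTER TABLE %s DROP CONSTRAINT %I', p_child, p_constraint);
    EXECUTE format('ALTER TABLE %s ADD CONSTRAINT %I FOREIGN KEY (%I) REFERENCES %s (%I) ON DELETE RESTRICT',
                   p_child, p_constraint, p_col, p_parent, p_parent_col);
END;
$$ LANGUAGE plpgsql;
-- e.g.: SELECT delete_with_cascade_once('b', 'b_id_fkey', 'id', 'a', 'id', 'DELETE FROM a WHERE id = 1');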

Yeah, as others have said, there's no convenient 'DELETE FROM my_table ... CASCADE' (or equivalent). To delete non-cascading foreign key-protected child records and their referenced ancestors, your options include:
Perform all the deletions explicitly, one query at a time, starting with child tables (though this won't fly if you've got circular references); or
Perform all the deletions explicitly in a single (potentially massive) query; or
Assuming your non-cascading foreign key constraints were created as 'ON DELETE NO ACTION DEFERRABLE', perform all the deletions explicitly in a single transaction with constraints deferred (see the sketch after this list); or
Temporarily drop the 'no action' and 'restrict' foreign key constraints in the graph, recreate them as CASCADE, delete the offending ancestors, drop the foreign key constraints again, and finally recreate them as they were originally (thus temporarily weakening the integrity of your data); or
Something probably equally fun.
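A sketch of the deferred-constraints option above, assuming hypothetical tables parent/child whose foreign key was declared DEFERRABLE:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
DELETE FROM parent WHERE id = 42;       -- deletion order no longer matters
DELETE FROM child WHERE parent_id = 42;
COMMIT;                                 -- FK checks happen here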
It's on purpose that circumventing foreign key constraints isn't made convenient, I assume; but I do understand why in particular circumstances you'd want to do it. If it's something you'll be doing with some frequency, and if you're willing to flout the wisdom of DBAs everywhere, you may want to automate it with a procedure.
I came here a few months ago looking for an answer to the "CASCADE DELETE just once" question (originally asked over a decade ago!). I got some mileage out of Joe Love's clever solution (and Thomas C. G. de Vilhena's variant), but in the end my use case had particular requirements (handling of intra-table circular references, for one) that forced me to take a different approach. That approach ultimately became recursively_delete (PG 10.10).
I've been using recursively_delete in production for a while, now, and finally feel (warily) confident enough to make it available to others who might wind up here looking for ideas. As with Joe Love's solution, it allows you to delete entire graphs of data as if all foreign key constraints in your database were momentarily set to CASCADE, but offers a couple additional features:
Provides an ASCII preview of the deletion target and its graph of dependents.
Performs deletion in a single query using recursive CTEs.
Handles circular dependencies, intra- and inter-table.
Handles composite keys.
Skips 'set default' and 'set null' constraints.

I cannot comment Palehorse's answer so I added my own answer.
Palehorse's logic is OK, but efficiency can be bad with big data sets.
DELETE FROM some_child_table sct
WHERE EXISTS (SELECT FROM some_table st
              WHERE sct.some_fk_field = st.some_id);
DELETE FROM some_table;
It is faster if you have indexes on the columns and the data set is bigger than a few records.
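For instance, a hypothetical supporting index for the child-side FK column (the parent's some_id is usually covered by its primary key index already):
CREATE INDEX IF NOT EXISTS some_child_table_some_fk_field_idx
    ON some_child_table (some_fk_field);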

To automate this, you could define the foreign key constraint with ON DELETE CASCADE.
I quote the manual of foreign key constraints:
CASCADE specifies that when a referenced row is deleted, row(s) referencing it should be automatically deleted as well.
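For an existing table, the constraint can be recreated with the cascading action; a sketch reusing the hypothetical names from the first answer (the constraint name is an assumption, matching PostgreSQL's default naming):
ALTER TABLE some_child_table
    DROP CONSTRAINT some_child_table_some_fk_field_fkey,
    ADD CONSTRAINT some_child_table_some_fk_field_fkey
        FOREIGN KEY (some_fk_field) REFERENCES some_table (some_id)
        ON DELETE CASCADE;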

I took Joe Love's answer and rewrote it using the IN operator with sub-selects instead of = to make the function faster (according to Hubbitus's suggestion):
create or replace function delete_cascade(p_schema varchar, p_table varchar, p_keys varchar, p_subquery varchar default null, p_foreign_keys varchar[] default array[]::varchar[])
returns integer as $$
declare
    rx record;
    v_sql varchar;
    v_subquery varchar;
    v_primary_key varchar;
    v_foreign_key varchar;
    v_rows integer;
    recnum integer;
begin
    recnum := 0;
    -- find the primary key column of the target table
    select ccu.column_name into v_primary_key
    from information_schema.table_constraints tc
    join information_schema.constraint_column_usage ccu
      on ccu.constraint_name = tc.constraint_name
     and ccu.constraint_schema = tc.constraint_schema
     and tc.constraint_type = 'PRIMARY KEY'
     and tc.table_name = p_table
     and tc.table_schema = p_schema;
    -- find every foreign key that references this table
    for rx in (
        select kcu.table_name as foreign_table_name,
               kcu.column_name as foreign_column_name,
               kcu.table_schema as foreign_table_schema,
               kcu2.column_name as foreign_table_primary_key
          from information_schema.constraint_column_usage ccu
          join information_schema.table_constraints tc
            on tc.constraint_name = ccu.constraint_name
           and tc.constraint_catalog = ccu.constraint_catalog
           and tc.constraint_schema = ccu.constraint_schema -- the original compared ccu to itself here, a no-op
          join information_schema.key_column_usage kcu
            on kcu.constraint_name = ccu.constraint_name
           and kcu.constraint_catalog = ccu.constraint_catalog
           and kcu.constraint_schema = ccu.constraint_schema
          join information_schema.table_constraints tc2
            on tc2.table_name = kcu.table_name
           and tc2.table_schema = kcu.table_schema
          join information_schema.key_column_usage kcu2
            on kcu2.constraint_name = tc2.constraint_name
           and kcu2.constraint_catalog = tc2.constraint_catalog
           and kcu2.constraint_schema = tc2.constraint_schema
         where ccu.table_name = p_table
           and ccu.table_schema = p_schema
           and tc.constraint_type = 'FOREIGN KEY'
           and tc2.constraint_type = 'PRIMARY KEY'
    )
    loop
        v_foreign_key := rx.foreign_table_schema||'.'||rx.foreign_table_name||'.'||rx.foreign_column_name;
        v_subquery := 'select "'||rx.foreign_table_primary_key||'" as key from '
                    ||rx.foreign_table_schema||'."'||rx.foreign_table_name||'"'
                    ||' where "'||rx.foreign_column_name||'" in ('||coalesce(p_keys, p_subquery)||') for update';
        if p_foreign_keys @> array[v_foreign_key] then -- the original had #>, which is not an array operator
            null; --raise notice 'circular recursion detected';
        else
            p_foreign_keys := array_append(p_foreign_keys, v_foreign_key);
            recnum := recnum + delete_cascade(rx.foreign_table_schema, rx.foreign_table_name, null, v_subquery, p_foreign_keys);
            p_foreign_keys := array_remove(p_foreign_keys, v_foreign_key);
        end if;
    end loop;
    begin
        if coalesce(p_keys, p_subquery) <> '' then
            v_sql := 'delete from '||p_schema||'."'||p_table||'" where "'||v_primary_key||'" in ('||coalesce(p_keys, p_subquery)||')';
            --raise notice '%', v_sql;
            execute v_sql;
            get diagnostics v_rows = row_count;
            recnum := recnum + v_rows;
        end if;
    exception when others then
        recnum := 0;
    end;
    return recnum;
end;
$$
language plpgsql;
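It is called like the original; the difference is that p_keys is spliced into an IN (...) list, so (an assumption based on that splicing) it can carry several comma-separated key values at once:
select delete_cascade('public', 'my_table', '1,2,3');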

The delete with the cascade option only applies to tables with foreign keys defined. If you do a delete and it says you cannot because it would violate the foreign key constraint, the cascade will cause it to delete the offending rows.
If you want to delete associated rows in this way, you will need to define the foreign keys first. Also, remember that unless you explicitly begin a transaction, or change the defaults, the delete will auto-commit, which could be very time-consuming to clean up.
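For example, an explicit transaction gives you a chance to inspect what the cascade did before it becomes permanent (hypothetical table names):
BEGIN;
DELETE FROM some_table WHERE some_id = 1; -- cascades per the FK definitions
-- check the affected tables here, then:
ROLLBACK; -- or COMMIT once you're satisfied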

When you create a new table, you can add constraints such as UNIQUE or NOT NULL, and you can also tell SQL what to do when you try to DELETE rows that are referenced from other tables:
CREATE TABLE company (
    id SERIAL PRIMARY KEY,
    name VARCHAR(128),
    year DATE
);
CREATE TABLE employee (
    id SERIAL PRIMARY KEY,
    first_name VARCHAR(128) NOT NULL,
    last_name VARCHAR(128) NOT NULL,
    company_id INT REFERENCES company(id) ON DELETE CASCADE,
    salary INT,
    UNIQUE (first_name, last_name)
);
After that you can just DELETE any rows you need, for example:
DELETE
FROM company
WHERE id = 2;
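You can verify the cascade afterwards; the employees of company 2 are gone as well:
SELECT count(*) FROM employee WHERE company_id = 2; -- returns 0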

Related

Import CSV Data into Postgres DB with two tables which depend on each other

Hi, I have a question about best practice here. I have a Golang project which uses a Postgres database and specific migrations. The database has many tables and some depend on each other (table A has an FK to table B, table B has an FK to table A). My "problem" now is that I have to import data from CSV files, which I do with the COPY ... FROM ... WITH command. Each CSV file contains the data for a specific table.
If I try to use the COPY command I get the error: "insert or update on table "b" violates foreign key constraint". That's right, because table a has no data yet, and because of the FKs the problem happens on both sides.
So what is the best way to import the data?
Thanks :)
Possible solution approach:
1.) create the tables without the FK constraints
2.) load the data into the tables
3.) add the FK constraints to the tables with ALTER TABLE:
Example:
ALTER TABLE table_name
ADD CONSTRAINT constraint_name constraint_definition;
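Put together for the two tables from the question, the whole approach might look like this sketch (column names and file paths are assumptions):
CREATE TABLE a (id bigint PRIMARY KEY, b_id bigint);
CREATE TABLE b (id bigint PRIMARY KEY, a_id bigint);
COPY a FROM '/path/to/a.csv' WITH (FORMAT csv, HEADER);
COPY b FROM '/path/to/b.csv' WITH (FORMAT csv, HEADER);
ALTER TABLE a ADD CONSTRAINT fk_a_b FOREIGN KEY (b_id) REFERENCES b(id);
ALTER TABLE b ADD CONSTRAINT fk_b_a FOREIGN KEY (a_id) REFERENCES a(id);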
You can defer deferrable constraints until the end of a transaction:
create table a (id serial primary key, b_id bigint);
create table b (id serial primary key, a_id bigint references a(id) deferrable);
alter table a
add constraint fk_b_id foreign key (b_id) references b(id) deferrable;
begin transaction;
SET CONSTRAINTS ALL DEFERRED;
--your `COPY...FROM...WITH` goes here
insert into b values (1,1);--without deferring constraints it fails here
insert into a values (1,1);
commit;
Problem is, you have to make sure your foreign key constraints are deferrable in the first place - by default they are not, so SET CONSTRAINTS ALL DEFERRED; won't affect them.
You can do this dynamically for the time of your import (online demo):
CREATE TEMP TABLE queries_to_make_constraints_deferrable AS
SELECT format('alter table %I.%I alter constraint %I deferrable',
v.schemaname, v.tablename, con.conname) as query
FROM pg_catalog.pg_constraint con
INNER JOIN pg_catalog.pg_class rel ON rel.oid = con.conrelid
INNER JOIN pg_catalog.pg_namespace nsp ON nsp.oid = con.connamespace
INNER JOIN (VALUES
('public','table1'),--list your tables here
('public','some_other_table'),
('public','a'),
('public','b')) v(schemaname,tablename)
ON nsp.nspname = v.schemaname AND rel.relname=v.tablename
WHERE con.contype='f' --foreign keys
AND con.condeferrable is False; --non-deferrable
do $$
declare
    rec record;
begin
    for rec in select query from queries_to_make_constraints_deferrable
    loop
        execute rec.query;
    end loop;
end $$;
Carry out your import in a transaction with deferred constraints, then undo your alterations by replacing deferrable with not deferrable:
begin transaction;
SET CONSTRAINTS ALL DEFERRED;
--your `COPY...FROM...WITH` goes here
insert into b values (1,1);--without deferring constraints it fails here
insert into a values (1,1);
commit;
do $$
declare
    rec record;
begin
    for rec in select query from queries_to_make_constraints_deferrable
    loop
        execute replace(rec.query, 'deferrable', 'not deferrable');
    end loop;
end $$;
As already stated, an alternative would be to set up your schema without these constraints and add them after importing the data. That might require you to find and separate them from their table definitions, which again calls for a similar dynamic sql.

Insert entries into a table with uniqueness check skipped

I have a table which has some uniqueness constraints, and when I run a particular insert statement I want to make sure that the uniqueness constraints are disabled.
Is it possible to disable the unique constraints defined for a table and then enable them again - or, perhaps even better, only for a particular query?
I am using Postgres version 12.
My tables are initially created without any uniqueness constraints but are altered to have a uniqueness constraint if they need one.
-- Create table with EntityId
CREATE TABLE IF NOT EXISTS public.{table_name}
({table_name}_id BIGSERIAL PRIMARY KEY);
-- Create table_registration
CREATE TABLE IF NOT EXISTS public.{table_name}_registration
(
    entry_id bigint REFERENCES |entityName|({table_name}_id),
    row_id BIGSERIAL PRIMARY KEY,
    valid tsrange,
    entry_name text,
    registration tsrange,
    registration_by varchar(255),
    |NewAttribute| |datatype|
);
Later, constraints are added like this:
ALTER TABLE {table_name}_registration
DROP Constraint IF EXISTS {table_name}_{string.Join('_', listOfAttributes)}_key,
ADD Constraint {table_name}_{string.Join('_', listOfAttributes)}_key UNIQUE({string.Join(',', listOfAttributes)});
Prior to doing this, all uniqueness constraints are dropped using this:
-- Drop all uniqueness constraints
DO $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT conrelid::regclass, conname
             FROM pg_constraint
             WHERE contype IN ('u')
               AND connamespace = 'public'::regnamespace
    LOOP
        EXECUTE format('ALTER TABLE %I DROP CONSTRAINT %I', r.conrelid::text, r.conname::text);
    END LOOP;
END;
$$

Can we use a select query in alter table command in db2 database

I have a foreign key constraint on my table created using the following command in db2
ALTER TABLE "ADDRESS" ADD FOREIGN KEY("CITY_ID") REFERENCES CITY("ID");
Now I am trying to drop the constraint. Since no name was given to the constraint when it was created, the ALTER command to drop the foreign key does not work.
Can I use a select command inside the alter table command so that I can query the SYSCAT.TABCONST table to get the constraint id?
Something like
ALTER TABLE ADDRESS DROP FOREIGN KEY
(SELECT CONSTNAME FROM SYSCAT.TABCONST where tabname='ADDRESS' and TYPE='F')
--#SET TERMINATOR #
BEGIN
EXECUTE IMMEDIATE (SELECT 'ALTER TABLE '||TABSCHEMA||'.'||TABNAME||' DROP CONSTRAINT '||CONSTNAME
                   FROM SYSCAT.REFERENCES
                   WHERE TABSCHEMA=USER AND TABNAME='ADDRESS' AND REFTABSCHEMA=USER AND REFTABNAME='CITY');
END#
Note that there may be multiple ADDRESS -> CITY references. But this will work if there is only one such foreign key between these tables.
We assume here that both tables are in the current user's schema.
You can use compound SQL for this. Here is an example:
--#SET TERMINATOR #
begin
  declare v_fkname varchar(128) default '';
  declare v_sql varchar(1024);
  declare v_not_found integer default 0;
  declare not_found condition for sqlstate '02000';
  declare continue handler for not_found set v_not_found = 1;
  set v_fkname = (select constname from syscat.tabconst
                  where tabname = 'ACTORS' and tabschema = 'USER1' and type = 'F');
  if v_not_found = 0 then
    set v_sql = 'ALTER TABLE actors DROP FOREIGN KEY '||v_fkname;
    execute immediate(v_sql);
  end if;
end#
Remember that you will also need to verify afterwards that no objects have become invalid as a result of this change.

Avoid exclusive access locks on referenced tables when DROPping in PostgreSQL

Why does dropping a table in PostgreSQL require ACCESS EXCLUSIVE locks on any referenced tables? How can I reduce this to an ACCESS SHARED lock or no lock at all? i.e. is there a way to drop a relation without locking the referenced table?
I can't find any mention of which locks are required in the documentation, but unless I explicitly get locks in the correct order when dropping multiple tables during concurrent operations, I can see deadlocks waiting on an AccessExclusiveLock in the logs, and acquiring this restrictive lock on commonly-referenced tables is causing momentary delays to other processes when tables are deleted.
To clarify,
CREATE TABLE base (
    id SERIAL,
    PRIMARY KEY (id)
);
CREATE TABLE main (
    id SERIAL,
    base_id INT,
    PRIMARY KEY (id),
    CONSTRAINT fk_main_base FOREIGN KEY (base_id)
        REFERENCES base (id)
        ON DELETE CASCADE ON UPDATE CASCADE
);
DROP TABLE main; -- why does this need to lock base?
For anyone googling and trying to understand why their drop table (or drop foreign key or add foreign key) got stuck for a long time:
PostgreSQL (I looked at versions 9.4 to 13) foreign key constraints are actually implemented using triggers on both ends of the foreign key.
If you have a company table (id as primary key) and a bank_account table (id as primary key, company_id as foreign key pointing to company.id), then there are actually 2 triggers on the bank_account table and also 2 triggers on the company table.
table_name   | timing       | trigger_name                   | function_name
-------------+--------------+--------------------------------+---------------------
bank_account | AFTER UPDATE | RI_ConstraintTrigger_c_1515961 | RI_FKey_check_upd
bank_account | AFTER INSERT | RI_ConstraintTrigger_c_1515960 | RI_FKey_check_ins
company      | AFTER UPDATE | RI_ConstraintTrigger_a_1515959 | RI_FKey_noaction_upd
company      | AFTER DELETE | RI_ConstraintTrigger_a_1515958 | RI_FKey_noaction_del
Initial creation of those triggers (when creating the foreign key) requires a SHARE ROW EXCLUSIVE lock on those tables (it used to be an ACCESS EXCLUSIVE lock in version 9.4 and earlier). This lock does not conflict with "data reading locks", but will conflict with all other locks, for example a simple INSERT/UPDATE/DELETE into the company table.
Deletion of those triggers (when dropping the foreign key, or the whole table) requires an ACCESS EXCLUSIVE lock on those tables. This lock conflicts with every other lock!
So imagine a scenario, where you have a transaction A running that first did a simple SELECT from company table (causing it to hold an ACCESS SHARE lock for company table until the transaction is commited or rolled back) and is now doing some other work for 3 minutes. You try to drop the bank_account table in transaction B. This requires ACCESS EXCLUSIVE lock, which will need to wait until the ACCESS SHARE lock is released first.
In addition to that, all other transactions that want to access the company table (just SELECT, or maybe INSERT/UPDATE/DELETE) will be queued to wait on the ACCESS EXCLUSIVE lock, which is itself waiting on the ACCESS SHARE lock.
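You can see those internal triggers yourself; a sketch against the system catalogs (they are marked tgisinternal, which is why \d doesn't show them):
SELECT tgrelid::regclass AS table_name,
       tgname            AS trigger_name,
       tgfoid::regproc   AS function_name
FROM pg_trigger
WHERE tgisinternal
  AND tgconstraint <> 0
  AND tgrelid IN ('company'::regclass, 'bank_account'::regclass);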
Long running transactions and DDL changes require delicate handling.
-- SESSION#1
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
BEGIN;
CREATE TABLE base (
id SERIAL
, dummy INTEGER
, PRIMARY KEY (id)
);
CREATE TABLE main (
id SERIAL
, base_id INTEGER
, PRIMARY KEY (id)
, CONSTRAINT fk_main_base FOREIGN KEY (base_id) REFERENCES base (id)
-- comment the next line out (plus maybe the previous one)
ON DELETE CASCADE ON UPDATE CASCADE
);
-- make some data ...
INSERT INTO base (dummy)
SELECT generate_series(1,10)
;
-- make some FK references
INSERT INTO main(base_id)
SELECT id FROM base
WHERE random() < 0.5
;
COMMIT;
BEGIN;
DROP TABLE main; -- why does this need to lock base?
SELECT pg_backend_pid();
-- allow other session to check the locks
-- and attempt an update to "base"
SELECT pg_sleep(20);
-- On rollback the other session will fail.
-- On commit the other session will succeed.
-- In both cases the other session must wait for us to complete.
-- ROLLBACK;
COMMIT;
-- SESSION#2
-- (Start this after session#1 from a different terminal)
SET search_path = tmp, pg_catalog;
PREPARE peeklock(text) AS
SELECT dat.datname
, rel.relname as relrelname
, cat.relname as catrelname
, lck.locktype
-- , lck.database, lck.relation
, lck.page, lck.tuple
-- , lck.virtualxid, lck.transactionid
-- , lck.classid
, lck.objid, lck.objsubid
-- , lck.virtualtransaction
, lck.pid, lck.mode, lck.granted, lck.fastpath
FROM pg_locks lck
LEFT JOIN pg_database dat ON dat.oid = lck.database
LEFT JOIN pg_class rel ON rel.oid = lck.relation
LEFT JOIN pg_class cat ON cat.oid = lck.classid
WHERE EXISTS(
SELECT * FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation AND c.relname = $1
WHERE l.pid =lck.pid
)
;
EXECUTE peeklock( 'base' );
BEGIN;
-- attempt to perform some DDL
ALTER TABLE base ALTER COLUMN id TYPE BIGINT;
-- attempt to perform some DML
UPDATE base SET id = id+100;
COMMIT;
EXECUTE peeklock( 'base' );
\d base
SELECT * FROM base;
I suppose DDL locks everything it touches exclusively for the sake of simplicity; you're not supposed to run DDL involving non-temporary tables during normal operation anyway.
To avoid deadlock you may use advisory lock:
start transaction;
select pg_advisory_xact_lock(0);
drop table main;
commit;
This would ensure that only one client is concurrently running DDL involving referenced tables so it wouldn't matter in which order would other locks be acquired.
You can avoid locking table for long time by dropping foreign key first:
start transaction;
select pg_advisory_xact_lock(0);
alter table main drop constraint fk_main_base;
commit;
start transaction;
drop table main;
commit;
This would still need to lock base exclusively, but for much shorter time.

Foreign keys in postgresql can be violated by trigger

I've created some tables in postgres, added a foreign key from one table to another and set ON DELETE to CASCADE. Strangely enough, I have some fields that appear to be violating this constraint.
Is this normal behaviour? And if so, is there a way to get the behaviour I want (no violations possible)?
Edit:
I originally created the foreign key as part of CREATE TABLE, just using
... REFERENCES product (id) ON UPDATE CASCADE ON DELETE CASCADE
The current code pgAdmin3 gives is
ALTER TABLE cultivar
ADD CONSTRAINT cultivar_id_fkey FOREIGN KEY (id)
REFERENCES product (id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE;
Edit 2:
To clarify, I have a sneaking suspicion that the constraints are only checked when updates/inserts happen, but are then never looked at again. Unfortunately I don't know enough about Postgres to find out if this is true, or how fields could end up in the database without those checks being run.
If this is the case, is there some way to check all the foreign keys and fix those problems?
Edit 3:
A constraint violation can be caused by a faulty trigger, see below
I tried to create a simple example that shows the foreign key constraint being enforced. With this example I show that I'm not allowed to enter data that violates the FK, and that if the FK is not in place during insert and I then enable the FK, the constraint throws an error telling me the data violates it. So I'm not seeing how you have data in the table that violates an FK that is in place. I'm on 9.0, but this should not be different on 8.3. If you can show a working example that proves your issue, that might help.
--CREATE TABLES--
CREATE TABLE parent
(
parent_id integer NOT NULL,
first_name character varying(50) NOT NULL,
CONSTRAINT pk_parent PRIMARY KEY (parent_id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE parent OWNER TO postgres;
CREATE TABLE child
(
child_id integer NOT NULL,
parent_id integer NOT NULL,
first_name character varying(50) NOT NULL,
CONSTRAINT pk_child PRIMARY KEY (child_id),
CONSTRAINT fk1_child FOREIGN KEY (parent_id)
REFERENCES parent (parent_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
)
WITH (
OIDS=FALSE
);
ALTER TABLE child OWNER TO postgres;
--CREATE TABLES--
--INSERT TEST DATA--
INSERT INTO parent(parent_id,first_name)
SELECT 1,'Daddy'
UNION
SELECT 2,'Mommy';
INSERT INTO child(child_id,parent_id,first_name)
SELECT 1,1,'Billy'
UNION
SELECT 2,1,'Jenny'
UNION
SELECT 3,1,'Kimmy'
UNION
SELECT 4,2,'Billy'
UNION
SELECT 5,2,'Jenny'
UNION
SELECT 6,2,'Kimmy';
--INSERT TEST DATA--
--SHOW THE DATA WE HAVE--
select parent.first_name,
child.first_name
from parent
inner join child
on child.parent_id = parent.parent_id
order by parent.first_name, child.first_name asc;
--SHOW THE DATA WE HAVE--
--DELETE PARENT WHO HAS CHILDREN--
BEGIN TRANSACTION;
delete from parent
where parent_id = 1;
--Check to see if any children that were linked to Daddy are still there?
--None there so the cascade delete worked.
select parent.first_name,
child.first_name
from parent
right outer join child
on child.parent_id = parent.parent_id
order by parent.first_name, child.first_name asc;
ROLLBACK TRANSACTION;
--TRY ALLOW NO REFERENTIAL DATA IN--
BEGIN TRANSACTION;
--Get rid of fk constraint so we can insert red headed step child
ALTER TABLE child DROP CONSTRAINT fk1_child;
INSERT INTO child(child_id,parent_id,first_name)
SELECT 7,99999,'Red Headed Step Child';
select parent.first_name,
child.first_name
from parent
right outer join child
on child.parent_id = parent.parent_id
order by parent.first_name, child.first_name asc;
--Will throw FK check violation because parent 99999 doesn't exist in parent table
ALTER TABLE child
ADD CONSTRAINT fk1_child FOREIGN KEY (parent_id)
REFERENCES parent (parent_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE;
ROLLBACK TRANSACTION;
--TRY ALLOW NO REFERENTIAL DATA IN--
--DROP TABLE parent;
--DROP TABLE child;
Everything I've read so far seems to suggest that constraints are only checked when the data is inserted (or when the constraint is created); see, for example, the manual on SET CONSTRAINTS.
This makes sense and - if the database works properly - should be good enough. I'm still curious how I managed to circumvent this or if I just read the situation wrong and there was never a real constraint violation to begin with.
Either way, case closed :-/
------- UPDATE --------
There was definitely a constraint violation, caused by a faulty trigger. Here's a script to replicate:
-- Create master table
CREATE TABLE product
(
id INT NOT NULL PRIMARY KEY
);
-- Create second table, referencing the first
CREATE TABLE example
(
id int PRIMARY KEY REFERENCES product (id) ON DELETE CASCADE
);
-- Create a (broken) trigger function
--CREATE LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION delete_product()
RETURNS trigger AS
$BODY$
BEGIN
DELETE FROM product WHERE product.id = OLD.id;
-- This is an error: returning NULL from a BEFORE DELETE trigger cancels the row deletion!
RETURN null;
END;
$BODY$
LANGUAGE plpgsql;
-- Add it to the second table
CREATE TRIGGER example_delete
BEFORE DELETE
ON example
FOR EACH ROW
EXECUTE PROCEDURE delete_product();
-- Now lets add a row
INSERT INTO product (id) VALUES (1);
INSERT INTO example (id) VALUES (1);
-- And now lets delete the row
DELETE FROM example WHERE id = 1;
/*
If referential integrity were intact, this query would return either
a matched pair (pid,eid)=(1,1) or no rows at all. Instead it returns
only the example id, (pid,eid)=(NULL,1). This means the foreign key
constraint on the example table is violated.
*/
SELECT product.id AS pid, example.id AS eid FROM product FULL JOIN example ON product.id = example.id;
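For completeness, a minimal fix for the trigger function above: a BEFORE DELETE trigger must return OLD for the row deletion to proceed; returning NULL silently cancels it, which is exactly what strands the example row:
CREATE OR REPLACE FUNCTION delete_product()
RETURNS trigger AS
$BODY$
BEGIN
    DELETE FROM product WHERE product.id = OLD.id;
    RETURN OLD; -- let the triggering DELETE proceed
END;
$BODY$
LANGUAGE plpgsql;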