How to prevent insert, update and delete on inherited tables in PostgreSQL using BEFORE triggers - postgresql

When using table inheritance, I would like to enforce that insert, update and delete statements are issued only against descendant tables. I thought a simple way to do this would be a trigger function like this:
CREATE FUNCTION test.prevent_action() RETURNS trigger AS $prevent_action$
BEGIN
RAISE EXCEPTION
'% on % is not allowed. Perform % on descendant tables only.',
TG_OP, TG_TABLE_NAME, TG_OP;
END;
$prevent_action$ LANGUAGE plpgsql;
...which I would reference from a trigger defined using BEFORE INSERT OR UPDATE OR DELETE.
This seems to work fine for inserts, but not for updates and deletes.
The following test sequence demonstrates what I've observed:
DROP SCHEMA IF EXISTS test CASCADE;
psql:simple.sql:1: NOTICE: schema "test" does not exist, skipping
DROP SCHEMA
CREATE SCHEMA test;
CREATE SCHEMA
-- A function to prevent anything
-- Used for tables that are meant to be inherited
CREATE FUNCTION test.prevent_action() RETURNS trigger AS $prevent_action$
BEGIN
RAISE EXCEPTION
'% on % is not allowed. Perform % on descendant tables only.',
TG_OP, TG_TABLE_NAME, TG_OP;
END;
$prevent_action$ LANGUAGE plpgsql;
CREATE FUNCTION
CREATE TABLE test.people (
person_id SERIAL PRIMARY KEY,
last_name text,
first_name text
);
psql:simple.sql:17: NOTICE: CREATE TABLE will create implicit sequence "people_person_id_seq" for serial column "people.person_id"
psql:simple.sql:17: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "people_pkey" for table "people"
CREATE TABLE
CREATE TRIGGER prevent_action BEFORE INSERT OR UPDATE OR DELETE ON test.people
FOR EACH ROW EXECUTE PROCEDURE test.prevent_action();
CREATE TRIGGER
CREATE TABLE test.students (
student_id SERIAL PRIMARY KEY
) INHERITS (test.people);
psql:simple.sql:24: NOTICE: CREATE TABLE will create implicit sequence "students_student_id_seq" for serial column "students.student_id"
psql:simple.sql:24: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "students_pkey" for table "students"
CREATE TABLE
--The trigger successfully prevents this INSERT from happening
--INSERT INTO test.people (last_name, first_name) values ('Smith', 'Helen');
INSERT INTO test.students (last_name, first_name) values ('Smith', 'Helen');
INSERT 0 1
INSERT INTO test.students (last_name, first_name) values ('Anderson', 'Niles');
INSERT 0 1
UPDATE test.people set first_name = 'Oh', last_name = 'Noes!';
UPDATE 2
SELECT student_id, person_id, first_name, last_name from test.students;
student_id | person_id | first_name | last_name
------------+-----------+------------+-----------
1 | 1 | Oh | Noes!
2 | 2 | Oh | Noes!
(2 rows)
DELETE FROM test.people;
DELETE 2
SELECT student_id, person_id, first_name, last_name from test.students;
student_id | person_id | first_name | last_name
------------+-----------+------------+-----------
(0 rows)
So I'm wondering what I've done wrong that allows updates and deletes directly against the test.people table in this example.

The trigger is defined FOR EACH ROW, but the rows physically live in test.students, not in test.people, so a row-level trigger on the parent table never fires for them.
As a side note, you may issue select * from ONLY test.people to list only the rows stored in test.people itself, excluding child tables.
The solution seems easy: define the trigger FOR EACH STATEMENT instead of FOR EACH ROW, since you want to forbid the whole statement anyway.
CREATE TRIGGER prevent_action BEFORE INSERT OR UPDATE OR DELETE ON test.people
FOR EACH STATEMENT EXECUTE PROCEDURE test.prevent_action();
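A quick way to confirm the fix (a sketch, reusing the schema and trigger function from the example above) is to recreate the trigger at statement level and re-run the statements that previously slipped through:

```sql
-- Drop the row-level trigger and recreate it at statement level
DROP TRIGGER IF EXISTS prevent_action ON test.people;
CREATE TRIGGER prevent_action BEFORE INSERT OR UPDATE OR DELETE ON test.people
FOR EACH STATEMENT EXECUTE PROCEDURE test.prevent_action();

-- Statement-level triggers fire once per statement, even when zero rows
-- in test.people itself are affected, so all three are now blocked:
UPDATE test.people SET first_name = 'Oh';  -- ERROR: UPDATE on people is not allowed...
DELETE FROM test.people;                   -- ERROR: DELETE on people is not allowed...

-- Statements targeting the child table directly are unaffected:
UPDATE test.students SET first_name = 'Helen' WHERE last_name = 'Smith';
```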

Related

duplicate key value error that is difficult to understand

I have a table:
user_id | project_id | permission
--------------------------------------+--------------------------------------+------------
5911e84b-ab6f-4a51-942e-dab979882725 | b4f6d926-ac69-461f-9fd7-1992a1b1c5bc | owner
7e3581a4-f542-4abc-bbda-36fb91ea4bff | eff09e2a-c54b-4081-bde5-68de5d32dd73 | owner
46f9f2e3-edd1-40df-aa52-4bdc354abd38 | 59df2db8-5067-4bc2-b268-3fb1308d9d41 | owner
9089038d-4b77-4774-a095-a621fb73059a | 4f26ace1-f072-42d0-bd0d-ffbae9103b3f | owner
5911e84b-ab6f-4a51-942e-dab979882725 | 59df2db8-5067-4bc2-b268-3fb1308d9d41 | rw
I have a trigger on update:
--------------------------------------------------------------------------------
-- trigger that consumes the queue once the user responds
\set obj_name 'sharing_queue_on_update_trigger'
create or replace function :obj_name()
returns trigger as $$
begin
if new.status = 'accepted' then
-- add to the user_permissions table
insert into core.user_permissions (project_id, user_id, permission)
values (new.project, new.grantee, new.permission);
end if;
-- remove from the queue
delete from core.sharing_queue
where core.sharing_queue.grantee = new.grantee
and core.sharing_queue.project = new.project;
return null;
end;
$$ language plpgsql;
create trigger "Create a user_permission entry when user accepts invitation"
after update on core.sharing_queue
for each row
when (new.status != 'awaiting')
execute procedure :obj_name();
When I run the following update:
update sharing_queue set status='accepted' where project = 'eff09e2a-c54b-4081-bde5-68de5d32dd73';
The record in the following queue is supposed to produce a new record in the first table presented.
grantor | maybe_grantee_email | project | permission | creation_date | grantee | status
--------------------------------------+---------------------+--------------------------------------+------------+---------------+--------------------------------------+----------
7e3581a4-f542-4abc-bbda-36fb91ea4bff | edmund@gmail.com | eff09e2a-c54b-4081-bde5-68de5d32dd73 | rw | | 46f9f2e3-edd1-40df-aa52-4bdc354abd38 | awaiting
(1 row)
Specifically, the grantee with the id ending in 38 and the project_id ending in 73 is supposed to feed a new record into the first table.
However, I get the following duplicate index error:
ERROR: duplicate key value violates unique constraint "pk_project_permissions_id"
DETAIL: Key (user_id, project_id)=(46f9f2e3-edd1-40df-aa52-4bdc354abd38, eff09e2a-c54b-4081-bde5-68de5d32dd73) already exists.
CONTEXT: SQL statement "insert into core.user_permissions (project_id, user_id, permission)
values (new.project, new.grantee, new.permission)
returning new"
I don't see how I'm violating the index. There is no record with the user and project combination in the first table presented. Right?
I'm new to using triggers this much. I'm wondering if somehow I might be triggering a "double" entry that cancels the transaction.
Any pointers would be greatly appreciated.
Requested Addendum
Here is the schema for user_permissions
--------------------------------------------------------------------------------
-- 📖 user_permissions
drop table if exists user_permissions;
create table user_permissions (
user_id uuid not null,
project_id uuid not null,
permission project_permission not null,
constraint pk_project_permissions_id primary key (user_id, project_id)
);
comment on column user_permissions.permission is 'Enum owner | rw | read';
comment on table user_permissions is 'Cannot add users directly; use sharing_queue';
-- ⚠️ deleted when the user is deleted
alter table user_permissions
add constraint fk_permissions_users
foreign key (user_id) references users(id)
on delete cascade;
-- ⚠️ deleted when the project is deleted
alter table user_permissions
add constraint fk_permissions_projects
foreign key (project_id) references projects(id)
on delete cascade;
Depending on the contents of the queue, the issue may be that your trigger does not check that the record actually changed:
create trigger "Create a user_permission entry when user accepts invitation"
after update on core.sharing_queue
for each row
when ((new.status != 'awaiting')
and (old.status IS DISTINCT FROM new.status))
execute procedure :obj_name();
Without the distinct check, the trigger would run once for each row where project = 'eff09e2a-c54b-4081-bde5-68de5d32dd73'.
The suggestions were helpful as they inspired the direction of the subsequent implementation. The initial fix using @TonyArra's additional WHEN clause seemed to do the trick. The clause was no longer required once I created a series of ON CONFLICT upsert contingencies.
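As a sketch of what such an upsert contingency can look like (the columns follow the user_permissions schema above; the exact conflict action is an assumption, not the poster's actual code), the trigger's insert can be made idempotent with ON CONFLICT:

```sql
insert into core.user_permissions (project_id, user_id, permission)
values (new.project, new.grantee, new.permission)
on conflict (user_id, project_id)  -- the columns of pk_project_permissions_id
do update set permission = excluded.permission;
```

With this in place, re-firing the trigger for a (user_id, project_id) pair that already exists updates the permission instead of raising the duplicate key error.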

Generating incremental numbers based on a different column

I have got a composite primary key in a table in PostgreSQL (I am using pgAdmin4)
Let's call the two primary key columns productno and version.
version represents the version of productno.
So if I create a new dataset, then it needs to be checked if a dataset with this productno already exists.
If productno doesn't exist yet, then version should be (version) 1
If productno exists once, then version should be 2
If productno exists twice, then version should be 3
... and so on
So that we get something like:
productno | version
-----|-----------
1 | 1
1 | 2
1 | 3
2 | 1
2 | 2
I found a quite similar problem: auto increment on composite primary key
But I can't use this solution because PostgreSQL syntax is obviously a bit different - so tried a lot around with functions and triggers but couldn't figure out the right way to do it.
You can keep the version numbers in a separate table (one row for each "base PK" value). That is far more efficient than doing a max() + 1 on every insert and has the additional benefit that it's safe for concurrent transactions.
So first we need a table that keeps track of the version numbers:
create table version_counter
(
product_no integer primary key,
version_nr integer not null
);
Then we create a function that increments the version for a given product_no and returns that new version number:
create function next_version(p_product_no int)
returns integer
as
$$
insert into version_counter (product_no, version_nr)
values (p_product_no, 1)
on conflict (product_no)
do update
set version_nr = version_counter.version_nr + 1
returning version_nr;
$$
language sql
volatile;
The trick here is the INSERT ... ON CONFLICT, which increments an existing value or inserts a new row if the passed product_no does not yet exist.
For the product table:
create table product
(
product_no integer not null,
version_nr integer not null,
created_at timestamp default clock_timestamp(),
primary key (product_no, version_nr)
);
then create a trigger:
create function increment_version()
returns trigger
as
$$
begin
new.version_nr := next_version(new.product_no);
return new;
end;
$$
language plpgsql;
create trigger base_table_insert_trigger
before insert on product
for each row
execute procedure increment_version();
This is safe for concurrent transactions because the row in version_counter will be locked for that product_no until the transaction inserting the row into the product table is committed - which will commit the change to the version_counter table as well (and free the lock on that row).
If two concurrent transactions insert the same value for product_no, one of them will wait until the other finishes.
If two concurrent transactions insert different values for product_no, they can work without having to wait for the other.
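The locking behaviour can be illustrated with two concurrent sessions (a sketch; the side-by-side layout is illustrative, not runnable as a single script):

```sql
-- Session 1                              -- Session 2
begin;
insert into product (product_no)
values (1);
-- locks the version_counter row
-- for product_no = 1
                                          begin;
                                          insert into product (product_no)
                                          values (1);
                                          -- blocks, waiting for session 1's
                                          -- lock on that version_counter row
commit;
                                          -- unblocked; sees the committed
                                          -- counter and gets version_nr = 2
                                          commit;
```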
If we then insert these rows:
insert into product (product_no) values (1);
insert into product (product_no) values (2);
insert into product (product_no) values (3);
insert into product (product_no) values (1);
insert into product (product_no) values (3);
insert into product (product_no) values (2);
The product table looks like this:
select *
from product
order by product_no, version_nr;
product_no | version_nr | created_at
-----------+------------+------------------------
1 | 1 | 2019-08-23 10:50:57.880
1 | 2 | 2019-08-23 10:50:57.947
2 | 1 | 2019-08-23 10:50:57.899
2 | 2 | 2019-08-23 10:50:57.989
3 | 1 | 2019-08-23 10:50:57.926
3 | 2 | 2019-08-23 10:50:57.966
Online example: https://rextester.com/CULK95702
You can do it like this:
-- Check if the pk already exists
SELECT pk INTO temp_pk FROM table a WHERE a.pk = v_pk1;
-- If it exists, insert the new row
IF temp_pk IS NOT NULL THEN
INSERT INTO table(pk, versionpk) VALUES (v_pk1, temp_pk);
END IF;
So - I got it working now.
So if you want a column to update depending on another column in PostgreSQL, have a look at this:
This is the function I use:
CREATE FUNCTION public.testfunction()
RETURNS trigger
LANGUAGE 'plpgsql'
COST 100
VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE v_productno INTEGER := NEW.productno;
BEGIN
IF NOT EXISTS (SELECT *
FROM testtable
WHERE productno = v_productno)
THEN
NEW.version := 1;
ELSE
NEW.version := (SELECT MAX(testtable.version)+1
FROM testtable
WHERE testtable.productno = v_productno);
END IF;
RETURN NEW;
END;
$BODY$;
And this is the trigger that runs the function:
CREATE TRIGGER testtrigger
BEFORE INSERT
ON public.testtable
FOR EACH ROW
EXECUTE PROCEDURE public.testfunction();
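Note that this MAX()+1 approach can hand out the same version to two concurrent inserts of the same productno, since both sessions may read the same maximum before either commits. One possible guard (a sketch, not part of the original answer) is a transaction-scoped advisory lock keyed on the product number:

```sql
CREATE OR REPLACE FUNCTION public.testfunction()
RETURNS trigger
LANGUAGE plpgsql
AS $BODY$
BEGIN
-- Serialize concurrent inserts for the same productno; the lock is
-- released automatically when the transaction ends
PERFORM pg_advisory_xact_lock(NEW.productno);
NEW.version := COALESCE((SELECT MAX(version)
                         FROM testtable
                         WHERE productno = NEW.productno), 0) + 1;
RETURN NEW;
END;
$BODY$;
```

The version_counter approach in the earlier answer avoids the problem without advisory locks, at the cost of maintaining a second table.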
Thank you @ChechoCZ, you definitely helped me get in the right direction.

Postgres 9.5 CREATE TABLE LIKE INCLUDING ALL NO FK Constraints? [duplicate]

Foreign key constraints are not copied when using
create table table_name (like source_table INCLUDING ALL)
in Postgres. How can I create a copy of an existing table including all foreign keys.
There is no option to automatically create foreign keys in CREATE TABLE ... LIKE ....
From the documentation:
LIKE source_table [ like_option ... ]
Not-null constraints are always copied to the new table. CHECK
constraints will be copied only if INCLUDING CONSTRAINTS is specified [...]
Indexes, PRIMARY KEY, and UNIQUE constraints on the original table
will be created on the new table only if the INCLUDING INDEXES clause
is specified.
In practice it's easy with GUI tools. For example, in PgAdmin III:
copy declaration (DDL) of source_table to query tool (ctrl-e),
edit the declaration,
execute sql.
In an SQL script you can use the following function. Important assumption: the source table's foreign keys have correct names, i.e. their names contain the source table name (which is the typical situation).
create or replace function create_table_like(source_table text, new_table text)
returns void language plpgsql
as $$
declare
rec record;
begin
execute format(
'create table %s (like %s including all)',
new_table, source_table);
for rec in
select oid, conname
from pg_constraint
where contype = 'f'
and conrelid = source_table::regclass
loop
execute format(
'alter table %s add constraint %s %s',
new_table,
replace(rec.conname, source_table, new_table),
pg_get_constraintdef(rec.oid));
end loop;
end $$;
Example of use:
create table base_table (base_id int primary key);
create table source_table (id int primary key, base_id int references base_table);
select create_table_like('source_table', 'new_table');
\d new_table
Table "public.new_table"
Column | Type | Modifiers
---------+---------+-----------
id | integer | not null
base_id | integer |
Indexes:
"new_table_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"new_table_base_id_fkey" FOREIGN KEY (base_id) REFERENCES base_table(base_id)
One more way is to dump the table structure, change its name in the dump, and load it again:
pg_dump -s -t old databases | sed 's/old/new/g' | psql

Update column of an inserted row with its generated id in a single query

Say I have a table, created as follows:
CREATE TABLE test_table (id serial, unique_id varchar(50) primary key, name varchar(50));
test_table
----------
id | unique_id | name
In that table, I would like to update the unique_id field with the newly inserted id concatenated with the inserted name in a single go.
Usually this is accomplished by two queries. (PHP way)
$q = "INSERT INTO table (unique_id,name) values ('uid','abc') returning id||name as unique_id;";
$r = pg_query($dbconn,$q);
$row = pg_fetch_array($r);
$q1 = "UPDATE test_table set unique_id =".$row['unique_id']." where unique_id='uid'";
$r1 = pg_query($dbconn,$q1);
Is there any way to do the above in a single query?
You have several options here. You could create an AFTER trigger which uses the generated id for a direct update of the same row:
CREATE TRIGGER test_table_insert AFTER INSERT ON test_table
FOR EACH ROW EXECUTE PROCEDURE test_table_insert();
And in your function you update the value:
CREATE FUNCTION test_table_insert() RETURNS TRIGGER AS $$
BEGIN
UPDATE test_table SET unique_id = NEW.id::text || NEW.name WHERE id = NEW.id;
RETURN NULL;  -- the return value of an AFTER row trigger is ignored
END;
$$ LANGUAGE plpgsql;
You need to add the function before the trigger.
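A BEFORE trigger variant (a sketch, assuming the same test_table definition; the function name is made up for illustration) avoids the extra UPDATE entirely, because column defaults, including the serial id, are applied before BEFORE ROW triggers fire:

```sql
CREATE FUNCTION test_table_before_insert() RETURNS TRIGGER AS $$
BEGIN
-- NEW.id is already populated from the serial default at this point
NEW.unique_id := NEW.id::text || NEW.name;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_table_before_insert BEFORE INSERT ON test_table
FOR EACH ROW EXECUTE PROCEDURE test_table_before_insert();
```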
Another option would be to do it directly in the INSERT:
INSERT INTO test_table (id, unique_id, name)
VALUES (nextval('test_table_id_seq'),
        currval('test_table_id_seq')::text || 'abc',
        'abc')
RETURNING id;
But as a_horse_with_no_name pointed out, I think you may have a problem in your database design.

Using Rule to Insert Into Secondary Table Auto-Increments Sequence

To automatically add a column in a second table to tie it to the first table via a unique index, I have a rule such as follows:
CREATE OR REPLACE RULE auto_insert AS ON INSERT TO user DO ALSO
INSERT INTO lastlogin (id) VALUES (NEW.userid);
This works fine if user.userid is a plain integer. However, if it is sequence-backed (e.g., type serial or bigserial), what is inserted into table lastlogin is the next sequence id, because a rule rewrites the query textually, so NEW.userid expands to the nextval() default expression and is evaluated again. So this command:
INSERT INTO user (username) VALUES ('john');
would insert the row [1, 'john', ...] into user but the row [2, ...] into lastlogin. The following two workarounds do work, except that the second one consumes twice as many sequence values since the sequence still auto-increments:
CREATE OR REPLACE RULE auto_insert AS ON INSERT TO user DO ALSO
INSERT INTO lastlogin (id) VALUES (lastval());
CREATE OR REPLACE RULE auto_insert AS ON INSERT TO user DO ALSO
INSERT INTO lastlogin (id) VALUES (NEW.userid-1);
Unfortunately, the workarounds do not work if I'm inserting multiple rows:
INSERT INTO user (username) VALUES ('john'), ('mary');
The first workaround would use the same id for every row, and the second workaround misbehaves in all kinds of ways.
Is it possible to do this via postgresql rules or should I simply do the 2nd insertion into lastlogin myself or use a row trigger? Actually, I think the row trigger would also auto-increment the sequence when I access NEW.userid.
Forget rules altogether. They're bad.
Triggers are better for you here, and in 99% of the cases where someone thinks they need a rule. Try this:
create table users (
userid serial primary key,
username text
);
create table lastlogin (
userid int primary key references users(userid),
lastlogin_time timestamp with time zone
);
create or replace function lastlogin_create_id() returns trigger as $$
begin
insert into lastlogin (userid) values (NEW.userid);
return NEW;
end;
$$
language plpgsql volatile;
create trigger lastlogin_create_id
after insert on users for each row execute procedure lastlogin_create_id();
Then:
insert into users (username) values ('foo'),('bar');
select * from users;
userid | username
--------+----------
1 | foo
2 | bar
(2 rows)
select * from lastlogin;
userid | lastlogin_time
--------+----------------
1 |
2 |
(2 rows)