Ensuring members of a link table share a common property - postgresql

I'm using a link table to represent a many-to-many relationship as follows (slightly modified for my use case from this previous answer):
CREATE TABLE owner(
owner_id uuid DEFAULT gen_random_uuid(),
PRIMARY KEY(owner_id)
);
CREATE TABLE product(
product_id uuid DEFAULT gen_random_uuid(),
owner_id uuid NOT NULL,
PRIMARY KEY(product_id),
FOREIGN KEY(owner_id) REFERENCES owner(owner_id)
);
CREATE TABLE bill(
bill_id uuid DEFAULT gen_random_uuid(),
owner_id uuid NOT NULL,
PRIMARY KEY(bill_id),
FOREIGN KEY(owner_id) REFERENCES owner(owner_id)
);
CREATE TABLE bill_product(
bill_id uuid,
product_id uuid,
PRIMARY KEY(bill_id, product_id),
FOREIGN KEY(bill_id) REFERENCES bill(bill_id),
FOREIGN KEY(product_id) REFERENCES product(product_id)
);
This will of course allow a given bill to belong to many products and vice versa. However, I am wondering what the best way is to ensure that the bill and product belong to the same owner.
I see two options:
Trigger - Have the owner of the bill and product checked BEFORE INSERT, e.g.
CREATE OR REPLACE FUNCTION verify_bill_product_owner() RETURNS trigger AS $trg$
BEGIN
IF (SELECT owner_id FROM product WHERE product_id = NEW.product_id)
<>
(SELECT owner_id FROM bill WHERE bill_id = NEW.bill_id)
THEN
RAISE EXCEPTION 'bill and product do not belong to the same owner';
END IF;
RETURN NEW;
END
$trg$ LANGUAGE plpgsql;
CREATE TRIGGER tr_bill_product_biu
BEFORE INSERT OR UPDATE on bill_product
FOR EACH ROW
EXECUTE PROCEDURE verify_bill_product_owner();
Compound foreign key - Add the owner_id to the bill_product table and have something like:
-- ..
owner_id uuid,
FOREIGN KEY(owner_id, bill_id) REFERENCES bill(owner_id, bill_id),
FOREIGN KEY(owner_id, product_id) REFERENCES product(owner_id, product_id),
-- ..
I think both would work; I'm just wondering which is more idiomatic and which would work best in a multi-client/session environment.
I'm using Postgres 9.4.2 :-)

The compound foreign key is cleaner, but it needs extra storage and may have performance implications once the table gets large. The trigger achieves the same effect, but I would rewrite the function as follows:
CREATE OR REPLACE FUNCTION verify_bill_product_owner() RETURNS trigger AS $trg$
BEGIN
PERFORM *
FROM product
JOIN bill USING (owner_id)
WHERE product_id = NEW.product_id AND bill_id = NEW.bill_id;
IF NOT FOUND THEN
RETURN NULL;
ELSE
RETURN NEW;
END IF;
END; $trg$ LANGUAGE plpgsql;
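Note that this version silently skips the offending row (RETURN NULL) instead of raising an error; raise an exception in the IF NOT FOUND branch if you prefer a hard failure. If you go with the compound foreign key instead, keep in mind that each referenced column pair must be covered by a unique or primary key constraint. A minimal sketch of that variant, replacing the original bill_product definition (constraint names are my own):
ALTER TABLE bill ADD CONSTRAINT bill_owner_uni UNIQUE (owner_id, bill_id);
ALTER TABLE product ADD CONSTRAINT product_owner_uni UNIQUE (owner_id, product_id);
CREATE TABLE bill_product(
bill_id uuid,
product_id uuid,
owner_id uuid NOT NULL,
PRIMARY KEY(bill_id, product_id),
FOREIGN KEY(owner_id, bill_id) REFERENCES bill(owner_id, bill_id),
FOREIGN KEY(owner_id, product_id) REFERENCES product(owner_id, product_id)
);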

Related

How to data model versioning use case in relational database

I have a use case where I need to increment the version for an individual id by one.
CREATE TABLE order (
id BIGSERIAL PRIMARY KEY,
version INT NOT NULL
);
An order with order id 1 can have multiple revisions, say versions 1, 2 and 3, and when a new order with a different order id comes in, its revision should start again from 1 and be incremented by 1 whenever there is a change.
I know this can be taken care of at the application/program layer. I want to know if there is any constraint or option at the DB layer.
This works for me. (I changed the table name to "orders", since order is a reserved word, and added field1 so there is a field to change.)
CREATE TABLE orders (
id BIGSERIAL PRIMARY KEY,
version INT NOT NULL,
field1 VARCHAR
);
CREATE OR REPLACE FUNCTION update_row_version()
RETURNS TRIGGER AS $$
BEGIN
IF NEW.version = OLD.version THEN
-- the caller did not change the version: bump it automatically
NEW.version = COALESCE(OLD.version, 0) + 1;
RETURN NEW;
ELSE
-- the version was changed explicitly: keep the row as it was
RETURN OLD;
END IF;
END;
$$ language 'plpgsql';
CREATE TRIGGER update_row_version BEFORE UPDATE ON orders FOR EACH ROW EXECUTE PROCEDURE update_row_version();
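For instance (my own test data, not from the original answer), an update that does not touch version gets it bumped automatically:
INSERT INTO orders (version, field1) VALUES (1, 'first draft');
UPDATE orders SET field1 = 'revised' WHERE id = 1; -- the trigger bumps version to 2
SELECT id, version, field1 FROM orders;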

PostgreSQL: Foreign key between composite type and independent columns

Minimal definitions:
CREATE TYPE GlobalId AS (
id1 BigInt,
id2 SmallInt
);
CREATE TABLE table1 (
id1 BigSerial NOT NULL,
id2 SmallInt NOT NULL,
PRIMARY KEY (id1, id2)
);
CREATE TABLE table2 (
global_id GlobalId NOT NULL,
FOREIGN KEY (global_id) REFERENCES table1 (id1, id2)
);
In short, I use a composite type for table2 (and many other tables), but for the primary table (table1), I don't directly use the composite type because composite types don't support the use of Serial.
The above produces the following error due to the ostensible mismatch between global_id and id1, id2: number of referencing and referenced columns for foreign key disagree.
Alternatively, if I define the foreign key as FOREIGN KEY (global_id.id1, global_id.id2) REFERENCES table1 (id1, id2), I get a syntax error on using an accessor on global_id.
Any ideas on how to define this foreign key relationship? Alternatively, if there's a way for table1 to use the GlobalId composite type while still getting serial/sequence behavior for id1, that works also.
You can define table1 using your composite type and fill the value using a BEFORE trigger:
CREATE TABLE table1 (id globalid PRIMARY KEY);
CREATE SEQUENCE s OWNED BY table1.id;
CREATE FUNCTION ins_trig() RETURNS trigger LANGUAGE plpgsql AS
$$BEGIN
NEW.id = (nextval('s'), (NEW.id).id2);
RETURN NEW;
END;$$;
CREATE TRIGGER ins_trig BEFORE INSERT ON table1 FOR EACH ROW
EXECUTE PROCEDURE ins_trig();
INSERT INTO table1 VALUES (ROW(NULL, 42));
SELECT * FROM table1;
id
--------
(1,42)
(1 row)
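With table1 keyed on the composite column, the foreign key from the question can presumably be declared directly against that column. A sketch (not part of the original answer):
CREATE TABLE table2 (
global_id globalid NOT NULL,
FOREIGN KEY (global_id) REFERENCES table1 (id)
);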

How to have parent record data available when a child one is deleted through cascade

Consider the following two tables:
CREATE TABLE public.parent
(
id bigint NOT NULL DEFAULT nextval('parent_id_seq'::regclass),
CONSTRAINT pk_parent PRIMARY KEY (id)
);
CREATE TABLE public.child
(
child_id bigint NOT NULL DEFAULT nextval('child_child_id_seq'::regclass),
parent_id bigint NOT NULL,
CONSTRAINT pk_child PRIMARY KEY (child_id),
CONSTRAINT inx_parent FOREIGN KEY (parent_id)
REFERENCES public.parent (id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
);
CREATE INDEX fki_child
ON public.child
USING btree
(parent_id);
CREATE TRIGGER child_trg
BEFORE DELETE
ON public.child
FOR EACH ROW
EXECUTE PROCEDURE public.trg();
And trg() is defined as:
CREATE OR REPLACE FUNCTION public.trg()
RETURNS trigger AS
$BODY$BEGIN
INSERT INTO temp
SELECT p.id
FROM parent p
WHERE
p.id = OLD.parent_id;
return OLD;
END;$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
To sum up what is happening: there are two tables with a simple parent-child relationship and ON DELETE CASCADE on it. There is also a trigger defined on child that listens for deletions. In that trigger I need to access the parent's data when child records are deleted by the cascade, but I cannot, since the parent row is already gone. Does anyone have any idea how?
One solution would be to create a BEFORE DELETE trigger on parent instead, which can see all data.
CREATE OR REPLACE FUNCTION public.trg_parent()
RETURNS trigger AS
$func$
BEGIN
INSERT INTO some_tbl (id) -- use target list !!
VALUES (OLD.id); -- OLD is a parent row here, so the column is id, not parent_id
RETURN OLD;
END
$func$ LANGUAGE plpgsql;
CREATE TRIGGER parent_trg
BEFORE DELETE ON public.parent
FOR EACH ROW EXECUTE PROCEDURE public.trg_parent();
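With that in place, deleting a parent row records its id before the cascade removes the children. A quick illustration (some_tbl is a stand-in target table, analogous to temp in the question):
CREATE TABLE some_tbl (id bigint);
DELETE FROM parent WHERE id = 1; -- parent_trg fires first, then the FK cascade deletes the child rows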

Using triggers to maintain linking table

I'm considering employing triggers to maintain a linking table. However, my initial approach fails due to a foreign key constraint violation. Is there any way to solve the issue without disabling constraints?
CREATE TABLE foo (
id SERIAL PRIMARY KEY,
data TEXT
);
CREATE TABLE bar (
id SERIAL PRIMARY KEY,
data TEXT
);
CREATE TABLE foo_bar_link (
foo_id INT NOT NULL REFERENCES foo(id),
bar_id INT NOT NULL REFERENCES bar(id),
UNIQUE (foo_id, bar_id)
);
CREATE OR REPLACE FUNCTION maintain_link()
RETURNS TRIGGER AS
$maintain_link$
DECLARE
bar_id INT;
BEGIN
INSERT INTO bar (data) VALUES ('not_important_for_this_example_bar_data') RETURNING id INTO bar_id;
INSERT INTO foo_bar_link (foo_id, bar_id) VALUES (NEW.id, bar_id);
RETURN NEW;
END;
$maintain_link$
LANGUAGE plpgsql;
CREATE TRIGGER maintain_link BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE maintain_link();
Here is sqlfiddle.
Use an AFTER INSERT trigger; a BEFORE INSERT trigger fails because your parent row in foo doesn't exist yet when the link row is inserted. A sketch of the change follows.
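Assuming the function stays as posted, something like this should do (drop the old trigger first):
DROP TRIGGER IF EXISTS maintain_link ON foo;
CREATE TRIGGER maintain_link AFTER INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE maintain_link();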

Get row to swap tables on a certain condition

I currently have a parent table:
CREATE TABLE members (
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
first_name varchar(20)
last_name varchar(20)
address address (composite type)
contact_numbers varchar(11)[3]
date_joined date
type varchar(5)
);
and two related tables:
CREATE TABLE basic_member (
activities varchar[3])
INHERITS (members)
);
CREATE TABLE full_member (
activities varchar[])
INHERITS (members)
);
If the type is full, the details are entered into the full_member table; if the type is basic, into the basic_member table. What I want is that if I run an update and change the type to basic or full, the tuple moves into the corresponding table.
I was wondering if I could do this with a rule like:
CREATE RULE tuple_swap_full
AS ON UPDATE TO full_member
WHERE new.type = 'basic'
INSERT INTO basic_member VALUES (old.member_id, old.first_name, old.last_name,
old.address, old.contact_numbers, old.date_joined, new.type, old.activities);
... then delete the record from the full_member
Just wondering if my rule is anywhere near or if there is a better way.
You don't need
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
A PRIMARY KEY implies UNIQUE NOT NULL automatically:
member_id SERIAL PRIMARY KEY
I wouldn't use a hard-coded maximum length like varchar(20). Just use text and add a CHECK constraint if you really must enforce a maximum length. That is easier to change later.
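For example, a column definition along these lines (the limit of 20 is just illustrative):
first_name text CHECK (char_length(first_name) <= 20)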
The syntax for INHERITS is mangled: the keyword goes outside the parentheses around the column list.
CREATE TABLE full_member (
activities text[]
) INHERITS (members);
Table names are inconsistent (members <-> member). I use the singular form everywhere in my test case.
Finally, I would not use a RULE for the task. A trigger AFTER UPDATE seems preferable.
Consider the following test case:
Tables:
CREATE SCHEMA x; -- I put everything in a test schema named "x".
-- DROP TABLE x.member CASCADE;
CREATE TABLE x.member (
member_id SERIAL PRIMARY KEY
,first_name text
-- more columns ...
,type text);
CREATE TABLE x.basic_member (
activities text[3]
) INHERITS (x.member);
CREATE TABLE x.full_member (
activities text[]
) INHERITS (x.member);
Trigger function:
Data-modifying CTEs (WITH x AS (DELETE ...) INSERT ...) are the best tool for the purpose. They require PostgreSQL 9.1 or later; for older versions, first INSERT, then DELETE.
CREATE OR REPLACE FUNCTION x.trg_move_member()
RETURNS trigger AS
$BODY$
BEGIN
CASE NEW.type
WHEN 'basic' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
WHEN 'full' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.full_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
END CASE;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Trigger:
Note that it is an AFTER trigger and has a WHEN condition.
The WHEN condition requires PostgreSQL 9.0 or later. For earlier versions, you can just leave it out; the CASE statement in the trigger function itself takes care of it.
CREATE TRIGGER up_aft
AFTER UPDATE
ON x.member
FOR EACH ROW
WHEN (NEW.type IN ('basic','full')) -- OLD.type cannot be IN ('basic','full')
EXECUTE PROCEDURE x.trg_move_member();
Test:
INSERT INTO x.member (first_name, type) VALUES ('peter', NULL);
UPDATE x.member SET type = 'full' WHERE first_name = 'peter';
SELECT * FROM ONLY x.member;
SELECT * FROM x.basic_member;
SELECT * FROM x.full_member;
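If everything works, the first two queries should return no rows and the last one should show the moved row, roughly:
member_id | first_name | type | activities
-----------+------------+------+------------
1 | peter | full |
(1 row)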