Get row to swap tables on a certain condition - postgresql

I currently have a parent table:
CREATE TABLE members (
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
first_name varchar(20)
last_name varchar(20)
address address (composite type)
contact_numbers varchar(11)[3]
date_joined date
type varchar(5)
);
and two related tables:
CREATE TABLE basic_member (
activities varchar[3])
INHERITS (members)
);
CREATE TABLE full_member (
activities varchar[])
INHERITS (members)
);
If the type is 'full', the details are entered into the full_member table; if the type is 'basic', into the basic_member table. What I want is that if I run an UPDATE and change the type to 'basic' or 'full', the tuple moves into the corresponding table.
I was wondering if I could do this with a rule like:
CREATE RULE tuple_swap_full
AS ON UPDATE TO full_member
WHERE new.type = 'basic'
INSERT INTO basic_member VALUES (old.member_id, old.first_name, old.last_name,
old.address, old.contact_numbers, old.date_joined, new.type, old.activities);
... then delete the record from the full_member
Just wondering if my rule is anywhere near correct, or if there is a better way.

You don't need
member_id SERIAL NOT NULL, UNIQUE, PRIMARY KEY
A PRIMARY KEY implies UNIQUE NOT NULL automatically:
member_id SERIAL PRIMARY KEY
I wouldn't use a hard-coded maximum length like varchar(20). Just use text and add a CHECK constraint if you really must enforce a maximum length. That's easier to change later.
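For example, a column definition along these lines (just a sketch of that suggestion, not part of the original answer):
first_name text CHECK (char_length(first_name) <= 20)  -- the limit is a plain constraint, easy to relax later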
The syntax for INHERITS is mangled. The keyword goes outside the parentheses around the column list:
CREATE TABLE full_member (
activities text[]
) INHERITS (members);
Table names are inconsistent (members <-> member). I use the singular form everywhere in my test case.
Finally, I would not use a RULE for the task. A trigger AFTER UPDATE seems preferable.
Consider the following test case:
Tables:
CREATE SCHEMA x; -- I put everything in a test schema named "x".
-- DROP TABLE x.member CASCADE;
CREATE TABLE x.member (
member_id SERIAL PRIMARY KEY
,first_name text
-- more columns ...
,type text);
CREATE TABLE x.basic_member (
activities text[3]
) INHERITS (x.member);
CREATE TABLE x.full_member (
activities text[]
) INHERITS (x.member);
Trigger function:
Data-modifying CTEs (WITH x AS (DELETE ...) INSERT ...) are the best tool for the purpose. They require PostgreSQL 9.1 or later.
For older versions, run the INSERT first and then the DELETE; see the sketch after the trigger function below.
CREATE OR REPLACE FUNCTION x.trg_move_member()
RETURNS trigger AS
$BODY$
BEGIN
CASE NEW.type
WHEN 'basic' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.basic_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
WHEN 'full' THEN
WITH x AS (
DELETE FROM x.member
WHERE member_id = NEW.member_id
RETURNING *
)
INSERT INTO x.full_member (member_id, first_name, type) -- more columns
SELECT member_id, first_name, type -- more columns
FROM x;
END CASE;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
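For the pre-9.1 route mentioned above, each CASE branch could be written without the data-modifying CTE. A sketch (not part of the original answer) of the 'basic' branch:
-- INSERT first, then DELETE.
-- Note ONLY: without it, the DELETE would also remove the freshly inserted child row.
INSERT INTO x.basic_member (member_id, first_name, type)  -- more columns
VALUES (NEW.member_id, NEW.first_name, NEW.type);
DELETE FROM ONLY x.member
WHERE member_id = NEW.member_id;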
Trigger:
Note that it is an AFTER trigger and has a WHEN condition.
The WHEN condition requires PostgreSQL 9.0 or later. For earlier versions, you can just leave it away; then add an empty ELSE NULL; branch to the CASE statement in the trigger function, so that other values of type don't raise a CASE_NOT_FOUND error.
CREATE TRIGGER up_aft
AFTER UPDATE
ON x.member
FOR EACH ROW
WHEN (NEW.type IN ('basic', 'full')) -- OLD.type cannot be IN ('basic', 'full')
EXECUTE PROCEDURE x.trg_move_member();
Test:
INSERT INTO x.member (first_name, type) VALUES ('peter', NULL);
UPDATE x.member SET type = 'full' WHERE first_name = 'peter';
SELECT * FROM ONLY x.member;
SELECT * FROM x.basic_member;
SELECT * FROM x.full_member;

Related

Get row number of row to be inserted in Postgres trigger that gives no collisions when inserting multiple rows

Given the following (simplified) schema:
CREATE TABLE period (
id UUID NOT NULL DEFAULT uuid_generate_v4(),
name TEXT
);
CREATE TABLE course (
id UUID NOT NULL DEFAULT uuid_generate_v4(),
name TEXT
);
CREATE TABLE registration (
id UUID NOT NULL DEFAULT uuid_generate_v4(),
period_id UUID NOT NULL REFERENCES period(id),
course_id UUID NOT NULL REFERENCES course(id),
inserted_at timestamptz NOT NULL DEFAULT now()
);
I now want to add a new column client_ref, which identifies a registration unique within a period, but consists of only a 4-character string. I want to use pg_hashids - which requires a unique integer input - to base the column value on.
I was thinking of setting up a trigger on the registration table that runs on inserting a new row. I came up with the following:
CREATE OR REPLACE FUNCTION set_client_ref()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
next_row_number integer;
BEGIN
WITH rank AS (
SELECT
period.id AS period_id,
row_number() OVER (PARTITION BY period.id ORDER BY registration.inserted_at)
FROM
registration
JOIN period ON registration.period_id = period.id ORDER BY
period.id,
row_number
)
SELECT
COALESCE(rank.row_number, 0) + 1 INTO next_row_number
FROM
period
LEFT JOIN rank ON (rank.period_id = period.id)
WHERE
period.id = NEW.period_id
ORDER BY
rank.row_number DESC
LIMIT 1;
NEW.client_ref = id_encode (next_row_number);
RETURN NEW;
END
$function$
;
The trigger is set up like this: CREATE TRIGGER set_client_ref BEFORE INSERT ON registration FOR EACH ROW EXECUTE FUNCTION set_client_ref();
This works as expected when inserting a single row into registration, but if I insert multiple rows in one statement, they all end up with the same client_ref. I can reason about why this happens (the rows don't know about each other's existence, so each one assumes it is next in line when computing its row_number), but I am not sure how to prevent it. I tried setting up the trigger as an AFTER trigger, but it resulted in the same (duplicated) behaviour.
What would be a better way to get the lowest possible, unique integer for the rows to be inserted (to base the hash function on) that also works when inserting multiple rows?
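The thread does not include an answer, but one possible direction (purely a sketch with a hypothetical sequence name, not taken from the thread) is to feed id_encode() from a plain sequence instead of a computed row number: nextval() stays unique under concurrent and multi-row inserts, at the cost of gaps and of the counter being global rather than per-period, so the value is unique but not necessarily the lowest possible. The BEFORE INSERT trigger from the question can stay as it is; only the function body changes:
-- Sketch only: a sequence as the unique integer input for pg_hashids.
CREATE SEQUENCE registration_client_ref_seq;
CREATE OR REPLACE FUNCTION set_client_ref()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
BEGIN
-- nextval() is never reused, so concurrent and multi-row inserts get distinct values.
NEW.client_ref := id_encode(nextval('registration_client_ref_seq'));
RETURN NEW;
END
$function$;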

PostgreSQL -- JOIN UNNEST output with CTE INSERT ID -- INSERT many to many

In a PostgreSQL function, is it possible to join the result of UNNEST, which is an integer array from function input, with an ID returned from a CTE INSERT?
I have PostgreSQL tables like:
CREATE TABLE public.message (
id SERIAL PRIMARY KEY,
content TEXT
);
CREATE TABLE public.message_tag (
id SERIAL PRIMARY KEY,
message_id INTEGER NOT NULL CONSTRAINT message_tag_message_id_fkey REFERENCES public.message(id) ON DELETE CASCADE,
tag_id INTEGER NOT NULL CONSTRAINT message_tag_tag_id_fkey REFERENCES public.tag(id) ON DELETE CASCADE
);
I want to create a PostgreSQL function which takes input of content and an array of tag_id. This is for graphile. I want to do it all in one function, so I get a mutation.
Here's what I got so far. I don't know how to join an UNNEST across an id returned from a CTE.
CREATE FUNCTION public.create_message(content text, tags Int[])
RETURNS public.message
AS $$
-- insert to get primary key of message, for many to many message_id
WITH moved_rows AS (
INSERT INTO public.message (content)
RETURNING *;
)
-- many to many relation
INSERT INTO public.message_tag
SELECT moved_rows.id as message_id, tagInput.tag_id FROM moved_rows, UNNEST(tags) as tagInput;
RETURNING *
$$ LANGUAGE sql VOLATILE STRICT;
You're not that far from your goal:
the semicolon placement in the CTE is wrong
the first INSERT statement lacks a SELECT or VALUES clause to specify what should be inserted
the INSERT into message_tag should specify the columns in which to insert (especially if you have that unnecessary serial id)
you specified a relation alias for the UNNEST call already, but none for the column tag_id
your function was RETURNING a set of message_tag rows but was specified to return a single message row
To fix these:
CREATE FUNCTION public.create_message(content text, tags Int[])
RETURNS public.message
AS $$
-- insert to get primary key of message, for many to many message_id
WITH moved_rows AS (
INSERT INTO public.message (content)
VALUES ($1)
RETURNING *
),
-- many to many relation
_ AS (
INSERT INTO public.message_tag (message_id, tag_id)
SELECT moved_rows.id, tagInput.tag_id
FROM moved_rows, UNNEST($2) as tagInput(tag_id)
)
TABLE moved_rows;
$$ LANGUAGE sql VOLATILE STRICT;
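For completeness, a call might look like this (assuming a row with id 1 exists in public.tag; this usage example is not part of the original answer):
SELECT * FROM public.create_message('hello world', ARRAY[1]);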

Postgresql - retrieving referenced fields in a query

I have a table created like
CREATE TABLE data
(value1 smallint references labels,
value2 smallint references labels,
value3 smallint references labels,
otherdata varchar(32)
);
and a second 'label holding' table created like
CREATE TABLE labels (id serial primary key, name varchar(32));
The rationale behind it is that value1-3 are a very limited set of strings (6 options) and it seems inefficient to enter them directly in the data table as varchar types. On the other hand these do occasionally change, which makes enum types unsuitable.
My question is, how can I execute a single query such that instead of the label IDs I get the relevant labels?
I looked at creating a function for it and stumbled at the point where I needed to pass the label-holding table name to the function (there are several such label-holding tables across the schema). Do I need to create one function per label table to avoid that?
create or replace function translate
(ref_id smallint,reference_table regclass) returns varchar(128) as
$$
begin
select name from reference_table where id = ref_id;
return name;
end;
$$
language plpgsql;
And then do
select
translate(value1, labels) as foo,
translate(value2, labels) as bar
from data;
This however errors out with
ERROR: relation "reference_table" does not exist
All suggestions welcome - at this point I can still alter just about anything...
CREATE TABLE labels
( id smallserial primary key
, name varchar(32) UNIQUE -- <<-- might want this, too
);
CREATE TABLE data
( value1 smallint NOT NULL REFERENCES labels(id) -- <<-- here
, value2 smallint NOT NULL REFERENCES labels(id)
, value3 smallint NOT NULL REFERENCES labels(id)
, otherdata varchar(32)
, PRIMARY KEY (value1,value2,value3) -- <<-- added primary key here
);
-- No need for a function here.
-- For small sizes of the `labels` table, the query below will always
-- result in hash-joins to perform the lookups.
SELECT l1.name AS name1, l2.name AS name2, l3.name AS name3
, d.otherdata AS the_data
FROM data d
JOIN labels l1 ON l1.id = d.value1
JOIN labels l2 ON l2.id = d.value2
JOIN labels l3 ON l3.id = d.value3
;
Note: labels.id -> labels.name is a functional dependency (id is the primary key), but that doesn't mean that you need a function. The query just acts like a function.
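If that lookup is needed in many places, the query could also be wrapped in a view (a sketch, not part of the original answer):
CREATE VIEW data_labeled AS
SELECT l1.name AS name1, l2.name AS name2, l3.name AS name3
, d.otherdata AS the_data
FROM data d
JOIN labels l1 ON l1.id = d.value1
JOIN labels l2 ON l2.id = d.value2
JOIN labels l3 ON l3.id = d.value3;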
You can pass the label table name as a string, build the query dynamically, and execute it (using format() with %I to quote the identifier safely):
sql := format('SELECT name FROM %I WHERE id = $1', reference_table_name);
EXECUTE sql INTO name USING ref_id;
RETURN name;
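Wrapped into a complete function, a dynamic-SQL version might look like this (a sketch along the lines of the snippet above; the name and return type mirror the question's attempt, and the table name is passed as text):
CREATE OR REPLACE FUNCTION translate(ref_id smallint, reference_table_name text)
RETURNS varchar(128) AS
$$
DECLARE
result varchar(128);
BEGIN
EXECUTE format('SELECT name FROM %I WHERE id = $1', reference_table_name)
INTO result
USING ref_id;
RETURN result;
END;
$$ LANGUAGE plpgsql;
-- Usage:
-- SELECT translate(value1, 'labels') AS foo, translate(value2, 'labels') AS bar FROM data;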

Avoid putting PostgreSQL function result into one field

What I am after is a query that calls a function and gets the function's result set back as separate fields. I can call the function, but its results all end up in one field.
ie: http://i.stack.imgur.com/ETLCL.png and the results I am after are: http://i.stack.imgur.com/wqRQ9.png
Here's the code to create the table
CREATE TABLE tbl_1_hm
(
tbl_1_hm_id bigserial NOT NULL,
tbl_1_hm_f1 VARCHAR (250),
tbl_1_hm_f2 INTEGER,
CONSTRAINT tbl_1_hm PRIMARY KEY (tbl_1_hm_id)
);
-- repeat this a few times to get some data
INSERT INTO tbl_1_hm (tbl_1_hm_f1, tbl_1_hm_f2)
VALUES ('hello', 1);
CREATE OR REPLACE FUNCTION proc_1_hm(id BIGINT)
RETURNS TABLE(tbl_1_hm_f1 VARCHAR(250), tbl_1_hm_f2 int) AS $$
SELECT tbl_1_hm_f1, tbl_1_hm_f2
FROM tbl_1_hm
WHERE tbl_1_hm_id = id
$$ LANGUAGE SQL;
--And here is the current query I am running for my results:
SELECT t1.tbl_1_hm_id, proc_1_hm(t1.tbl_1_hm_id) AS t3
FROM tbl_1_hm AS t1
Thanks for having a read. Please don't haggle about the semantics of hitting the same table twice or my naming convention; this is a simplified test.
When a function returns a set of records, you should treat it as a table source:
SELECT t1.tbl_1_hm_id, t3.*
FROM tbl_1_hm AS t1, proc_1_hm(t1.tbl_1_hm_id) AS t3;
Note that set-returning functions in the FROM list are implicitly LATERAL, so the function can use fields from tables listed earlier in the FROM clause without an explicit join condition.
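The same query with the join spelled out explicitly (equivalent to the comma form above):
SELECT t1.tbl_1_hm_id, t3.*
FROM tbl_1_hm AS t1
CROSS JOIN LATERAL proc_1_hm(t1.tbl_1_hm_id) AS t3;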

Need foreign key as array

CREATE TABLE test ( id int PRIMARY KEY , name text );
CREATE TABLE test1 ( id integer[] REFERENCES test , rollid int );
ERROR: foreign key constraint "test3_id_fkey" cannot be implemented
DETAIL: Key columns "id" and "id" are of incompatible types: integer[] and integer.
After that I tried another way:
CREATE TABLE test1 ( id integer[] , rollid int);
ALTER TABLE test1 ADD CONSTRAINT foreignkeyarray FOREIGN KEY (id) REFERENCES test;
ERROR: foreign key constraint "fkarray" cannot be implemented
DETAIL: Key columns "id" and "id" are of incompatible types: integer[] and integer.
So when I try to create a foreign key on an array column, it gives an error. Can anyone tell me how to do this?
My PostgreSQL version is 9.1.
What you're trying to do simply can't be done. At all. No ifs, no buts.
Create a new table, test1_test, containing two columns, test1_id and test_id. Put the foreign keys as needed on that one, and make test1's id a plain integer.
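A sketch of that suggestion (not part of the original answer; it redefines test1 with an integer id, as recommended):
CREATE TABLE test1 ( id int PRIMARY KEY, rollid int );
CREATE TABLE test1_test (
test1_id int REFERENCES test1 (id),
test_id int REFERENCES test (id),
PRIMARY KEY (test1_id, test_id)
);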
Using arrays whose elements are meant to act as foreign keys is usually a sign of incorrect design; you need a separate table with a one-to-many relationship.
But technically it is possible. Here is an example of checking array values without triggers: one reusable function with parameters and dynamic SQL. Tested on PostgreSQL 10.5.
create schema if not exists test;
CREATE OR REPLACE FUNCTION test.check_foreign_key_array(data anyarray, ref_schema text, ref_table text, ref_column text)
RETURNS BOOL
RETURNS NULL ON NULL INPUT
LANGUAGE plpgsql
AS
$body$
DECLARE
fake_id text;
sql text default format($$
select id::text
from unnest($1) as x(id)
where id is not null
and id not in (select %3$I
from %1$I.%2$I
where %3$I = any($1))
limit 1;
$$, ref_schema, ref_table, ref_column);
BEGIN
EXECUTE sql USING data INTO fake_id;
IF (fake_id IS NOT NULL) THEN
RAISE NOTICE 'Array element value % does not exist in column %.%.%', fake_id, ref_schema, ref_table, ref_column;
RETURN false;
END IF;
RETURN true;
END
$body$;
drop table if exists test.t1, test.t2;
create table test.t1 (
id integer generated by default as identity primary key
);
create table test.t2 (
id integer generated by default as identity primary key,
t1_ids integer[] not null check (test.check_foreign_key_array(t1_ids, 'test', 't1', 'id'))
);
insert into test.t1 (id) values (default), (default), (default); --ok
insert into test.t2 (id, t1_ids) values (default, array[1,2,3]); --ok
insert into test.t2 (id, t1_ids) values (default, array[1,2,3,555]); --error
If you can make sure that only values from test.id are put into the array, you can protect the referenced side (preventing deletion or re-keying of ids that are still referenced) with a trigger like this:
CREATE OR REPLACE FUNCTION test_trigger() RETURNS trigger
LANGUAGE plpgsql AS $BODY$
DECLARE
val integer;
BEGIN
SELECT id INTO val
FROM (
SELECT UNNEST(id) AS id
FROM test1
) AS q
WHERE id = OLD.id;
IF val IS NULL THEN RETURN OLD;
ELSE
RAISE 'Integrity Constraint Violation: ID "%" in Test1', val USING ERRCODE = '23000';
RETURN NULL;
END IF;
END; $BODY$;
-- DROP TRIGGER test_delete_trigger ON test;
CREATE TRIGGER test_delete_trigger BEFORE DELETE OR UPDATE OF id ON test
FOR EACH ROW EXECUTE PROCEDURE test_trigger();
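With the trigger in place, removing or re-keying a referenced row fails, for example (assuming the value 1 still appears in some test1.id array):
DELETE FROM test WHERE id = 1;  -- raises 'Integrity Constraint Violation ...' (SQLSTATE 23000)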