Create Alias for PostgreSQL Table - postgresql

I have a table called assignments. I would like to be able to read/write to all the columns in this table using either assignments.column or homework.column. How can I do this?
I know this is not something you would normally do. I need to be able to do this to provide backwards compatibility for a short period of time.
We have an iOS app that currently does direct postgresql queries against the DB. We're updating all of our apps to use an API. In the process of building the API the developer decided to change the name of the tables because we (foolishly) thought we didn't need backwards compatibility.
Now, V1.0 and the API both need to be able to write to this table so I don't have to do some voodoo to transfer/combine data later...
We're using Ruby on Rails for the API.

With Postgres 9.3 the following should be enough:
CREATE VIEW homework AS SELECT * FROM assignments;
It works because simple views are automatically updatable (see docs).
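A quick sanity check showing that writes through the alias land in the base table. The column names here (id, title) are made up; substitute your real assignments columns:
INSERT INTO homework (id, title) VALUES (1, 'Chapter 3 exercises');
UPDATE homework SET title = 'Chapter 4 exercises' WHERE id = 1;
SELECT * FROM assignments WHERE id = 1;  -- the row written via the view shows up here
DELETE FROM homework WHERE id = 1;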

In Postgres 9.3 or later, a simple VIEW is "updatable" automatically. The manual:
Simple views are automatically updatable: the system will allow
INSERT, UPDATE and DELETE statements to be used on the view in
the same way as on a regular table. A view is automatically updatable
if it satisfies all of the following conditions:
The view must have exactly one entry in its FROM list, which must be a table or another updatable view.
The view definition must not contain WITH, DISTINCT, GROUP BY, HAVING, LIMIT, or OFFSET clauses at the top level.
The view definition must not contain set operations (UNION, INTERSECT or EXCEPT) at the top level.
The view's select list must not contain any aggregates, window functions or set-returning functions.
If one of these conditions is not met (or for the now outdated Postgres 9.2 or older), a manual setup may do the job.
Building on your work in progress:
Trigger function
CREATE OR REPLACE FUNCTION trg_ia_insupdel()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
DECLARE
   _tbl  CONSTANT regclass := 'iassignments_assignments';
   _cols text;
   _vals text;
BEGIN
   CASE TG_OP
   WHEN 'INSERT' THEN
      INSERT INTO iassignments_assignments
      VALUES (NEW.*);
      RETURN NEW;

   WHEN 'UPDATE' THEN
      SELECT INTO _cols, _vals
             string_agg(quote_ident(attname), ', ')        -- incl. pk col!
           , string_agg('n.' || quote_ident(attname), ', ')
      FROM   pg_attribute
      WHERE  attrelid = _tbl     -- _tbl converted to oid automatically
      AND    attnum > 0          -- no system columns
      AND    NOT attisdropped;   -- no dropped (dead) columns

      EXECUTE format('
         UPDATE %s t
         SET   (%s) = (%s)
         FROM  (SELECT ($1).*) n
         WHERE  t.published_assignment_id
              = ($2).published_assignment_id'  -- match to OLD value of pk
       , _tbl, _cols, _vals)                   -- _tbl converted to text automatically
      USING NEW, OLD;
      RETURN NEW;

   WHEN 'DELETE' THEN
      DELETE FROM iassignments_assignments
      WHERE  published_assignment_id = OLD.published_assignment_id;
      RETURN OLD;
   END CASE;

   RETURN NULL; -- control should never reach this
END
$func$;
Trigger
CREATE TRIGGER insupbef
INSTEAD OF INSERT OR UPDATE OR DELETE ON assignments_published
FOR EACH ROW EXECUTE PROCEDURE trg_ia_insupdel();
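A quick smoke test for the manual setup (hypothetical values; it assumes the remaining columns are nullable and that published_assignment_id can be supplied directly). All three statements are rerouted to iassignments_assignments by the INSTEAD OF trigger:
INSERT INTO assignments_published (published_assignment_id, name)
VALUES (9999, 'test assignment');

UPDATE assignments_published
SET    name = 'renamed assignment'
WHERE  published_assignment_id = 9999;

DELETE FROM assignments_published
WHERE  published_assignment_id = 9999;

SELECT * FROM iassignments_assignments
WHERE  published_assignment_id = 9999;   -- should return no rows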
Notes
assignments_published must be a VIEW; an INSTEAD OF trigger is only allowed for views.
Dynamic SQL (in the UPDATE section) is not strictly necessary; it only serves to cover future changes to the table layout automatically. The names of the table and PK are still hard-coded.
Simpler and probably cheaper without a sub-block (like you had).
Using (SELECT ($1).*) instead of the shorter VALUES ($1.*) to preserve column names (see the small demo after these notes).
My naming convention: I prepend trg_ for trigger functions, followed by an abbreviation indicating the target table, and finally one or more of the tokens ins, up and del for INSERT, UPDATE and DELETE respectively. The name of the trigger is a copy of the function name, stripped of the first two parts. This is purely a matter of convention and taste, but it has proven useful for me since the names tell the purpose and are still short.
More explanation in the related answer that has already been mentioned:
Update multiple columns in a trigger function in plpgsql
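A small standalone demo of the column-name point above (a sketch against a throwaway temp table, not the tables in the question):
CREATE TEMP TABLE demo (id int, name text);
INSERT INTO demo VALUES (1, 'a');

-- (SELECT (composite).*) keeps the column names of the row type:
SELECT n.id, n.name
FROM   demo d, LATERAL (SELECT (d).*) n;     -- n.id / n.name work

-- VALUES loses them; the columns come out as column1 / column2:
SELECT v.*
FROM   demo d, LATERAL (VALUES ((d).*)) v;   -- v.id would fail here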

This is where I am with the trigger functions so far, any feedback would be greatly appreciated. It's a combination of http://vibhorkumar.wordpress.com/2011/10/28/instead-of-trigger/ and Update multiple columns in a trigger function in plpgsql
Table: iassignments_assignments
Columns:
published_assignment_id
name
filepath
filename
link
teacher
due date
description
published
classrooms
View: assignments_published - SELECT * FROM iassignments_assignments
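For reference, that view would be created like this (taken straight from the definition above):
CREATE VIEW assignments_published AS
SELECT * FROM iassignments_assignments;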
Trigger Function for assignments_published
CREATE OR REPLACE FUNCTION assignments_published_trigger_func()
  RETURNS TRIGGER
  LANGUAGE plpgsql
AS $function$
BEGIN
   IF TG_OP = 'INSERT' THEN
      EXECUTE format('INSERT INTO %s SELECT ($1).*', 'iassignments_assignments')
      USING NEW;
      RETURN NEW;
   ELSIF TG_OP = 'UPDATE' THEN
      DECLARE
         tbl  regclass := 'iassignments_assignments';
         cols text;
         vals text;
      BEGIN
         SELECT INTO cols, vals
                string_agg(quote_ident(attname), ', ')
              , string_agg('x.' || quote_ident(attname), ', ')
         FROM   pg_attribute
         WHERE  attrelid = tbl
         AND    NOT attisdropped   -- no dropped (dead) columns
         AND    attnum > 0;        -- no system columns
         EXECUTE format('
            UPDATE %s t
            SET   (%s) = (%s)
            FROM  (SELECT ($1).*) x
            WHERE t.published_assignment_id = ($2).published_assignment_id'
          , tbl, cols, vals)
         USING NEW, OLD;
         RETURN NEW;
      END;
   ELSIF TG_OP = 'DELETE' THEN
      DELETE FROM iassignments_assignments
      WHERE  published_assignment_id = OLD.published_assignment_id;
      RETURN NULL;
   END IF;
   RETURN NEW;
END;
$function$;
Trigger
CREATE TRIGGER assignments_published_trigger
INSTEAD OF INSERT OR UPDATE OR DELETE ON assignments_published
FOR EACH ROW EXECUTE PROCEDURE assignments_published_trigger_func();
Table: iassignments_classes
Columns:
class_assignment_id
guid
assignment_published_id
View: assignments_class - SELECT * FROM assignments_classes
Trigger Function for assignments_class
I'll create this function once I have received feedback on the other and know it's correct, so I (hopefully) need to make very few changes to this function.
Trigger
CREATE TRIGGER assignments_class_trigger
INSTEAD OF INSERT OR UPDATE OR DELETE ON assignments_class
FOR EACH ROW EXECUTE PROCEDURE assignments_class_trigger_func();

Related

Postgres constraint for a subset of rows

I have a Postgres table containing tasks (tasks). A task can link to many entities using a link table (links). Tasks can be one of many types.
A subset of tasks, denoted by their types (let's call it S) can only link to one entity. That is, in link, there can be exactly one record with that task's ID/primary key.
Is there a way to encode that into Postgres constraints so that it's managed automatically?
I found a solution to this using triggers:
CREATE OR REPLACE FUNCTION unique_link() RETURNS trigger AS $unique_link$
DECLARE
   t_type  text;
   link_ct int;
BEGIN
   SELECT task_type INTO STRICT t_type FROM tasks WHERE id = NEW.task_id;

   IF t_type = 'S' THEN
      SELECT COUNT(*) INTO STRICT link_ct FROM links
      WHERE  task_id = NEW.task_id;

      IF link_ct > 0 THEN
         RAISE EXCEPTION '% of type % already has a link associated', NEW.task_id, t_type;
      END IF;

      RETURN NEW;
   END IF;

   RETURN NEW;
END;
$unique_link$ LANGUAGE plpgsql;
CREATE TRIGGER unique_link BEFORE INSERT OR UPDATE ON links
FOR EACH ROW EXECUTE PROCEDURE unique_link();
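A hypothetical check of the behaviour (it assumes a task with id 42 whose task_type is 'S', and that any other columns of links are nullable or have defaults):
INSERT INTO links (task_id) VALUES (42);  -- first link: accepted
INSERT INTO links (task_id) VALUES (42);  -- raises: 42 of type S already has a link associated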

Declare and return value for DELETE and INSERT

I am trying to remove duplicated data from some of our databases based upon unique ids. All deleted data should be stored in a separate table for auditing purposes. Since it concerns quite a few databases and different schemas and tables, I wanted to start using variables to reduce the chance of errors and the amount of work it will take me.
This is the best example query I could think of, but it doesn't work:
do $$
declare #source_schema varchar := 'my_source_schema';
declare #source_table varchar := 'my_source_table';
declare #target_table varchar := 'my_target_schema' || source_table || '_duplicates'; --target schema and appendix are always the same, source_table is a variable input.
declare #unique_keys varchar := ('1', '2', '3')
begin
select into #target_table
from #source_schema.#source_table
where id in (#unique_keys);
delete from #source_schema.#source_table where export_id in (#unique_keys);
end ;
$$;
The query syntax works with hard-coded values.
Most of the time my variables are perceived as columns or not recognized at all. :(
You need to create and then call a plpgsql procedure with input parameters:
CREATE OR REPLACE PROCEDURE duplicates_suppress
  (my_target_schema text, my_source_schema text, my_source_table text, unique_keys text[])
LANGUAGE plpgsql AS
$$
BEGIN
   EXECUTE format(
      'WITH list AS (
          INSERT INTO %1$I.%3$I_duplicates
          SELECT * FROM %2$I.%3$I
          WHERE  array[id] <@ %4$L::integer[]
          RETURNING id)
       DELETE FROM %2$I.%3$I AS t USING list AS l WHERE t.id = l.id'
    , my_target_schema, my_source_schema, my_source_table, unique_keys::text);
END;
$$;
The procedure duplicates_suppress inserts into my_target_schema.my_source_table || '_duplicates' the rows from my_source_schema.my_source_table whose id is in the array unique_keys, and then deletes these rows from my_source_schema.my_source_table.
See the test result in dbfiddle.
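With the names from the question, a call would look like this (the ids go in as a text array, matching the parameter type):
CALL duplicates_suppress('my_target_schema', 'my_source_schema', 'my_source_table',
                         ARRAY['1','2','3']);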
As has been commented, you need some kind of dynamic SQL. In a FUNCTION, PROCEDURE or a DO statement to do it on the server.
You should be comfortable with PL/pgSQL. Dynamic SQL is no beginners' toy.
Example with a PROCEDURE, like Edouard already suggested. You'll need a FUNCTION instead to wrap it in an outer transaction (like you very well might). See:
When to use stored procedure / user-defined function?
CREATE OR REPLACE PROCEDURE pg_temp.f_archive_dupes(_source_schema text, _source_table text, _unique_keys int[], OUT _row_count int)
  LANGUAGE plpgsql AS
$proc$
   -- target schema and appendix are always the same, source_table is a variable input
DECLARE
   _target_schema CONSTANT text := 's2';   -- hardcoded
   _target_table  text := _source_table || '_duplicates';
   _sql           text := format(
      'WITH del AS (
          DELETE FROM %I.%I
          WHERE  id = ANY($1)
          RETURNING *
          )
       INSERT INTO %I.%I TABLE del', _source_schema, _source_table
                                   , _target_schema, _target_table);
BEGIN
   RAISE NOTICE '%', _sql;            -- debug
   EXECUTE _sql USING _unique_keys;   -- execute
   GET DIAGNOSTICS _row_count = ROW_COUNT;
END
$proc$;
Call:
CALL pg_temp.f_archive_dupes('s1', 't1', '{1, 3}', 0);
db<>fiddle here
I made the procedure temporary, since I assume you don't need to keep it permanently. Create it once per database. See:
How to create a temporary function in PostgreSQL?
Passed schema and table names are case-sensitive strings! (Unlike unquoted identifiers in plain SQL.) Either way, be wary of SQL-injection when concatenating SQL dynamically. See:
Are PostgreSQL column names case-sensitive?
Table name as a PostgreSQL function parameter
Made _unique_keys type int[] (array of integer) since your sample values look like integers. Use the actual data type of your id columns!
The variable _sql holds the query string, so it can easily be debugged before actually executing. Using RAISE NOTICE '%', _sql; for that purpose.
I suggest to comment the EXECUTE line until you are sure.
I made the PROCEDURE return the number of processed rows. You didn't ask for that, but it's typically convenient. At hardly any cost. See:
Dynamic SQL (EXECUTE) as condition for IF statement
Best way to get result count before LIMIT was applied
Last, but not least, use DELETE ... RETURNING * in a data-modifying CTE. Since that has to find rows only once it comes at about half the cost of separate SELECT and DELETE. And it's perfectly safe. If anything goes wrong, the whole transaction is rolled back anyway.
Two separate commands can also run into concurrency issues or race conditions which are ruled out this way, as DELETE implicitly locks the rows to delete. Example:
Replicating data between Postgres DBs
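Spelled out with the example names from above (s1.t1 into s2.t1_duplicates, no dynamic SQL; it assumes s2.t1_duplicates already exists with the same row type):
WITH del AS (
   DELETE FROM s1.t1
   WHERE  id = ANY ('{1,3}'::int[])
   RETURNING *
   )
INSERT INTO s2.t1_duplicates
TABLE del;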
Or you can build the statements in a client program. Like psql, and use \gexec. Example:
Filter column names from existing table for SQL DDL statement
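A rough sketch of that client-side route in psql, using \gexec (made-up schema names and id list; one generated statement per table in s1):
SELECT format(
          'WITH del AS (DELETE FROM s1.%I WHERE id = ANY(''{1,3}''::int[]) RETURNING *)
           INSERT INTO s2.%I_duplicates TABLE del'
        , tablename, tablename)
FROM   pg_tables
WHERE  schemaname = 's1'
\gexec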
Based on Erwin's answer, minor optimization...
create or replace procedure pg_temp.p_archive_dump
   (_source_schema text, _source_table text,
    _unique_key int[], _target_schema text)
language plpgsql as
$$
declare
   _row_count    bigint;
   _target_table text := '';
BEGIN
   select quote_ident(_source_table) || '_' || array_to_string(_unique_key, '_')
   into   _target_table
   from   quote_ident(_source_table);

   raise notice 'the deleted table records will store in %.%', _target_schema, _target_table;

   execute format('create table %I.%I as select * from %I.%I limit 0'
                , _target_schema, _target_table, _source_schema, _source_table);

   execute format('with mm as (delete from %I.%I where id = any (%L) returning *) insert into %I.%I table mm'
                , _source_schema, _source_table, _unique_key, _target_schema, _target_table);

   GET DIAGNOSTICS _row_count = ROW_COUNT;
   RAISE notice 'rows influenced, %', _row_count;
end
$$;
--
If your _unique_key list is not too long, this solution also creates the target table for you. Obviously you need to create the target schema yourself.
If your _unique_key list is long, you can customize the procedure to give the dumped table a more suitable name.
Let's call it.
call pg_temp.p_archive_dump('s1','t1', '{1,2}','s2');
s1 is the source schema, t1 is the source table, {1,2} is the list of unique keys you want to extract to the new table, and s2 is the target schema.

Trigger | how to delete row instead of update based on cell value

Postgresql 10/11.
I need to delete row instead of update in case if target cell value is null.
So I created this trigger function:
CREATE OR REPLACE FUNCTION delete_on_update_related_table() RETURNS trigger
AS $$
DECLARE
   refColumnName text = TG_ARGV[0];
BEGIN
   IF TG_NARGS <> 1 THEN
      RAISE EXCEPTION 'Trigger function expects 1 parameters, but got %', TG_NARGS;
   END IF;

   EXECUTE 'DELETE FROM ' || TG_TABLE_NAME || ' WHERE $1 = ''$2'''
   USING refColumnName, OLD.id;

   RETURN NULL;
END;
$$ LANGUAGE plpgsql;
And a BEFORE UPDATE trigger:
CREATE TRIGGER proper_delete
BEFORE UPDATE OF def_id
ON public.definition_products
FOR EACH ROW
WHEN (NEW.def_id IS NULL)
EXECUTE PROCEDURE delete_on_update_related_table('def_id');
Table is simple:
id uuid primary key
def_id uuid not null
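For reference, a minimal DDL matching that description (column names from the question; everything else is an assumption):
CREATE TABLE public.definition_products (
   id     uuid PRIMARY KEY,
   def_id uuid NOT NULL
);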
Test:
UPDATE definition_products SET
def_id = NULL
WHERE id = 'f47415e8-6b00-4c65-aeb8-cadc15ca5890';
-- rows affected 0
Documentation says:
Row-level triggers fired BEFORE can return null to signal the trigger
manager to skip the rest of the operation for this row (i.e.,
subsequent triggers are not fired, and the INSERT/UPDATE/DELETE does
not occur for this row).
Previously, I used a RULE instead of the trigger. But there is no way to use a WHERE clause and a RETURNING clause in the same rule:
You need an unconditional ON UPDATE DO INSTEAD rule with a RETURNING clause
So, is there a way?
While Jeremy's answer is good, there is still room for improvement.
Problems
You need to be very accurate in the definition of the objective. Your statement:
I need to delete row instead of update in case if target cell value is null.
... does not imply that the column was changed to NULL in the UPDATE at hand. Might have been NULL before, like, before you implemented the trigger. So not:
BEFORE UPDATE OF def_id ON public.definition_products
But just:
BEFORE UPDATE ON public.definition_products
Of course, if the column is defined NOT NULL (as it probably should be), there is no effective difference - except for the noise and an additional point of failure. The manual:
A column-specific trigger (one defined using the UPDATE OF column_name syntax) will fire when any of its columns are listed as targets in the UPDATE command's SET list. It is possible for a column's value to change even when the trigger is not fired, because changes made to the row's contents by BEFORE UPDATE triggers are not considered.
Also, nothing in your question indicates the need for dynamic SQL. (That would be the case if you wanted to reuse the same trigger function for multiple triggers on different tables. And even then it's often better to just create several distinct trigger functions, for multiple reasons: simpler, faster, less error-prone, easier to read & maintain, ...)
As for "error-prone": your original dynamic statement was just invalid:
EXECUTE 'DELETE FROM ' || TG_TABLE_NAME || ' WHERE $1 = ''$2'''
USING refColumnName, OLD.id;
Can't pass a column name as value (refColumnName).
Can't put single quotes around $2, which is passed as value and hence needs no quoting.
An unqualified, unquoted TG_TABLE_NAME can go terribly wrong, which is especially critical for a heavy-weight function that deletes rows.
Jeremy's version fixes most, but still features the unqualified TG_TABLE_NAME.
This would be good:
EXECUTE format('DELETE FROM %s WHERE %I = $1', TG_RELID::regclass, refColumnName) -- refColumnName still unquoted
USING OLD.id;
Or:
EXECUTE format('DELETE FROM %I.%I WHERE %I = $1', TG_TABLE_SCHEMA, TG_TABLE_NAME, refColumnName)
USING OLD.id;
Related:
Why does a PostgreSQL SELECT query return different results when a schema name is specified?
Table name as a PostgreSQL function parameter
Solution
Simpler trigger function:
CREATE OR REPLACE FUNCTION delete_on_update_related_table()
RETURNS trigger AS
$func$
BEGIN
DELETE FROM public.definition_products WHERE id = OLD.id; -- def_id?
RETURN NULL;
END
$func$ LANGUAGE plpgsql;
Simpler trigger:
CREATE TRIGGER proper_delete
BEFORE UPDATE ON public.definition_products
FOR EACH ROW
WHEN (NEW.def_id IS NULL) -- that's the defining condition!
EXECUTE PROCEDURE delete_on_update_related_table(); -- no parameter
You probably want to use OLD.id, not OLD.def_id. (The row to delete is best defined by its PK, not by the column changed to NULL.) But that's not entirely clear.
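A quick way to verify the behaviour, using the id from the test in the question. The UPDATE still reports 0 rows because the trigger suppresses it, but the row should be gone afterwards:
UPDATE definition_products SET def_id = NULL
WHERE  id = 'f47415e8-6b00-4c65-aeb8-cadc15ca5890';   -- reports UPDATE 0

SELECT * FROM definition_products
WHERE  id = 'f47415e8-6b00-4c65-aeb8-cadc15ca5890';   -- should return no rows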
This works for me, with a few small changes:
CREATE OR REPLACE FUNCTION delete_on_update_related_table() RETURNS trigger
AS $$
DECLARE
refColumnName text = quote_ident(TG_ARGV[0]);
BEGIN
IF TG_NARGS <> 1 THEN RAISE EXCEPTION 'Trigger function expects 1 parameters, but got %', TG_NARGS; END IF;
EXECUTE format('DELETE FROM %s WHERE %s = %s', quote_ident(TG_TABLE_NAME), refColumnName, quote_literal(OLD.id));
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
-- create trigger
CREATE TRIGGER proper_delete
BEFORE UPDATE OF def_id
ON public.definition_products
FOR EACH ROW
WHEN (NEW.def_id IS NULL)
EXECUTE PROCEDURE delete_on_update_related_table('id'); --Note id, not def_id

Duplicate INSERT to another PostgreSQL table

I have checked similar threads, but none match exactly what I am trying to do.
I have two identical tables:
t_data: we will have the data of the last two months. There will be a job for delete data older than two months.
t_data_historical: we will have all data.
What I want to do: if an INSERT is done into t_data, I want to "clone" the same query to t_data_historical.
I have seen that this can be done with triggers, but there is something that I don't understand: in the trigger definition I would need to know the columns and values, and I don't want to care about the INSERT query (I trust the client to build the query correctly); I only want to clone it.
If you know that the history table will have the same definition, and the name is always constructed that way, you can
EXECUTE format('INSERT INTO %I.%I VALUES (($1).*)',
TG_TABLE_SCHEMA,
TG_TABLE_NAME || '_historical')
USING NEW;
in your trigger function.
That will work for all tables, and you don't need to know the actual columns.
Laurenz,
I created the trigger like this:
CREATE OR REPLACE FUNCTION copy2historical() RETURNS trigger AS
$BODY$
BEGIN
   EXECUTE format('INSERT INTO %I.%I VALUES (($1).*)', TG_TABLE_SCHEMA, TG_TABLE_NAME || '_historical')
   USING NEW;
   RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
CREATE TRIGGER copy2historical_tg
AFTER INSERT
ON t_data
FOR EACH ROW
EXECUTE PROCEDURE copy2historical();
thanks for your help.

PostgreSQL trigger raises error 55000

After migrating from PostgreSQL server version 9 to 8.4, I have encountered a very strange error.
Short description:
If there is a trigger on a given table for each row before insert or update, and the trigger function uses a TG_OP value check together with the OLD record in a conditional statement (if-else), the following error is raised when doing an INSERT:
ERROR: record "old" is not assigned yet
DETAIL: The tuple structure of a not-yet-assigned record is indeterminate.
Detailed description:
There is following DB structure:
CREATE TABLE table1
(
id serial NOT NULL,
name character varying(256),
CONSTRAINT table1_pkey PRIMARY KEY (id)
)
WITH (OIDS=FALSE);
CREATE OR REPLACE FUNCTION exemplary_function()
RETURNS trigger AS
$BODY$ BEGIN
IF TG_OP = 'INSERT' OR OLD.name <> NEW.name THEN
NEW.name = 'someName';
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;
CREATE TRIGGER trigger1
BEFORE INSERT OR UPDATE
ON table1
FOR EACH ROW EXECUTE PROCEDURE exemplary_function();
and following SQL query that triggers error:
INSERT INTO table1 (name) VALUES ('other name')
It seems like the parser is not stopping at the TG_OP = 'INSERT' condition (and it should, because it is true) but checks the other one as well, and that triggers the error.
What's interesting is that I was only able to reproduce it on version 8.4.
Postgres doesn't officially short-circuit boolean expressions (unlike C, for example).
The docs do say that it can sometimes decide to short-circuit (see docs), but it might just as easily short-circuit on the second expression rather than the first.
It basically looks at how complicated the expressions on each side are before deciding the evaluation order. Then, if that side is TRUE, it can decide not to bother with the other side.
In this case, it looks like it's trying to evaluate OLD while it's still deciding the best order in which to evaluate the expression.
You should be able to get around this by using a CASE to split the expressions, e.g.:
IF (CASE WHEN TG_OP = 'INSERT' THEN TRUE ELSE OLD.name <> NEW.name END) THEN
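Applied to the function from the question, the workaround might look like this (same logic; only the IF condition changes):
CREATE OR REPLACE FUNCTION exemplary_function()
RETURNS trigger AS
$BODY$ BEGIN
   -- the CASE guarantees TG_OP is checked first, so OLD is never
   -- referenced for an INSERT
   IF (CASE WHEN TG_OP = 'INSERT' THEN TRUE ELSE OLD.name <> NEW.name END) THEN
      NEW.name = 'someName';
   END IF;
   RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;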