Generating incremental numbers based on a different column - postgresql

I have got a composite primary key in a table in PostgreSQL (I am using pgAdmin4)
Let's call the two primary key columns productno and version.
version represents the version of productno.
So when I create a new dataset, it needs to be checked whether a dataset with this productno already exists.
If productno doesn't exist yet, then version should be 1
If productno exists once, then version should be 2
If productno exists twice, then version should be 3
... and so on
So that we get something like:
productno | version
----------+---------
        1 |       1
        1 |       2
        1 |       3
        2 |       1
        2 |       2
I found a quite similar problem: auto increment on composite primary key
But I can't use that solution because the PostgreSQL syntax is obviously a bit different - I tried a lot with functions and triggers but couldn't figure out the right way to do it.

You can keep the version numbers in a separate table (one for each "base PK" value). That is way more efficient than doing a max() + 1 on every insert and has the additional benefit that it's safe for concurrent transactions.
So first we need a table that keeps track of the version numbers:
create table version_counter
(
    product_no integer primary key,
    version_nr integer not null
);
Then we create a function that increments the version for a given product_no and returns that new version number:
create function next_version(p_product_no int)
    returns integer
as
$$
    insert into version_counter (product_no, version_nr)
    values (p_product_no, 1)
    on conflict (product_no)
    do update
        set version_nr = version_counter.version_nr + 1
    returning version_nr;
$$
language sql
volatile;
The trick here is the insert on conflict, which increments an existing value or inserts a new row if the passed product_no does not yet exist.
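If the upsert-returning pattern is unfamiliar, here is a minimal standalone sketch of just that part (the counter table is made up for illustration):

create table counter (key int primary key, val int not null);

-- The first call inserts (42, 1) and returns 1; every subsequent call
-- takes the conflict branch and returns the incremented value.
insert into counter (key, val)
values (42, 1)
on conflict (key)
do update set val = counter.val + 1
returning val;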
For the product table:
create table product
(
    product_no integer not null,
    version_nr integer not null,
    created_at timestamp default clock_timestamp(),
    primary key (product_no, version_nr)
);
Then create a trigger:
create function increment_version()
    returns trigger
as
$$
begin
    new.version_nr := next_version(new.product_no);
    return new;
end;
$$
language plpgsql;

create trigger base_table_insert_trigger
    before insert on product
    for each row
    execute procedure increment_version();
This is safe for concurrent transactions because the row in version_counter is locked for that product_no until the transaction inserting the row into the product table commits - and the commit persists the change to the version_counter table as well (and releases the lock on that row).
If two concurrent transactions insert the same value for product_no, one of them will wait until the other finishes.
If two concurrent transactions insert different values for product_no, they can work without having to wait for the other.
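To make the locking behaviour concrete, here is roughly what two concurrent sessions would experience (a sketch of the expected behaviour, not captured output):

-- Session A
begin;
insert into product (product_no) values (1);  -- locks the version_counter row for product_no 1

-- Session B
begin;
insert into product (product_no) values (2);  -- different counter row: proceeds immediately
insert into product (product_no) values (1);  -- same counter row: blocks here

-- Session A
commit;  -- releases the row lock; session B's blocked insert now continues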
If we then insert these rows:
insert into product (product_no) values (1);
insert into product (product_no) values (2);
insert into product (product_no) values (3);
insert into product (product_no) values (1);
insert into product (product_no) values (3);
insert into product (product_no) values (2);
The product table looks like this:
select *
from product
order by product_no, version_nr;
product_no | version_nr | created_at
-----------+------------+-------------------------
         1 |          1 | 2019-08-23 10:50:57.880
         1 |          2 | 2019-08-23 10:50:57.947
         2 |          1 | 2019-08-23 10:50:57.899
         2 |          2 | 2019-08-23 10:50:57.989
         3 |          1 | 2019-08-23 10:50:57.926
         3 |          2 | 2019-08-23 10:50:57.966
Online example: https://rextester.com/CULK95702

You can do it like this:
-- Check if the pk already exists and grab its highest version
-- ("my_table" stands in for the real table name; "table" is a reserved word)
SELECT MAX(a.versionpk) INTO temp_version FROM my_table a WHERE a.pk = v_pk1;
-- If it exists, insert the next version
-- (note: MAX() + 1 is not safe under concurrent inserts, see the first answer)
IF temp_version IS NOT NULL THEN
    INSERT INTO my_table (pk, versionpk) VALUES (v_pk1, temp_version + 1);
END IF;

So - I got it to work now.
So if you want a column to update depending on another column in PostgreSQL, have a look at this:
This is the function I use:
CREATE FUNCTION public.testfunction()
    RETURNS trigger
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE
    v_productno INTEGER := NEW.productno;
BEGIN
    IF NOT EXISTS (SELECT *
                   FROM testtable
                   WHERE productno = v_productno)
    THEN
        NEW.version := 1;
    ELSE
        NEW.version := (SELECT MAX(testtable.version) + 1
                        FROM testtable
                        WHERE testtable.productno = v_productno);
    END IF;
    RETURN NEW;
END;
$BODY$;
And this is the trigger that runs the function:
CREATE TRIGGER testtrigger
    BEFORE INSERT
    ON public.testtable
    FOR EACH ROW
    EXECUTE PROCEDURE public.testfunction();
Thank you @ChechoCZ, you definitely helped me get going in the right direction.

Related

Dynamically get column names using OLD in triggers

I want to write a generic trigger function (Postgres procedure). There are many main tables like TableA, TableB, etc., and their corresponding audit tables TableA_Audit, TableB_Audit, respectively. The structure is given below.
TableA and TableA_Audit have the columns aa integer, ab integer.
TableB and TableB_Audit have the column ba integer.
Similarly there can be many main tables along with their audit tables.
The requirement is that if any of the main tables gets updated, the old entries should be inserted into the respective audit table.
e.g. if TableA has an entry like this:
TableA
aa | ab
---+----
 5 | 10
and I then run an update like
update TableA set aa = aa + 15
then the old values for TableA should be inserted into the TableA_Audit table, like below.
TableA_Audit contains:
aa | ab
---+----
 5 | 10
To facilitate the above scenario I have written a generic function called insert_in_audit. Whenever there is any update in any of the main tables, the function insert_in_audit should be called. The function should achieve the following:
Dynamically insert entries into the corresponding audit table of the updated main table. If there is an update in TableB then entries should be inserted only into TableB_Audit.
Here is what I have managed so far: I have the names of all the columns of the main table where the update happened.
E.g. for the query update TableA set aa = aa + 15, I am able to get all the column names of TableA in a varchar array:
column_names varchar[] := '{"aa", "ab"}';
My question is: how do I get the old values of columns aa and ab? I tried this:
foreach i in array column_names
loop
    raise notice '%', old.i;
end loop;
But that gave me the error: record "old" has no field "i". Can anyone help me get the old values?
Here is a code sample showing how you can dynamically extract values from OLD in PL/pgSQL:
CREATE FUNCTION dynamic_col() RETURNS trigger
    LANGUAGE plpgsql AS
$$DECLARE
    v_col name;
    v_val text;
BEGIN
    FOREACH v_col IN ARRAY TG_ARGV
    LOOP
        EXECUTE format('SELECT (($1).%I)::text', v_col)
            USING OLD
            INTO v_val;
        RAISE NOTICE 'OLD.% = %', v_col, v_val;
    END LOOP;
    RETURN OLD;
END;$$;
CREATE TABLE trigtest (
    id integer,
    val text
);

INSERT INTO trigtest VALUES
    (1, 'one'), (2, 'two');

CREATE TRIGGER dynamic_col AFTER DELETE ON trigtest
    FOR EACH ROW EXECUTE FUNCTION dynamic_col('id', 'val');
DELETE FROM trigtest WHERE id = 1;
NOTICE: OLD.id = 1
NOTICE: OLD.val = one
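As a side note, if you would rather not list the columns as trigger arguments, you can expand OLD generically via JSON instead of dynamic SQL. A sketch under the same trigtest setup (dynamic_col_json is a made-up name, not part of the answer above):

CREATE FUNCTION dynamic_col_json() RETURNS trigger
    LANGUAGE plpgsql AS
$$DECLARE
    v_col text;
    v_val text;
BEGIN
    -- to_jsonb(OLD) turns the whole row into key/value pairs,
    -- so no column list is needed at all
    FOR v_col, v_val IN SELECT key, value FROM jsonb_each_text(to_jsonb(OLD))
    LOOP
        RAISE NOTICE 'OLD.% = %', v_col, v_val;
    END LOOP;
    RETURN OLD;
END;$$;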

Postgres cascade delete on non-unique column

I have a table like this:
id | group_id | parent_group
---+----------+-------------
 1 |        1 | null
 2 |        1 | null
 3 |        2 | 1
 4 |        2 | 1
Is it possible to add a constraint such that a row is automatically deleted when there is no row with a group_id equal to the row's parent_group? For example, if I delete rows 1 and 2, I want rows 3 and 4 to be deleted automatically because there are no more rows with group_id 1.
The answer that clemens posted led me to the following solution. I'm not very familiar with triggers though; could there be any problems with this and is there a better way to do it?
CREATE OR REPLACE FUNCTION on_group_deleted() RETURNS TRIGGER AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM my_table WHERE group_id = OLD.group_id) THEN
        DELETE FROM my_table WHERE parent_group = OLD.group_id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE PLPGSQL;

CREATE TRIGGER my_table_delete_trigger AFTER DELETE ON my_table
    FOR EACH ROW
    EXECUTE PROCEDURE on_group_deleted();
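One thing to watch: with FOR EACH ROW the EXISTS check runs once per deleted row. On PostgreSQL 10 or later, a statement-level trigger with a transition table could do the same cascade in one pass per statement instead. A sketch with hypothetical names (on_groups_deleted, old_rows), meant to replace the row-level trigger above:

CREATE OR REPLACE FUNCTION on_groups_deleted() RETURNS TRIGGER AS $$
BEGIN
    -- Stop recursing once a cascade level deletes nothing
    IF NOT EXISTS (SELECT 1 FROM old_rows) THEN
        RETURN NULL;
    END IF;
    -- Remove children whose parent group has no surviving rows;
    -- this DELETE fires the trigger again, cascading level by level
    DELETE FROM my_table t
    WHERE t.parent_group IN (SELECT group_id FROM old_rows)
      AND NOT EXISTS (SELECT 1 FROM my_table g
                      WHERE g.group_id = t.parent_group);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_stmt_delete_trigger
    AFTER DELETE ON my_table
    REFERENCING OLD TABLE AS old_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE on_groups_deleted();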

Constant column in child partition tables

We are using inheritance based partitioning in our application. The partitioning is on a column such that each partition has a different value for this column. Something like this:
CREATE TABLE base(
    tblock INT NOT NULL,
    -- other fields --
);
-- Create a partition
CREATE TABLE partition_1(
    CHECK (tblock = 1)
) INHERITS (base);
There are a lot of such partitions, and each one has a large number of records (in the millions). Overall database size is in the terabytes.
In the above schema, the partitions have to have a column tblock, even though each partition has a constant value for that column in all rows. This is clearly a waste of space on disk.
Is there any way to declare the partitions so that it does not actually store the value of tblock on disk?
We are currently on Postgresql 9.2.6.
Of course you can write a trigger which skips some columns, but if you do that, you will lose the benefits of partitioning (constraint exclusion).
You can't drop this column if the table is inherited.
Here's an example:
create table base(i integer, j integer);
CREATE TABLE
create table inh_1() inherits (base);
CREATE TABLE
create table inh_2() inherits (base);
CREATE TABLE
CREATE OR REPLACE FUNCTION part_trigger()
RETURNS TRIGGER AS $$
BEGIN
    IF NEW.j = 1 THEN INSERT INTO inh_1 (i) VALUES (NEW.i);
    ELSIF NEW.j = 2 THEN INSERT INTO inh_2 (i) VALUES (NEW.i);
    END IF;
    RETURN NULL;
END;
$$
LANGUAGE plpgsql;
CREATE FUNCTION
CREATE TRIGGER insert_base
    BEFORE INSERT ON base
    FOR EACH ROW EXECUTE PROCEDURE part_trigger();
CREATE TRIGGER
insert into base values (100,1);
INSERT 0 0
insert into base values (140,2);
INSERT 0 0
sebpa=# select * from base;
  i  | j
-----+---
 100 |
 140 |
(2 rows)
sebpa=# select * from inh_1;
  i  | j
-----+---
 100 |
(1 row)
sebpa=# select * from inh_2;
  i  | j
-----+---
 140 |
(1 row)
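Note the trade-off this demonstrates: since the trigger never stores j in the child tables, any query filtering on j finds nothing, and constraint exclusion cannot kick in either (a hypothetical follow-up query, not part of the original session):

-- j was never stored in inh_1/inh_2, so this matches no rows:
select * from base where j = 1;
-- (0 rows)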

How to prevent insert, update and delete on inherited tables in PostgreSQL using BEFORE triggers

When using table inheritance, I would like to enforce that insert, update and delete statements should be done against descendant tables. I thought a simple way to do this would be using a trigger function like this:
CREATE FUNCTION test.prevent_action() RETURNS trigger AS $prevent_action$
BEGIN
    RAISE EXCEPTION
        '% on % is not allowed. Perform % on descendant tables only.',
        TG_OP, TG_TABLE_NAME, TG_OP;
END;
$prevent_action$ LANGUAGE plpgsql;
...which I would reference from a trigger defined using BEFORE INSERT OR UPDATE OR DELETE.
This seems to work fine for inserts, but not for updates and deletes.
The following test sequence demonstrates what I've observed:
DROP SCHEMA IF EXISTS test CASCADE;
psql:simple.sql:1: NOTICE: schema "test" does not exist, skipping
DROP SCHEMA
CREATE SCHEMA test;
CREATE SCHEMA
-- A function to prevent anything
-- Used for tables that are meant to be inherited
CREATE FUNCTION test.prevent_action() RETURNS trigger AS $prevent_action$
BEGIN
    RAISE EXCEPTION
        '% on % is not allowed. Perform % on descendant tables only.',
        TG_OP, TG_TABLE_NAME, TG_OP;
END;
$prevent_action$ LANGUAGE plpgsql;
CREATE FUNCTION
CREATE TABLE test.people (
    person_id SERIAL PRIMARY KEY,
    last_name text,
    first_name text
);
psql:simple.sql:17: NOTICE: CREATE TABLE will create implicit sequence "people_person_id_seq" for serial column "people.person_id"
psql:simple.sql:17: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "people_pkey" for table "people"
CREATE TABLE
CREATE TRIGGER prevent_action BEFORE INSERT OR UPDATE OR DELETE ON test.people
    FOR EACH ROW EXECUTE PROCEDURE test.prevent_action();
CREATE TRIGGER
CREATE TABLE test.students (
    student_id SERIAL PRIMARY KEY
) INHERITS (test.people);
psql:simple.sql:24: NOTICE: CREATE TABLE will create implicit sequence "students_student_id_seq" for serial column "students.student_id"
psql:simple.sql:24: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "students_pkey" for table "students"
CREATE TABLE
--The trigger successfully prevents this INSERT from happening
--INSERT INTO test.people (last_name, first_name) values ('Smith', 'Helen');
INSERT INTO test.students (last_name, first_name) values ('Smith', 'Helen');
INSERT 0 1
INSERT INTO test.students (last_name, first_name) values ('Anderson', 'Niles');
INSERT 0 1
UPDATE test.people set first_name = 'Oh', last_name = 'Noes!';
UPDATE 2
SELECT student_id, person_id, first_name, last_name from test.students;
student_id | person_id | first_name | last_name
------------+-----------+------------+-----------
1 | 1 | Oh | Noes!
2 | 2 | Oh | Noes!
(2 rows)
DELETE FROM test.people;
DELETE 2
SELECT student_id, person_id, first_name, last_name from test.students;
student_id | person_id | first_name | last_name
------------+-----------+------------+-----------
(0 rows)
So I'm wondering what I've done wrong that allows updates and deletes directly against the test.people table in this example.
The trigger is set to execute FOR EACH ROW, but the affected rows physically live in the child table test.students, not in test.people itself, so the row-level trigger on test.people never fires.
As a sidenote, you may issue select * from ONLY test.people to list the rows in test.people that don't belong to child tables.
The solution seems easy: set a trigger FOR EACH STATEMENT instead of FOR EACH ROW, since you want to forbid the whole statement anyway.
CREATE TRIGGER prevent_action BEFORE INSERT OR UPDATE OR DELETE ON test.people
    FOR EACH STATEMENT EXECUTE PROCEDURE test.prevent_action();
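With the statement-level trigger in place, the statements that previously slipped through should now fail. The error text below is what the RAISE EXCEPTION format string above would produce, not captured output:

UPDATE test.people SET first_name = 'Oh', last_name = 'Noes!';
-- ERROR:  UPDATE on people is not allowed. Perform UPDATE on descendant tables only.

DELETE FROM test.people;
-- ERROR:  DELETE on people is not allowed. Perform DELETE on descendant tables only.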

PostgreSQL: Auto-increment based on multi-column unique constraint

One of my tables has the following definition:
CREATE TABLE incidents
(
    id serial NOT NULL,
    report integer NOT NULL,
    year integer NOT NULL,
    month integer NOT NULL,
    number integer NOT NULL, -- Report serial number for this period
    ...
    PRIMARY KEY (id),
    UNIQUE (report, year, month, number)
);
How would you go about incrementing the number column for every report, year, and month independently? I'd like to avoid creating a sequence or table for each (report, year, month) set.
It would be nice if PostgreSQL supported incrementing "on a secondary column in a multiple-column index" like MySQL's MyISAM tables, but I couldn't find a mention of such a feature in the manual.
An obvious solution is to select the current value in the table + 1, but this obviously is not safe for concurrent sessions. Maybe a pre-insert trigger would work, but are they guaranteed to be non-concurrent?
Also note that I'm inserting incidents individually, so I can't use generate_series as suggested elsewhere.
It would be nice if PostgreSQL supported incrementing "on a secondary column in a multiple-column index" like MySQL's MyISAM tables
Yeah, but note that in doing so, MyISAM locks your entire table. Which then makes it safe to find the biggest +1 without worrying about concurrent transactions.
In Postgres, you can do this too, and without locking the whole table. An advisory lock and a trigger will be good enough:
CREATE TYPE animal_grp AS ENUM ('fish','mammal','bird');
CREATE TABLE animals (
    grp animal_grp NOT NULL,
    id INT NOT NULL DEFAULT 0,
    name varchar NOT NULL,
    PRIMARY KEY (grp, id)
);

CREATE OR REPLACE FUNCTION animals_id_auto()
RETURNS trigger AS $$
DECLARE
    _rel_id constant int := 'animals'::regclass::int;
    _grp_id int;
BEGIN
    _grp_id = array_length(enum_range(NULL, NEW.grp), 1);
    -- Obtain an advisory lock on this table/group.
    PERFORM pg_advisory_lock(_rel_id, _grp_id);
    SELECT COALESCE(MAX(id) + 1, 1)
      INTO NEW.id
      FROM animals
     WHERE grp = NEW.grp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

CREATE TRIGGER animals_id_auto
    BEFORE INSERT ON animals
    FOR EACH ROW WHEN (NEW.id = 0)
    EXECUTE PROCEDURE animals_id_auto();

CREATE OR REPLACE FUNCTION animals_id_auto_unlock()
RETURNS trigger AS $$
DECLARE
    _rel_id constant int := 'animals'::regclass::int;
    _grp_id int;
BEGIN
    _grp_id = array_length(enum_range(NULL, NEW.grp), 1);
    -- Release the lock.
    PERFORM pg_advisory_unlock(_rel_id, _grp_id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

CREATE TRIGGER animals_id_auto_unlock
    AFTER INSERT ON animals
    FOR EACH ROW
    EXECUTE PROCEDURE animals_id_auto_unlock();

INSERT INTO animals (grp,name) VALUES
    ('mammal','dog'),('mammal','cat'),
    ('bird','penguin'),('fish','lax'),('mammal','whale'),
    ('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
This yields:
  grp   | id |  name
--------+----+---------
 fish   |  1 | lax
 mammal |  1 | dog
 mammal |  2 | cat
 mammal |  3 | whale
 bird   |  1 | penguin
 bird   |  2 | ostrich
(6 rows)
There is one caveat. Advisory locks are held until released or until the session expires. If an error occurs during the transaction, the lock is kept around and you need to release it manually.
SELECT pg_advisory_unlock('animals'::regclass::int, i)
FROM generate_series(1, array_length(enum_range(NULL::animal_grp),1)) i;
In Postgres 9.1, you can discard the unlock trigger, and replace the pg_advisory_lock() call with pg_advisory_xact_lock(). That one is automatically held until and released at the end of the transaction.
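For completeness, a sketch of what that 9.1+ variant could look like (same logic as above, with the unlock trigger gone):

CREATE OR REPLACE FUNCTION animals_id_auto()
RETURNS trigger AS $$
DECLARE
    _rel_id constant int := 'animals'::regclass::int;
    _grp_id int;
BEGIN
    _grp_id = array_length(enum_range(NULL, NEW.grp), 1);
    -- Held automatically until the end of the transaction; no unlock trigger needed.
    PERFORM pg_advisory_xact_lock(_rel_id, _grp_id);
    SELECT COALESCE(MAX(id) + 1, 1)
      INTO NEW.id
      FROM animals
     WHERE grp = NEW.grp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

DROP TRIGGER IF EXISTS animals_id_auto_unlock ON animals;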
On a separate note, I'd stick to using a good old sequence. That will make things faster -- even if it's not as pretty-looking when you look at the data.
Lastly, a unique sequence per (year, month) combo could also be obtained by adding an extra table, whose primary key is a serial, and whose (year, month) value has a unique constraint on it.
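A sketch of that extra-table idea, with made-up names:

CREATE TABLE report_periods (
    period_id serial PRIMARY KEY,  -- the unique number per combo
    year      integer NOT NULL,
    month     integer NOT NULL,
    UNIQUE (year, month)
);
-- Inserting a new (year, month) combo hands out the next period_id;
-- incidents can then reference period_id instead of carrying (year, month).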
I think I found a better solution. It doesn't depend on the grp type (it can be an enum, integer or string) and can be used in a lot of cases.
myFunc() - the trigger function. You can name it as you want.
number - the auto-increment column, which counts up separately for each existing value of grp.
grp - the column whose values you want to count in number.
myTrigger - the trigger for your table.
myTable - the table where you want the trigger.
unique_grp_number_key - the unique constraint key. We need it for the unique pair of values grp and number.
ALTER TABLE "myTable"
ADD CONSTRAINT "unique_grp_number_key" UNIQUE(grp, number);
CREATE OR REPLACE FUNCTION myFunc() RETURNS trigger AS $body_start$
BEGIN
SELECT COALESCE(MAX(number) + 1, 1)
INTO NEW.number
FROM "myTable"
WHERE grp = NEW.grp;
RETURN NEW;
END;
$body_start$ LANGUAGE plpgsql;
CREATE TRIGGER myTrigger BEFORE INSERT ON "myTable"
FOR EACH ROW
WHEN (NEW.number IS NULL)
EXECUTE PROCEDURE myFunc();
How does it work? When you insert something into myTable, the trigger fires and checks if the number field is empty. If it is, myFunc() selects the MAX value of number where grp equals the new grp value you want to insert. Like auto_increment, it returns that max value + 1 and replaces the NULL number field with the new auto-increment value.
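For example, assuming grp is a text column (hypothetical data):

INSERT INTO "myTable" (grp) VALUES ('a');
INSERT INTO "myTable" (grp) VALUES ('a');
INSERT INTO "myTable" (grp) VALUES ('b');
-- "myTable" now contains: ('a', 1), ('a', 2), ('b', 1)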
This solution is more generic than Denis de Bernardy's because it doesn't depend on the grp type - but thanks to him, his code helped me write my own solution.
Maybe it's too late to write an answer, but I couldn't find a generic solution for this problem on Stack Overflow, so it may help someone. Enjoy and thanks for the help!
I think this will help:
http://www.varlena.com/GeneralBits/130.php
Note that in MySQL it is for MyISAM tables only.
P.S. I have tested advisory locks and found them useless for more than one transaction at the same time. I am using two windows of pgAdmin. The first is as simple as possible:
BEGIN;
INSERT INTO animals (grp,name) VALUES ('mammal','dog');
COMMIT;
BEGIN;
INSERT INTO animals (grp,name) VALUES ('mammal','cat');
COMMIT;
ERROR: duplicate key violates unique constraint "animals_pkey"
Second:
BEGIN;
INSERT INTO animals (grp,name) VALUES ('mammal','dog');
INSERT INTO animals (grp,name) VALUES ('mammal','cat');
COMMIT;
ERROR: deadlock detected
SQL state: 40P01
Detail: Process 3764 waits for ExclusiveLock on advisory lock [46462,46496,2,2]; blocked by process 2712.
Process 2712 waits for ShareLock on transaction 136759; blocked by process 3764.
Context: SQL statement "SELECT pg_advisory_lock( $1 , $2 )"
PL/pgSQL function "animals_id_auto" line 15 at perform
And the database is locked and cannot be unlocked - it is unknown what to unlock.
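If a session does get stuck like this, one escape hatch is to release every advisory lock the session holds in one go (this releases all of them, not just the ones taken by the trigger):

SELECT pg_advisory_unlock_all();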