Auto-Incrementing IDs in Composite Primary Key - postgresql

In my app, there are organizations that can create tasks. Instead of having a serial task ID, I would like to have an incrementing task ID for each organization ID. Just like in GitHub, where repo + issue number identifies the issue. Tasks look like this:
create table "Tasks" (
"organizationId" int references "Organizations"(id),
id int check (id > 0),
description text not null,
primary key ("organizationId", id)
);
Users should not "know" the ID when creating a task. How to set the ID automatically for each task?
I'm using PostgreSQL with PostgREST as an API. The solution ideally allows me to use PostgREST's CRUD APIs as is.
I've tried a before insert trigger like this:
create function increment_task_id() returns trigger
language plpgsql security definer as $$
declare
  next_id integer;
begin
  select coalesce(max(id) + 1, 1)
    into next_id
    from "Tasks"
   where "organizationId" = new."organizationId";
  new.id := next_id;
  return new;
end;$$;
create trigger task_insert
  before insert on "Tasks"
  for each row execute function increment_task_id();
This works fine until I run multiple insertions in parallel: concurrent transactions don't see each other's uncommitted rows, so two triggers can compute the same max(id), and some tasks then fail the primary key constraint.
How can I avoid this? Is there a better way?
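One way to avoid the race, as a sketch (the "TaskCounters" table is an addition of this sketch, not part of the schema above): keep a per-organization counter and bump it with a single atomic upsert (PostgreSQL 9.5+ for ON CONFLICT). The row lock taken by the update serializes concurrent inserts for the same organization, and PostgREST's CRUD API keeps working as-is because the ID is still assigned in a before insert trigger:
create table "TaskCounters" (
  "organizationId" int primary key references "Organizations"(id),
  last_id int not null default 0
);
create or replace function increment_task_id() returns trigger
language plpgsql security definer as $$
begin
  -- the upsert locks the counter row, so concurrent inserts
  -- for the same organization run one at a time
  insert into "TaskCounters" ("organizationId", last_id)
  values (new."organizationId", 1)
  on conflict ("organizationId")
  do update set last_id = "TaskCounters".last_id + 1
  returning last_id into new.id;
  return new;
end;$$;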

Related

PostgreSQL - how to pass a function's argument to a trigger?

I have three tables in a database: users (user_id (auto increment), fname, lname), roles (role_id, role_desc) and users_roles (user_id, role_id). What I'd like to do is to have a function create_user_with_role. The function takes 3 arguments: first name, last name and role_id. The function inserts a new row into the users table and a new user_id is created automatically. Now I want to insert a new record into the users_roles table: user_id is the newly created value and the role_id is taken from the function's argument list.
Is it possible to pass the role_id argument to an after insert trigger (defined on users table) so another automatic insert can be performed? Or can you suggest any other solution?
First, @Pavel Stehule is right:
Don't try to pass parameters to triggers, ever!
Second, you just have to get the inserted id into a variable.
CREATE FUNCTION create_user_with_role(first_name text, last_name text, new_role_id integer)
RETURNS VOID AS $$
DECLARE
  new_user_id integer;
BEGIN
  INSERT INTO users (fname, lname) VALUES (first_name, last_name)
  RETURNING user_id INTO new_user_id;
  INSERT INTO users_roles (user_id, role_id)
  VALUES (new_user_id, new_role_id);
END;$$ LANGUAGE plpgsql;
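You can then call it directly from the application (the argument values here are just placeholders):
SELECT create_user_with_role('Jane', 'Doe', 2);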
Obviously, this is completely inefficient if you want to insert multiple rows but that's another question ;)
If you need to pass any parameter to a trigger, that is a clear sign your design is wrong. Usually triggers should have check or audit functionality, not more. You can use a function instead and call that function directly from your application. Don't try to pass parameters to triggers, ever! Another bad sign is artificial columns in a table used only for trigger parametrization. That is pretty bad design!

PostgreSQL Create Sequence using new id

I want to create a sequence for each row created in the table account, like os_1, os_2, etc...
How can I get the id of this new row and insert it on the name of the sequence?
CREATE OR REPLACE FUNCTION public.create_os_seq() RETURNS TRIGGER AS $$
#variable_conflict use_variable
BEGIN
  EXECUTE format('CREATE SEQUENCE os_%s', NEW.id);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER create_os_seq AFTER INSERT ON account FOR EACH ROW EXECUTE PROCEDURE create_os_seq();
Table account:
id INT AUTO_INCREMENT
name VARCHAR
After creating a sequence I'll put its number in the OS table:
table os
id INT
account_id INT
From your question I assume you want self-incrementing values. Postgres uses the shorthand SERIAL instead of AUTO_INCREMENT; just create the table like:
CREATE TABLE so79 (id bigserial primary key, col text);
That will automatically create a sequence for you and assign its next value as the default for the id column, essentially giving you AUTO_INCREMENT. You don't have to use a trigger to increment values.
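For example, a quick check of the auto-assigned ids (table and values as above):
INSERT INTO so79 (col) VALUES ('first') RETURNING id;  -- returns 1
INSERT INTO so79 (col) VALUES ('second') RETURNING id; -- returns 2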
There is no way that creating a sequence for each row is a good idea. Instead, tell us what you're trying to do. My guess is you need a high-water mark: a value you intend to increment on some action. Just use an integer.
CREATE TABLE foo (
  foo_id serial,
  max_seen int
);
Now you can do whatever with triggers on other tables and such to increment max_seen.
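For instance, a minimal sketch (next_os_number is a made-up name): atomically bump max_seen for one row and return the new value.
-- Atomically increment max_seen for one account and return the new value.
CREATE FUNCTION next_os_number(p_foo_id int) RETURNS int AS $$
  UPDATE foo
  SET max_seen = coalesce(max_seen, 0) + 1
  WHERE foo_id = p_foo_id
  RETURNING max_seen;
$$ LANGUAGE sql;
-- usage: SELECT next_os_number(1);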

postgres update NEW variable before INSERT in a TRIGGER

I have two tables, accounts and projects:
create table accounts (
  id bigserial primary key,
  slug text unique
);
create table projects (
  id bigserial primary key,
  account_id bigint not null references accounts (id),
  name text
);
I want to be able to insert a new row into projects by specifying only account.slug (not account.id). What I'm trying to achieve is something like:
INSERT into projects (account_slug, name) values ('account_slug', 'project_name');
I thought about using a trigger (unfortunately it doesn't work):
create or replace function trigger_projects_insert() returns trigger as $$
begin
if TG_OP = 'INSERT' AND NEW.account_slug then
select id as account_id
from accounts as account
where account.slug = NEW.account_slug;
NEW.account_id = account_id;
-- we should also remove NEW.account_slug but don't know how
end if;
return NEW;
end;
$$ LANGUAGE plpgsql;
create trigger trigger_projects_insert before insert on projects
for each row execute procedure trigger_projects_insert();
What is the best way to achieve what I'm trying to do?
Is a trigger a good idea?
Is there any other solution?
One option is to create the account and the project together in a single statement with a data-modifying CTE:
WITH newacc AS (
  INSERT INTO accounts (slug)
  VALUES ('account_slug')
  RETURNING id
)
INSERT INTO projects (account_id, name)
SELECT id, 'project_name'
FROM newacc;
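If the account already exists and only needs to be looked up by its slug, a plain INSERT ... SELECT does it in one statement, no trigger required:
INSERT INTO projects (account_id, name)
SELECT id, 'project_name'
FROM accounts
WHERE slug = 'account_slug';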
If you are limited in the SQL you can use, another idea might be to define a view over both tables and create an INSTEAD OF INSERT trigger on the view that performs the two INSERTs on the underlying tables. Then an INSERT statement like the one in your question would work.
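A minimal sketch of that view-based idea, assuming the tables above (the view and function names are made up):
CREATE VIEW projects_view AS
  SELECT p.id, a.slug AS account_slug, p.name
  FROM projects p
  JOIN accounts a ON a.id = p.account_id;
CREATE FUNCTION projects_view_insert() RETURNS trigger AS $$
BEGIN
  -- look up the account by slug and insert into the base table
  INSERT INTO projects (account_id, name)
  SELECT a.id, NEW.name
  FROM accounts a
  WHERE a.slug = NEW.account_slug;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER projects_view_insert
  INSTEAD OF INSERT ON projects_view
  FOR EACH ROW EXECUTE PROCEDURE projects_view_insert();
-- now this works:
INSERT INTO projects_view (account_slug, name) VALUES ('account_slug', 'project_name');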

Is it possible to pass data to postgreSQL trigger?

I need to log any changes made in some table by a trigger which inserts the older version of the modified row into another table, together with some additional data:
- which action was performed
- when the action was performed
- by whom
I have a problem with the last requirement. The SQL is executed somewhere in Java via JDBC, and I need to somehow pass the logged-in user's id, stored in a Java variable, to the Postgres table where all older versions of the modified rows are stored.
Is it even possible?
It may be a stupid question, but I'm desperately trying to avoid inserting data like that manually in Java. Triggers have done some of the work for me, but not all I need.
Demonstrative code below (I've cut out some code for security reasons):
"notes" table:
CREATE TABLE my_database.notes
(
  pk serial NOT NULL,
  client_pk integer,
  description text,
  CONSTRAINT notes_pkey PRIMARY KEY (pk)
)
Table storing older versions of every row changed in "notes" table:
CREATE TABLE my_database_log.notes_log
(
  pk serial NOT NULL,
  note_pk integer,
  client_pk integer,
  description text,
  who_changed integer DEFAULT 0, -- how to fill in this field?
  action_date timestamp without time zone DEFAULT now(), -- when the action was performed
  action character varying, -- which action was performed
  CONSTRAINT notes_log_pkey PRIMARY KEY (pk)
)
Trigger for "notes" table:
CREATE TRIGGER after_insert_or_update_note_trigger
  AFTER INSERT OR UPDATE
  ON my_database.notes
  FOR EACH ROW
  EXECUTE PROCEDURE my_database.notes_new_row_log();
Procedure executed by trigger:
CREATE OR REPLACE FUNCTION my_database.notes_new_row_log()
RETURNS trigger AS
$BODY$
BEGIN
  INSERT INTO my_database_log.notes_log(
    note_pk, client_pk, description, action)
  VALUES (
    NEW.pk, NEW.client_pk, NEW.description, TG_OP);
  RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION my_database.notes_new_row_log()
OWNER TO database_owner;
According to @Nick Barnes' hint in the comments, there is a need to declare the variable in the postgresql.conf file:
...
#----------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#----------------------------------------------------------------------------
custom_variable_classes = 'myapp' # list of custom variable class names
myapp.user_id = 0
(Note that custom_variable_classes was removed in PostgreSQL 9.2; newer versions accept any two-part setting name without declaring it.)
Then call:
SET LOCAL myapp.user_id = <set_user_id_value_here>
before the query that should fire the trigger.
To read the variable in the trigger, use:
current_setting('myapp.user_id')
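With that in place, the trigger function above can fill in who_changed by reading the setting; a sketch (the integer cast assumes user ids are integers):
CREATE OR REPLACE FUNCTION my_database.notes_new_row_log()
RETURNS trigger AS
$BODY$
BEGIN
  INSERT INTO my_database_log.notes_log(
    note_pk, client_pk, description, who_changed, action)
  VALUES (
    NEW.pk, NEW.client_pk, NEW.description,
    current_setting('myapp.user_id')::integer, -- set per transaction via SET LOCAL
    TG_OP);
  RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
From JDBC, issue the SET LOCAL in the same transaction as the INSERT/UPDATE on notes; SET LOCAL only lasts until the transaction ends.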

How to properly emulate statement level triggers with access to data in postgres

I am using PostgreSQL as my database for a project at work. We use triggers in quite a few places to either maintain computed columns, or tables that essentially act as a materialized view.
All this worked just fine when simply utilizing row level triggers to keep all this in sync. However when we wrote scripts to periodically import our customers data into the database, we ran into issues with either performance or problems with number of locks in a single transaction.
To alleviate this I wanted to create a statement-level trigger with access to the modified rows (inserted, updated or deleted). However as this is not possible I instead created a BEFORE statement-level trigger that would create a temporary table. Then an AFTER row-level trigger that would insert the changed data into the temporary table. At last an AFTER statement-level trigger that would read the changes and perform necessary updates, and then drop the temporary table.
All this works just fine, assuming that within the triggers, no one would re-trigger the same flow again (as the temporary table would then already exist).
However, I then learned that a foreign key constraint with ON DELETE SET NULL is simply implemented with a system trigger that sets the column to NULL. That is not a problem in itself, except when a single table has several such foreign key constraints, all referencing the same table (let's just call it files). When a row is deleted from files, each of these system-level triggers handling the ON DELETE SET NULL clause issues its own UPDATE on the referencing table, so my statement-level triggers fire once per constraint and the second run finds the temporary table already in place. This presents a serious issue for me.
How would I go about implementing something like this? Here is a short SQL script to illustrate the problem:
CREATE TABLE files (
  id serial PRIMARY KEY,
  "name" TEXT NOT NULL
);
CREATE TABLE profiles (
  id serial PRIMARY KEY,
  "name" TEXT NOT NULL,
  cv_file_id INT REFERENCES files(id) ON DELETE SET NULL,
  photo_file_id INT REFERENCES files(id) ON DELETE SET NULL
);
CREATE TABLE profile_audit (
  profile_id INT NOT NULL,
  modified_at timestamptz NOT NULL
);
CREATE FUNCTION pre_stmt_create_temp_table()
RETURNS TRIGGER
AS $$
BEGIN
  CREATE TEMPORARY TABLE tmp_modified_profiles (
    id INT NOT NULL
  ) ON COMMIT DROP;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION insert_modified_profile_to_temp_table()
RETURNS TRIGGER
AS $$
BEGIN
  INSERT INTO tmp_modified_profiles(id) VALUES (NEW.id);
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION post_stmt_insert_rows_and_drop_temp_table()
RETURNS TRIGGER
AS $$
BEGIN
  INSERT INTO profile_audit (profile_id, modified_at)
  SELECT t.id, CURRENT_TIMESTAMP FROM tmp_modified_profiles t;
  DROP TABLE tmp_modified_profiles;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tr_create_working_table
  BEFORE UPDATE ON profiles
  FOR EACH STATEMENT EXECUTE PROCEDURE pre_stmt_create_temp_table();
CREATE TRIGGER tr_insert_row_to_working_table
  AFTER UPDATE ON profiles
  FOR EACH ROW EXECUTE PROCEDURE insert_modified_profile_to_temp_table();
CREATE TRIGGER tr_insert_modified_rows_and_drop_working_table
  AFTER UPDATE ON profiles
  FOR EACH STATEMENT EXECUTE PROCEDURE post_stmt_insert_rows_and_drop_temp_table();
INSERT INTO files ("name") VALUES ('photo.jpg'), ('my_cv.pdf');
INSERT INTO profiles ("name", cv_file_id, photo_file_id) VALUES ('John Doe', 2, 1);
DELETE FROM files WHERE "name" = 'photo.jpg';
It would be a serious hack, but meanwhile, until PostgreSQL 9.5 is out, I would try to use CONSTRAINT triggers deferred to the end of the transaction. I am not really sure this will work, but might be worth trying.
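For reference, a sketch of what such a deferred constraint trigger would look like, reusing the row-level function from the question (whether it actually avoids the temp-table collision is exactly what this answer is unsure about):
CREATE CONSTRAINT TRIGGER tr_deferred_profile_audit
  AFTER UPDATE ON profiles
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW
  EXECUTE PROCEDURE insert_modified_profile_to_temp_table();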
You could use a status column to track inserts and updates for your statement-level triggers.
In a BEFORE INSERT OR UPDATE row-level trigger:
NEW.status := TG_OP;
Now you can use statement-level AFTER triggers:
BEGIN
DO FUNNY THINGS
WHERE status = 'INSERT';
-- reset the status
UPDATE mytable
SET status = NULL
WHERE status = 'INSERT';
END;
However, if you want to deal with deletes as well, you'll need something like this in your row-level trigger:
INSERT INTO status_table (table_name, op, id) VALUES (TG_TABLE_NAME, TG_OP, OLD.id);
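Here status_table is assumed to look something like this (a sketch, not defined in the answer):
CREATE TABLE status_table (
  table_name text NOT NULL, -- TG_TABLE_NAME of the firing trigger
  op text NOT NULL,         -- TG_OP: INSERT, UPDATE or DELETE
  id int NOT NULL           -- key of the affected row
);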
Then, in your statement-level AFTER trigger, you can go like:
BEGIN
DO FUNNY THINGS
WHERE id IN (SELECT id FROM status_table
WHERE table_name = TG_TABLE_NAME AND op = TG_OP); -- just an example
-- reset the status
DELETE FROM status_table
WHERE table_name = TG_TABLE_NAME AND op = TG_OP;
END;
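For what it's worth, this later became native: PostgreSQL 10 added transition tables for statement-level triggers, which give an AFTER statement trigger direct access to the changed rows. A sketch against the profiles example above (function and trigger names are made up):
CREATE FUNCTION audit_modified_profiles() RETURNS trigger AS $$
BEGIN
  -- new_rows is the transition table holding all rows changed by the statement
  INSERT INTO profile_audit (profile_id, modified_at)
  SELECT id, CURRENT_TIMESTAMP FROM new_rows;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tr_audit_modified_profiles
  AFTER UPDATE ON profiles
  REFERENCING NEW TABLE AS new_rows
  FOR EACH STATEMENT EXECUTE PROCEDURE audit_modified_profiles();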