Immutability in Postgres

I want to create an immutable Postgres database, where the user can insert & select (write & read) data, but cannot update or delete the data.
I am aware of the FOR UPDATE lock, but I don't understand how to use it.
Let's say, for example, I have the table below. How can I make it immutable (or, if I understood correctly, how can I apply the FOR UPDATE lock permanently)?
CREATE TABLE account(
user_id serial PRIMARY KEY,
username VARCHAR (50) UNIQUE NOT NULL,
password VARCHAR (50) NOT NULL,
email VARCHAR (355) UNIQUE NOT NULL,
created_on TIMESTAMP NOT NULL,
last_login TIMESTAMP
);

The solution is to give the user that accesses the database only the INSERT and SELECT privileges on the tables involved.
A lock is not a tool to deny somebody access, but a short-lived barrier that prevents conflicting data modifications from happening at the same time.
Here is an example:
CREATE TABLE sensitive (
id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
available text,
restricted text
);
Now I want to allow someuser to insert data and read and update all columns except restricted, and I want to keep myself from deleting data in that table:
/* the CREATE TABLE above was run by user "laurenz" */
REVOKE DELETE ON sensitive FROM laurenz;
GRANT INSERT ON sensitive TO someuser;
GRANT SELECT (id, available), UPDATE (id, available) ON sensitive TO someuser;
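Applied to the account table from the question, the privilege-only approach would look roughly like this (app_writer is an illustrative role name; the sequence name assumes the default naming for the serial column):
CREATE ROLE app_writer LOGIN;
GRANT INSERT, SELECT ON account TO app_writer;
GRANT USAGE ON SEQUENCE account_user_id_seq TO app_writer; -- sequence behind the serial primary key
-- no UPDATE or DELETE privilege is granted, so those statements fail with "permission denied"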

Nope, that 👆🏼 solution doesn't work. I found this one instead: I create a BEFORE UPDATE trigger on the table, FOR EACH ROW:
create or replace function table_update_guard() returns trigger
language plpgsql immutable parallel safe cost 1 as $body$
begin
raise exception
'trigger %: updating is prohibited for %.%',
tg_name, tg_table_schema, tg_table_name
using errcode = 'restrict_violation';
return null;
end;
$body$;
create or replace trigger account_update_guard
before update on account for each row
execute function table_update_guard();
See my original research.
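To block DELETE as well (the question asks for insert-and-select only), the same pattern can be reused. Here is a sketch with the message generalized via TG_OP; table_modify_guard is a new name, not part of the original research:
create or replace function table_modify_guard() returns trigger
language plpgsql as $body$
begin
raise exception
'trigger %: % is prohibited for %.%',
tg_name, tg_op, tg_table_schema, tg_table_name
using errcode = 'restrict_violation';
return null;
end;
$body$;
create or replace trigger account_delete_guard
before delete on account for each row
execute function table_modify_guard();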

Related

Trigger and function to insert user id into another table

I am using Prisma for my schema and migrating it to Supabase with prisma migrate dev.
One of my tables, Profiles, should reference the auth.users table in Supabase; in SQL it would be something like this: id uuid references auth.users not null,
Now, since that table is automatically created by Supabase, do I still add it to my Prisma schema? It's not in public either; it is in auth.
model Profiles {
  id               String   @id @db.Uuid
  role             String
  subId            String
  stripeCustomerId String
  refundId         String[]
  createdAt        DateTime @default(now())
  updatedAt        DateTime @updatedAt
}
The reason I want the relation is that I want a trigger to automatically run a function that inserts an id and role into the profiles table when a new user is invited.
This is the trigger and function:
-- inserts a row into public.profiles
create function public.handle_new_user()
returns trigger
language plpgsql
security definer
as $$
begin
insert into public.Profiles (id, role)
values (new.id, 'BASE_USER');
return new;
end;
$$;
-- trigger the function every time a user is created
create trigger on_auth_user_created
after insert on auth.users
for each row execute procedure public.handle_new_user();
I had this working when I created the profiles table manually in Supabase and included the reference to auth.users. That's the only reason I can think of why the user id and role won't insert into the profiles table when I invite a user; the trigger and function are failing.
create table public.Profiles (
id uuid references auth.users not null,
role text,
primary key (id)
);
Update from comment:
One error I found is:
relation "public.profiles" does not exist
I changed it to "public.Profiles" with a capital P in Supabase, but the function seems to still be looking for the lowercase name.
What you show should just work:
db<>fiddle here
Looks like you messed up capitalization with Postgres identifiers.
If you (or your ORM) created the table as "Profiles" (with double-quotes), non-standard capitalization is preserved and you need to double-quote the name for the rest of its life.
So the trigger function body must read:
...
insert into public."Profiles" (id, role) -- with double-quotes
...
Note that schema and table (and column) have to be quoted separately.
See:
Are PostgreSQL column names case-sensitive?
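For completeness, the whole trigger function from the question with only that change applied would read:
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
as $$
begin
insert into public."Profiles" (id, role)
values (new.id, 'BASE_USER');
return new;
end;
$$;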

postgres update NEW variable before INSERT in a TRIGGER

I have two tables, accounts and projects:
create table accounts (
id bigserial primary key,
slug text unique
);
create table projects (
id bigserial primary key,
account_id bigint not null references accounts (id),
name text
);
I want to be able to insert a new row into projects by specifying only account.slug (not account.id). What I'm trying to achieve is something like:
INSERT into projects (account_slug, name) values ('account_slug', 'project_name');
I thought about using a trigger (unfortunately it doesn't work):
create or replace function trigger_projects_insert() returns trigger as $$
begin
if TG_OP = 'INSERT' AND NEW.account_slug then
select id as account_id
from accounts as account
where account.slug = NEW.account_slug;
NEW.account_id = account_id;
-- we should also remove NEW.account_slug but don't know how
end if;
return NEW;
end;
$$ LANGUAGE plpgsql;
create trigger trigger_projects_insert before insert on projects
for each row execute procedure trigger_projects_insert();
What is the best way to achieve what I'm trying to do?
Is a trigger a good idea?
Is there any other solution?
With a data-modifying CTE you can insert the account and the project in a single statement:
WITH newacct AS (
INSERT INTO accounts (slug)
VALUES ('account_slug')
RETURNING id
)
INSERT INTO projects (account_id, name)
SELECT id, 'project_name'
FROM newacct;
If you are limited in the SQL you can use, another idea might be to define a view over both tables and create an INSTEAD OF INSERT trigger on the view that performs the two INSERTs on the underlying tables. Then an INSERT statement like the one in your question would work.
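A minimal sketch of that view-plus-trigger idea (the view, function and trigger names are illustrative; the trigger creates the account if the slug is not found, otherwise it reuses the existing one):
CREATE VIEW projects_with_slug AS
SELECT p.id, a.slug AS account_slug, p.name
FROM projects p
JOIN accounts a ON a.id = p.account_id;

CREATE FUNCTION projects_with_slug_insert() RETURNS trigger AS $$
DECLARE
    acc_id bigint;
BEGIN
    SELECT id INTO acc_id FROM accounts WHERE slug = NEW.account_slug;
    IF acc_id IS NULL THEN
        INSERT INTO accounts (slug) VALUES (NEW.account_slug) RETURNING id INTO acc_id;
    END IF;
    INSERT INTO projects (account_id, name) VALUES (acc_id, NEW.name);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER projects_with_slug_insert_trg
INSTEAD OF INSERT ON projects_with_slug
FOR EACH ROW EXECUTE PROCEDURE projects_with_slug_insert();

-- the INSERT from the question then works against the view:
INSERT INTO projects_with_slug (account_slug, name) VALUES ('account_slug', 'project_name');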

Is it possible to pass data to postgreSQL trigger?

I need to log any changes made to certain tables via a trigger that will insert the older version of the modified row into another table, together with some additional data:
- which action was performed
- when the action was performed
- by whom.
I have a problem with the last requirement. The SQL is executed somewhere in Java via JDBC, and I need to somehow pass the logged-in user's id, stored in a variable, to the Postgres table where all older versions of modified rows will be stored.
Is it even possible?
It may be a stupid question, but I am desperately trying to avoid inserting that data manually from Java. Triggers have done some of the work for me, but not all I need.
Demonstrative code below (I've cut out some code for security reasons):
"notes" table:
CREATE TABLE my_database.notes
(
pk serial NOT NULL,
client_pk integer,
description text,
CONSTRAINT notes_pkey PRIMARY KEY (pk)
)
Table storing older versions of every row changed in "notes" table:
CREATE TABLE my_database_log.notes_log
(
pk serial NOT NULL,
note_pk integer,
client_pk integer,
description text,
who_changed integer DEFAULT 0, -- how to fill in this field?
action_date timestamp without time zone DEFAULT now(), --when action was performed
action character varying, --which action was performed
CONSTRAINT notes_log_pkey PRIMARY KEY (pk)
)
Trigger for "notes" table:
CREATE TRIGGER after_insert_or_update_note_trigger
AFTER INSERT OR UPDATE
ON my_database.notes
FOR EACH ROW
EXECUTE PROCEDURE my_database.notes_new_row_log();
Procedure executed by trigger:
CREATE OR REPLACE FUNCTION my_database.notes_new_row_log()
RETURNS trigger AS
$BODY$
BEGIN
INSERT INTO my_database_log.notes_log(
note_pk, client_pk, description, action)
VALUES (
NEW.pk, NEW.client_pk, NEW.description, TG_OP);
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION my_database.notes_new_row_log()
OWNER TO database_owner;
According to @Nick Barnes' hint in the comments, there is a need to declare the variable in the postgresql.conf file:
...
#----------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#----------------------------------------------------------------------------
custom_variable_classes = 'myapp' # list of custom variable class names
myapp.user_id = 0
and call:
SET LOCAL myapp.user_id = <set_user_id_value_here>
before the query that fires the trigger.
To read the variable in the trigger, use:
current_setting('myapp.user_id')
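Putting the pieces together, a rough sketch of the flow (on PostgreSQL 9.2 and later the postgresql.conf declaration is no longer needed; the values below are illustrative):
-- from the application (e.g. via JDBC), in the same transaction as the change:
BEGIN;
SET LOCAL myapp.user_id = '42';   -- id of the logged-in user
UPDATE my_database.notes SET description = 'new text' WHERE pk = 1;
COMMIT;

-- and inside the trigger function, read the setting back to fill who_changed:
INSERT INTO my_database_log.notes_log(
    note_pk, client_pk, description, action, who_changed)
VALUES (
    NEW.pk, NEW.client_pk, NEW.description, TG_OP,
    current_setting('myapp.user_id')::integer);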

Prevent update of column except by trigger

Is it possible to configure a Postgres database such that a specific column may only be updated by a trigger, while still allowing the trigger itself to be executed in response to an update by a role without permission to update that column? If so, how?
For example, given tables and a trigger like this:
CREATE TABLE a(
id serial PRIMARY KEY,
flag boolean NOT NULL DEFAULT TRUE,
data text NOT NULL
);
CREATE TABLE b(
id serial PRIMARY KEY,
updated_on DATE NOT NULL DEFAULT CURRENT_DATE,
a_id INTEGER NOT NULL,
FOREIGN KEY (a_id) references a(id)
);
CREATE FUNCTION update_aflag() RETURNS trigger AS $update_aflag$
BEGIN
UPDATE a
SET flag = FALSE
WHERE id = NEW.a_id;
RETURN NEW;
END;
$update_aflag$ LANGUAGE plpgsql;
CREATE TRIGGER update_aflag_trigger
AFTER INSERT ON b
FOR EACH ROW
EXECUTE PROCEDURE update_aflag();
I'd like to define a role which does not have permission to update a.flag directly using an UPDATE statement, but which may update flag indirectly via the trigger.
Yes, this is possible using a SECURITY DEFINER trigger function. The trigger function runs as a role that has the right to modify the flag column, but you don't GRANT that right to normal users. You should create the trigger function as the role that you'll grant the required rights to.
This requires that the application not run as the user that owns the tables, and of course not as a superuser.
You can GRANT column update rights on the other columns to the user; just leave out the flag column. Note that GRANTing UPDATE on all columns and then REVOKEing it on flag will not work; it's not the same thing.
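A rough sketch of how that could look for the tables above (the role names flag_owner and app_user are illustrative, not from the question):
-- run as "flag_owner", a role that may update a.flag; SECURITY DEFINER makes the
-- trigger function execute with flag_owner's rights regardless of who fires it:
CREATE OR REPLACE FUNCTION update_aflag() RETURNS trigger AS $update_aflag$
BEGIN
    UPDATE a
    SET flag = FALSE
    WHERE id = NEW.a_id;
    RETURN NEW;
END;
$update_aflag$ LANGUAGE plpgsql SECURITY DEFINER;

-- the application role gets column-level rights on "a" that leave out "flag":
GRANT SELECT ON a TO app_user;
GRANT UPDATE (data) ON a TO app_user;
-- and ordinary rights on "b", whose INSERTs fire the trigger:
GRANT SELECT, INSERT ON b TO app_user;
GRANT USAGE ON SEQUENCE b_id_seq TO app_user;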

How to properly emulate statement level triggers with access to data in postgres

I am using PostgreSQL as my database for a project at work. We use triggers in quite a few places to either maintain computed columns, or tables that essentially act as a materialized view.
All this worked just fine when simply utilizing row-level triggers to keep everything in sync. However, when we wrote scripts to periodically import our customers' data into the database, we ran into issues with either performance or the number of locks in a single transaction.
To alleviate this I wanted to create a statement-level trigger with access to the modified rows (inserted, updated or deleted). However as this is not possible I instead created a BEFORE statement-level trigger that would create a temporary table. Then an AFTER row-level trigger that would insert the changed data into the temporary table. At last an AFTER statement-level trigger that would read the changes and perform necessary updates, and then drop the temporary table.
All this works just fine, assuming that within the triggers, no one would re-trigger the same flow again (as the temporary table would then already exist).
However, I then learned that a foreign key constraint with ON DELETE SET NULL is simply implemented with a system trigger that sets the column to NULL. That is not a problem in itself, except when you have several such foreign key constraints on a single table, all referencing the same table (let's just call it files). When deleting a row from the files table, all the system-level triggers that handle the ON DELETE SET NULL clauses fire at the same time, that is, in parallel, which presents a serious issue for me.
How would I go about implementing something like this? Here is a short SQL script to illustrate the problem:
CREATE TABLE files (
id serial PRIMARY KEY,
"name" TEXT NOT NULL
);
CREATE TABLE profiles (
id serial PRIMARY KEY,
NAME TEXT NOT NULL,
cv_file_id INT REFERENCES files(id) ON DELETE SET NULL,
photo_file_id INT REFERENCES files(id) ON DELETE SET NULL
);
CREATE TABLE profile_audit (
profile_id INT NOT NULL,
modified_at timestamptz NOT NULL
);
CREATE FUNCTION pre_stmt_create_temp_table()
RETURNS TRIGGER
AS $$
BEGIN
CREATE TEMPORARY TABLE tmp_modified_profiles (
id INT NOT NULL
) ON COMMIT DROP;
RETURN NULL;
END;
$$ LANGUAGE 'plpgsql';
CREATE FUNCTION insert_modified_profile_to_temp_table()
RETURNS TRIGGER
AS $$
BEGIN
INSERT INTO tmp_modified_profiles(id) VALUES (NEW.id);
RETURN NULL;
END;
$$ LANGUAGE 'plpgsql';
CREATE FUNCTION post_stmt_insert_rows_and_drop_temp_table()
RETURNS TRIGGER
AS $$
BEGIN
INSERT INTO profile_audit (profile_id, modified_at)
SELECT t.id, CURRENT_TIMESTAMP FROM tmp_modified_profiles t;
DROP TABLE tmp_modified_profiles;
RETURN NULL;
END;
$$ LANGUAGE 'plpgsql';
CREATE TRIGGER tr_create_working_table BEFORE UPDATE ON profiles FOR EACH STATEMENT EXECUTE PROCEDURE pre_stmt_create_temp_table();
CREATE TRIGGER tr_insert_row_to_working_table AFTER UPDATE ON profiles FOR EACH ROW EXECUTE PROCEDURE insert_modified_profile_to_temp_table();
CREATE TRIGGER tr_insert_modified_rows_and_drop_working_table AFTER UPDATE ON profiles FOR EACH STATEMENT EXECUTE PROCEDURE post_stmt_insert_rows_and_drop_temp_table();
INSERT INTO files ("name") VALUES ('photo.jpg'), ('my_cv.pdf');
INSERT INTO profiles ("name") VALUES ('John Doe');
DELETE FROM files WHERE "name" = 'photo.jpg';
It would be a serious hack, but meanwhile, until PostgreSQL 9.5 is out, I would try to use CONSTRAINT triggers deferred to the end of the transaction. I am not really sure this will work, but it might be worth trying.
You could use a status column to track inserts and updates for your statement-level triggers.
In a BEFORE INSERT OR UPDATE row-level trigger:
NEW.status := TG_OP;
Now you can use statement-level AFTER triggers:
BEGIN
DO FUNNY THINGS
WHERE status = 'INSERT';
-- reset the status
UPDATE mytable
SET status = NULL
WHERE status = 'INSERT';
END;
However, if you want to deal with deletes as well, you'll need something like this in your row-level trigger:
INSERT INTO status_table (table_name, op, id) VALUES (TG_TABLE_NAME, TG_OP, OLD.id);
Then, in your statement-level AFTER trigger, you can go like:
BEGIN
DO FUNNY THINGS
WHERE id IN (SELECT id FROM status_table
WHERE table_name = TG_TABLE_NAME AND op = TG_OP); -- just an example
-- reset the status
DELETE FROM status_table
WHERE table_name = TG_TABLE_NAME AND op = TG_OP;
END;
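To make the second variant concrete against the profiles/profile_audit example from the question, a sketch might look like this (the status_table layout follows the answer; trigger and function names are illustrative, and the "funny things" are reduced to the audit insert):
CREATE TABLE status_table (
    table_name text NOT NULL,
    op         text NOT NULL,
    id         int  NOT NULL
);

CREATE FUNCTION record_profile_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO status_table (table_name, op, id) VALUES (TG_TABLE_NAME, TG_OP, OLD.id);
    ELSE
        INSERT INTO status_table (table_name, op, id) VALUES (TG_TABLE_NAME, TG_OP, NEW.id);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_record_profile_change
AFTER INSERT OR UPDATE OR DELETE ON profiles
FOR EACH ROW EXECUTE PROCEDURE record_profile_change();

CREATE FUNCTION flush_profile_changes() RETURNS trigger AS $$
BEGIN
    INSERT INTO profile_audit (profile_id, modified_at)
    SELECT id, CURRENT_TIMESTAMP FROM status_table WHERE table_name = TG_TABLE_NAME;

    DELETE FROM status_table WHERE table_name = TG_TABLE_NAME;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_flush_profile_changes
AFTER INSERT OR UPDATE OR DELETE ON profiles
FOR EACH STATEMENT EXECUTE PROCEDURE flush_profile_changes();
Because status_table is a regular table rather than a temporary one, the nested statements generated by the ON DELETE SET NULL constraints do not collide the way the temporary table in the question does.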