Postgres RLS policy and functions

I have an RLS policy violation in a Postgres function. I believe it's because the policy relies on rows created inside the function: the function runs a SELECT, and the newly inserted rows are not visible to it because they are still part of an uncommitted transaction.
Here is the function:
CREATE FUNCTION public.create_message(organization_id int, content text, tags int[])
RETURNS SETOF public.message
AS $$
  -- insert message, return PK
  WITH moved_rows AS (
    INSERT INTO public.message (organization_id, content)
    VALUES ($1, $2)
    RETURNING *
  ),
  -- many-to-many relation
  moved_tags AS (
    INSERT INTO public.message_tag (message_id, tag_id)
    SELECT moved_rows.id, tagInput.tag_id
    FROM moved_rows, UNNEST($3) AS tagInput(tag_id)
    RETURNING *
  )
  SELECT moved_rows.*
  FROM moved_rows
  LEFT JOIN moved_tags ON moved_rows.id = moved_tags.message_id
$$ LANGUAGE sql VOLATILE STRICT;
Here is the policy:
CREATE POLICY select_if_organization
  ON message_tag
  FOR SELECT
  USING (message_id IN (
    SELECT message.id
    FROM message
    INNER JOIN organization_user ON organization_user.organization_id = message.organization_id
    INNER JOIN sessions ON sessions.user_id = organization_user.user_id
    WHERE sessions.session_token = current_user_id()
  ));
Ideas:
Add a field to the joining table to simplify the policy, though that violates normal form.
Return the user input instead of running the SELECT, but the input may have been escaped, and I should be able to run a SELECT command here anyway.
Split the work into two functions: create the message row, then add the message_tag rows. I'm running PostGraphile, so that means two mutations. I have foreign-key relations set up between the two tables, but I don't know whether Graphile will chain them automatically.
Error message:
ERROR: new row violates row-level security policy for table "message_tag"
CONTEXT: SQL function "create_message" statement 1
I receive the error when I run the function. I want the function to run successfully: insert one row into the message table, then turn the input array into rows for the message_tag table with message_tag.message_id = message.id, the last inserted id. I also need a policy in place so that users from that join relation only see their own organization's message_tag rows.
Here is another policy on the INSERT command. It allows INSERT if a user is logged in:
create policy insert_message_tag_if_author
  on message_tag
  for insert
  with check (EXISTS (
    SELECT * FROM sessions WHERE sessions.session_token = current_user_id()
  ));

According to the error message, this part of your SQL statement causes the error:
INSERT INTO public.message_tag (message_id, tag_id)
SELECT moved_rows.id, tagInput.tag_id
FROM moved_rows, UNNEST($3) as tagInput(tag_id)
RETURNING *
You need to add another policy FOR INSERT with an appropriate WITH CHECK clause.
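For illustration, such a policy might mirror the join from the select_if_organization policy above (this sketch is mine, not from the original answer, so treat it as untested):

CREATE POLICY insert_if_organization
  ON message_tag
  FOR INSERT
  WITH CHECK (message_id IN (
    SELECT message.id
    FROM message
    INNER JOIN organization_user ON organization_user.organization_id = message.organization_id
    INNER JOIN sessions ON sessions.user_id = organization_user.user_id
    WHERE sessions.session_token = current_user_id()
  ));

One caveat: all sub-statements of a single SQL statement share one snapshot, so a message row inserted by an earlier CTE in the same statement is not visible to this subquery. A check like this would therefore still reject the rows created inside create_message, which is what motivates the workaround the asker eventually used (below).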

I ended up adding a field to the joining table and writing the policy against that. That way, RLS validation does not depend on a row created in the middle of a function.
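For example, a minimal sketch of that approach (the organization_id column and policy body here are illustrative, not from the original post):

ALTER TABLE message_tag ADD COLUMN organization_id int;

CREATE POLICY select_if_organization
  ON message_tag
  FOR SELECT
  USING (organization_id IN (
    SELECT organization_user.organization_id
    FROM organization_user
    INNER JOIN sessions ON sessions.user_id = organization_user.user_id
    WHERE sessions.session_token = current_user_id()
  ));

The function then fills message_tag.organization_id from its $1 parameter, and the policy no longer needs to read the message row being created in the same statement.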

Related

PostgreSQL triggers with LibreOffice Base front-end

I have the following trigger function & trigger in PostgreSQL 12.1:
create or replace function constraint_for_present()
returns trigger
as $$
BEGIN
  if new.present_status = 'viewing'
     and new.name not in (select viewable_item from sourcing)
  then
    raise exception 'a present_status of "viewing" requires that the viewable item is in sourcing';
  end if;
  return new;
END;
$$ language plpgsql;

create trigger constraint_for_present
  before insert or update of present_status on viewable_item
  for each row
  execute function constraint_for_present();
These work as expected during data entry in the psql and TablePlus clients. However, the function throws an error when accessing the database via LibreOffice Base:
pq_driver: [PGRES_FATAL_ERROR]ERROR: relation "sourcing" does not exist
LINE 2: and new.name not in (select viewable_item from sourcing)
QUERY: SELECT new.present_status = 'viewing'
and new.name not in (select viewable_item from sourcing)
CONTEXT: PL/pgSQL function viewing.constraint_for_present() line 3 at IF
(caused by statement 'UPDATE "viewing"."viewable_item" SET "present_status" = 'none' WHERE "name" = 'test4'')
In Base I have a simple form set up for the trigger's table, with each foreign-key column set to list box, and the Type of list contents set to Sql (also tried Sql [Native]). The List content of each is (with appropriate table and primary key columns):
select name from viewing.cv_present_status order by name
(This database is using natural keys for now, for organizational political reasons.) The Bound field is set to 0, which is the displayed and primary key column.
So ... 2 questions:
Why is this problem happening only in Base, and how might I fix it (or at least better trouble-shoot it)?
Since Bound field appears to take only a single integer, does that in effect mean that you can't use list boxes for tables with multi-column primary keys, at least if there is a single displayed column?
The error shows that Base connects with a search_path that does not include the schema where sourcing lives, while your psql and TablePlus sessions do; an unqualified table name inside a PL/pgSQL function is resolved using the session's search_path at execution time. In the trigger function, you can fully qualify the table:
...
and new.name not in (select viewable_item from viewing.sourcing)
...
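Alternatively, you can pin a search_path on the function itself so unqualified names resolve the same way no matter how the client session is configured (a standard Postgres technique, sketched here rather than taken from the original answer):

ALTER FUNCTION viewing.constraint_for_present()
  SET search_path = viewing;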

Is it safe to insert data from inside of a CTE expression?

Background
I have a function that generates detail rows and inserts these into a detail table. I want to automatically ensure that the referenced master row is available before inserting the detail rows.
A BEFORE INSERT trigger does the job, but unfortunately the job is done too well. If a detail row is prevented due to a unique index, the master row will still be inserted, leaving me with a master without children (I don't want that).
I managed to solve this by inserting the master rows inside a CTE and then inserting the detail rows in the actual query. This works, but I'm worried that it's not a safe way of doing it.
INSERT rows from inside of a CTE expression
Here is the code to try this out. The input_cte is just generating static dummy data in the example. This particular example could be built with separate static insert SQLs, but in my real case the input_cte is dynamic.
Is this a safe way to solve this, or is it out of spec and could blow up in the next PG version?
CREATE TABLE master (
  id INT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  PRIMARY KEY (id)
);

CREATE TABLE detail (
  id INT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
  master_id INT,
  PRIMARY KEY (id, master_id)
);

ALTER TABLE detail
  ADD CONSTRAINT detail_master_id_fkey
  FOREIGN KEY (master_id)
  REFERENCES master (id);

WITH input_cte AS (
  SELECT 1 AS master_id, 1 AS detail_id
  UNION SELECT 1, 2
  UNION SELECT 2, 1
),
insert_cte AS (
  -- Ignore conflicts, as the master row could already exist
  INSERT INTO master (id)
  SELECT DISTINCT master_id
  FROM input_cte
  ON CONFLICT DO NOTHING
)
INSERT INTO detail (id, master_id)
SELECT detail_id, master_id
FROM input_cte;

SELECT * FROM master;
SELECT * FROM detail;
Edit
Bummer... I just realised that this method will still insert master rows even if the detail rows are stopped by my unique index.
I see two options. Which should I choose?
Check exactly what I can insert before trying to insert. I.e. do the same thing as my unique index does.
Go ahead with the above solution or the BEFORE INSERT trigger concept and then clean up unused master rows afterwards with a separate DELETE query.
Edit 2, in response to Haleemur Ali's comment
I agree with Haleemur, but in my case it's a bit more complicated than the small example I created: the unique key on my detail table can actually contain NULL values. The index on my detail table (project_sequence) looks like this to allow for NULL values:
CREATE UNIQUE INDEX project_sequence_unique_combinations
  ON main.project_sequence (
    project_id,
    controlpoint_type_id,
    COALESCE(drawing_id, 0),
    COALESCE(layer_guid, '00000000-0000-0000-0000-000000000000')
  );
Due to the possible NULL values I cannot use these fields in my primary key, so I have a surrogate integer key. I'm calculating these key values in my CTE so they will always be unique, i.e. they can be inserted into the master table (sequence) even if the detail rows get stopped by the unique index.
To clarify, I've inserted my actual code below. This code works as it should, but it sure would be nice to be able to utilize the new fancy DEFERRED and REFERENCES triggers.
SELECT COALESCE(MAX(id), 0)
INTO _max_sequence_id
FROM main.sequence;

WITH cte AS (
  SELECT
    d.project_id,
    pct.controlpoint_type_id,
    d.id AS drawing_id,
    DENSE_RANK() OVER (ORDER BY d.id, sequence_group_key) + _max_sequence_id + 1 AS new_sequence_id
  FROM main.drawing d
  CROSS JOIN main.project_controlpoint_type pct
  -- This LEFT JOIN along with "ps.project_id IS NULL" is my current
  -- solution, i.e. "option 1" from above.
  LEFT JOIN main.project_sequence ps
    ON ps.project_id = d.project_id
    AND ps.drawing_id = d.id
    AND ps.controlpoint_type_id = pct.controlpoint_type_id
  WHERE d.project_id = _project_id
    AND pct.project_id = _project_id
    AND pct.sequence_level_id = 2
    AND ps.project_id IS NULL
),
insert_sequence_cte AS (
  INSERT INTO main.sequence (id, project_id, last_value)
  SELECT DISTINCT cte.new_sequence_id, cte.project_id, 0
  FROM cte
  ON CONFLICT DO NOTHING
)
INSERT INTO main.project_sequence
  (project_id, controlpoint_type_id, drawing_id, sequence_id)
SELECT
  project_id,
  controlpoint_type_id,
  drawing_id,
  new_sequence_id
FROM cte;
The manual page on WITH queries states that your use case is legitimate and supported:
You can use data-modifying statements (INSERT, UPDATE, or DELETE) in WITH.
and
... data-modifying statements are only allowed in WITH clauses that are attached to the top-level statement. However, normal WITH visibility rules apply, so it is possible to refer to the WITH statement's output from the sub-SELECT.
Further:
If a data-modifying statement in WITH lacks a RETURNING clause, then it forms no temporary table and cannot be referred to in the rest of the query. Such a statement will be executed nonetheless.
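To illustrate those visibility rules against the example tables above (a sketch of mine, not part of the original answer): a data-modifying CTE's rows can only be consumed downstream through its RETURNING list, and with ON CONFLICT DO NOTHING only the rows actually inserted are returned. Chaining the detail INSERT off the CTE would therefore silently skip details for masters that already existed, which is exactly why the example reads from input_cte instead:

WITH insert_cte AS (
  INSERT INTO master (id)
  VALUES (1)
  ON CONFLICT DO NOTHING
  RETURNING id  -- only rows actually inserted are returned
)
INSERT INTO detail (id, master_id)
SELECT 1, id
FROM insert_cte;  -- inserts nothing if master 1 already existed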
If the rule "no master without a detail" is important to your model, let the database enforce it. That frees you from "auto-inserting" master rows, reduces the potential for error, and lets you use conventional methods again.
Take a look at CONSTRAINT TRIGGERS. They allow you to just detect and reject violations of your no-master-without-detail rule while leaving the actual INSERTs to application code.
Your use case would need a CONSTRAINT TRIGGER on your master table that is DEFERRABLE INITIALLY DEFERRED. This allows you to INSERT master, then INSERT detail and still be sure that the transaction only commits if all is consistent.
From the manual linked above:
Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they are said to be deferred.
You'll need two triggers, one handling INSERT/UPDATE on master and another one handling DELETE/UPDATE on detail (create the trigger functions shown further down first, since CREATE TRIGGER requires the function to exist):
CREATE CONSTRAINT TRIGGER trigger_assert_master_has_detail
  AFTER INSERT OR UPDATE OF id ON master
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW
  EXECUTE PROCEDURE assert_master_has_detail();

CREATE CONSTRAINT TRIGGER trigger_assert_no_leftover_master
  AFTER DELETE OR UPDATE OF master_id ON detail
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW
  EXECUTE PROCEDURE assert_no_leftover_master();
Note that UPDATEs will only fire the trigger if they concern the FK/PK column.
The two trigger functions will then check if there are 1-n details for the master:
CREATE FUNCTION assert_master_has_detail() RETURNS trigger
AS $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM detail WHERE master_id = NEW.id) THEN
    RAISE EXCEPTION 'no detail for master_id=%', NEW.id;
  ELSE
    RETURN NEW;
  END IF;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION assert_no_leftover_master() RETURNS trigger
AS $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM detail WHERE master_id = OLD.master_id)
     AND EXISTS (SELECT 1 FROM master WHERE id = OLD.master_id) THEN
    RAISE EXCEPTION 'last detail for master_id=% removed, but master still exists', OLD.master_id;
  ELSE
    RETURN NULL;
  END IF;
END;
$$ LANGUAGE plpgsql;
Example of a violation:
INSERT INTO master
VALUES (1);
-- ERROR: no detail for master_id=1
-- CONTEXT: PL/pgSQL function assert_master_has_detail() line 5 at RAISE
and a legal scenario:
INSERT INTO master
VALUES (1);
INSERT INTO detail
VALUES (10, 1);
-- trigger fired at end of transaction, finds everything is OK
Here's a complete solution as dbfiddle.

How to update column based on column name in postgres?

I've narrowed it down to two possibilities: dynamic SQL and a CASE statement.
However, I've failed with both of these.
I simply don't understand dynamic SQL and how I would use it in my case.
This is my attempt using case statements; one of many failed variations.
SELECT column_name,
       CASE WHEN column_name = 'address'
            THEN (/* update statement gives syntax error within here */)
       END
FROM information_schema.columns
WHERE table_name = 'employees';
As an overview, I'm using Axios to talk to my Node server, which is making calls to my Heroku database using Massivejs.
Maybe this isn't the way to go - so here's my main problem:
I've run into trouble because the values I'm planning on using as column names are sent to my server as strings. The exact call that I've been trying to use is
update employees
set $1 = $2
where employee_id = $3;
Once again, I'm passing in those values using Massive.
I get the error back { error: syntax error at or near "'address'" } because my incoming values are strings. My thought process was that the above statement would allow me to use variables because 'address' is wrapped in quotes.
But alas, my thought process has failed me.
This seems to be close to answering my question, but I can't seem to figure out what to do in my case if using dynamic SQL.
How to use dynamic column names in an UPDATE or SELECT statement in a function?
Thanks in advance.
I will show you a way to do this by using a function.
First we create the employees table:
CREATE TABLE employees(
  id BIGSERIAL PRIMARY KEY,
  column1 TEXT,
  column2 TEXT
);
Next, we create a function that requires three parameters:
columnName - the name of the column that needs to be updated
columnValue - the new value to which the column needs to be updated
employeeId - the id of the employee that will be updated
By using the format function, with %I for the column identifier and %L for the literal value (which guards against SQL injection), we generate the update query as a string and use the EXECUTE command to run it.
Here is the code of the function.
CREATE OR REPLACE FUNCTION update_columns_on_employee(columnName TEXT, columnValue TEXT, employeeId BIGINT)
RETURNS VOID AS
$$
DECLARE
  -- %I quotes the column name as an identifier and %L quotes the values as
  -- literals, protecting against SQL injection through either parameter.
  update_statement TEXT := format('UPDATE employees SET %I = %L WHERE id = %L',
                                  columnName, columnValue, employeeId);
BEGIN
  EXECUTE update_statement;
END;
$$ LANGUAGE plpgsql;
Now, let's insert some data into the employees table:
INSERT INTO employees(column1, column2) VALUES ('column1_start_value','column2_start_value');
So now we have an employee with an id value of 1, whose column1 is 'column1_start_value' and whose column2 is 'column2_start_value'.
If we want to update the value of column2 from 'column2_start_value' to 'column2_new_value' all we have to do is execute the following call
SELECT * FROM update_columns_on_employee('column2','column2_new_value',1);
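A quick check of the result (my addition, assuming the single insert above produced id 1):

SELECT id, column1, column2 FROM employees WHERE id = 1;
-- id | column1             | column2
--  1 | column1_start_value | column2_new_value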

IF... ELSE... two mutually exclusive inserts INTO #temptable

I need to insert either set A or set B of records into a #temptable, depending on a certain condition.
My pseudo-code:
IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;

IF {some-condition}
    SELECT {columns}
    INTO #t1
    FROM {some-big-table}
    WHERE {some-filter}
ELSE
    SELECT {columns}
    INTO #t1
    FROM {some-other-big-table}
    WHERE {some-other-filter}
The two SELECTs above are mutually exclusive (guaranteed by the ELSE). However, the SQL compiler tries to outsmart me and throws the following message:
There is already an object named '#t1' in the database.
My idea of "fixing" this is to create #t1 upfront and then executing a simple INSERT INTO (instead of SELECT... INTO). But I like minimalism and am wondering whether this can be achieved in an easier way i.e. without explicit CREATE TABLE #t1 upfront.
Btw why is it NOT giving me an error on a conditional DROP TABLE in the first line? Just wondering.
You can't have 2 temp tables with the same name in a single SQL batch. One of the MSDN articles says: "If more than one temporary table is created inside a single stored procedure or batch, they must have different names". You can implement this logic with 2 different temp tables, or with a table variable or temp table created outside the IF-ELSE block.
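For illustration, the pre-created variant could look like this (a sketch using the question's placeholders; the column list has to be spelled out once in the CREATE TABLE):

IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;

CREATE TABLE #t1 ({column-definitions});

IF {some-condition}
    INSERT INTO #t1
    SELECT {columns} FROM {some-big-table} WHERE {some-filter};
ELSE
    INSERT INTO #t1
    SELECT {columns} FROM {some-other-big-table} WHERE {some-other-filter};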
Using dynamic SQL we can handle this situation, though as a developer practice it's not ideal; it's best to use a table variable or a pre-created temp table. It works because each EXEC runs as its own batch, so the two SELECT ... INTO #TEMP1 statements are never compiled together. The flip side is that the temp table vanishes as soon as its EXEC batch ends, so it must be consumed inside the same dynamic batch, as below.
IF 1 = 2
BEGIN
    EXEC ('SELECT 1 AS ID INTO #TEMP1
           SELECT * FROM #TEMP1');
END
ELSE
BEGIN
    EXEC ('SELECT 2 AS ID INTO #TEMP1
           SELECT * FROM #TEMP1');
END
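And a sketch of the table-variable alternative recommended above (a single INT column stands in for the real column list):

DECLARE @t1 TABLE (ID INT);

IF 1 = 2
    INSERT INTO @t1 SELECT 1;
ELSE
    INSERT INTO @t1 SELECT 2;

SELECT * FROM @t1;

Unlike a temp table created inside EXEC, the table variable is declared once in the outer batch, so both branches and the final SELECT can all see it.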

Postgresql function not working as expected with INSERT INTO

I have a function to insert data from one table into another:
$BODY$
BEGIN
  INSERT INTO backups.calls2 (uid, queue_id, connected, callerid2)
  SELECT DISTINCT (c.uid), c.queue_id, c.connected, c.callerid2
  FROM public.calls c
  WHERE c.connected IS NOT NULL;
  RETURN;
EXCEPTION
  WHEN unique_violation THEN NULL;
END;
$BODY$
And the structure of the table:
CREATE TABLE backups.nc_calls_id
(
  uid character(30) NOT NULL,
  queue_id integer,
  callerid2 text,
  connected timestamp without time zone,
  id serial NOT NULL,
  CONSTRAINT calls2_pkey PRIMARY KEY (uid)
)
WITH (
  OIDS = FALSE
);
When I first executed this query, everything went OK: 200,000 rows were inserted into the new table, each with a unique id.
But now, when I execute it again, no rows are being inserted.
From the rather minimalist description given (no PostgreSQL version, no CREATE FUNCTION statement showing params etc, no other table structure, no function invocation) I'm guessing that you're attempting to do a merge, where you insert a row only if it doesn't exist by skipping rows if they already exist.
What the above function will do is skip all rows if any row already exists.
You need to either use a loop to do the insert within individual BEGIN ... EXCEPTION blocks (slow) or LOCK the table and do an INSERT INTO ... SELECT ... FROM newtable WHERE NOT EXISTS (SELECT 1 FROM oldtable where oldtable.key = newtable.key).
The INSERT INTO ... SELECT ... WHERE NOT EXISTS method will perform a lot better but will fail if more than one runs concurrently or if anything else inserts into the destination table at the same time. LOCKing the destination table before running it will make sure it's safe.
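For the tables in this question, the locked single-pass version might look like this (a sketch of mine, assuming uid is the key to deduplicate on; the LOCK is only effective inside a transaction):

BEGIN;
-- Block concurrent writers so the NOT EXISTS check cannot race.
LOCK TABLE backups.calls2 IN SHARE ROW EXCLUSIVE MODE;

INSERT INTO backups.calls2 (uid, queue_id, connected, callerid2)
SELECT DISTINCT c.uid, c.queue_id, c.connected, c.callerid2
FROM public.calls c
WHERE c.connected IS NOT NULL
  AND NOT EXISTS (
    SELECT 1 FROM backups.calls2 b WHERE b.uid = c.uid
  );

COMMIT;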
The PL/PgSQL looping BEGIN ... EXCEPTION method sounds nice and safe at first glance. Then you think about what happens when you run two of them at once. One will insert some keys first, one will insert other keys first, so they have a split of the values between them. That's OK, together they make up the full set. But what if only one of them commits and the other fails for some reason? You'll have an interesting sparsely inserted result. For that reason it's probably best to lock the destination table if using this approach too ... in which case you might as well use the vastly more efficient single pass INSERT with subquery-based uniqueness violation check.