I have a PostgreSQL database in which I want to enable Row-Level Security on one of the tables.
Everything works fine except for one thing: I want an error to be thrown when a user tries to perform an UPDATE on a record that he doesn't have privileges on.
According to the docs:
check_expression: Any SQL conditional expression (returning boolean).
The conditional expression cannot contain any aggregate or window
functions. This expression will be used in INSERT and UPDATE queries
against the table if row level security is enabled. Only rows for
which the expression evaluates to true will be allowed. An error will
be thrown if the expression evaluates to false or null for any of the
records inserted or any of the records that result from the update.
Note that the check_expression is evaluated against the proposed new
contents of the row, not the original contents.
So I tried the following:
CREATE POLICY update_policy ON my_table FOR UPDATE TO editors
USING (has_edit_privilege(user_name))
WITH CHECK (has_edit_privilege(user_name));
I also have another policy for SELECT:
CREATE POLICY select_policy ON my_table FOR SELECT TO editors
USING (has_select_privilege(user_name));
According to the docs, this should create a policy that prevents anyone in the editors role from updating records of my_table they have no edit privilege on, and should throw an error when such an UPDATE is attempted. The rows are indeed protected, but no error is thrown.
What's my problem?
Please help.
First, let me explain how row level security works when reading from the table:
You do not even have to define a policy – if there is no policy for a user on a table with row level security enabled, the default is that the user sees nothing.
No error will be thrown when reading from a table.
If you want an error to be thrown, you could write a function
CREATE FUNCTION test_row(my_table) RETURNS boolean
   LANGUAGE plpgsql COST 10000 AS
$$BEGIN
   IF /* the user is not allowed */
   THEN
      RAISE EXCEPTION ...;
   END IF;
   RETURN TRUE;
END;$$;
Then use that function in your policy:
CREATE POLICY update_policy ON my_table FOR UPDATE TO editors
USING (test_row(my_table));
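Filled in with the has_edit_privilege function from your question, the whole thing could look like this (a minimal sketch, assuming that user_name is a column of my_table; the error message is made up):

CREATE FUNCTION test_row(r my_table) RETURNS boolean
   LANGUAGE plpgsql COST 10000 AS
$$BEGIN
   -- raise an error instead of silently filtering the row
   IF NOT has_edit_privilege(r.user_name) THEN
      RAISE EXCEPTION 'no privilege to modify this row';
   END IF;
   RETURN TRUE;
END;$$;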
I used COST 10000 for the function to tell PostgreSQL to test that condition after all other conditions, if possible.
This is not a fool-proof technique, but it will work for simple queries. What could happen in the general case is that some conditions get checked after the condition from the policy, which could lead to errors for rows that would never even be returned by the query.
But I think it is the best you can get when abusing the concept of row level security.
Now let me explain about writing to the table:
Any attempt to write a row to the table that does not satisfy the WITH CHECK expression will cause an error, as documented.
Now let's put it together:
Assuming that you define the policies as in your question, and that the privilege functions return false for the current user:
Any INSERT will cause an error.
Any SELECT will return an empty result.
Any UPDATE or DELETE will be successful, but affect no row. This is because these operations have to read (scan) the table before they modify data, and that scan will return no rows like in the SELECT case. Since no rows are affected by the data modification, no error is thrown.
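To illustrate (a sketch; the error message is what PostgreSQL emits for a WITH CHECK violation):

SELECT * FROM my_table;             -- 0 rows, no error
UPDATE my_table SET ... ;           -- UPDATE 0, no error
INSERT INTO my_table VALUES (...);  -- ERROR: new row violates row-level
                                    -- security policy for table "my_table"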
Related
I would love to be able to validate objects representing table rows using the database's existing constraints (triggers that raise exceptions and checks) without actually inserting them into the database.
Is there currently a way to do this in Postgres? At least for BEFORE INSERT triggers and CHECK constraints – I assume it makes no sense with AFTER INSERT triggers.
The easiest way I can think of right now would be to:
Lock the table
Insert a new row
If an exception is raised, report it to the API / else DELETE the row and call it valid
Unlock
But I can see several issues with this.
A simpler way is to insert within a transaction and not commit:
BEGIN;
INSERT INTO tbl(...) VALUES (...);
-- see effects ...
ROLLBACK;
No need for additional locking. The row is never visible to any other transaction with the default transaction isolation level READ COMMITTED. (You might be stalling concurrent writes that conflict with the tested row.)
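To actually "see effects", a RETURNING clause is handy – it shows the row as it would be stored, after default values and BEFORE triggers have been applied (tbl and its columns are placeholders):

BEGIN;
INSERT INTO tbl(...) VALUES (...) RETURNING *;  -- inspect the complete row
ROLLBACK;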
Notable side-effect: Sequences of serial or IDENTITY columns are advanced even if the INSERT is never committed. But gaps in sequential numbers are to be expected anyway and nothing to worry about.
Be wary of triggers with side-effects. All "transactional" SQL effects are rolled back, even most DDL commands. But some special operations (like advancing sequences) are never rolled back.
Also, DEFERRED constraints do not kick in. The manual:
DEFERRED constraints are not checked until transaction commit.
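If you want deferred constraints to be checked anyway, you can force the check inside the transaction before rolling back:

BEGIN;
INSERT INTO tbl(...) VALUES (...);
SET CONSTRAINTS ALL IMMEDIATE;  -- checks all pending deferred constraints now
ROLLBACK;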
If you need this a lot, work with a copy of your table, or even your database.
Strictly speaking, while any trigger / constraint / concurrent event is allowed, there is no other way to "validate objects" than to insert them into the actual target table in the actual target database at the actual point in time. Triggers, constraints, even default values, can interact with the current state of the whole DB. The more possibilities are ruled out and requirements are reduced, the more options we might have to emulate the test.
CREATE FUNCTION validate_function()
  RETURNS trigger LANGUAGE plpgsql
AS $function$
DECLARE
   valid_flag boolean := true;
BEGIN
   -- validation code that may set valid_flag to false
   IF NOT valid_flag THEN
      RAISE EXCEPTION 'This record is not valid: id %', NEW.id
         USING HINT = 'Please enter a valid record';
      RETURN NULL;  -- never reached: RAISE EXCEPTION aborts the function
   ELSE
      RETURN NEW;
   END IF;
END;
$function$;
CREATE TRIGGER validate_rec BEFORE INSERT OR UPDATE ON some_tbl
FOR EACH ROW EXECUTE FUNCTION validate_function();
With this function and trigger you validate inside the trigger function. If the new record fails validation, your validation code sets valid_flag to false, which is then used to raise the exception. The RETURN NULL; is redundant: it can never be reached, because RAISE EXCEPTION aborts the function, but if it were reached it would likewise abort the INSERT or UPDATE. If the record is valid, you RETURN NEW and the INSERT/UPDATE completes.
On INSERT or UPDATE of a table, I have to check whether a column value already exists for the same HotelID and a different RoomNo in the same table. I'm thinking that an INSTEAD OF trigger on the table would be a good option, but I have read that it's a bad idea to insert into or update the table the trigger executes on from inside the trigger, and that you should create the trigger on a view instead (which raises more questions for me).
Is it ok to create a trigger like this? Is there a better option?
CREATE TRIGGER dbo.tgr_tblInterfaceRoomMappingUpsert
ON dbo.tblInterfaceRoomMapping
INSTEAD OF INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @txtRoomNo nvarchar(20);

    SELECT @txtRoomNo = Sonifi_RoomNo
    FROM dbo.tblInterfaceRoomMapping r
    INNER JOIN INSERTED i
        ON  r.iHotelID = i.iHotelID
        AND r.Sonifi_RoomNo = i.Sonifi_RoomNo
        AND r.txtRoomNo <> i.txtRoomNo;

    IF @txtRoomNo IS NULL
    BEGIN
        -- Insert/update the record
    END
    ELSE
    BEGIN
        -- Raise error
    END
END
GO
So it sounds like you only want 1 row per combo of HotelID and Sonifi_RoomNo.
CREATE UNIQUE INDEX UQ_dbo_tblInterfaceRoomMapping
    ON dbo.tblInterfaceRoomMapping (iHotelID, Sonifi_RoomNo);
Now if you try and put a second row with the same values, it will bark at you.
It's (usually) not okay to create a trigger like that.
Your trigger assumes a single row update or insert will only ever occur - is that guaranteed?
What will be the value of #txtRoomNo if multiple rows are inserted or updated in the same batch?
Eg, if an update is performed against the table resulting in 1 row with correct data and 1 row with incorrect data, how do you think your trigger would cope in that situation? Remember triggers fire once per insert/update, not per row.
Depending on your requirements you could keep the INSTEAD OF trigger concept, but I would suggest a separate trigger for inserts and for updates.
In each you can then perform the INSERT / UPDATE with a WHERE NOT EXISTS clause that only lets valid rows through, silently ignoring invalid inserts or updates, as sketched below.
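A minimal sketch of the INSERT half, using the column names from the question (the trigger name is made up, and the UPDATE trigger would need analogous logic):

CREATE TRIGGER dbo.tgr_tblInterfaceRoomMappingInsert
ON dbo.tblInterfaceRoomMapping
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- insert only rows that do not conflict with an existing mapping
    INSERT INTO dbo.tblInterfaceRoomMapping (iHotelID, Sonifi_RoomNo, txtRoomNo)
    SELECT i.iHotelID, i.Sonifi_RoomNo, i.txtRoomNo
    FROM INSERTED i
    WHERE NOT EXISTS (
        SELECT 1
        FROM dbo.tblInterfaceRoomMapping r
        WHERE r.iHotelID = i.iHotelID
          AND r.Sonifi_RoomNo = i.Sonifi_RoomNo
    );
END
GO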
I would avoid raising an error in the trigger; if you need to handle bad data, you could instead insert it into some logging table using the reverse WHERE EXISTS logic and handle it separately.
Ultimately though, it would be best for the application to check if the roomNo is already used.
I'm handling ownerships as "Project -> Ownership -> User" relations, and the following function gets the project owners' names as text:
CREATE FUNCTION owners_as_text(projects) RETURNS TEXT AS $$
SELECT trim(both concat_ws(' ', screen_name, first_name, last_name)) FROM users
INNER JOIN ownerships ON users.id = ownerships.user_id
WHERE deleted_at IS NULL AND ownable_id = $1.id AND ownable_type = 'Project'
$$ LANGUAGE SQL IMMUTABLE SET search_path = public, pg_temp;
This is then used to build an index which ignores accents:
CREATE INDEX index_projects_on_owners_as_text
    ON projects
    USING GIN (immutable_unaccent(owners_as_text(projects)) gin_trgm_ops);
When the project is updated, this index is updated as well. However, when e.g. an owner name changes, this index won't be touched, right?
How can I force the index to be updated on a regular basis to catch up in that case?
(REINDEX is not an option since it's locking and will cause deadlocks should write actions happen at the same time.)
The premise is mistaken, because the index is built on a function that is in fact not immutable. From the documentation:
IMMUTABLE indicates that the function cannot modify the database and always returns the same result when given the same argument values; that is, it does not do database lookups or otherwise use information not directly present in its argument list.
The problems you are now facing arise from incorrect assumptions.
Since you lied to PostgreSQL by saying that the function was IMMUTABLE when it is actually STABLE, it is unsurprising that the index becomes corrupted when the database changes.
The solution is not to create such an index.
It would be better not to use that function, but a view that has the expression you want to search for as a column. Then a query that uses the view can be optimized, and an index on immutable_unaccent(btrim(concat_ws(' ', screen_name, first_name, last_name))) can be used.
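A sketch of that approach, reusing immutable_unaccent from the question (the view name, helper function name, and index name are all made up; note that concat_ws is only marked STABLE, so it has to be wrapped in an IMMUTABLE helper to be usable in an index expression – which is legitimate here, since the helper depends only on its argument):

CREATE FUNCTION user_name_text(u users) RETURNS text
   LANGUAGE sql IMMUTABLE AS
$$SELECT immutable_unaccent(btrim(concat_ws(' ', u.screen_name, u.first_name, u.last_name)))$$;

CREATE VIEW projects_with_owners AS
SELECT p.*, user_name_text(u) AS owner_text
FROM   projects   p
JOIN   ownerships o ON o.ownable_id = p.id AND o.ownable_type = 'Project'
JOIN   users      u ON u.id = o.user_id
WHERE  u.deleted_at IS NULL;

CREATE INDEX index_users_on_name_text ON users
   USING GIN (user_name_text(users) gin_trgm_ops);

Since the index is now on users, it is kept up to date whenever an owner's name changes, and a trigram search on owner_text through the view can use it.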
It is probably OK to cheat about unaccent's volatility...
Using Postgres 9.3:
I am attempting to automatically populate a table when an insert is performed on another table. This seems like a good use for rules, but after adding the rule to the first table, I am no longer able to perform inserts into the second table using the writable CTE. Here is an example:
CREATE TABLE foo (
id INT PRIMARY KEY
);
CREATE TABLE bar (
id INT PRIMARY KEY REFERENCES foo
);
CREATE RULE insertFoo AS ON INSERT TO foo DO INSERT INTO bar VALUES (NEW.id);
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a
When this is run, I get the error
"ERROR: WITH cannot be used in a query that is rewritten by rules
into multiple queries".
I have searched for that error string, but am only able to find links to the source code. I know that I can perform the above using row-level triggers instead, but it seems like I should be able to do this at the statement level. Why can I not use the writable CTE, when queries like this can (in this case) be easily re-written as:
INSERT INTO foo SELECT * FROM (VALUES (1), (2)) a
Does anyone know of another way that would accomplish what I am attempting to do other than 1) using rules, which prevents the use of "with" queries, or 2) using row-level triggers? Thanks,
TL;DR: use triggers, not rules.
Generally speaking, prefer triggers over rules, unless rules are absolutely necessary. (Which, in practice, they never are.)
Using rules introduces heaps of problems which will needlessly complicate your life down the road. You've run into one here. Another (major) one is that the reported number of affected rows will correspond to that of the very last query: if you're relying on FOUND somewhere and your query incorrectly reports that no rows were affected, you'll be in for painful bugs.
Moreover, there's occasional talk of deprecating Postgres rules outright:
http://postgresql.nabble.com/Deprecating-RULES-td5727689.html
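For the example in the question, the trigger-based equivalent of the insertFoo rule could look like this (a sketch; foo_to_bar is a name I made up):

CREATE FUNCTION foo_to_bar() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   INSERT INTO bar VALUES (NEW.id);
   RETURN NEW;
END;$$;

CREATE TRIGGER insert_foo
   AFTER INSERT ON foo
   FOR EACH ROW EXECUTE PROCEDURE foo_to_bar();

-- the writable CTE from the question now works:
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a;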
Like the other answer, I definitely recommend using INSTEAD OF triggers rather than RULEs.
However, if for some reason you don't want to change existing VIEW RULEs and still want to use WITH, you can do so by wrapping the INSERT in a stored procedure:
create function insert_foo(int) returns void as $$
    insert into foo values ($1);
$$ language sql;
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
SELECT insert_foo(a.column1) from a;
This could be useful when using some legacy db through some system that wraps statements with CTEs.
I have a trigger function for a table test which has the following code snippet:
IF TG_OP = 'UPDATE' THEN
   IF OLD.locked > 0 AND
      (OLD.org_id <> NEW.org_id OR
       OLD.document_code <> NEW.document_code OR
       -- other columns ...
      )
   THEN
      RAISE EXCEPTION 'Message';
   -- more code
So I am statically comparing each column's new value with its previous value to ensure integrity. Now every time my business logic changes and I add new columns to that table, I have to modify the trigger as well. I thought it would be better if I could somehow check all the columns of the table dynamically, without typing their names explicitly.
How can it be done?
From the 9.0 beta2 documentation about the WHEN clause in triggers – you might be able to use the same expression in earlier versions within the trigger body:
OLD.* IS DISTINCT FROM NEW.*
or possibly (from 8.2 release notes)
IF row(new.*) IS DISTINCT FROM row(old.*)
Take a look at the information_schema, there is a view "columns". Execute a query to get all current columnnames from the table that fired the trigger:
SELECT column_name
FROM   information_schema.columns
WHERE  table_schema = TG_TABLE_SCHEMA
AND    table_name = TG_TABLE_NAME;
Loop through the result and there you go!
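A sketch of what that loop could look like inside a PL/pgSQL trigger function, reusing the locked column from the question (untested; the function name and error message are made up):

CREATE FUNCTION check_locked_row() RETURNS trigger
   LANGUAGE plpgsql AS
$$DECLARE
   col  text;
   diff boolean;
BEGIN
   IF OLD.locked > 0 THEN
      FOR col IN
         SELECT column_name
         FROM   information_schema.columns
         WHERE  table_schema = TG_TABLE_SCHEMA
         AND    table_name = TG_TABLE_NAME
      LOOP
         -- compare OLD and NEW per column via dynamic SQL
         EXECUTE format('SELECT ($1).%1$I IS DISTINCT FROM ($2).%1$I', col)
         INTO diff
         USING OLD, NEW;

         IF diff THEN
            RAISE EXCEPTION 'column % must not change while the row is locked', col;
         END IF;
      END LOOP;
   END IF;
   RETURN NEW;
END;$$;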
More information can be found in the fine manual.
In Postgres 9.0 or later add a WHEN clause to your trigger definition (CREATE TRIGGER statement):
CREATE TRIGGER foo
BEFORE UPDATE ON test
FOR EACH ROW
WHEN (OLD IS DISTINCT FROM NEW)   -- parentheses required!
EXECUTE PROCEDURE ...;
Only possible for triggers BEFORE / AFTER UPDATE, where both OLD and NEW are defined. You'd get an exception trying to use this WHEN clause with INSERT or DELETE triggers.
And radically simplify the trigger function accordingly:
...
IF OLD.locked > 0 THEN
RAISE EXCEPTION 'Message';
END IF;
...
No need to test IF TG_OP='UPDATE' ... since this trigger only works for UPDATE anyway.
Or move that condition in the WHEN clause, too:
CREATE TRIGGER foo
BEFORE UPDATE ON test
FOR EACH ROW
WHEN (OLD.locked > 0
      AND OLD IS DISTINCT FROM NEW)
EXECUTE PROCEDURE ...;
Leaving only an unconditional RAISE EXCEPTION in your trigger function, which is only called when needed to begin with.
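Putting it all together for the table test from the question (raise_message is a name I made up):

CREATE FUNCTION raise_message() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   RAISE EXCEPTION 'Message';
END;$$;

CREATE TRIGGER foo
   BEFORE UPDATE ON test
   FOR EACH ROW
   WHEN (OLD.locked > 0 AND OLD IS DISTINCT FROM NEW)
   EXECUTE PROCEDURE raise_message();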
Read the fine print:
In a BEFORE trigger, the WHEN condition is evaluated just before the
function is or would be executed, so using WHEN is not materially
different from testing the same condition at the beginning of the
trigger function. Note in particular that the NEW row seen by the
condition is the current value, as possibly modified by earlier
triggers. Also, a BEFORE trigger's WHEN condition is not allowed to
examine the system columns of the NEW row (such as oid), because those
won't have been set yet.
In an AFTER trigger, the WHEN condition is evaluated just after the
row update occurs, and it determines whether an event is queued to
fire the trigger at the end of statement. So when an AFTER trigger's
WHEN condition does not return true, it is not necessary to queue an
event nor to re-fetch the row at end of statement. This can result in
significant speedups in statements that modify many rows, if the
trigger only needs to be fired for a few of the rows.
Related:
Fire trigger on update of columnA or ColumnB or ColumnC
To also address the question title
Is it possible to dynamically loop through a table's columns?
Yes. Examples:
Handle result when dynamic SQL is in a loop
Removing all columns with given name
Iteration over RECORD variable inside trigger
Use PL/Perl or PL/Python. They are much better suited for such tasks. Much better.
You can also install hstore-new and use its row->hstore semantics, but that's definitely not a good idea when using normal datatypes.