What is the scope of Postgres policies?

I am trying to wrap my head around row level security in Postgres. Unfortunately the documentation is not very verbose on the matter. My problem is the following:
I have two tables: locations and locations_owners. There is a TRIGGER set on INSERT for locations, which will automatically add a new row to the locations_owners table including the request.jwt.claim.sub variable.
This works all just fine, however when I want to create a policy for DELETE like this:
CREATE POLICY location_delete ON eventzimmer.locations FOR DELETE TO organizer USING (
    (SELECT EXISTS (SELECT name
                    FROM protected.locations_owners AS owners
                    WHERE owners.name = name
                      AND owners.sub = (SELECT current_setting('request.jwt.claim.sub', true))))
);
It will always evaluate to true, no matter the actual content. I know that I can call a custom procedure with SELECT here, however I ended up with the following questions:
What is the scope of a policy? Can I access tables? Can I access procedures? The documentation says "any SQL conditional expression", so SELECT EXISTS should be fine.
How are the column names of the rows mapped to the policy? The examples just magically use the column names (which I adopted by using the name variable), but I have not found any documentation about what this actually does.
What is the magic user_name variable? Where does it come from? I believe it is the current role executing the query, but how can I know?
Why is there no WITH CHECK expression available for DELETE? If I understand correctly, WITH CHECK rejects any row that violates the condition, which is the behaviour I would prefer (because otherwise PostgREST will always return 204).
I am a little confused by the astonishing lack of information in the (otherwise) very good documentation of PostgreSQL. Where is this information? How can I find it?
For the sake of completeness I have also attached the column definitions below:
CREATE TABLE eventzimmer.locations (
name varchar PRIMARY KEY NOT NULL,
latitude float NOT NULL,
longitude float NOT NULL
);
CREATE TABLE IF NOT EXISTS protected.locations_owners (
name varchar NOT NULL REFERENCES eventzimmer.locations(name) ON DELETE CASCADE,
sub varchar NOT NULL
);

Many of the questions will become clear once you understand how row level security is implemented: the conditions in the policies will automatically be added to the query, just as if you added another WHERE condition.
Use EXPLAIN to see the query plan, and you will see the policy's conditions in there.
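For example, running something like this as the organizer role will show the policy's USING condition as a filter in the plan (the WHERE clause here is only for illustration):
EXPLAIN (COSTS OFF)
DELETE FROM eventzimmer.locations
WHERE name = 'some location';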
So you can use any columns from the table on which the policy is defined.
Essentially, you can use anything in a policy definition that you could use in a WHERE condition: function calls, subqueries and so on.
You can also qualify the column name with the table name if that is required for disambiguation. That is exactly what goes wrong in the policy from your example: the unqualified name is interpreted as owners.name, so the test always succeeds. To fix the policy, use locations.name instead of name.
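A corrected version of the policy from the question would therefore look something like this (a sketch, not tested; the outer SELECT wrappers are not needed, and the only functional change is the qualified locations.name):
CREATE POLICY location_delete ON eventzimmer.locations FOR DELETE TO organizer USING (
    EXISTS (SELECT 1
            FROM protected.locations_owners AS owners
            WHERE owners.name = locations.name
              AND owners.sub = current_setting('request.jwt.claim.sub', true))
);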
There is no magic user_name variable, and I don't know where you get that from. There is, however, the current_user function, which is always available and can of course also be used in a policy definition.
WITH CHECK is a condition that the new row added by INSERT or UPDATE must fulfill. Since DELETE doesn't add any data, WITH CHECK doesn't apply to it.
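For comparison, a sketch of where WITH CHECK does apply: an INSERT policy on the owners table that only accepts rows whose sub matches the caller's JWT claim (whether you want such a policy at all depends on your setup):
CREATE POLICY owners_insert ON protected.locations_owners FOR INSERT TO organizer
    WITH CHECK (sub = current_setting('request.jwt.claim.sub', true));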

Related

Delete row despite missing select right on a column

In this example, the second column should not be visible to a member (role) of the group 'user_group', because this column is only required internally to regulate the row level security. However, records can only be deleted if this column is also visible. How can you get around that?
Options that come to mind would be:
just make the second column visible (i.e. selectable), which would actually be completely superfluous and I want to hide internals as much as possible
write a function that is called with elevated rights (security definer), which I want even less.
Are there any other options?
(and especially with deletions I want to use nice things like 'ON DELETE SET NULL' for foreign keys in other tables, instead of having to unnecessarily program triggers for them)
create table test (
internal_id serial primary key,
user_id int not null default session_user_id(),
info text default null
);
grant
select(internal_id, info),
insert(info),
update(info),
delete
on test to user_group;
create policy test_policy on test for all to public using (
    user_id = session_user_id());
RLS just implicitly adds unavoidable WHERE clauses to all queries; it doesn't mess with the roles under which code is evaluated. From the docs:
"Since policy expressions are added to the user's query directly, they will be run with the rights of the user running the overall query. Therefore, users who are using a given policy must be able to access any tables or functions referenced in the expression or they will simply receive a permission denied error when attempting to query the table that has row-level security enabled."
This feature is orthogonal to the granted column permissions. So the public role must be able to view the user_id column, otherwise evaluating user_id = session_user_id() leads to an error. There's really no way around making the column visible.
"completely superfluous and I want to hide internally as much as possible"
The solution for that would be a VIEW that doesn't include the column. It will even be updatable!
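A minimal sketch (the view name is made up); since the view selects from a single table, it is automatically updatable, and user_group never sees user_id:
create view test_visible as
    select internal_id, info
    from test;

grant select, insert, update, delete on test_visible to user_group;
Keep in mind that by default the view accesses test with the privileges of the view's owner, so the owner must still be subject to the row level security policy (or, on newer PostgreSQL versions, the view can be created with security_invoker).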

Way to migrate a create table with sequence from postgres to DB2

I need to migrate a DDL from Postgres to DB2, but I need that it works the same as in Postgres. There is a table that generates values from a sequence, but the values can also be explicitly given.
Postgres
create sequence hist_id_seq;
create table benchmarksql.history (
hist_id integer not null default nextval('hist_id_seq') primary key,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
);
(Note the nextval() call in the hist_id column's DEFAULT, which supplies the value of the primary key.)
The business logic inserts into the table by explicitly providing an ID, and in other cases, it leaves the database to choose the number.
If I change this in DB2 to GENERATED ALWAYS, it will throw errors because some values are provided explicitly. On the other hand, if I create the table with GENERATED BY DEFAULT, DB2 will throw an error (SQL0803N) when a generated value collides with one that was inserted explicitly, because the "internal sequence" does not take the already inserted values into account, and it does not retry with the next value.
And, I do not want to restart the sequence each time a provided ID was inserted.
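For reference, the GENERATED BY DEFAULT variant described above looks roughly like this (a sketch with the column list shortened):
create table benchmarksql.history (
    hist_id integer not null generated by default as identity,
    h_data varchar(24),
    primary key (hist_id)
);
-- explicit hist_id values are accepted, but the identity counter is not advanced,
-- so a later generated value can collide and raise SQL0803N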
This is the problem in BenchmarkSQL when trying to port it to DB2: https://sourceforge.net/projects/benchmarksql/ (File sqlTableCreates)
How can I implement the same database logic in DB2 as it does in Postgres (and apparently in Oracle)?
You're operating under a misconception: that sources external to the db get to dictate its internal keys. Ideally/conceptually, autogenerated ids will never need to be seen outside of the db, as conceptually there should be unique natural keys for export or reporting. Still, there are times when applications will need to manage some ids, often when setting up related entities (eg, JPA seems to want to work this way).
However, if you add an id value that you generated from a different source, the db won't be able to manage that column efficiently. How could it? Attempting to do so would do one of the following:
Be unsafe in the face of multiple clients (attempt to add duplicate keys)
Serialize access to the table (for a potentially slow query, too)
(This usually shows up when people attempt something like: SELECT MAX(id) + 1, which would require locking the entire table for thread safety, likely including statements that don't even touch that column. If you try to find any "first-unused" id - trying to fill gaps - this gets more complicated and problematic)
Neither is ideal, so it's best not to have the problem in the first place. This is usually done by having id columns be autogenerated, but (as pointed out earlier) there are situations where we may need to know what the id will be before we insert the row into the table. Fortunately, there's a standard SQL object for this: SEQUENCE. It provides a db-managed, thread-safe, fast way to get ids. It appears that in PostgreSQL you can use sequences in the DEFAULT clause for a column, but DB2 doesn't allow that. If you don't want to specify an id every time (it should be autogenerated some of the time), you'll need another way; this is the perfect place for a BEFORE INSERT trigger:
CREATE TRIGGER Add_Generated_Id
NO CASCADE BEFORE INSERT ON benchmarksql.history
REFERENCING NEW AS Incoming_Entity
FOR EACH ROW
WHEN (Incoming_Entity.hist_id IS NULL)
SET Incoming_Entity.hist_id = NEXT VALUE FOR hist_id_seq
(something like this - not tested. You didn't specify where in the project this would belong)
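Note that the sequence itself also has to be created on the DB2 side; a minimal definition would be something like:
CREATE SEQUENCE hist_id_seq AS INTEGER START WITH 1 INCREMENT BY 1 NO CYCLE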
So, if you then add a row with something like:
INSERT INTO benchmarksql.history (hist_id, h_data) VALUES(null, 'a')
or
INSERT INTO benchmarksql.history (h_data) VALUES('a')
an id will be generated and attached automatically. Note that ALL ids added to the table must come from the given sequence (as @mustaccio pointed out, this appears to be true even in PostgreSQL), or any UNIQUE CONSTRAINT on the column will start throwing duplicate-key errors. So any time your application needs an id before inserting a row into the table, you'll need some form of
SELECT NEXT VALUE FOR hist_id_seq
FROM sysibm.sysdummy1
... and that's it, pretty much. This is completely thread and concurrency safe, will not maintain/require long-term locks, nor require serialized access to the table.

How does one call a function from a postgresql rule that has access to NEW and OLD?

I'm new to postgresql (and therefore rules) and I've looked around, but can't really find an example calling a 'global' function.
I am using a normalized database where rows will be flagged as deleted rather than actually deleted. However, I would like to retain the DELETE FROM... functionality for the end user, by using an instead-of-delete rule to update the table's deleted_time column. Each table should, therefore, be able to use a common function, but I am not sure how this would be called in this context, or how it would have access to NEW and OLD?
CREATE OR REPLACE RULE rule_tablename_delete AS ON DELETE
TO tablename DO INSTEAD (
-- call a function here to update the table's delete_time column
);
Is this even the correct approach? (I note that INSTEAD OF triggers are restricted to views only)
Just use an UPDATE statement:
create rule rule_tablename_delete as
on delete to tablename
do instead
update tablename
set delete_time = current_timestamp
where id = old.id
and delete_time is null;
Assuming that the id column is the primary key of that table.
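With the rule in place, an ordinary delete issued by the client is rewritten into that update, e.g.:
delete from tablename where id = 42;
-- runs the UPDATE from the rule instead and sets delete_time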
Some more examples are in the manual: http://www.postgresql.org/docs/9.1/static/rules-update.html

How to get the key fields for a table in plpgsql function?

I need to make a function that would be triggered after every UPDATE and INSERT operation and would check the key fields of the table that the operation is performed on vs some conditions.
The function (and the trigger) needs to be a universal one; it shouldn't have the table name / field names hardcoded.
I got stuck on the part where I need to access the table name and its schema part - check what fields are part of the PRIMARY KEY.
After getting the primary key info as already posted in the first answer, you can check the code in http://github.com/fgp/pg_record_inspect to get record field values dynamically in PL/pgSQL.
Have a look at How do I get the primary key(s) of a table from Postgres via plpgsql? The answer in that one should be able to help you.
Note that you can't dynamically refer to record fields by name in PL/pgSQL; it's too strongly-typed a language for that. You'll have more luck with PL/Perl, in which you can access a hash of the columns and use regular Perl accessors to check them. (PL/Python would also work, but sadly that's an untrusted language only. PL/Tcl works too.)
In 8.4 you can use EXECUTE 'something' USING NEW, which in some cases is able to do the job.
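To sketch the catalog-lookup part: a generic trigger function can read the primary key columns of the table it fired on from pg_index and pg_attribute. A rough sketch (the function name is made up, and it only reports the key columns):
CREATE OR REPLACE FUNCTION check_pk_generic() RETURNS trigger AS $$
DECLARE
    pk_cols text[];
BEGIN
    -- collect the primary key column names of the table the trigger fired on
    SELECT array_agg(a.attname)
      INTO pk_cols
      FROM pg_index i
      JOIN pg_attribute a ON a.attrelid = i.indrelid
                         AND a.attnum = ANY (i.indkey)
     WHERE i.indrelid = TG_RELID
       AND i.indisprimary;

    RAISE NOTICE 'primary key of %.%: %', TG_TABLE_SCHEMA, TG_TABLE_NAME, pk_cols;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;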

auto-increment column in PostgreSQL on the fly?

I was wondering if it is possible to add an auto-increment integer field on the fly, i.e. without defining it in a CREATE TABLE statement?
For example, I have a statement:
SELECT 1 AS id, t.type FROM t;
Can I change this to something like
SELECT some_nextval_magic AS id, t.type FROM t;
I need to create the auto-increment field on the fly in the some_nextval_magic part because the result relation is a temporary one during the construction of a bigger SQL statement. And the value of id field is not really important as long as it is unique.
I searched around here, and the answers to related questions (e.g. PostgreSQL Autoincrement) mostly involve specifying SERIAL or using nextval in CREATE TABLE. But I don't necessarily want to use CREATE TABLE or VIEW (unless I have to). There is also some discussion of generate_series(), but I am not sure whether it applies here.
-- Update --
My motivation is illustrated in this GIS.SE answer regarding the PostGIS extension. The original query was:
CREATE VIEW buffer40units AS
SELECT
g.path[1] as gid,
g.geom::geometry(Polygon, 31492) as geom
FROM
(SELECT
(ST_Dump(ST_UNION(ST_Buffer(geom, 40)))).*
FROM point
) as g;
where g.path[1] as gid is an id field "required for visualization in QGIS". I believe the only requirement is that it is integer and unique across the table. I encountered some errors when running the above query when the g.path[] array is empty.
While trying to fix the array in the above query, this thought came to me:
Since the gid value does not matter anyways, is there an auto-increment function that can be used here instead?
If you wish to have an id field that assigns a unique integer to each row in the output, then use the row_number() window function:
select
    row_number() over () as id,
    t.type
from t;
The generated id will only be unique within each execution of the query. Multiple executions will not generate new unique values for id.
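Applied to the view from the question, that would look roughly like this (a sketch; the cast is there because row_number() returns bigint):
CREATE VIEW buffer40units AS
SELECT
    (row_number() over ())::int as gid,
    g.geom::geometry(Polygon, 31492) as geom
FROM
    (SELECT (ST_Dump(ST_UNION(ST_Buffer(geom, 40)))).*
     FROM point
    ) as g;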