Problems with create policy for update - PostgreSQL

I want to use row level security to create a policy for UPDATE, so that tb.idx can never be updated to a value less than 2 if cls = 'great2':
create table tb (
    idx integer,
    cls text
);
create role user1;
grant all on tb to user1;
......
create policy up_p on tb for update
using (true)
with check (idx > 2 and cls = 'great2');
Test as user1:
set role user1;
select * from tb;
update tb set idx=1, cls='great2';
There are two problems:
1. select * from tb shows an empty table.
2. The update with idx=1, cls='great2' is allowed.

1. It shows an empty table.
Quote from the manual:
If row-level security is enabled for a table, but no applicable policies exist, a “default deny” policy is assumed, so that no rows will be visible or updatable.
So you need to create a policy that allows selecting:
create policy tb_select on tb
for select
using (true);
2. It allows the update with idx=1, cls='great2'.
Quote from the manual:
Existing table rows are checked against the expression specified in USING, while new rows that would be created via INSERT or UPDATE are checked against the expression specified in WITH CHECK
As you created the policy with using (true), all rows can be updated.
So you need:
create policy up_p on tb
for update
using (idx > 2 and cls='great2');
Assuming there is a row with (1, 'great2'), the following update would not update anything:
update tb
set cls = 'great2'
where idx = 1;
Note, that for the policy to actually be active you also need:
alter table tb enable row level security;
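Putting the pieces together, a minimal end-to-end sketch (same objects as above; the sample row and the error message are illustrative assumptions, not output from the original poster):

create table tb (idx integer, cls text);
create role user1;
grant all on tb to user1;
alter table tb enable row level security;
create policy tb_select on tb for select using (true);
create policy up_p on tb for update using (idx > 2 and cls = 'great2');
-- no explicit WITH CHECK, so the USING expression is applied to new rows as well
insert into tb values (5, 'great2');  -- as the table owner, who is not subject to RLS by default
set role user1;
update tb set idx = 1;
-- ERROR: new row violates row-level security policy for table "tb"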
However, if you simply want to ensure that values for idx are always greater than 2 for rows with cls = 'great2', a check constraint might be the better option:
create table tb
(
idx integer,
cls text,
constraint check_idx check ( (idx > 2 and cls = 'great2') or (cls <> 'great2'))
);
insert into tb
values
(10, 'great2'),
(1, 'foo');
Now running:
update tb
set idx = 1
where idx = 10
results in:
ERROR: new row for relation "tb" violates check constraint "check_idx"
Detail: Failing row contains (1, great2).
The same happens if you change the cls value for a row with idx <= 2:
update tb
set cls = 'great2'
where idx = 1;

Related

Building a query which sets a column according to the data in a join table

I have a table af with columns af.id, etc. and a table af_pb with columns af_id and pb_id (which assigns entities from table pb to the entities of table af).
What I want:
- add a new column precedence in table af
- for each af.id in af:
  - if there is a pair (af_id, pb_id) with af.id = af_id and some pb_id in the join table af_pb, then set af.precedence = 0
  - if there is no such pair, set af.precedence = 1
How can I achieve this in PostgreSQL? I have already read about the CASE ... WHEN ... ELSE expression, but I didn't manage to implement it such that the column precedence is set correctly.
While this can be done with a CASE expression, it is not necessary. If you want a default value for later inserts into table af, then add the column with that default and update to set the non-default value:
alter table af add column precedence integer default 1;
update af
set precedence = 0
where exists (select null
              from af_pb
              where af.id = af_pb.af_id);
If a default is not desired, then just add the column and afterwards update it to the appropriate value:
alter table af add column precedence integer;
update af
set precedence =
    (not exists (select null
                 from af_pb
                 where af.id = af_pb.af_id))::integer;
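For comparison, a sketch of the CASE expression version the question asked about (same tables and columns; functionally equivalent to the default-plus-update approach above):

update af
set precedence = case
                     when exists (select null
                                  from af_pb
                                  where af.id = af_pb.af_id)
                         then 0
                     else 1
                 end;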

Row level security issues on insert

I am trying to create an RLS policy in Supabase for an initiative_categories table. When the service creates a new initiative, it calls another function to create a new record in the initiative_categories table, but that INSERT returns a 403 with the following error: new row violates row-level security policy for table "initiative_categories".
Currently my row level security policy is configured like this:
(initiative_id = ( SELECT initiatives.id
FROM initiatives
WHERE ((initiatives.id = initiative_categories.initiative_id) AND (uid() = initiatives.user_id))
LIMIT 1)
)
Would this work?
(exists (SELECT initiatives.id
FROM initiatives
WHERE id = initiative_id AND uid() = user_id)
)
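For reference, a sketch of how that EXISTS expression would typically be attached to an INSERT policy (the policy name here is made up, and uid() is kept as written in the question; in Supabase this is usually auth.uid()):

create policy initiative_categories_insert on initiative_categories
for insert
with check (
    exists (select 1
            from initiatives
            where initiatives.id = initiative_categories.initiative_id
              and uid() = initiatives.user_id)
);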

PostgreSQL array of data composite update element using where condition

I have a composite type:
CREATE TYPE mydata_t AS
(
user_id integer,
value character(4)
);
Also, I have a table that uses this composite type as an array of mydata_t.
CREATE TABLE tbl
(
id serial NOT NULL,
data_list mydata_t[],
PRIMARY KEY (id)
);
Here I want to update the mydata_t element in data_list where mydata_t.user_id is 100000.
But I don't know which array element's user_id is equal to 100000.
So I first have to search for the element whose user_id equals 100000 - that's my problem: I don't know how to write that query. In fact, I want to update the value of the array element whose user_id equals 100000 (and where the id of tbl is, for example, 1). What would my query be?
Something like this (I know it's wrong!):
UPDATE "tbl" SET "data_list"[i]."value"='YYYY'
WHERE "id"=1 AND EXISTS (SELECT ROW_NUMBER() OVER() AS i
FROM unnest("data_list") "d" WHERE "d"."user_id"=10000 LIMIT 1)
For example, this is my tbl data:
Row1 => id = 1, data = ARRAY[ROW(5,'YYYY'),ROW(6,'YYYY')]
Row2 => id = 2, data = ARRAY[ROW(10,'YYYY'),ROW(11,'YYYY')]
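For reference, a sketch that would create this sample data (using the type and table definitions above):

insert into tbl (data_list)
values (array[(5, 'YYYY')::mydata_t, (6, 'YYYY')::mydata_t]),
       (array[(10, 'YYYY')::mydata_t, (11, 'YYYY')::mydata_t]);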
Now I want to update tbl where id is 2 and set the value of the tbl.data element whose user_id equals 11 to 'XXXX'.
In fact, the final result of Row2 will be this:
Row2 => id = 2, data = ARRAY[ROW(10,'YYYY'),ROW(11,'XXXX')]
If you know the current value of value, you can use the array_replace() function to make the change:
UPDATE tbl
SET data_list = array_replace(data_list, (11, 'YYYY')::mydata_t, (11, 'XXXX')::mydata_t)
WHERE id = 2
Note that array_replace() replaces every element that is equal to the given value. If you do not know the current value, the situation becomes more complex:
UPDATE tbl SET data_list = data_arr
FROM (
-- UPDATE doesn't allow aggregate functions so aggregate here
SELECT array_agg(new_data) AS data_arr
FROM (
-- For the id value, get the data_list values that are NOT modified
SELECT (user_id, value)::mydata_t AS new_data
FROM tbl, unnest(data_list)
WHERE id = 2 AND user_id != 11
UNION
-- Add the values to update
VALUES ((11, 'XXXX')::mydata_t)
) x
) y
WHERE id = 2
You should keep in mind, though, that there is an awful lot of work going on in the background that cannot be optimised. The array of mydata_t values has to be examined from start to finish, and you cannot use an index on this.

Furthermore, updates actually insert a new row in the underlying file on disk, and if your array has more than a few entries this will involve substantial work. This gets even more problematic when your arrays are larger than the page size of your PostgreSQL server, typically 8kB. All behind the scenes, so it will work, but at a performance penalty.

Even though array_replace sounds like changes are made in place (and they indeed are in memory), the UPDATE command will write a completely new tuple to disk. So if you have 4,000 array elements, that means that at least 40kB of data will have to be read (8 bytes for the mydata_t type on a typical system x 4,000 = 32kB in a TOAST file, plus the main page of the table, 8kB) and then written to disk after the update. A real performance killer.
As @klin pointed out, this design may be more trouble than it is worth. Should you make data_list a table (as I would do), the update query becomes:
UPDATE data_list SET value = 'XXXX'
WHERE id = 2 AND user_id = 11
This will have MUCH better performance, especially if you add the appropriate indexes. You could then still create a view to publish the data in an aggregated form with a custom type if your business logic so requires.
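For reference, a sketch of that normalized design (table and column names assumed from the examples above; the primary key also provides the index used by the update's WHERE clause):

create table data_list
(
    id      integer not null references tbl (id),
    user_id integer not null,
    value   character(4),
    primary key (id, user_id)
);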

Fast new row insertion if a value of a column depends on previous value in existing row

I have a table cusers with a primary key:
primary key(uid, lid, cnt)
And I try to insert some values into the table:
insert into cusers (uid, lid, cnt, dyn, ts)
values
(A, B, C, (
select C - cnt
from cusers
where uid = A and lid = B
order by ts desc
limit 1
), now())
on conflict do nothing
Quite often (with a probability of 98%) a row cannot be inserted into cusers because it violates the primary key constraint, so the expensive select query would not need to be executed at all. But as far as I can see, PostgreSQL first evaluates the select subquery for the dyn column and only then rejects the row because of the (uid, lid, cnt) violation.
What is the best way to insert rows quickly in such a situation?
Another explanation
I have a system where one row depends on another. Here is an example:
(x, x, 2, 2, <timestamp>)
(x, x, 5, 3, <timestamp>)
Two columns contain an absolute value (2 and 5) and a relative value (2, and 5 - 2). Each time I insert a new row it should:
avoid same rows (see primary key constraint)
if the new row differs, it should compute the difference and put it into the dyn column (so I take the last inserted row for the user according to the timestamp and subtract the values).
Another solution I've found is to use returning uid, lid, ts on the inserts to get the rows which were really inserted - this is how I know they differ from existing rows. Then I update the inserted values:
update cusers
set dyn = (
select max(cnt) - min(cnt)
from (
select cnt
from cusers
where uid = A and lid = B
order by ts desc
limit 2) t
)
where uid = A and lid = B and ts = TS
But it is not a fast approach either, as it scans the ts column to find the two last inserted rows for each user. I need a fast insert query, as I insert millions of rows at a time (but I do not write duplicates).
What could the solution be? Maybe I need a new index for this? Thanks in advance.
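On the index idea: a sketch of a composite index that matches the correlated lookup in the insert (column names from the question; it would speed up the subquery, though per the observation above the subquery is still evaluated before the conflict is detected):

create index cusers_uid_lid_ts_idx on cusers (uid, lid, ts desc);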

In DB2, perform an update based on insert for large number of rows

In DB2, I need to do an insert, then, using results/data from that insert, update a related table. I need to do it on a million plus records and would prefer not to lock the entire database. So, 1) how do I 'couple' the insert and update statements? 2) how can I ensure the integrity of the transaction (without locking the whole she-bang)?
Some pseudo-code should help clarify:
STEP 1
insert into table1 (neededId, id) select DYNAMICVALUE, id from tableX where needed value is null
STEP 2
update table2 set neededId = (GET THE DYNAMIC VALUE JUST INSERTED) where id = (THE ID JUST INSERTED)
Note: in table1, the id col is not unique, so I can't just filter on that to find the new DYNAMICVALUE.
This should make it clearer (FTR, this works, but I don't like it, because I'd have to lock the tables to maintain integrity. It would be great if I could run these statements together, and allow the update to refer to the newAddressNumber value.)
/**** RUNNING TOP INSERT FIRST ****/
--insert a new address for each order that does not have a address id
insert into addresses
(customerId, addressNumber, address)
select
cust.Id,
--get next available addressNumber
ifNull((select max(addy2.addressNumber) from addresses addy2 where addy2.customerId = cust.id),0) + 1 as newAddressNumber,
cust.address
from customers cust
where exists (
--find all customers with at least 1 order where addressNumber is null
select 1 from orders ord
where 1=1
and ord.customerId = cust.id
and ord.addressNumber is null
)
/*****RUNNING THIS UPDATE SECOND*****/
update orders ord1
set addressNumber = (
select max(addressNumber) from addresses addy3
where addy3.customerId = ord1.customerId
)
where 1=1
and ord1.addressNumber is null
Quote from the DB2 manual:
The IDENTITY_VAL_LOCAL function is a non-deterministic function that returns the most recently assigned value for an identity column, where the assignment occurred as a result of a single INSERT statement using a VALUES clause.
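For illustration, a minimal sketch of how IDENTITY_VAL_LOCAL() is typically used (a hypothetical table with an identity column, not the schema from the question):

-- hypothetical table with an identity column
create table t1 (c1 int not null generated always as identity, c2 int);
-- a single-row INSERT with a VALUES clause assigns an identity value
insert into t1 (c2) values (8);
-- returns the identity value just assigned in this session
values identity_val_local();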