CREATE OR REPLACE FUNCTION()
RETURND VOID AS
BEGIN
FOR I IN 1..5
LOOP
LOCK TABLE tbl_Employee1 IN EXCLUSIVE MODE;
INSERT INTO tbl_Employee1
VALUES
(i,'test');
END LOOP;
COMMIT;
END;
$$ LANGUAGE PLPGSQL
When I select from the table, it goes into an infinite loop, which means the transaction is not complete. Please help me out.
Your code has been stripped down so much that it doesn't really make sense any more.
However, you should only lock the table once, not in each iteration of the loop. Also, you can't use COMMIT in a function in Postgres, so you have to remove that as well. It's also bad coding style (in Postgres and Oracle) not to provide column names for the INSERT statement.
Immediate solution:
CREATE OR REPLACE FUNCTION ...
RETURNS VOID AS
$$
BEGIN
  LOCK TABLE Employee1 IN EXCLUSIVE MODE;
  FOR i IN 1..5 LOOP
    INSERT INTO Employee1 (id, name)
    VALUES (i, 'test');
  END LOOP;
  -- no commit here!
END;
$$ LANGUAGE plpgsql;
The above is needlessly complicated in Postgres and can be implemented much more efficiently without a loop:
CREATE OR REPLACE FUNCTION ....
RETURNS VOID AS
$$
BEGIN
  LOCK TABLE Employee1 IN EXCLUSIVE MODE;
  INSERT INTO Employee1 (id, name)
  SELECT i, 'test'
  FROM generate_series(1, 5) AS i;
END;
$$ LANGUAGE plpgsql;
Locking a table in exclusive mode seems like a bad idea to begin with. In Oracle as well, but in Postgres this might have more severe implications. If you want to prevent duplicates in the table, create a unique index (or constraint) and deal with errors. Or use insert ... on conflict in Postgres. That will be much more efficient (and scalable) than locking a complete table.
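For example, assuming id is covered by a unique constraint (which ON CONFLICT requires; the syntax exists from PostgreSQL 9.5 on), the insert from above can be written without any table lock:

INSERT INTO Employee1 (id, name)
SELECT i, 'test'
FROM generate_series(1, 5) AS i
ON CONFLICT (id) DO NOTHING;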
Additionally, be careful with LOCK TABLE ... IN EXCLUSIVE MODE when you come from Oracle: in Postgres, EXCLUSIVE mode still allows plain read-only SELECT statements, but it blocks every other kind of access, including SELECT ... FOR UPDATE. A bare LOCK TABLE without an explicit mode takes the even stricter ACCESS EXCLUSIVE lock, which blocks plain SELECT statements as well.
I am trying to understand which type of lock to use for a trigger function.
Simplified function:
CREATE OR REPLACE FUNCTION max_count() RETURNS TRIGGER AS
$$
DECLARE
  max_row INTEGER := 6;
  association_count INTEGER := 0;
BEGIN
  LOCK TABLE my_table IN ROW EXCLUSIVE MODE;
  SELECT INTO association_count COUNT(*) FROM my_table WHERE user_id = NEW.user_id;
  IF association_count > max_row THEN
    RAISE EXCEPTION 'Too many rows';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE CONSTRAINT TRIGGER my_max_count
AFTER INSERT OR UPDATE ON my_table
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE PROCEDURE max_count();
I initially planned to use EXCLUSIVE, but it feels too heavy. What I really want is to ensure that, while this function executes, no new rows are added to the table for the user_id in question.
If you want to prevent concurrent transactions from modifying the table, a SHARE lock would be correct. But that could lead to a deadlock if two such transactions run at the same time — each has modified some rows and is blocked by the other one when it tries to escalate the table lock.
Moreover, all table locks that conflict with SHARE UPDATE EXCLUSIVE will lead to autovacuum cancelation, which will cause table bloat when it happens too often.
So stay away from table locks, they are usually the wrong thing.
The better way to go about this is to use no explicit locking at all, but to use the SERIALIZABLE isolation level for all transactions that access this table.
Then you can simply use your trigger (without lock), and no anomalies can occur. If you get a serialization error, repeat the transaction.
This comes with a certain performance penalty, but allows more concurrency than a table lock. It also avoids the problems described in the beginning.
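A rough sketch of what that looks like for the writing transactions (only user_id is shown, so adapt the column list to your table definition):

BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO my_table (user_id) VALUES (42);  -- the deferred constraint trigger fires at commit
COMMIT;
-- if COMMIT fails with SQLSTATE 40001 (serialization_failure), retry the whole transaction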
I'm trying to call a function from a trigger function and don't understand what control structure to use. Here's the situation:
I have 3 tables (table1, table2, table3) and two functions (Fct1 and Fct2).
Fct1 is a trigger function that fires after an insert into table1 and inserts into table2:
CREATE OR REPLACE FUNCTION Fct1()
RETURNS TRIGGER AS
$BODY$
BEGIN
TRUNCATE "table2";
INSERT INTO "table2"
SELECT ... FROM "table1";
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
The trigger is:
CREATE TRIGGER trig_fct1
AFTER INSERT
ON table1
FOR EACH ROW
WHEN ((pg_trigger_depth() < 1))
EXECUTE PROCEDURE Fct1();
If I then run SELECT "Fct2"(); on its own, everything works fine, but if I add a PERFORM "Fct2"(); inside Fct1, like this:
CREATE OR REPLACE FUNCTION Fct1()
RETURNS TRIGGER AS
$BODY$
BEGIN
TRUNCATE "table2";
INSERT INTO "table2"
SELECT ... FROM "table1";
TRUNCATE "table3";
PERFORM "Fct2"(); -- will insert into table3
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
It takes much longer to run (I never waited for it to finish; it takes too long).
Fct2 looks like this:
CREATE OR REPLACE FUNCTION "Fct2"()
RETURNS void AS
$BODY$
BEGIN
INSERT INTO "table3" ...;
RETURN;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
So there is something I don't understand, and I think it is related to these RETURNs, which are not clear to me. I have tried different 'solutions', but I always got errors mentioning some 'return' mismatch. Any suggestions?
I'm using PostgreSQL 9.6
To capture long running SQL statements from functions in the log, you can use auto_explain with auto_explain.log_nested_statements set to on. But if the query doesn't even finish, that won't help a lot.
My bet is that you are blocked by a database lock. Set log_lock_waits to on and see if something is reported in the log. You should also query pg_locks to see if there are locks requested but not granted.
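A rough sketch of that diagnosis (ALTER SYSTEM needs superuser rights and a configuration reload; you can also just edit postgresql.conf):

ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();

-- lock requests that are waiting, together with the statement that is blocked
SELECT l.pid, l.locktype, l.mode, l.granted, a.query
FROM pg_locks AS l
JOIN pg_stat_activity AS a ON a.pid = l.pid
WHERE NOT l.granted;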
I have the following stored procedure
CREATE OR REPLACE FUNCTION testFunction(iRowID1 integer, iRowID2 integer) RETURNS void AS $$
BEGIN
UPDATE Table1 SET Value1=Value1+1 WHERE rowID=iRowID1;
UPDATE Table1 SET Value1=Value1-1 WHERE rowID=iRowID2;
END;
$$ LANGUAGE plpgsql;
If I run the following two commands concurrently
SELECT testFunction(1,2);
SELECT testFunction(2,1);
I get a deadlock detected error for one of the commands. Is there some way to avoid this deadlock?
I can't test this right now as I don't have access to a PostgreSQL database at the moment, but in theory it should work, as deadlocks can always be avoided if you lock things in the same order and never escalate a lock level (upgrade a read lock to a write lock, for example).
Do the updates in a specific order:
CREATE OR REPLACE FUNCTION testFunction(iRowID1 integer, iRowID2 integer) RETURNS void AS $$
BEGIN
  IF iRowID1 < iRowID2 THEN
    UPDATE Table1 SET Value1=Value1+1 WHERE rowID=iRowID1;
    UPDATE Table1 SET Value1=Value1-1 WHERE rowID=iRowID2;
  ELSE
    UPDATE Table1 SET Value1=Value1-1 WHERE rowID=iRowID2;
    UPDATE Table1 SET Value1=Value1+1 WHERE rowID=iRowID1;
  END IF;
END;
$$ LANGUAGE plpgsql;
That will always update the rows in numerically-ascending order, thus in your example row 1 will always be updated before row 2, and the second invocation can't start its update until the first invocation is done.
Let's say I have a table persons which contains only a name (varchar), and a database user client.
I'd like that the only way for client to insert to persons is through the function:
CREATE OR REPLACE FUNCTION add_a_person(a_name character varying)
RETURNS void AS
$BODY$
BEGIN
  INSERT INTO persons VALUES(a_name);
END;
$BODY$
LANGUAGE plpgsql VOLATILE COST 100;
So I don't want to grant client the INSERT privilege on persons; I only want to give it the EXECUTE privilege on add_a_person.
But without the INSERT privilege, I get a permission denied error when the function runs its INSERT.
I have not found a way to do this in the Postgres documentation about granting privileges.
Is there a way to do this?
You can define the function with SECURITY DEFINER. This will allow the function to run for the restricted user as if they had the higher privileges of the function's creator (which needs to be able to insert into the table).
The last line of the definition would look like this:
LANGUAGE plpgsql VOLATILE COST 100 SECURITY DEFINER;
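For completeness, a sketch of the matching privileges, using the table, function and role names from the question:

-- client gets EXECUTE on the function, but no privileges on the table itself
REVOKE ALL ON persons FROM client;
GRANT EXECUTE ON FUNCTION add_a_person(character varying) TO client;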
This is a bit simplistic, but assuming you are running 9.2 or later, this is an example of how to check that a single permitted function is doing the insert:
CREATE TABLE my_table (col1 text, col2 integer, col3 timestamp);

CREATE FUNCTION my_table_insert_function(col1 text, col2 integer) RETURNS integer AS $$
BEGIN
  INSERT INTO my_table VALUES (col1, col2, current_timestamp);
  RETURN 1;
END $$ LANGUAGE plpgsql;

CREATE FUNCTION my_table_insert_trigger_function() RETURNS trigger AS $$
DECLARE
  stack text;
  fn integer;
BEGIN
  -- raise and immediately catch an exception, only to get at the call stack
  RAISE EXCEPTION 'secured';
EXCEPTION WHEN OTHERS THEN
  BEGIN
    GET STACKED DIAGNOSTICS stack = PG_EXCEPTION_CONTEXT;
    -- position() returns 0 when the permitted function is not on the stack
    fn := position('my_table_insert_function' in stack);
    IF (fn <= 0) THEN
      RAISE EXCEPTION 'Expecting insert from my_table_insert_function'
        USING HINT = 'Use function to insert data';
    END IF;
    RETURN new;
  END;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_insert_trigger BEFORE INSERT ON my_table
  FOR EACH ROW EXECUTE PROCEDURE my_table_insert_trigger_function();
And a quick example of usage:
INSERT INTO my_table VALUES ('test one', 1, current_timestamp); -- FAILS
SELECT my_table_insert_function('test one', 1); -- SUCCEEDS
You'll want to peek into the stack in more detail if you want your code to be more robust, secure, etc. Checks for multiple functions are possible, of course, but involve more work. Splitting the stack into multiple lines and parsing it can be fairly involved, so you'll probably want some helper functions if things get more complex.
This is just a proof of concept, but it does what it claims. I would expect this code to be fairly slow given the use of exception handling and stack inspection, so don't use it in performance-critical parts of your application. It's not likely to be suitable for cases where DML statements are frequent, but if security is more important than performance, go for it.
Matthew's answer is correct in that a SECURITY DEFINER will allow the function to run with the privileges of a different user. Documentation for this is at http://www.postgresql.org/docs/9.1/static/sql-createfunction.html
Why are you trying to implement security this way? If you want to enforce some logic on the inserts, then I would strongly recommend doing it with constraints. http://www.postgresql.org/docs/9.1/static/ddl-constraints.html
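For example, a simple rule like "names must not be empty" belongs in a CHECK constraint rather than in a gatekeeper function (assuming the column is called name):

ALTER TABLE persons
  ADD CONSTRAINT persons_name_not_empty CHECK (length(trim(name)) > 0);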
If you want substantially higher levels of logic than can be reasonably implemented in constraints, I would suggest looking into building a business logic layer between your presentation layer and the data storage layer. You will find that scalability demands this pretty much instantly.
If your goal is to defend against SQL injection then you have found a way that might work, but that will create a heck of a lot of work for you. Worse, it leads to huge volumes of really mindless code that all has to be kept in sync across schema changes. This is pretty rough if you're trying to do anything agile. Consider instead using a programming framework that takes advantage of PREPARE / EXECUTE, which is pretty much all of them at this point.
http://www.postgresql.org/docs/9.0/static/sql-prepare.html
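At the SQL level a parameterized statement looks roughly like this (frameworks generate the equivalent for you behind the scenes; the statement name is made up for the example):

PREPARE add_person(text) AS
  INSERT INTO persons VALUES ($1);
EXECUTE add_person('Alice');
DEALLOCATE add_person;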
I am trying to delete a row from one table and insert it, with some additional data, into another. I know this can be done with two separate commands, one to delete and another to insert into the new table. However, I am trying to combine them, and it is not working. This is my query so far:
insert into b (one,two,num) values delete from a where id = 1 returning one, two, 5;
When running that I get the following error:
ERROR: syntax error at or near "delete"
Can anyone point out how to accomplish this? Is there a better way, or is it not possible?
You cannot do this before PostgreSQL 9.1, which is not yet released. And then the syntax would be
WITH foo AS (DELETE FROM a WHERE id = 1 RETURNING one, two, 5)
INSERT INTO b (one, two, num) SELECT * FROM foo;
Before PostgreSQL 9.1 you can create a volatile function like this (untested):
create function move_from_a_to_b(_id integer, _num integer)
returns void language plpgsql volatile as
$$
declare
_one integer;
_two integer;
begin
delete from a where id = _id returning one, two into strict _one, _two;
insert into b (one,two,num) values (_one, _two, _num);
end;
$$;
And then just use SELECT move_from_a_to_b(1, 5). A function has the advantage over two statements that it will always run in a single transaction; there's no need to explicitly start and commit a transaction in client code.
For all versions of PostgreSQL, you can create a trigger function that moves rows being deleted from one table into another table, although it seems slower than the single-statement approach available from PostgreSQL 9.1 on. You just need to copy the old data into the other table before it gets deleted, which is done with the OLD record:
CREATE FUNCTION moveDeleted() RETURNS trigger AS $$
BEGIN
INSERT INTO another_table VALUES(OLD.column1, OLD.column2,...);
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER moveDeleted
BEFORE DELETE ON table
FOR EACH ROW
EXECUTE PROCEDURE moveDeleted();
As in the answer above, from PostgreSQL 9.1 on you can do this:
WITH tmp AS (DELETE FROM table RETURNING column1, column2, ...)
INSERT INTO another_table (column1, column2, ...) SELECT * FROM tmp;
The syntax you have there isn't valid. Two statements are the best way to do this. The most intuitive way to do it would be to do the insert first and the delete second.
As "AI W", two statements are certainly the best option for you, but you could also consider writing a trigger for that. Each time something is deleted in your first table, another is filled.