Does anyone know how I can set up an insert trigger so that, when I perform an insert from my application, the data gets inserted and Postgres returns even before the trigger finishes executing?
There is no built-in support for this; you will have to hack something up. Options include:
Write the trigger in C, Perl, or Python and have it launch a separate process to do the things you want. This can get tricky and possibly slightly dangerous to your database system, and it only works if the things you want to do are outside of the database.
Write a lightweight trigger function that only records an entry into a log or task table, and have a separate job or daemon that looks into that table on its own schedule and executes things from there. That's more or less how Slony works.
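A minimal sketch of that second pattern (all table, trigger, and function names here are made up for illustration):

```sql
-- The trigger does nothing but record the work; a daemon polls this table.
CREATE TABLE task_queue (
    id         bigserial PRIMARY KEY,
    payload    jsonb NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION enqueue_task() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO task_queue (payload) VALUES (to_jsonb(NEW));
    RETURN NEW;  -- the original INSERT returns as soon as this cheap insert is done
END;
$$;

CREATE TRIGGER my_table_enqueue
AFTER INSERT ON my_table
FOR EACH ROW EXECUTE PROCEDURE enqueue_task();
```

The daemon can then consume rows from task_queue on its own schedule, deleting each one after the expensive work succeeds.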
The question is: why do you need this? Triggers should be fast. If you need to do something complicated, write a trigger that sends a notification to a daemon that does the complex part, for example using PostgreSQL's LISTEN/NOTIFY feature.
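A sketch of the LISTEN/NOTIFY variant (channel, table, and function names are made up). One caveat: notifications are only delivered when the inserting transaction commits.

```sql
CREATE OR REPLACE FUNCTION notify_insert() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- pg_notify(channel, payload); delivery happens at commit time
    PERFORM pg_notify('new_rows', NEW.id::text);
    RETURN NEW;
END;
$$;

CREATE TRIGGER my_table_notify
AFTER INSERT ON my_table
FOR EACH ROW EXECUTE PROCEDURE notify_insert();

-- In the daemon's session:
-- LISTEN new_rows;
-- The daemon then receives one asynchronous notification per inserted row.
```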
Related
I've had a strange bug pop up -- when I write to a partitioned table, then immediately after do a select on that same table, I get an error like:
./2018.04.23/ngbarx/cunadj. OS reports: Operation not permitted.
This error does not appear if I wait a few seconds after writing the table. To me this points towards a caching situation, where q responds before an operation is complete, but afaik everything I am doing should be synchronous.
I would love to understand what I am doing wrong / what is causing this error exactly / which commands are executing asynchronously.
The horrible details:
I am writing from Python, connected to q synchronously using the qpython3 package
The q session is launched with slaves i.e. -s 4
To write the partitioned table, I am using the unofficial function .Q.dcfgnt which can be found here
I write to a q session that was initialized with a database directory as is usual when dealing with partitioned tables
After writing the table with .Q.dcfgnt, but before doing the select, I also do .Q.chk`:/db/; system"l /db/"; .Q.cn table, in that order, just to be sure the table is up and ready to use in the q session. These might be overkill and in the wrong order, but I believe they are all synchronous calls; please correct me if I am wrong.
The trigger for the error is a 10#select from table; I understand why this is a bad idea to do in general on a partitioned table, but from my understanding it shouldn't be causing the particular error that I am getting.
I have a stored procedure on Postgres, which processes large data and takes a good time to complete.
In my application, there is a chance that two processes or schedulers run this procedure at the same time. I want to know if there is a built-in mechanism in the database to allow only one instance of this procedure to run at a time, at the db level.
I searched the internet, but didn't find anything concrete.
There is nothing built in to define a procedure (or function) so that concurrent execution is prevented.
But you can use advisory locks to achieve something like that.
At the very beginning of the procedure, you can add something like:
perform pg_advisory_lock(987654321);
which will then wait to get the lock. If a second session invokes the procedure it will have to wait.
Make sure you release the lock at the end of the procedure using pg_advisory_unlock() as they are not released when the transaction is committed.
If you use advisory locks elsewhere, make sure you use a key that isn't used anywhere else.
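Putting it together, a sketch (the function name and body are placeholders; 987654321 is an arbitrary application-chosen key). If you would rather skip than wait when another instance is running, pg_try_advisory_lock returns false immediately instead of blocking; pg_advisory_xact_lock is an alternative that is released automatically at the end of the transaction, so no explicit unlock is needed.

```sql
CREATE OR REPLACE FUNCTION process_large_data() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    -- Block until no other session holds this lock
    PERFORM pg_advisory_lock(987654321);

    -- ... the actual long-running processing goes here ...

    -- Session-level advisory locks survive COMMIT, so release explicitly
    PERFORM pg_advisory_unlock(987654321);
END;
$$;
```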
I am using PostgreSQL 10 from RDS (AWS).
So note that I don't have full permissions to do whatever I want.
In PostgreSQL I have some functions written in PL/pgSQL.
In my experience, I cannot start/commit/rollback transactions in these functions, nor can I do so in a DO block.
Is that correct? And what is the logic behind this? It seems PostgreSQL expects each function to be called in the context of an existing transaction. Right?
But what if I want every statement in my function to be executed in a separate (short) transaction i.e. to have a behavior something like AUTOCOMMIT = ON?
I found some extension which maybe can do that but I am not sure.
I don't know if it's relevant.
https://www.postgresql.org/docs/10/ecpg-sql-set-autocommit.html
Isn't there a standard way of doing this in Postgres without the need to download and install additional packages/extensions?
Again: I want every statement in my function to be executed in a separate (short) transaction i.e. to have a behavior something like AUTOCOMMIT = ON.
So I want something like this:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-implicit-transactions-transact-sql?view=sql-server-2017
All statements in a function run in the same transaction, and no plugin can change that.
You can use procedures from v11 on, but you still have to explicitly manage transactions then.
I suspect that the best thing would be to run this logic on the database client, where you get autocommit for free, rather than as a function in the database.
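For completeness, a sketch of what v11 procedures allow (the table and column names are made up). Each COMMIT ends the current transaction, so earlier statements stay committed even if a later one fails:

```sql
CREATE PROCEDURE step_by_step()
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE accounts SET processed = true WHERE id = 1;
    COMMIT;  -- the first statement's work is now durable

    UPDATE accounts SET processed = true WHERE id = 2;
    COMMIT;
END;
$$;

-- CALL must not run inside an explicit transaction block,
-- otherwise COMMIT inside the procedure raises an error.
CALL step_by_step();
```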
I need to write some SQL update scripts to add tables, rename columns etc. I put everything inside a transaction:
IF EXISTS (/* check version */)
BEGIN
    PRINT 'Cannot apply update';
END
ELSE
BEGIN
    BEGIN TRANSACTION;
    -- apply updates
    ROLLBACK;
    --COMMIT;
END
Now, if I ROLLBACK the updates instead of COMMIT them, can I assume that it will still work later when I change it back to COMMIT? I don't want to apply the changes to my dev database just yet, but I'd like to be able to press F5 and check that everything is fine.
No. If you want to test, you must be sure of the state of your database.
It is better to write a small script which drops your tables (or the complete database), followed by some SQL which creates the database and tables and adds test data. Then you can do a real test with the SQL you are actually going to use, instead of arbitrary SQL (different from the real SQL) while wondering about side effects of your test code.
Short version: execute your tests as cleanly as possible, without any other test code which might interfere with the expected results.
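A sketch of such a re-runnable harness (SQL Server syntax, matching the script above; all names are made up):

```sql
IF DB_ID('MyTestDb') IS NOT NULL
    DROP DATABASE MyTestDb;
GO
CREATE DATABASE MyTestDb;
GO
USE MyTestDb;
GO
CREATE TABLE Customers (Id int PRIMARY KEY, Name nvarchar(100));
INSERT INTO Customers (Id, Name) VALUES (1, N'Alice');
GO
-- Now run the real update script against MyTestDb and verify the results.
```

Because the script drops and recreates everything, every run starts from a known state, so you can press F5 as often as you like.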
I need to modify a trigger (which uses a particular FUNCTION) that is already defined and in use. If I modify it using CREATE OR REPLACE FUNCTION, what is the behaviour of Postgres? Will it "pause" the old trigger while it is updating the function? As far as I know, Postgres should execute the whole REPLACE FUNCTION in one transaction (so the tables are locked, and so are the triggers being modified, while it is updating), and the next transactions, once unblocked, will use the new FUNCTION, not the old one. Is that correct?
Yes. According to the documentation:
http://www.postgresql.org/docs/9.0/static/explicit-locking.html
Also, most PostgreSQL commands automatically acquire locks of appropriate modes to ensure that referenced tables are not dropped or modified in incompatible ways while the command executes. (For example, ALTER TABLE cannot safely be executed concurrently with other operations on the same table, so it obtains an exclusive lock on the table to enforce that.)
will it "pause" the old trigger while it is updating the function?
It should continue executing the old trigger function for calls already in progress (depending on the isolation level, subsequent calls in the same transaction may use the old definition too; I'm not 100% sure the default level would do so), block new transactions that try to call the function while it's being updated, and execute the new function once it's replaced.
As far as I know, Postgres should execute the whole REPLACE FUNCTION in one transaction (so the tables are locked, and so are the triggers being modified, while it is updating), and the next transactions will use the new FUNCTION, not the old one. Is that correct?
As far as I'm aware, replacing the function associated with the trigger does not lock the table while it's updated.
Please take this with a grain of salt, though: the two statements above amount to what I'd intuitively expect MVCC to do, rather than knowledge of this area of Postgres' source code off the top of my head. (A few core contributors periodically come to SO, and might eventually chime in with a more precise answer.)
That being said, note that this is relatively straightforward to test: open two psql sessions, start two transactions, and see what happens...
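For example (the table and function names are made up): in session 1, replace the function inside an open transaction; in session 2, fire the trigger and observe whether it blocks and which version runs.

```sql
-- Session 1:
BEGIN;
CREATE OR REPLACE FUNCTION trg_fn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE NOTICE 'new version';
    RETURN NEW;
END;
$$;
-- leave this transaction open for now

-- Session 2 (meanwhile):
INSERT INTO t VALUES (1);  -- does this block, or run the old function?

-- Session 1:
COMMIT;  -- after this, new calls should see the new definition
```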