I have a plpgsql function. This function deletes and inserts rows in a foreign table that is connected to Oracle XE via the oracle_fdw plugin. Every minute, cron starts 10 instances of this function for 10 different tables using psql. Sometimes (less than once per week) one instance gets stuck. It is in the active state, but it can't be cancelled or terminated.
I can kill it with SIGKILL, but that takes down all other postgres processes and puts the server into recovery mode.
Is it possible to stop this process without using SIGKILL?
(Postgres 9.3 on CentOS 7)
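For context, what I already tried from another session looks roughly like this (the pid here is a placeholder):

    -- Find the stuck backend
    SELECT pid, state, query FROM pg_stat_activity WHERE state = 'active';

    -- Ask it to cancel the current query, then to exit; neither has any effect here
    SELECT pg_cancel_backend(12345);
    SELECT pg_terminate_backend(12345);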
Environment:
Cluster with 3 DB servers managed with Patroni.
PostgreSQL 14.
I have a procedure without parameters which archives some data and runs for a long time.
When I run it, after exactly 30 minutes the procedure starts again. I know this because at the beginning I insert a record into a control table with the start date.
Do you know what process could be responsible for this?
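In a simplified sketch, the logging at the start of the procedure looks like this (table and procedure names here are made up):

    -- Hypothetical names; the real procedure does its archiving work after the insert
    CREATE TABLE IF NOT EXISTS control_log (started_at timestamptz);

    CREATE OR REPLACE PROCEDURE archive_data()
    LANGUAGE plpgsql
    AS $$
    BEGIN
        INSERT INTO control_log (started_at) VALUES (now());
        -- ... long-running archiving work ...
    END;
    $$;

This is how I can see a second started_at row appear exactly 30 minutes after the first.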
I have PostgreSQL 9.6 with a PHP backend. When I use persistent connections to PostgreSQL via PHP's PDO, I see some idle processes. The command
select * from pg_stat_activity; shows me 3 idle processes whose query column reads DEALLOCATE pdo_stmt_0000013e. I understand that these processes are waiting for new queries, but I don't understand why there are 3 of them. On another PostgreSQL project I see 50 of the same idle connections. Where is this number defined, and what does it depend on?
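A narrower version of that query, to show just the idle backends in question:

    -- List only the idle persistent connections and their last statement
    SELECT pid, state, query FROM pg_stat_activity WHERE state = 'idle';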
What is the expected behavior of PostgreSQL processes when the system time is changed?
For example: we have seen a case where the postgres autovacuum process stops working after the time has been changed, and the only solution is to restart the database.
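As a generic diagnostic (not specific to this report), the per-table statistics show when autovacuum last ran; stale values after the time change would support the suspicion:

    -- Last manual and automatic vacuum per table
    SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY last_autovacuum DESC NULLS LAST;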
I want to execute a long-running stored procedure on PostgreSQL 9.3. Our database server is (for the sake of this question) guaranteed to be running stable, but the machine calling the stored procedure can be shut down at any second (Heroku dynos get cycled every 24h).
Is there a way to run the stored procedure 'detached' on PostgreSQL? I do not care about its output. Can I run it asynchronously and then let the database server keep working on it while I close my database connection?
We're using Python and the psycopg2 driver, but I don't care so much about the implementation itself. If the possibility exists, I can figure out how to call it.
I found notes on the asynchronous support and the aiopg library and I'm wondering if there's something in those I could possibly use.
No, you can't run a function that keeps on running after the connection you started it from terminates. When the PostgreSQL server notices that the connection has dropped, it will terminate the function and roll back the open transaction.
With PostgreSQL 9.3 or 9.4 it'd be possible to write a simple background worker to run procedures for you via a queue table, but this requires the ability to compile and install new C extensions into the server - something you can't do on Heroku.
Try to reorganize your function into smaller units of work that can be completed individually. Huge, long-running functions are problematic for other reasons, and should be avoided even if unstable connections aren't a problem.
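For illustration, a minimal sketch of the queue-table approach mentioned above (all names are invented; a single worker is assumed):

    -- Hypothetical job queue; a separate long-lived worker claims and runs jobs
    CREATE TABLE job_queue (
        id         serial PRIMARY KEY,
        payload    text NOT NULL,
        claimed_at timestamptz,   -- set when a worker picks the job up
        done_at    timestamptz    -- set when the worker finishes it
    );

    -- The short-lived client only enqueues work, then can disconnect safely
    INSERT INTO job_queue (payload) VALUES ('run-archive-step-1');

    -- The worker claims the oldest unclaimed job
    UPDATE job_queue
    SET claimed_at = now()
    WHERE id = (SELECT id FROM job_queue
                WHERE claimed_at IS NULL
                ORDER BY id LIMIT 1)
    RETURNING id, payload;

With a single worker there is no race on the claim; concurrent workers on 9.3 would need explicit locking, since FOR UPDATE SKIP LOCKED only arrived in 9.5.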
I have dozens of unlogged tables, and the docs say that an unlogged table is automatically truncated after a crash or unclean shutdown.
Based on that, I need to check some tables after the database starts to see if they are "empty" and do something about it.
So, in short, I need to execute a procedure right after the database has started.
What's the best way to do that?
PS: I'm running Postgres 9.1 on Ubuntu 12.04 server.
There is no such feature available (at the time of writing, the latest version was PostgreSQL 9.2). Your only options are:
Start a script from the PostgreSQL init script that polls the database and, when the DB is ready, locks the tables and populates them (there is a race window in which other clients can connect before your script does);
Modify the startup script to use pg_ctl start -w and invoke your script as soon as pg_ctl returns; this has the same race condition but avoids the need to poll.
Teach your application to run a test whenever it opens a new pooled connection to detect this condition, lock the tables, and populate them (a sketch of such a test follows this list); or
Don't use unlogged tables for this task if your application can't cope with them being empty when it opens a new connection.
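A sketch of that connection-time test, assuming one of the unlogged tables is called my_unlogged_table:

    -- Run once per new pooled connection; TRUE means the table was truncated
    -- by a crash and needs repopulating ('my_unlogged_table' is a placeholder)
    SELECT NOT EXISTS (SELECT 1 FROM my_unlogged_table) AS needs_repopulate;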
There's been discussion of connect-time hooks on pgsql-hackers but no viable implementation has been posted and merged.
It's possible you could do something like this with PostgreSQL bgworkers, but it'd be a LOT harder than simply polling the DB from a script.
Postgres now has pg_isready for determining if the database is ready.
https://www.postgresql.org/docs/11/app-pg-isready.html