Postgres and PyCharm and hung transactions

I keep running into this issue where my PostgreSQL database hangs because I didn't finish a transaction while debugging in PyCharm.
The log has several of these messages:
[16:30:40 PDT] unexpected EOF on client connection with an open transaction
Now the database is hung, and I don't know how to recover other than shutting down and restarting the database (vagrant halt; vagrant up).
Is there any way to clear those stuck transactions so I don't have to go through stopping and restarting the database?
Thanks for any info

I found this solution:
SELECT * FROM pg_stat_activity ORDER BY client_addr ASC, query_start ASC;
will list all your hung/idle transactions; then you can run
SELECT pg_terminate_backend(3592)
using the pid listed in the table.
This is much faster than restarting Vagrant or PostgreSQL.
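If several debugging sessions have left transactions open, the same idea generalizes; a sketch that terminates every idle-in-transaction backend except your own (the state column and the pid column name assume PostgreSQL 9.2 or later):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
AND pid <> pg_backend_pid();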

Related

Is my PostgreSQL query still running even if server closed the connection?

Postgres noob here. I have a very long PostgreSQL query running an update on about 3 million rows. I did this via psql, and after about the second hour I got this message:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
Is my query still running? I did run:
select *
from pg_stat_activity
where datname = 'mydb';
and I do still see a row with my update query, with state = active, wait_event_type = IO, and wait_event = DataFileRead. Do I need to be worried that my connection closed? Is my query still running, and is the best way to check for done-ness to keep re-running
select *
from pg_stat_activity
where datname = 'mydb';
?
Your query will not succeed. Your client lost its connection, and while the backend process that was handling your UPDATE is still running, it will notice the disconnect when it tries to return the query status on completion, and it will then abort the transaction (whether or not you had issued a BEGIN; every statement in PostgreSQL runs inside a transaction, even without an explicit BEGIN/COMMIT). You will need to re-issue the UPDATE.
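Since the transaction is doomed either way, there is no need to let the orphaned backend grind on before it rolls back; you can cancel it yourself. A sketch, where 12345 stands in for the pid shown in your pg_stat_activity row:
SELECT pg_cancel_backend(12345); -- 12345 is the orphaned backend's pid; cancelling aborts the UPDATE
If the backend ignores the cancel (for example, while stuck in uninterruptible IO), pg_terminate_backend(12345) is the stronger fallback.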

Postgresql 9.2.1 failed to initialize after full vacuum in standalone backend mode

In PostgreSQL version 9.2.1, the database stopped accepting commands to avoid wraparound data loss. The following error occurred in the pg_log:
ERROR: database is not accepting commands to avoid wraparound data loss in database "XXX"
HINT: Stop the postmaster and use a standalone backend to vacuum that database.
You might also need to commit or roll back old prepared transactions.
I executed VACUUM FULL for the database XXX in standalone backend mode. After that I tried to restart PostgreSQL, but now the server is rejecting connections; pg_isready likewise reports that the host is rejecting connections.
Is there anything I have to do after completing the vacuum process? What are the possible reasons the Postgres server fails to start? Thanks in advance.
In single user mode, run
SELECT datname, datfrozenxid FROM pg_database;
to see which databases need to be vacuumed (those with the smallest values).
Run VACUUM (FREEZE) in these databases, not VACUUM (FULL).
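Putting that together, a sketch of the whole single-user session (the data directory path is an assumption; substitute the one your installation actually uses):
$ postgres --single -D /var/lib/pgsql/data XXX
backend> SELECT datname, datfrozenxid FROM pg_database;
backend> VACUUM (FREEZE);
Exit single-user mode with Ctrl+D and then start the server normally; if it still refuses connections, the tail of pg_log usually says why.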

Postgresql large table update slows down

I run an update on a large table (e.g. 8 GB). It is a simple update of 3 fields in the table. I had no problems running it under PostgreSQL 9.1; it would take 40-60 minutes, but it worked. I run the same query on a 9.4 database (freshly created, not upgraded) and it starts the update fine but then slows down. It uses only ~2% CPU, the level of IO is 4-5 MB/s, and it just sits there. No locks, no other queries or connections, just this single update SQL on the server.
The SQL is below. "lookup" table has 12 records. The lookup can return only one row, it breaks a discrete scale (SMALLINT, -32768 .. +32767) into non-overlapping regions. "src" and "dest" tables are ~60 million records.
UPDATE dest SET
    field1 = src.field1,
    field2 = src.field2,
    field3_id = (SELECT lookup.id FROM lookup WHERE src.value BETWEEN lookup.min AND lookup.max)
FROM src
WHERE dest.id = src.id;
I thought my disk slowed down but I can copy 1 GB files in parallel to query execution and it runs fast at >40MB/s and I have only one disk (it is a VM with ISCSI media). All other disk operations are not impacted, there is plenty of IO bandwidth. At the same time PostgreSQL is just sitting there doing very little, running very slowly.
I have 2 virtualized linux servers, one runs postgresql 9.1 and another runs 9.4. Both servers have close to identical postgresql configuration.
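One way to check how close the two configurations really are is to list every non-default setting on each server and diff the output (a sketch, run from psql on both machines):
SELECT name, setting, source
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;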
Has anyone else had similar experience? I am running out of ideas. Help.
Edit
The query "ran" for 20 hours, so I had to kill the connections and restart the server. Surprisingly, this query didn't kill the connection:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid() AND datname = current_database();
and the server produced the following log:
2015-05-21 12:41:53.412 EDT FATAL: terminating connection due to administrator command
2015-05-21 12:41:53.438 EDT FATAL: terminating connection due to administrator command
2015-05-21 12:41:53.438 EDT STATEMENT: UPDATE <... this is 60,000,000 record table update statement>
Also server restart took long time, producing the following log:
2015-05-21 12:43:36.730 EDT LOG: received fast shutdown request
2015-05-21 12:43:36.730 EDT LOG: aborting any active transactions
2015-05-21 12:43:36.730 EDT FATAL: terminating connection due to administrator command
2015-05-21 12:43:36.734 EDT FATAL: terminating connection due to administrator command
2015-05-21 12:43:36.747 EDT LOG: autovacuum launcher shutting down
2015-05-21 12:44:36.801 EDT LOG: received immediate shutdown request
2015-05-21 12:44:36.815 EDT WARNING: terminating connection because of crash of another server process
2015-05-21 12:44:36.815 EDT DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
"The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory" - is this an indication of a bug in PostgreSQL?
Edit
I tested 9.1, 9.3 and 9.4. Both 9.1 and 9.3 don't experience the slowdown. 9.4 consistently slows down on large transactions. I noticed that when a transaction starts, the htop monitor indicates high CPU and the process status is "R" (running). Then it gradually changes to low CPU usage and status "D" - disk (see screenshot). My biggest question is: why is 9.4 different from 9.1 and 9.3? I have a dozen servers, and this effect is observed across the board.
Thanks everyone for the help. No matter how much I tried to emphasize the performance difference between identically configured 9.4 and earlier versions, no one seemed to pay attention to it.
The problem was solved by disabling transparent huge pages:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
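Note that these echo commands only last until the next reboot; to make the change persistent, one option is the documented transparent_hugepage=never kernel boot parameter, e.g. in /etc/default/grub on Debian/Ubuntu-style systems (a sketch; keep whatever other options your system already sets on that line):
GRUB_CMDLINE_LINUX_DEFAULT="quiet transparent_hugepage=never"
Then regenerate the grub config (update-grub on Debian/Ubuntu) and reboot.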
Here are some resources I found helpful in researching the issue:
* https://dba.stackexchange.com/questions/32890/postgresql-pg-stat-activity-shows-commit/34169#34169
* https://lwn.net/Articles/591723/
* https://blogs.oracle.com/linux/entry/performance_issues_with_transparent_huge
I'd suspect a lot of disk seeking - 5 MB/s is just about right for very random IO on an ordinary (spinning) hard drive.
As you constantly replace basically all your rows, I'd try setting the dest table's fillfactor to about 45% (ALTER TABLE dest SET (fillfactor=45);) and then running CLUSTER dest USING dest_pkey; (assuming that is the primary key index name). This would allow updated row versions to be placed in the same disk sector.
Additionally, running CLUSTER src USING src_pkey;, so that both tables have their data in the same physical order on disk, can also help.
Also remember to run VACUUM dest; after every update that large, so the space taken by old row versions can be reused in subsequent updates.
Your old server probably evolved its fillfactor naturally over multiple updates. On the new server the table is packed to 100%, so updated rows have to be placed at the end.
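To see what fillfactor a table currently has, you can inspect its storage options (a sketch; an empty reloptions means the default of 100):
SELECT relname, reloptions
FROM pg_class
WHERE relname IN ('dest', 'src');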
If only a few of the target rows actually change, you can avoid generating new row versions by using IS DISTINCT FROM. This can prevent a lot of useless disk traffic.
UPDATE dest SET
    field1 = src.field1,
    field2 = src.field2,
    field3_id = lu.id
FROM src
JOIN lookup lu ON src.value BETWEEN lu.min AND lu.max
WHERE dest.id = src.id
-- avoid generating unnecessary row versions
AND (dest.field1 IS DISTINCT FROM src.field1
     OR dest.field2 IS DISTINCT FROM src.field2
     OR dest.field3_id IS DISTINCT FROM lu.id
);

Heroku Postgres: Too many connections. How do I kill these connections?

I have an app running on Heroku. This app has a Postgres 9.2.4 (Dev) addon installed. To access my online database I use Navicat Postgres. Sometimes Navicat doesn't cleanly close the connections it sets up with the Postgres database. The result is that after a while there are 20+ open connections to the Postgres database, and my Postgres plan only allows 20 simultaneous connections. So with 20+ open connections my Postgres database is now unreachable (too many connections).
I know this is a Navicat problem, and I'm trying to solve it on that end. But when it does happen (too many connections), how can I fix it (e.g. close all connections)?
I've tried all of the following things, without result.
Closed Navicat & restarted my computer (OS X 10.9)
Restarted my Heroku application (heroku restart)
Tried to restart the online database, but I found out there is no option to do this
Manually closed all connections from OS X to the IP of the Postgres server
Restarted our router
I think it's obvious there are some 'dead' connections at the Postgres side. But how do I close them?
Maybe have a look at what heroku pg:kill can do for you? https://devcenter.heroku.com/articles/heroku-postgresql#pg-ps-pg-kill-pg-killall
heroku pg:killall will kill all open connections, but that may be a blunt instrument for your needs.
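For a single runaway connection, heroku pg:kill takes the pid reported by pg_stat_activity; something like the following sketch, where 1234 and the app name are placeholders:
heroku pg:kill 1234 --app your-app-name
Per the article linked above, adding --force escalates from cancelling the backend's current query to terminating the backend entirely.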
Interestingly, you can actually kill specific connections using heroku's dataclips.
To get a detailed list of connections, you can query via dataclips:
SELECT * FROM pg_stat_activity;
In some cases, you may want to kill all connections associated with an IP address (your laptop or in my case, a server that was now destroyed).
You can see how many connections belong to each client IP using:
SELECT client_addr, count(*)
FROM pg_stat_activity
WHERE client_addr is not null
AND client_addr <> (select client_addr from pg_stat_activity where pid=pg_backend_pid())
GROUP BY client_addr;
which will list the number of connections per IP excluding the IP that dataclips itself uses.
To actually kill the connections, you pass their "pid" to pg_terminate_backend(). In the simple case:
SELECT pg_terminate_backend(1234)
where 1234 is the offending PID you found in pg_stat_activity.
In my case, I wanted to kill all connections associated with a (now dead) server, so I used:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE'
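One caveat: if your own session's client_addr happens to match, the query above will terminate your session as well. A slightly safer sketch excludes the current backend:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE'
AND pid <> pg_backend_pid();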
1) First, log in to Heroku with your correct id (in case you have multiple accounts) using heroku login.
2) Then run heroku apps to get a list of your apps, and copy the name of the one that has the PostgreSQL db installed.
3) Finally, run heroku pg:killall --app appname to terminate all the connections.
From the Heroku documentation (emphasis is mine):
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on Starter Tier (dev and basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.
Because Heroku does not provide superuser access your options are rather limited to the above.
Restart server
heroku restart --app <app_name>
It will close all connections and restart.
As the superuser (e.g. "postgres"), you can kill every session but your current one with a query like this:
select pg_cancel_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
If they do not go away, you might have to use a stronger "kill", but certainly test with pg_cancel_backend() first.
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();

Restart Heroku Postgres Dev DB

I got this error from a Play 2.0.3 Java application. How can I restart the Heroku Postgres Dev DB? I could not find any instructions for restarting the DB in the Heroku help center.
app[web.1]: Caused by: org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections
The error message you have there isn't a reason to restart the database; it isn't a database problem. Your application is holding too many connections, probably because you forgot to set up its connection pool. That isn't a DB server problem, and you can fix it without restarting the DB server.
If you stop your Play application or reconfigure its connection pool, the problem will go away.
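As a sketch of what capping the pool might look like in application.conf - this assumes Play 2.0's bundled BoneCP pool, and the exact key names may differ in other Play versions:
# keep partitionCount * maxConnectionsPerPartition safely under Heroku's 20-connection limit
db.default.partitionCount=1
db.default.maxConnectionsPerPartition=10
db.default.minConnectionsPerPartition=2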
Another option is to put your Heroku instance in maintenance mode then take it out again.
Since Heroku doesn't allow you to connect as a superuser (for good reasons), you can't use that reserved superuser slot to connect and manage connections like you would with normal PostgreSQL.
See also:
Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
If you're a non-Heroku user who found this:
With normal PostgreSQL you can disconnect clients from the server end using a PostgreSQL connection to your server. See how it says there's a slot reserved for "superuser connections"? Connect to Pg as a superuser (the postgres user by default) using PgAdmin-III or psql.
Once you're connected you can see other clients with:
SELECT * FROM pg_stat_activity;
If you want to terminate every connection except your own you can run:
SELECT procpid, pg_terminate_backend(procpid)
FROM pg_stat_activity WHERE procpid <> pg_backend_pid();
Add AND datname = current_database() and/or AND usename = '<target-user-name>' as appropriate.
I think I should have just added this in reply to the previous answer, but I couldn't figure out how to do that, so...
As an update to Liron Yahdav's comment in the accepted answer's thread: the "non-heroku users who found this" solution worked for me on a Heroku PostgreSQL dev database, but with a slight modification to the query Liron provided. Here is my modified query: SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND usename='<your_username>';
It seems that procpid has changed to pid.
There is no way to restart the whole database. However, Heroku offers a simple way to drop all connections, which solves the problem in the majority of cases:
heroku pg:killall