Restart Heroku Postgres Dev DB - postgresql

I got this error from a Play 2.0.3 Java application. How can I restart Heroku Postgres Dev DB? I could not find any instructions for restarting the DB in the Heroku help center.
app[web.1]: Caused by: org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections

The error message you have there isn't a reason to restart the database; it isn't a database problem. Your application is holding too many connections, probably because you forgot to set up its connection pool. You can fix that without restarting the DB server.
If you stop your Play application or reconfigure its connection pool the problem will go away.
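If you can still open one connection (for example via heroku pg:psql), a quick sanity check is to compare the number of open sessions against the server's limit. A minimal sketch using only standard catalog views:
-- count open sessions and show the configured ceiling
SELECT count(*) AS open_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
If open_connections is at or near the limit, the application (or its missing pool) is the culprit.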
Another option is to put your Heroku instance in maintenance mode then take it out again.
Since Heroku doesn't allow you to connect as a superuser (for good reasons), you can't use that reserved superuser slot to connect and manage connections like you would with normal PostgreSQL.
See also:
Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
If you're a non-Heroku user who found this:
With normal PostgreSQL you can disconnect your clients from the server end, using a PostgreSQL connection to your server. See how the error says there's a slot reserved for "superuser connections"? Connect to Pg as a superuser (the postgres user by default) using PgAdmin-III or psql.
Once you're connected you can see other clients with:
SELECT * FROM pg_stat_activity;
If you want to terminate every connection except your own you can run:
SELECT procpid, pg_terminate_backend(procpid)
FROM pg_stat_activity WHERE procpid <> pg_backend_pid();
Add AND datname = current_database() and/or AND usename = '<target-user-name>' as appropriate.
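Put together, a filtered version might look like this (the user name here is a placeholder):
-- terminate every session for one user on the current database, except our own
SELECT procpid, pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE procpid <> pg_backend_pid()
  AND datname = current_database()
  AND usename = 'target_user';
(On PostgreSQL 9.2 and later, use pid instead of procpid, as the next answer notes.)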

I think I should have just added this in reply to the previous answer, but I couldn't figure out how to do that, so...
As an update to Liron Yahdav's comment in the accepted answer's thread: the "non-Heroku users who found this" solution worked for me on a Heroku PostgreSQL dev database, but with a slight modification to the query Liron provided. Here is my modified query: SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND usename='<your_username>';
It seems that procpid was renamed to pid in PostgreSQL 9.2.

There is no way to restart the whole database. However, Heroku offers a simple way to stop all connections, which solves the problem in the majority of cases:
heroku pg:killall

Related

"error: too many connections for database 'postgres'" when trying to connect to any Postgres 13 instance

My team and I are currently experiencing an issue where we can't connect to Cloud SQL's Postgres instance(s) from anything other than the psql CLI tool. We get a too many connections for database "postgres" error (in pgAdmin, DBeaver, and our Node TypeORM/pg backend). It initially happened on our (only) Postgres database instance. After restarting, stopping and starting again, and increasing machine CPU/memory all proved to do nothing, I deleted the database instance entirely and created a new one from scratch.
However, after a few hours the problem came back. I know that we don't actually have too many connections, as I can query pg_stat_activity from the psql command line and see the open connections.
Only one of those (under the postgres username) is ours.
My coworker also can't connect at all - not even from the psql CLI.
If it matters, we are using PostgreSQL 13, europe-west2 (London), single zone availability, db-g1-small instance with 1.7GB memory, 10GB HDD, and we have public IP enabled and the correct IP addresses whitelisted.
I'd really appreciate it if anyone has any insights into what's causing this.
EDIT: I further increased the instance size (to no longer be a shared core), and I managed to successfully connect my backend to it. However my psql cli no longer works - it appears that only the first client to connect is allowed to connect after a restart (even if it disconnects, other clients can't connect...).
From the error message, it is clear that the database "postgres" has a custom connection limit (set, for example, by ALTER DATABASE postgres CONNECTION LIMIT 1). And apparently, it is quite small. Why is everyone trying to connect to that database anyway? Usually the 'postgres' database is reserved for maintenance operations, and you should create other databases for daily use.
You can see the setting with:
select datconnlimit from pg_database where datname='postgres';
I don't know if the low setting is something you did, or maybe Google does it on its own for their cloud offering.
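If the limit turns out to have been set manually and you have the privileges to change it, the counterpart statement lifts it again (-1 means no per-database cap, leaving only max_connections):
ALTER DATABASE postgres CONNECTION LIMIT -1;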
@jjanes had the right idea.
I created another database within the Cloud SQL instance that wasn't named postgres and then it was fine.
It wasn't anything to do with maximum connection settings (as this was within Google Cloud SQL) or not closing connections (as TypeORM/pg does this already).

pgAdmin connection error to pgpool

I'm using pgAdmin III to manage my database from a client. I have a master and a slave PostgreSQL running in streaming replication mode. There's another pgpool server in front of them to do connection pooling and load-balancing.
When I connected pgAdmin to pgpool, I got:
Error connecting to the server: ERROR: unable to read message kind
DETAIL: kind does not match between master(52) slot[1] (45)
I had no problem connecting to it before, but somehow pgpool died and I restarted it, and then this error popped up out of nowhere.
The pgpool and PostgreSQL servers are running well. I can access them with psql -h hostname database user. The app server can also connect, and the web app is running as usual. I just cannot access it from pgAdmin.
http://www.sraoss.jp/pipermail/pgpool-general/2012-March/000297.html
In short: max_connections is exceeded on the Postgres cluster.
What I assume has happened: you restarted pgpool and it opened new connections to Postgres, while the old ones were left idle or idle in transaction (depending on timeout). So after the restart pgpool consumed double the num_init_children amount of connections and reached the actual allowed maximum.
Killing the old (pre-restart) pgpool connections should fix it. Try running pg_terminate_backend(pid) on Postgres to do it. Also be careful to kill the right connections. At least check
select pid, query, client_addr
from pg_stat_activity where now() - query_start > '1 day'::interval
or something similar, to catch only the zombies.
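Once you are confident the filter only matches stale sessions, the same predicate can drive the kill; a sketch:
-- terminate only sessions whose last statement started more than a day ago
select pid, pg_terminate_backend(pid)
from pg_stat_activity
where now() - query_start > '1 day'::interval
  and pid <> pg_backend_pid();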

Postgres: how to disconnect without logging out?

Using psql, how can I disconnect from an established connection without logging out?
To be more specific: Assuming the database server runs on localhost, I connect to the db server using
psql -U <user>
after that I am in the PSQL console. From there I can connect to one of the databases using
<user>=# \connect <database>
Now here is the question: how can I disconnect from this session without logging off from the psql console? I tried \disconnect without any luck, and using \q not only closes the session but kicks me out of psql entirely.
So is there a command which allows me to disconnect from one database and reconnect to another (using \connect <another_database>) without logging off from / shutting down psql?
According to the PostgreSQL docs at the Postgres Manual, there is no meta-command which will only close the current connection (without also closing the psql application via \q). The only way to close the current connection is to connect to another database using \connect, as described in the docs mentioned above:
If the new connection is successfully made, the previous connection is closed.
Similar information is also provided in the second answer to How to Disconnect from a database and go back to the default database in PostgreSQL? (thanks to @Barbara Laird for pointing this out)
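In practice, the closest thing to "disconnecting" is therefore to connect back to a maintenance database such as postgres, which closes the previous connection as a side effect:
<user>=# \connect postgres
You are now connected to database "postgres" as user "<user>".
From there you can \connect <another_database> whenever you like, without ever leaving psql.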

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
running CREATE statements on a read-only replica (the entire instance is read-only)
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this would be that default_transaction_read_only would be ON in the postgresql.conf file, and set to OFF for the database postgres, the one that the invocation of psql connects to, through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
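If you want to verify whether such an override exists, both the effective value and any recorded per-database settings are easy to inspect:
-- effective value for the current session
SHOW default_transaction_read_only;
-- per-database overrides recorded by ALTER DATABASE ... SET ...
SELECT d.datname, s.setconfig
FROM pg_db_role_setting s
JOIN pg_database d ON d.oid = s.setdatabase;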
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal (dropdb exercises) and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled once the database could no longer write anything. I am using Postgres on Azure. I don't know if the same thing would happen on a dedicated server.
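If you suspect the same cause, the current size is easy to check from SQL and compare against your plan's storage quota:
-- human-readable size of the database you are connected to
SELECT pg_size_pretty(pg_database_size(current_database()));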
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had master and replica nodes, and the master node had become a replica, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns true then it is "read-only"; I suppose you should switch to using whatever master node you have now.
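On 9.6 and later you can also ask a standby where it is streaming from, which helps identify the current master (an empty result means the node is not receiving WAL):
select status, conninfo from pg_stat_wal_receiver;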
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's "Read-only connection" setting was turned on.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
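A quick way to tell from inside a session whether you have landed on a read-only node (in my understanding this also holds for Aurora's -ro- reader endpoints):
SHOW transaction_read_only;
-- or, as in the answers above:
SELECT pg_is_in_recovery();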
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little bit unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the dataclip tab, I did this:
Choose Resources from (Overview | Resources | Deploy | Metrics | Activity | Access | Settings).
Choose Settings out of (Overview | Durability | Settings | Dataclips).
Then in Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info in below, and enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It'll ask for the password; copy-paste it from there, and then you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server gets into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not having granted permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
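If you are not sure which sequence backs a serial column, PostgreSQL can tell you (table and column names here are placeholders):
-- returns e.g. 'public.my_table_id_seq', or NULL if no sequence is owned by the column
SELECT pg_get_serial_sequence('my_table', 'id');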
If you are facing this issue with an RDS instance cluster, check your endpoint and use the Writer instance endpoint; then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click on Data Source Properties (Shift + Enter) > (select your data source) >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to master when PostgreSQL was in HA mode. You can check this event in Service Health, and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

Heroku Postgres: Too many connections. How do I kill these connections?

I have an app running on Heroku. This app has a Postgres 9.2.4 (Dev) add-on installed. To access my online database I use Navicat Postgres. Sometimes Navicat doesn't cleanly close the connections it sets up with the Postgres database. The result is that after a while there are 20+ open connections to the Postgres database. My Postgres install only allows 20 simultaneous connections, so with the 20+ open connections my Postgres database is now unreachable (too many connections).
I know this is a problem with Navicat and I'm trying to solve it on that end. But if it happens (that there are too many connections), how can I solve it (e.g. close all connections)?
I've tried all of the following things, without result.
Closed Navicat & restarted my computer (OS X 10.9)
Restarted my Heroku application (heroku restart)
Tried to restart the online database, but I found out there is no option to do this
Manually closed all connections from OS X to the IP of the Postgres server
Restarted our router
I think it's obvious there are some 'dead' connections at the Postgres side. But how do I close them?
Maybe have a look at what heroku pg:kill can do for you? https://devcenter.heroku.com/articles/heroku-postgresql#pg-ps-pg-kill-pg-killall
heroku pg:killall will kill all open connections, but that may be a blunt instrument for your needs.
Interestingly, you can actually kill specific connections using heroku's dataclips.
To get a detailed list of connections, you can query via dataclips:
SELECT * FROM pg_stat_activity;
In some cases, you may want to kill all connections associated with an IP address (your laptop or in my case, a server that was now destroyed).
You can see how many connections belong to each client IP using:
SELECT client_addr, count(*)
FROM pg_stat_activity
WHERE client_addr is not null
AND client_addr <> (select client_addr from pg_stat_activity where pid=pg_backend_pid())
GROUP BY client_addr;
which will list the number of connections per IP excluding the IP that dataclips itself uses.
To actually kill the connections, you pass their "pid" to pg_terminate_backend(). In the simple case:
SELECT pg_terminate_backend(1234)
where 1234 is the offending PID you found in pg_stat_activity.
In my case, I wanted to kill all connections associated with a (now dead) server, so I used:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE'
1) First, log in to Heroku with the correct ID (in case you have multiple accounts) using heroku login.
2) Then run heroku apps to get a list of your apps, and copy the name of the one that has the PostgreSQL DB installed.
3) Finally, run heroku pg:killall --app appname to get all the connections terminated.
From the Heroku documentation (emphasis is mine):
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on Starter Tier (dev and basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.
Because Heroku does not provide superuser access, your options are rather limited to the above.
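Even without superuser access you can usually still see your own sessions, so a per-role tally helps confirm which user is hitting the 20-connection cap:
-- connections per role, busiest first
SELECT usename, count(*)
FROM pg_stat_activity
GROUP BY usename
ORDER BY count(*) DESC;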
Restart server
heroku restart --app <app_name>
It will close all connections and restart.
As the superuser (e.g. "postgres"), you can kill every session but your current one with a query like this:
select pg_cancel_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
If they do not go away, you might have to use a stronger "kill", but certainly test with pg_cancel_backend() first.
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();