I am using PostgreSQL and the Flask-SQLAlchemy extension for Flask.
# app.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_POOL_SIZE'] = 20
db = SQLAlchemy(app)
# views.py
user = User(***)
db.session.add(user)
db.session.commit()
Note that I am not closing the connection as suggested by documentation:
You have to commit the session, but you don’t have to remove it at the end of the request, Flask-SQLAlchemy does that for you.
However, when I run the following PostgreSQL query I can see some IDLE connections:
SELECT * FROM pg_stat_activity;
Does it mean that I have a problem with Flask-SQLAlchemy not closing connections? I am worried about it because I recently got the "remaining connection slots are reserved for non-replication superuser connections" error.
SQLAlchemy sets up a pool of connections that will remain opened for performance reasons. The PostgreSQL has a max_connections config option. If you are exceeding that value, you need to either lower your pool count or raise the max connection count. Given that the default max is 100, and you've set your pool to 20, what's more likely is that there are other applications with open connections to the same database. max_connections is a global setting, so it must account for all applications connecting to the database server.
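As a rough sketch of that accounting (plain Python; the helper name and numbers are made up for illustration): the slots left for ordinary roles are max_connections minus superuser_reserved_connections minus the connections already in use across all applications.

```python
def free_slots(max_connections, reserved_superuser, used):
    """Connection slots still available to ordinary (non-superuser) roles."""
    return max_connections - reserved_superuser - used

# With PostgreSQL defaults (max_connections=100, superuser_reserved_connections=3),
# four apps each holding a pool of 20 leave 100 - 3 - 80 = 17 ordinary slots.
print(free_slots(100, 3, 80))
```

Once `used` reaches `max_connections - reserved_superuser`, ordinary roles hit the "remaining connection slots are reserved" error even though superusers can still connect.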
Related
After 10 connections to a Postgres RDS database, I start getting the error Too Many Connections or Timed-out waiting to acquire database connection.
But when I check max_connections it shows 405, and pg_roles shows -1 as rolconnlimit. If none of these ceilings are hit, why can I not have more than 10 concurrent connections for that user?
A comment from jjanes on another question gave me a pointer: the bottleneck was the datconnlimit setting from pg_database. Changing it with the query below fixed the issue:
ALTER DATABASE mydb WITH CONNECTION LIMIT 50;
I have several clients (FreeRadius servers) that connect to a single central Pgbouncer.
When I utilise one of these FreeRadius servers I can see that several database connections are created by Pgbouncer.
select *
from pg_stat_activity
where datname = 'master_db';
When I utilise the same FreeRadius server again, pgbouncer isn't re-using the existing open connections but keeps creating new ones. Once I reach over 30 connections the whole database comes to a halt.
PGbouncer.ini
server_idle_timeout = 10
max_client_conn = 500
default_pool_size = 30
postgresql.conf: (Postgres 13)
max_connections = 150
Based on my research, pgbouncer is supposed to allocate a single database connection to a client (from the default_pool_size) and then create as many internal connections as the client needs (up to max_client_conn).
But what I observe here is the opposite. What am I missing, please?
UPDATE:
The solution Laurenz suggested works but throws this error, when using asyncpg behind the scenes:
NOTE: pgbouncer with pool_mode set to "transaction" or "statement" does not support prepared statements properly. You have two options: * if you are using pgbouncer for connection pooling to a single server, switch to the connection pool functionality provided by asyncpg, it is a much better option for this purpose; * if you have no option of avoiding the use of pgbouncer, then you can set statement_cache_size to 0 when creating the asyncpg connection object.
You will need to use the session option for POOL_MODE so that pgbouncer is able to maintain the connection sessions opened by asyncpg, which is necessary because of its asynchronous nature.
Set the following in your pgbouncer.ini file:
[pgbouncer]
pool_mode = session
...
or if using env variable
POOL_MODE=session
extra source: https://magicstack.github.io/asyncpg/current/faq.html#why-am-i-getting-prepared-statement-errors
If you are getting intermittent prepared statement "asyncpg_stmt_xx" does not exist or prepared statement "asyncpg_stmt_xx" already exists errors, you are most likely not connecting to the PostgreSQL server directly, but via pgbouncer. pgbouncer, when in the "transaction" or "statement" pooling mode, does not support prepared statements. You have several options:
* if you are using pgbouncer only to reduce the cost of new connections (as opposed to using pgbouncer for connection pooling from a large number of clients in the interest of better scalability), switch to the connection pool functionality provided by asyncpg, it is a much better option for this purpose;
* disable automatic use of prepared statements by passing statement_cache_size=0 to asyncpg.connect() and asyncpg.create_pool() (and, obviously, avoid the use of Connection.prepare());
* switch pgbouncer's pool_mode to session.
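A minimal sketch of the second option, assuming asyncpg's documented statement_cache_size parameter (the helper name and DSN are hypothetical):

```python
def pgbouncer_connect_kwargs(dsn):
    """Keyword arguments for asyncpg.connect() / asyncpg.create_pool()
    when connecting through pgbouncer in transaction or statement mode:
    statement_cache_size=0 disables automatic prepared statements."""
    return {"dsn": dsn, "statement_cache_size": 0}

# Shape of the actual call (needs a running server, shown for illustration):
# conn = await asyncpg.connect(**pgbouncer_connect_kwargs(
#     "postgresql://user:pass@pgbouncer-host:6432/mydb"))
```

The trade-off is that every statement is re-parsed on each execution, which costs some performance but works in any pgbouncer pool mode.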
I am getting an error "remaining connection slots are reserved for non-replication superuser connections" at one of PostgreSQL instances.
However, when I run the query below as a superuser to check available connections, I find that enough connections are available, yet I still get the same error.
select max_conn, used, res_for_super,
       max_conn - used - res_for_super as res_for_normal
from
  (select count(*) as used from pg_stat_activity) t1,
  (select setting::int as res_for_super from pg_settings
   where name = 'superuser_reserved_connections') t2,
  (select setting::int as max_conn from pg_settings
   where name = 'max_connections') t3;
Output
I searched for this error and everyone suggests increasing the max connections, as in the link below.
Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"
EDIT
I restarted the server, and after some time used connections were almost 210, but I was able to connect to the server as a normal user.
This might not be a direct solution to your problem, but I recommend using a middleware like pgbouncer. It helps keep a lower, fixed number of open connections to the db server.
Your client would connect to pgbouncer, and pgbouncer would internally pick one of its already-opened connections to use for your client's queries. If the number of clients exceeds the number of possible connections, clients are queued until one is available, allowing some breathing room in situations of high traffic while keeping the db server under a tolerable load.
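That queueing behaviour can be sketched with a toy model (plain Python, all names hypothetical): each request holds one server connection for one "tick", and at most pool_size requests are served per tick while the rest wait in a queue.

```python
from collections import deque

def run_pool(requests, pool_size):
    """Toy pooler: serve requests in batches of at most pool_size.

    Returns a list of ticks; each tick is the batch of requests that
    held a server connection during that tick, everyone else waited."""
    waiting = deque(requests)
    ticks = []
    while waiting:
        batch = [waiting.popleft() for _ in range(min(pool_size, len(waiting)))]
        ticks.append(batch)
    return ticks

# 7 clients against a pool of 3: nobody is refused, the overflow just waits.
print(run_pool(list(range(7)), 3))
```

The real pgbouncer is of course far more involved (per-database pools, timeouts, pool modes), but the essential point holds: the database only ever sees pool_size connections, no matter how many clients arrive.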
I'm sort of an "accidental DBA", so apologies for a real noob question here. I'm using pgbouncer with pool_mode = transaction. Yesterday I started getting errors in my php log:
no more connections allowed (max_client_conn)
I had max_client_conn = 150 to match max_connections in my postgresql.conf.
So my first question is, should pgbouncer max_client_conn be set equal to postgresql max_connections, or am I totally misunderstanding that relationship?
I have 20 databases on a single postgres instance behind pgbouncer with the default default_pool_size = 20. So should max_client_conn be 400? (pool_size * number_of_databases)?
Thanks
https://pgbouncer.github.io/config.html
max_client_conn Maximum number of client connections allowed.
default_pool_size How many server connections to allow per user/database pair.
so max_client_conn should be way larger than Postgres's max_connections, otherwise why use a connection pooler at all?
If you have 20 databases and set default_pool_size to 20, you will allow pgbouncer to open up to 400 connections to the db, so you need to adjust postgresql.conf max_connections to 400 and set pgbouncer's max_client_conn to something like 4000 (to have an average of 10 client connections in the pool for each actual db connection).
This answer is only meant to provide an example for understanding the settings, not as a statement to follow literally. (e.g. I just saw a config with:
max_client_conn = 10000
default_pool_size = 100
max_db_connections = 100
max_user_connections = 100
for a cluster with two databases and max_connections set to 100). Here the logic is different; also note that max_db_connections is set, and in fact connection limits are set individually per database in the pgbouncer [databases] section.
So play with small settings to get a feel for how the options influence each other; that is the best way to determine max_client_conn for pgbouncer.
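The arithmetic behind the 20-databases example above can be sketched as (helper name hypothetical):

```python
def max_server_connections(n_databases, default_pool_size):
    """Upper bound on server-side connections pgbouncer may open:
    one pool of default_pool_size per database (per user/database pair,
    assuming a single user here)."""
    return n_databases * default_pool_size

# 20 databases, default_pool_size = 20: up to 400 server connections,
# so postgresql.conf max_connections must be at least 400.
print(max_server_connections(20, 20))
```

max_client_conn, by contrast, caps client-side connections into pgbouncer and can safely be much larger, since excess clients just queue.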
Like almost everyone else, you are setting your pool size way too high. Don't let your PostgreSQL server do the connection pooling; if you do, it severely hurts your performance.
The optimal setting for how many concurrent connections to PostgreSQL is:
connections = ((core_count * 2) + effective_spindle_count)
That means that if you are running your database on a 2-core server, then your total pool size from pgbouncer should be no more than (2 * 2) + 1 = 5 (assuming one spindle). Pgbouncer is a lot better at handling pooling than PostgreSQL, so let it do that.
So leave max_connections in your postgresql.conf at its default of 100 (no reason to change it, as it is a maximum; it should also always be higher than what your application needs, since logging, admin and backup processes need connections as well).
And in your pgbouncer.ini file set
max_db_connections=5
default_pool_size=5
max_client_conn=400
For more information https://www.percona.com/blog/2018/06/27/scaling-postgresql-with-pgbouncer-you-may-need-a-connection-pooler-sooner-than-you-expect/
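The sizing formula above as a small sketch (function name hypothetical):

```python
def optimal_pool_size(core_count, effective_spindle_count):
    """Rule of thumb for concurrent active PostgreSQL connections:
    connections = (core_count * 2) + effective_spindle_count."""
    return core_count * 2 + effective_spindle_count

# The 2-core, single-spindle server from the answer above:
print(optimal_pool_size(2, 1))
```

On SSD-backed or cloud hardware the "spindle" term is fuzzy, so treat the result as a starting point to benchmark around rather than a hard limit.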
I have an app running on Heroku. This app has a Postgres 9.2.4 (Dev) add-on installed. To access my online database I use Navicat Postgres. Sometimes Navicat doesn't cleanly close the connections it sets up with the Postgres database. The result is that after a while there are 20+ open connections to the Postgres database, but my Postgres install only allows 20 simultaneous connections. So with 20+ open connections my Postgres database is now unreachable (too many connections).
I know this is a problem of Navicat and I'm trying to solve this on that end. But if it happens (that there are too many connections), how can I solve this (e.g. close all connections).
I've tried all of the following things, without result.
Closed Navicat & restarted my computer (OS X 10.9)
Restarted my Heroku application (heroku restart)
Tried to restart the online database, but I found out there is no option to do this
Manually closed all connections from OS X to the IP of the Postgres server
Restarted our router
I think it's obvious there are some 'dead' connections at the Postgres side. But how do I close them?
Maybe have a look at what heroku pg:kill can do for you? https://devcenter.heroku.com/articles/heroku-postgresql#pg-ps-pg-kill-pg-killall
heroku pg:killall will kill all open connections, but that may be a blunt instrument for your needs.
Interestingly, you can actually kill specific connections using heroku's dataclips.
To get a detailed list of connections, you can query via dataclips:
SELECT * FROM pg_stat_activity;
In some cases, you may want to kill all connections associated with an IP address (your laptop or in my case, a server that was now destroyed).
You can see how many connections belong to each client IP using:
SELECT client_addr, count(*)
FROM pg_stat_activity
WHERE client_addr is not null
AND client_addr <> (select client_addr from pg_stat_activity where pid=pg_backend_pid())
GROUP BY client_addr;
which will list the number of connections per IP excluding the IP that dataclips itself uses.
To actually kill the connections, you pass their "pid" to pg_terminate_backend(). In the simple case:
SELECT pg_terminate_backend(1234)
where 1234 is the offending PID you found in pg_stat_activity.
In my case, I wanted to kill all connections associated with a (now dead) server, so I used:
SELECT pg_terminate_backend(pid), host(client_addr)
FROM pg_stat_activity
WHERE host(client_addr) = 'IP HERE'
1) First, log in to Heroku with the correct id (in case you have multiple accounts) using heroku login.
2) Then run heroku apps to get a list of your apps, and copy the name of the one that has the PostgreSQL db installed.
3) Finally, run heroku pg:killall --app appname to terminate all the connections.
From the Heroku documentation (emphasis is mine):
FATAL: too many connections for role
FATAL: too many connections for role "[role name]"
This occurs on Starter Tier (dev and basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.
Because Heroku does not provide superuser access your options are rather limited to the above.
Restart server
heroku restart --app <app_name>
It will close all connections and restart the app.
As the superuser (eg. "postgres"), you can kill every session but your current one with a query like this:
select pg_cancel_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();
If they do not go away, you might have to use a stronger "kill", but certainly test with pg_cancel_backend() first.
select pg_terminate_backend(pid)
from pg_stat_activity
where pid <> pg_backend_pid();