PostgreSQL: max_connections: SET application_name eating connections

There are many connections in PostgreSQL eating up the connection limit. Many of them are named PostgreSQL JDBC Driver and show the query SET application_name = 'PostgreSQL JDBC Driver' (see attached image).
This causes: FATAL: sorry, too many clients already.
max_connections was 100, and I have increased it to 150, but that did not solve it.
Note that I am using the ThingWorx platform, which connects to PostgreSQL.

If the connection property assumeMinServerVersion is set to at least 9.0, then application_name is sent only in the startup packet instead of as a separate SET statement:
jdbc:postgresql://<db_address>:5432/<db_name>?assumeMinServerVersion=9.4
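To see which applications and states are actually consuming the slots, a grouped count over pg_stat_activity can help (a minimal diagnostic sketch, assuming PostgreSQL 9.2+ column names):
SELECT application_name, state, count(*)
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY count(*) DESC;
If most rows show PostgreSQL JDBC Driver in the idle state, the client-side pool (ThingWorx in this case) is the one holding connections open.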

Related

Why can't I have more than 10 concurrent connections to a Postgres RDS database?

After 10 connections to a Postgres RDS database I start getting errors: Too Many Connections or Timed out waiting to acquire database connection.
But when I check max_connections it shows 405. pg_roles shows -1 as rolconnlimit. If none of these ceilings are hit, why can I not have more than 10 concurrent connections for that user?
A comment from @jjanes on another question gave me a pointer: the bottleneck was the datconnlimit setting in pg_database. Changing it with the query below fixed the issue:
ALTER DATABASE mydb WITH CONNECTION LIMIT 50;
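To check the current per-database limit before (or after) changing it, you can query pg_database directly; a small sketch, where -1 means no limit:
SELECT datname, datconnlimit FROM pg_database WHERE datname = 'mydb';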

Postgresql | remaining connection slots are reserved for non-replication superuser connections

I am getting the error "remaining connection slots are reserved for non-replication superuser connections" on one of my PostgreSQL instances.
However, when I run the query below as a superuser to check available connections, I find that enough connections are available, but I still get the same error.
select max_conn, used, res_for_super, max_conn - used - res_for_super res_for_normal
from
  (select count(*) used from pg_stat_activity) t1,
  (select setting::int res_for_super from pg_settings where name = 'superuser_reserved_connections') t2,
  (select setting::int max_conn from pg_settings where name = 'max_connections') t3;
(query output screenshot omitted; it showed free connection slots remaining)
I searched for this error and everyone suggests increasing max_connections, as in the link below:
Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"
EDIT
I restarted the server, and after some time the used connections were almost 210, but I was able to connect to the server as a normal user.
Might not be a direct solution to your problem, but I recommend using a middleware like pgbouncer. It helps keep a lower, fixed number of open connections to the db server.
Your client would connect to pgbouncer, and pgbouncer would internally pick one of its already opened connections to use for your client's queries. If the number of clients exceeds the number of available connections, clients are queued until one is available, which allows some breathing room in situations of high traffic while keeping the db server under tolerable load.
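As a rough illustration only (the database name, paths, and numbers below are placeholders, not values to copy), a minimal pgbouncer.ini for this kind of setup could look like:
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 400
default_pool_size = 20
Clients then connect to port 6432 instead of 5432, and pgbouncer keeps at most default_pool_size server connections open per user/database pair.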

how to determine max_client_conn for pgbouncer

I'm sort of an "accidental DBA", so apologies for a real noob question here. I'm using pgbouncer with pool_mode = transaction. Yesterday I started getting errors in my PHP log:
no more connections allowed (max_client_conn)
I had max_client_conn = 150 to match max_connections in my postgresql.conf.
So my first question is, should pgbouncer max_client_conn be set equal to postgresql max_connections, or am I totally misunderstanding that relationship?
I have 20 databases on a single Postgres instance behind pgbouncer, with the default default_pool_size = 20. So should max_client_conn be 400 (pool_size * number_of_databases)?
Thanks
https://pgbouncer.github.io/config.html
max_client_conn Maximum number of client connections allowed.
default_pool_size How many server connections to allow per user/database pair.
So max_client_conn should be much larger than Postgres's max_connections; otherwise, why use a connection pooler at all?
If you have 20 databases and set default_pool_size to 20, you allow pgbouncer to open up to 400 connections to the database, so you need to raise max_connections in postgresql.conf to 400 and set pgbouncer's max_client_conn to something like 4000 (to have, on average, 10 client connections in the pool for each actual database connection).
This answer is only meant to provide an example for understanding the settings, not as a statement to follow literally. (E.g., I just saw a config with:
max_client_conn = 10000
default_pool_size = 100
max_db_connections = 100
max_user_connections = 100
for a cluster with two databases and max_connections set to 100.) Here the logic is different; also note that max_db_connections is set and, in fact, connection limits are set individually per database in pgbouncer's [databases] section.
So, play with small settings to get a feel for how the config options influence each other; that is the best way to "determine max_client_conn for pgbouncer".
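One practical way to do that is to watch the pools while you experiment. pgbouncer exposes an admin console on the special pgbouncer database (for users listed in admin_users or stats_users), and SHOW POOLS reports active and waiting client connections alongside the server connections actually in use. A hedged example, assuming pgbouncer listens on its default port 6432:
psql -p 6432 -U pgbouncer pgbouncer -c "SHOW POOLS;"
If cl_waiting stays above zero for long, the pools are too small (or queries hold server connections too long); if sv_idle is always high, the pools are larger than needed.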
Like almost everyone, you are setting your pool size way too high. Don't let your PostgreSQL server do the connection pooling; if you do, it severely hurts your performance.
The optimal setting for how many concurrent connections to PostgreSQL is:
connections = ((core_count * 2) + effective_spindle_count)
That means that if you are running your database on a 2-core server, your total pool size from pgbouncer should be no more than 5. pgbouncer is a lot better at handling pooling than PostgreSQL, so let it do that.
So, leave max_connections in your postgresql.conf at its default of 100 (there is no reason to change it, since it is a maximum; it should also always be higher than what your application needs, because logging, admin, and backup processes need connections as well).
And in your pgbouncer.ini file set
max_db_connections=5
default_pool_size=5
max_client_conn=400
For more information, see https://www.percona.com/blog/2018/06/27/scaling-postgresql-with-pgbouncer-you-may-need-a-connection-pooler-sooner-than-you-expect/

Flask-SQLAlchemy close connection

I am using PostgreSQL and the Flask-SQLAlchemy extension for Flask.
# app.py
app = Flask(__name__)
app.config['SQLALCHEMY_POOL_SIZE'] = 20
db = SQLAlchemy(app)
# views.py
user = User(***)
db.session.add(user)
db.session.commit()
Note that I am not closing the connection as suggested by documentation:
You have to commit the session, but you don’t have to remove it at the end of the request, Flask-SQLAlchemy does that for you.
However, when I run the following PostgreSQL query I can see some IDLE connections:
SELECT * FROM pg_stat_activity;
Does it mean that I have a problem with Flask-SQLAlchemy not closing connections? I am worried because I recently got the remaining connection slots are reserved for non-replication superuser connections error.
SQLAlchemy sets up a pool of connections that remain open for performance reasons. PostgreSQL has a max_connections config option. If you are exceeding that value, you need to either lower your pool count or raise the max connection count. Given that the default max is 100 and you've set your pool to 20, what's more likely is that there are other applications with open connections to the same database. max_connections is a global setting, so it must account for all applications connecting to the database server.
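To check whether other applications really are holding the slots, a grouped view of pg_stat_activity makes the breakdown visible (a small sketch, assuming PostgreSQL 9.2+ column names):
SELECT datname, usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY datname, usename, application_name, state
ORDER BY count(*) DESC;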

Is there a timeout for idle PostgreSQL connections?

1 S postgres 5038 876 0 80 0 - 11962 sk_wai 09:57 ? 00:00:00 postgres: postgres my_app ::1(45035) idle
1 S postgres 9796 876 0 80 0 - 11964 sk_wai 11:01 ? 00:00:00 postgres: postgres my_app ::1(43084) idle
I see a lot of them. We are trying to fix our connection leak, but meanwhile we want to set a timeout for these idle connections, maybe a maximum of 5 minutes.
It sounds like you have a connection leak in your application because it fails to close pooled connections. You aren't having issues just with <idle> in transaction sessions, but with too many connections overall.
Killing connections is not the right answer for that, but it's an OK-ish temporary workaround.
Rather than restarting PostgreSQL to boot all other connections off a PostgreSQL database, see: How do I detach all other users from a postgres database? and How to drop a PostgreSQL database if there are active connections to it?. The latter shows a better query.
For setting timeouts, as @Doon suggested, see How to close idle connections in PostgreSQL automatically?, which advises you to use PgBouncer to proxy for PostgreSQL and manage idle connections. This is a very good idea if you have a buggy application that leaks connections anyway; I very strongly recommend configuring PgBouncer.
A TCP keepalive won't do the job here, because the app is still connected and alive, it just shouldn't be.
In PostgreSQL 9.2 and above, you can use the new state_change timestamp column and the state field of pg_stat_activity to implement an idle connection reaper. Have a cron job run something like this:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'regress'
AND pid <> pg_backend_pid()
AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '5' MINUTE;
In older versions you need to implement complicated schemes that keep track of when the connection went idle. Do not bother; just use pgbouncer.
In PostgreSQL 9.6, there's a new option idle_in_transaction_session_timeout which should accomplish what you describe. You can set it using the SET command, e.g.:
SET SESSION idle_in_transaction_session_timeout = '5min';
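If you want the setting to persist beyond a single session, the same parameter can also be set per database or server-wide (a sketch; mydb is a placeholder name, and ALTER SYSTEM needs a configuration reload to take effect):
ALTER DATABASE mydb SET idle_in_transaction_session_timeout = '5min';
-- or globally:
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();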
In PostgreSQL 9.1, you can terminate the idle connections with the following query. It helped me ward off a situation that would otherwise have required restarting the database. This happens mostly with JDBC connections that are opened and not closed properly.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > '00:10:00';
If you are using PostgreSQL 9.6+, then in your postgresql.conf you can set:
idle_in_transaction_session_timeout = 30000   # milliseconds
There is a timeout on broken connections (i.e. due to network errors), which relies on the OS' TCP keepalive feature. By default on Linux, broken TCP connections are closed after ~2 hours (see sysctl net.ipv4.tcp_keepalive_time).
There is also a timeout on abandoned transactions, idle_in_transaction_session_timeout and on locks, lock_timeout. It is recommended to set these in postgresql.conf.
But there is no timeout for a properly established client connection. If a client wants to keep the connection open, then it should be able to do so indefinitely. If a client is leaking connections (like opening more and more connections and never closing), then fix the client. Do not try to abort properly established idle connections on the server side.
A possible workaround that enables a database session timeout without an external scheduled task is to use the pg_timeout extension that I have developed.
Another option is to set tcp_keepalives_idle. See the documentation for more: https://www.postgresql.org/docs/10/runtime-config-connection.html
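For example, to have the server detect dead peers sooner than the OS default of roughly two hours, the keepalive parameters can be tightened server-side (a sketch; values are in seconds, a reload is required, and this only cleans up broken connections, not healthy idle ones):
ALTER SYSTEM SET tcp_keepalives_idle = 300;
ALTER SYSTEM SET tcp_keepalives_interval = 30;
ALTER SYSTEM SET tcp_keepalives_count = 3;
SELECT pg_reload_conf();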