PgBouncer doesn't start the minimum connections - postgresql

I have set these limits in pgBouncer:
max_client_conn = 2000
default_pool_size = 40
When I execute this SQL in phpPgAdmin, only 2 or 4 connections appear:
SELECT datname, usename, pid, query, query_start
FROM pg_catalog.pg_stat_activity
WHERE datname='example'
ORDER BY usename, pid
Is this normal, or did pgBouncer not load the .ini when it started?

The number of connections you see in pg_stat_activity depends on the actual load. It also depends on pool_mode: with pool_mode = session, you will see more sessions simply because they are released less often.
Regarding your options, check out the docs ("allow" is the key word):
default_pool_size
How many server connections to allow per user/database pair. Can be overridden in the per-database
configuration.
Default: 20
and
max_client_conn
Maximum number of client connections allowed. When increased then the file descriptor limits should also be increased.
Note that actual number of file descriptors used is more than
max_client_conn.
Emphasis mine.
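To verify that pgBouncer actually loaded your .ini, you can connect to its admin console and inspect the live settings and pools. A sketch, assuming pgBouncer listens on its default port 6432 and your user is listed in admin_users or stats_users:
psql -p 6432 -U youradminuser pgbouncer
SHOW CONFIG;  -- lists every active setting, including max_client_conn and default_pool_size
SHOW POOLS;   -- shows current client/server connection counts per pool
If SHOW CONFIG reports your values, the .ini was loaded; pgBouncer simply opens server connections on demand rather than up front.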

Related

Postgresql | remaining connection slots are reserved for non-replication superuser connections

I am getting the error "remaining connection slots are reserved for non-replication superuser connections" on one of my PostgreSQL instances.
However, when I run the query below as a superuser to check available connections, I find that enough connections are available, yet I still get the same error.
select max_conn, used, res_for_super, max_conn - used - res_for_super res_for_normal
from
  (select count(*) used from pg_stat_activity) t1,
  (select setting::int res_for_super from pg_settings where name = 'superuser_reserved_connections') t2,
  (select setting::int max_conn from pg_settings where name = 'max_connections') t3
I searched for this error, and everyone suggests increasing max_connections, as in the link below.
Heroku "psql: FATAL: remaining connection slots are reserved for non-replication superuser connections"
EDIT
I restarted the server, and after some time the used connections were almost 210, but I was able to connect to the server as a normal user.
This might not be a direct solution to your problem, but I recommend using a middleware like pgbouncer. It helps keep a lower, fixed number of open connections to the db server.
Your client would connect to pgbouncer, and pgbouncer would internally pick one of its already opened connections to run your client's queries. If the number of clients exceeds the number of available connections, clients are queued until one frees up, which allows some breathing room in situations of high traffic while keeping the db server under a tolerable load.
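As a rough sketch of that setup (database name, host, and numbers below are illustrative assumptions, not values from the question):
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_port = 6432
pool_mode = transaction     ; a server connection is reused as soon as a transaction ends
default_pool_size = 20      ; actual connections kept open to the db server
max_client_conn = 1000      ; extra clients are queued instead of exhausting Postgres slots
Clients then point their connection strings at port 6432 instead of 5432.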

how to determine max_client_conn for pgbouncer

I'm sort of an "accidental dba", so apologies for a real noob question here. I'm using pgbouncer with pool_mode = transaction. Yesterday I started getting errors in my php log:
no more connections allowed (max_client_conn)
I had max_client_conn = 150 to match max_connections in my postgresql.conf.
So my first question is, should pgbouncer max_client_conn be set equal to postgresql max_connections, or am I totally misunderstanding that relationship?
I have 20 databases on a single postgres instance behind pgbouncer with the default default_pool_size = 20. So should max_client_conn be 400 (pool_size * number_of_databases)?
Thanks
https://pgbouncer.github.io/config.html
max_client_conn Maximum number of client connections allowed.
default_pool_size How many server connections to allow per user/database pair.
so max_client_conn should be much larger than Postgres's max_connections; otherwise, why use a connection pooler at all?
If you have 20 databases and set default_pool_size to 20, you allow pgbouncer to open up to 400 connections to the db, so you need to raise max_connections in postgresql.conf to 400 and set pgbouncer's max_client_conn to something like 4000 (to have on average 10 client connections in the pool per actual db connection).
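To make that arithmetic concrete, the settings from this example would look like the following (the split across the two files is the only assumption here):
# postgresql.conf
max_connections = 400        # 20 databases * default_pool_size of 20
# pgbouncer.ini
[pgbouncer]
default_pool_size = 20       ; server connections per user/database pair
max_client_conn = 4000       ; roughly 10 clients per server connection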
This answer is only meant to provide an example for understanding the settings, not as a statement to follow literally. (E.g., I just saw a config with:
max_client_conn = 10000
default_pool_size = 100
max_db_connections = 100
max_user_connections = 100
for a cluster with two databases and max_connections set to 100.) Here the logic is different; also note that max_db_connections is set, and in fact connection limits are set individually per database in pgbouncer's [databases] section.
So play with small settings to get a feel for how the options influence each other; that is the best way to determine max_client_conn for pgbouncer.
Like almost everyone, you are setting your pool size way too high. Don't let your PostgreSQL server do the connection pooling; if you do, it severely hurts your performance.
The optimal setting for the number of concurrent connections to PostgreSQL is
connections = ((core_count * 2) + effective_spindle_count)
That means that if you are running your database on a 2-core server, your total pool size from pgbouncer should be no more than 5. Pgbouncer is a lot better at handling pooling than PostgreSQL, so let it do that.
So, leave max_connections in your postgresql.conf at its default of 100 (no reason to change it, as it is a maximum; it should also always be higher than what your application needs, since some logging, admin, and backup processes need connections as well).
And in your pgbouncer.ini file set
max_db_connections=5
default_pool_size=5
max_client_conn=400
For more information, see https://www.percona.com/blog/2018/06/27/scaling-postgresql-with-pgbouncer-you-may-need-a-connection-pooler-sooner-than-you-expect/

pgpool num_init_children with 10000 Concurrent connections

Yesterday I tested pgpool with pgbench:
pgbench -c 30 -T 20 -r pgbench -p9999 -h192.168.8.28
The number of concurrent connections is 30, and pgpool's default num_init_children is 32.
So when I set -c 33, the test blocks unless I break out.
My question is :
If I have 10,000 concurrent connections online, should I set num_init_children = 10000?
It seems terrible that num_init_children = 10000 means pgpool starts with 10,000 processes.
Is there something wrong?
How can I configure pgpool for 10,000 concurrent connections?
One pgpool child process can handle exactly one client connection at any time, so the value of num_init_children is directly proportional to the expected maximum number of concurrent connections. If you want 10,000 concurrent connections through pgpool, there is no other way than setting num_init_children to 10,000.
PostgreSQL also spawns a dedicated process to handle each client connection, so if at any instant the PostgreSQL server has 10,000 connected clients, it also has 10,000 child processes. The difference between pgpool and PostgreSQL in this respect is that pgpool prespawns num_init_children processes, while PostgreSQL does it on demand.
Yes, you have to set num_init_children = 10000 and max_pool = 1. Alternatively, you can write something like num_init_children = 1000 and max_pool = 10.
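For reference, the pgpool.conf knobs discussed above (values mirror the question's scenario; whether a host can comfortably run 10,000 child processes is a separate capacity question):
num_init_children = 10000   # preforked pgpool children; one concurrent client each
max_pool = 1                # cached backend connections per child (per user/database pair)
Note that max_pool controls connection caching within each child, so num_init_children remains the hard cap on concurrent clients.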

Is there a timeout for idle PostgreSQL connections?

1 S postgres 5038 876 0 80 0 - 11962 sk_wai 09:57 ? 00:00:00 postgres: postgres my_app ::1(45035) idle
1 S postgres 9796 876 0 80 0 - 11964 sk_wai 11:01 ? 00:00:00 postgres: postgres my_app ::1(43084) idle
I see a lot of them. We are trying to fix our connection leak, but in the meantime we want to set a timeout for these idle connections, maybe 5 minutes at most.
It sounds like you have a connection leak in your application because it fails to close pooled connections. You aren't having issues just with <idle> in transaction sessions, but with too many connections overall.
Killing connections is not the right answer for that, but it's an OK-ish temporary workaround.
Rather than restarting PostgreSQL to boot all other connections off a PostgreSQL database, see: How do I detach all other users from a postgres database? and How to drop a PostgreSQL database if there are active connections to it?. The latter shows a better query.
For setting timeouts, as @Doon suggested, see How to close idle connections in PostgreSQL automatically?, which advises you to use PgBouncer to proxy for PostgreSQL and manage idle connections. This is a very good idea if you have a buggy application that leaks connections anyway; I very strongly recommend configuring PgBouncer.
A TCP keepalive won't do the job here, because the app is still connected and alive, it just shouldn't be.
In PostgreSQL 9.2 and above, you can use the new state_change timestamp column and the state field of pg_stat_activity to implement an idle connection reaper. Have a cron job run something like this:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'regress'
AND pid <> pg_backend_pid()
AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '5' MINUTE;
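A minimal crontab entry for such a job might look like this (the user, database, and file path are illustrative assumptions; reap_idle.sql would contain the query above):
# run the idle-connection reaper every 5 minutes
*/5 * * * * psql -U postgres -d regress -f /path/to/reap_idle.sql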
In older versions you need to implement complicated schemes that keep track of when the connection went idle. Do not bother; just use pgbouncer.
In PostgreSQL 9.6, there's a new option idle_in_transaction_session_timeout which should accomplish what you describe. You can set it using the SET command, e.g.:
SET SESSION idle_in_transaction_session_timeout = '5min';
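If you have the required privileges, you can also persist it for a given role rather than per session (the role name here is hypothetical):
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '5min';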
In PostgreSQL 9.1, I terminated the idle connections with the following query. It helped me ward off a situation that would otherwise have required restarting the database. This happens mostly with JDBC connections that are opened and not closed properly.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
  AND now() - query_start > '00:10:00';
If you are using PostgreSQL 9.6+, then in your postgresql.conf you can set
idle_in_transaction_session_timeout = 30000   # milliseconds
There is a timeout on broken connections (i.e. due to network errors), which relies on the OS' TCP keepalive feature. By default on Linux, broken TCP connections are closed after ~2 hours (see sysctl net.ipv4.tcp_keepalive_time).
There is also a timeout on abandoned transactions, idle_in_transaction_session_timeout and on locks, lock_timeout. It is recommended to set these in postgresql.conf.
But there is no timeout for a properly established client connection. If a client wants to keep the connection open, then it should be able to do so indefinitely. If a client is leaking connections (like opening more and more connections and never closing), then fix the client. Do not try to abort properly established idle connections on the server side.
A possible workaround that enables a database session timeout without an external scheduled task is to use the pg_timeout extension that I have developed.
Another option is to set tcp_keepalives_idle. See the documentation: https://www.postgresql.org/docs/10/runtime-config-connection.html.

How to close idle connections in PostgreSQL automatically?

Some clients connect to our PostgreSQL database but leave the connections open.
Is it possible to tell PostgreSQL to close those connections after a certain amount of inactivity?
TL;DR
IF you're using a Postgresql version >= 9.2
THEN use the solution I came up with
IF you don't want to write any code
THEN use arqnid's solution
IF you don't want to write any code
AND you're using a Postgresql version >= 14
THEN use Laurenz Albe's solution
For those who are interested, here is the solution I came up with, inspired from Craig Ringer's comment:
(...) use a cron job to look at when the connection was last active (see pg_stat_activity) and use pg_terminate_backend to kill old ones.(...)
The chosen solution comes down to this:
First, we upgrade to Postgresql 9.2.
Then, we schedule a thread to run every second.
When the thread runs, it looks for any old inactive connections.
A connection is considered inactive if its state is either idle, idle in transaction, idle in transaction (aborted) or disabled.
A connection is considered old if its state stayed the same during more than 5 minutes.
There are additional threads that do the same as above; however, those threads connect to the database with a different user.
We leave at least one connection open for any application connected to our database. (rank() function)
This is the SQL query run by the thread:
WITH inactive_connections AS (
SELECT
pid,
rank() over (partition by client_addr order by backend_start ASC) as rank
FROM
pg_stat_activity
WHERE
-- Exclude the thread owned connection (ie no auto-kill)
pid <> pg_backend_pid( )
AND
-- Exclude known applications connections
application_name !~ '(?:psql)|(?:pgAdmin.+)'
AND
-- Include connections to the same database the thread is connected to
datname = current_database()
AND
-- Include connections using the same thread username connection
usename = current_user
AND
-- Include inactive connections only
state in ('idle', 'idle in transaction', 'idle in transaction (aborted)', 'disabled')
AND
-- Include old connections (found with the state_change field)
current_timestamp - state_change > interval '5 minutes'
)
SELECT
pg_terminate_backend(pid)
FROM
inactive_connections
WHERE
rank > 1 -- Leave one connection for each application connected to the database
If you are using PostgreSQL >= 9.6 there is an even easier solution. Suppose you want to delete all idle connections every 5 minutes; just run the following:
alter system set idle_in_transaction_session_timeout='5min';
In case you don't have access as superuser (example on Azure cloud), try:
SET SESSION idle_in_transaction_session_timeout = '5min';
But the latter will work only for the current session, which most likely is not what you want.
To disable the feature,
alter system set idle_in_transaction_session_timeout=0;
or
SET SESSION idle_in_transaction_session_timeout = 0;
(by the way, 0 is the default value).
If you use alter system, you must reload the configuration for the change to take effect. The change is persistent: you won't have to re-run the query if, for example, you restart the server.
To check the feature status:
show idle_in_transaction_session_timeout;
Connect through a proxy like PgBouncer which will close connections after server_idle_timeout seconds.
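The corresponding pgbouncer.ini line would be something like the following (600 seconds is PgBouncer's documented default; tune it to your workload):
server_idle_timeout = 600   ; drop server connections that have been idle this many seconds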
From PostgreSQL v14 on, you can set the idle_session_timeout parameter to automatically disconnect client sessions that are idle.
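For example, a sketch that assumes superuser access (the value is arbitrary):
ALTER SYSTEM SET idle_session_timeout = '5min';
SELECT pg_reload_conf();   -- apply the change without a restart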
If you use AWS with PostgreSQL >= 9.6, you have to do the following:
Create custom parameter group
go to RDS > Parameter groups > Create parameter group
Select the version of PostgreSQL that you use, name the group 'customParameters' or whatever, and add the description 'handle idle connections'.
Change the idle_in_transaction_session_timeout value
Fortunately it will create a copy of the default AWS group so you only have to tweak the things that you deem not suitable for your use-case.
Now click on the newly created parameter group and search 'idle'.
The default value of 'idle_in_transaction_session_timeout' is 24 hours (86400000 milliseconds). The unit is milliseconds, so multiply the number of minutes you want by 60000: for a 5, 10, or 15 minute timeout, use 300000, 600000, or 900000 respectively.
Assign the group
Last, but not least, change the group:
go to RDS, select your DB and click on 'Modify'.
Now under 'Database options' you will find 'DB parameter group', change it to the newly created group.
You can then decide if you want to apply the modifications immediately (beware of downtime).
I have the problem of denied connections because too many clients are connected on a PostgreSQL 12 server (but not on similar projects using the earlier 9.6 and 10 versions) on Ubuntu 18.
I wonder if those settings
tcp_keepalives_idle
tcp_keepalives_interval
could be more relevant than
idle_in_transaction_session_timeout
idle_in_transaction_session_timeout indeed closes only connections that sit idle inside an open transaction, not inactive connections whose statements completed normally...
The documentation says these socket-level settings have no effect on connections made via Unix-domain sockets, but they could work for TCP connections on Ubuntu.
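For reference, these keepalive knobs live in postgresql.conf and look like the following (values are illustrative; 0 means use the operating system's default):
tcp_keepalives_idle = 60        # seconds of inactivity before keepalive probes start
tcp_keepalives_interval = 10    # seconds between probes
tcp_keepalives_count = 6        # lost probes before the connection is considered dead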
Up to PostgreSQL 13, you can use my extension pg_timeout.