Azure PostgreSQL Flexible Server and PgBouncer max_connections

I have an Azure PostgreSQL Flexible Server instance (General Purpose, D2s_v3, 2 vCores, 8 GiB RAM, 32 GiB storage) and I am using PgBouncer for connection pooling.
At all times there are 100 active connections, and when I try to connect directly (not through PgBouncer) I get the error "remaining connection slots are reserved". I can also see sporadic connection errors that appear to come from PgBouncer, since there are no failed connections logged on the PostgreSQL server.
The server is configured with:
max_connections = 100
pgbouncer.default_pool_size = 50
pgbouncer.max_client_conn = 5000
pgbouncer.min_pool_size = 0
pgbouncer.pool_mode = TRANSACTION
Should max_connections be increased, or is there some other configuration that should be adjusted so that PgBouncer doesn't allocate all of the connections?
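A quick server-side check can show what is actually holding those 100 slots. A minimal sketch using the standard pg_stat_activity view (run it from an admin connection; your user names and application names will differ):

-- Who holds the connection slots, and in what state?
SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY count(*) DESC;
-- Connections opened by PgBouncer appear under the user/database pair the clients authenticate as.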

Related

Handle more than 5000 connections without PGBouncer in PostgreSQL

We are using PostgreSQL version 14.
Our production server configuration:
Windows 2016 Server
32 GB RAM
8 TB Hard disk
8 Core CPU
In my postgresql.conf file:
shared_buffers = 8GB
work_mem = 1GB
maintenance_work_mem = 1GB
max_connections = 1000
I wish to handle 5000 connections at a time. Somebody suggested I go with PgBouncer.
But we want to start with PostgreSQL without PgBouncer initially.
I need to know whether my configuration is OK for 5000 connections, or whether we need to increase RAM or anything else.
This is our first PostgreSQL implementation, so please suggest how to start with PostgreSQL without PgBouncer.
Thank you
Note:
In SQL Server, setting the value to -1 lets it handle more connections. Is a similar configuration available in PostgreSQL?
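As a rough sanity check on that plan, it helps to look at the memory-related settings together; work_mem is a per-sort/per-hash limit, so the worst case grows with the number of concurrent operations, not just connections. A minimal sketch against the standard pg_settings view:

-- Settings that drive per-connection and total memory use
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'work_mem', 'maintenance_work_mem');
-- With work_mem = 1GB, even a few hundred concurrent sorts or hash joins
-- would already exceed the 32 GB of RAM on this machine.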

Increase max_connections in PostgreSQL

My server config:
CPU - 16 core
RAM - 64 GB
Storage : 2 TB
OS: CentOS 64-bit
I have DB and java application on the same server.
My postgres config file has the following:
max_connections = 9999
shared_buffers = 6GB
However, when I check the DB via SHOW max_connections, it shows only 500.
How can I increase the max_connections value?
Either you forgot to remove the comment (#) at the beginning of the postgresql.conf line, or you didn't restart PostgreSQL.
But a setting of 500 is already much too high, unless you have some 100 cores in the machine and an I/O system to match. Use a connection pool.
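To see where the effective value is coming from and whether a restart is still pending, something along these lines works (pg_settings.pending_restart is available from PostgreSQL 9.5 on):

SELECT name, setting, source, sourcefile, sourceline, pending_restart
FROM pg_settings
WHERE name = 'max_connections';
-- source/sourcefile show whether your postgresql.conf line is actually in effect;
-- pending_restart = true means the new value is waiting for a server restart.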

how to determine max_client_conn for pgbouncer

I'm sort of an "accidental DBA", so apologies for a real noob question here. I'm using PgBouncer with pool_mode = transaction. Yesterday I started getting errors in my PHP log:
no more connections allowed (max_client_conn)
I had max_client_conn = 150 to match max_connections in my postgresql.conf.
So my first question is, should pgbouncer max_client_conn be set equal to postgresql max_connections, or am I totally misunderstanding that relationship?
I have 20 databases on a single postgres instance behind pgbouncer with the default default_pool_size = 20. So should max_client_conn be 400? (pool_size * number_of_databases)?
Thanks
https://pgbouncer.github.io/config.html
max_client_conn Maximum number of client connections allowed.
default_pool_size How many server connections to allow per user/database pair.
So max_client_conn should be way larger than Postgres's max_connections; otherwise, why use a connection pooler at all?
If you have 20 databases and set default_pool_size to 20, you will allow PgBouncer to open 400 connections to the DB, so you need to adjust max_connections in postgresql.conf to 400 and set PgBouncer's max_client_conn to something like 4000 (to have an average of 10 client connections in the pool for each actual DB connection).
This answer is only meant to provide an example for understanding the settings, not as a statement to follow literally. (E.g., I just saw a config with:
max_client_conn = 10000
default_pool_size = 100
max_db_connections = 100
max_user_connections = 100
for a cluster with two databases and max_connections set to 100.) Here the logic is different; also note that max_db_connections is set, and in fact connection limits are set individually per database in PgBouncer's [databases] section.
So play with small settings to get a feel for how the config options influence each other; that is the best way to "determine max_client_conn for pgbouncer".
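A convenient way to watch how these settings interact in practice is PgBouncer's admin console: connect with psql to the virtual pgbouncer database (the port and admin user shown below are assumptions based on a stock install) and inspect the pools:

-- psql -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;    -- per database/user pair: clients waiting vs. server connections in use
SHOW CLIENTS;  -- every client connection counted against max_client_conn
SHOW SERVERS;  -- the actual connections PgBouncer holds toward PostgreSQL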
Like almost everyone, you are setting your pool size way too high. Don't let your PostgreSQL server do the connection pooling; if you do, it severely hurts your performance.
The optimal setting for how many concurrent connections to PostgreSQL is:
connections = ((core_count * 2) + effective_spindle_count)
That means that if you are running your database on a 2-core server, your total pool size from PgBouncer should be no more than 5. PgBouncer is a lot better at handling pooling than PostgreSQL, so let it do that.
So, leave max_connections in your postgresql.conf at its default of 100 (no reason to change it, as it is a maximum; it should also always be higher than what your application needs, since logging, admin, and backup processes need connections as well).
And in your pgbouncer.ini file set
max_db_connections=5
default_pool_size=5
max_client_conn=400
For more information, see https://www.percona.com/blog/2018/06/27/scaling-postgresql-with-pgbouncer-you-may-need-a-connection-pooler-sooner-than-you-expect/
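If you want to confirm from the database side that PgBouncer really stays within that pool of 5, a sketch like this groups the server-side connections (backend_type is available from PostgreSQL 10 onward):

SELECT datname, usename, count(*) AS server_conns
FROM pg_stat_activity
WHERE backend_type = 'client backend'   -- ignore autovacuum, WAL writer, etc.
GROUP BY datname, usename;
-- With default_pool_size = 5 and max_db_connections = 5, the count per
-- database/user pair should never exceed 5.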

Google Cloud not releasing connections

We migrated our MySQL DB to Google Cloud. Before the migration, the number of open connections to the DB was constant at 30; in the cloud, however, it goes up and down, averaging 200.
Is there any known issue with Google Cloud causing connections not to be released?
connect_timeout and max_connections on Cloud SQL instances have different default values compared to a standard MySQL installation. You need to look at your code to see how connect_timeout impacts the number of connections your application opens:
variable          Cloud SQL   MySQL
-----------------------------------
connect_timeout          60      10
max_connections         250     151
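To compare what your instance is actually running with, the standard MySQL introspection commands work against Cloud SQL as well (a minimal sketch; Cloud SQL defaults may change over time):

-- Effective settings on the instance
SHOW VARIABLES WHERE Variable_name IN ('connect_timeout', 'max_connections');
-- How many connections are open now, and the peak since startup
SHOW STATUS WHERE Variable_name IN ('Threads_connected', 'Max_used_connections');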

PostgreSQL dies with 235+ concurrent connections

I have installed PostgreSQL on an Azure VM and am running tests to see if PostgreSQL can support the expected load. I have increased the max_connections value to 1000, but when I run ab -c 300, PostgreSQL stops responding. Are there any other settings I should be changing?
Thanks, Kate.
PostgreSQL will perform best with far fewer than 1000 connections on most hardware, usually fewer than 100. If your application cannot queue work using a connection pool, you should put an external connection pool like PgBouncer between your application and PostgreSQL.
See: https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
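While the ab run is in progress, a grouped look at pg_stat_activity from a separate session usually makes the point from that wiki page concrete: most of the 300 connections tend to sit idle or wait rather than do work. A minimal sketch (wait_event_type requires PostgreSQL 9.6 or newer):

SELECT state, wait_event_type, count(*)
FROM pg_stat_activity
GROUP BY state, wait_event_type
ORDER BY count(*) DESC;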