How to scale the total number of connections with pgpool load balancing?

I have 3 PostgreSQL databases (one master and two slaves) behind a pgpool. Each database can handle 200 connections, and I want to be able to get 600 active connections on the pgpool.
My problem is that if I set pgpool up with 600 child processes, it can open all 600 connections against a single database (the master, for example, if every connection runs a write query), but with 200 child processes I only use roughly 70 connections on each database.
So is there a way to configure pgpool so that load balancing scales with the number of databases?
Thanks.

Having 600 connections available in each db is not an ideal solution. I would really look into my application before setting such a high connection count.
The load-balancing scalability of pgpool can be increased by setting an equal backend_weight parameter for each node, so that SQL queries get distributed evenly among the PostgreSQL nodes.
pgpool also manages its database connection pool using the num_init_children and max_pool parameters.
The num_init_children parameter controls how many pgpool child processes are spawned; each of them connects to the PostgreSQL backends.
The num_init_children value is also the number of concurrent clients allowed to connect to pgpool.
pgpool roughly tries to make up to max_pool * num_init_children connections to each PostgreSQL backend.
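As a rough illustration only (the parameter values below are assumptions fitted to the 200-connections-per-node limit in the question, not recommendations), the relevant pgpool.conf knobs look like this:

    # pgpool.conf (sketch)
    num_init_children = 200   # max concurrent clients pgpool will accept
    max_pool = 1              # cached backend connections per child process
    # worst case per backend: num_init_children * max_pool = 200 connections,
    # which matches each node's max_connections = 200

    # equal weights spread read queries evenly across the nodes
    backend_weight0 = 1       # master
    backend_weight1 = 1       # slave 1
    backend_weight2 = 1       # slave 2

Note that num_init_children also caps how many clients pgpool itself will accept, so 600 concurrent client connections would require raising num_init_children to 600, and each backend would then need a max_connections large enough for the num_init_children * max_pool worst case.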

Related

SQLAlchemy with Aurora Serverless V2 PostgreSQL - many connections

I have an AWS Aurora Serverless V2 database setup (PostgreSQL) that is being accessed from a compute cluster. The cluster launches a large number of jobs (>1000) and each job independently puts/pulls some data from the database. The Serverless cluster is set up to autoscale from 2 to 32 units as needed.
The code being run by each cluster job uses SQLAlchemy (either the ORM or the core). I am setting up each database connection with a null pool and pessimistic disconnect handling (i.e., pool_pre_ping=True). From my reading of the docs this should handle disconnects caused by a connection going idle mid-use.
Code is also written to access the DB, get the results, close the connection (to avoid idle connections), and then reopen the connection after processing (5-30 minutes). This is working well because once processing is completed, the new connections are staggered and the DB has scaled up.
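For reference, a minimal sketch of the engine setup described above (the connection URL and query are placeholders):

    # Sketch of the per-job setup: NullPool + pool_pre_ping, open/close around each query.
    from sqlalchemy import create_engine, text
    from sqlalchemy.pool import NullPool

    engine = create_engine(
        "postgresql+psycopg2://user:pass@aurora-endpoint:5432/mydb",  # placeholder URL
        poolclass=NullPool,      # no client-side pooling: each checkout opens a fresh connection
        pool_pre_ping=True,      # pessimistic disconnect handling
    )

    with engine.connect() as conn:                     # opens a new connection
        rows = conn.execute(text("SELECT 1")).all()    # placeholder query
    # connection is closed on leaving the block; a new one is opened later after processing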
My logs are showing the standard "all connections are taken" error until the DB scales the available units high enough: psycopg2.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser and rds_superuser connections.
Questions:
Should I be configuring the SQLAlchemy connection differently? It feels like an anti-pattern to add a custom retry that grabs a connection while waiting for the DB to scale the number of available units, since this type of capability usually seems to be built into SQLAlchemy.
Should I be using an RDS Proxy in front of the database? This also seems like an anti-pattern, adding a proxy in front of an autoscaling DB.
PG version is 10.

How to increase the max connections in Postgres 12?

Which parameters in postgresql.conf should be changed, and how, to increase the maximum number of connections to 3000 if the RAM is 60GB?
To increase connections you need to change max_connections.
But I suggest you use the website https://pgtune.leopard.in.ua/#/; it helps you configure the Postgres server.
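For the max_connections change itself, a postgresql.conf sketch (a restart is required for max_connections; the memory values are rough illustrations for 60GB of RAM, not tuned recommendations):

    # postgresql.conf (sketch)
    max_connections = 3000
    shared_buffers = 15GB     # roughly 25% of the 60GB RAM, a common starting point
    work_mem = 4MB            # used per sort/hash per connection; keep it small with 3000 clients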
PS:
In my opinion, don't increase connections to 3000, because the more connections you allow, the less working memory is available for each client; use PgBouncer instead.
PgBouncer is a proxy and connection manager for when you have too many clients connecting to the database; it handles the clients and maintains a pool of connections.
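A minimal pgbouncer.ini sketch of that idea (database name, addresses, paths and pool sizes are placeholders):

    [databases]
    ; placeholder database entry
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; a server connection is held only for the duration of a transaction
    pool_mode = transaction
    ; how many client connections PgBouncer will accept
    max_client_conn = 3000
    ; actual connections opened to Postgres per database/user pair
    default_pool_size = 50

Clients then connect to port 6432 instead of 5432, and PostgreSQL itself only ever sees the small pool.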

Number of concurrent database connections

We are using an Amazon r3.8xlarge Postgres RDS instance for our production server. I checked the max connections limit of the RDS instance; it happens to be 8192.
I have a service deployed in ECS, and each ECS task can take one database connection. The number of tasks goes up to 2000 during peak load. That means we will have 2000 concurrent connections to the database.
I want to check whether it is OK to have 2000 concurrent connections to the database and, secondly, whether it will impact the performance of Amazon Postgres RDS.
Having 2000 connections at a time should not cause any performance issue, since AWS manages the performance part. There are many DB load-testing tools available if you want to be as sure as possible about this.
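If you want to watch how close you get to the limit during peak load, a quick check from psql (just the standard built-ins, which work on RDS as well):

    -- configured limit and current usage
    SHOW max_connections;
    SELECT count(*) AS current_connections FROM pg_stat_activity;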

Settings for Pgpool-2 and 2 read replicas, goal is to split connections evenly between two replicas

My setup:
pgpool-2 V4.0.2
OS Ubuntu
2 aws rds read replicas (master db not included in the setup)
pgpool mode: master_slave mode + sub-mode streaming replication
Purpose of using pgpool (not yet achieved)
Evenly split incoming db connections between the two replicas, e.g. when 20 db connections come in to pgpool, pgpool opens 10 connections to replica 1 and 10 connections to replica 2.
Things that my current setup can do
Load balancing queries, caching connections, watchdog failover.
I got a reply from an official Pgpool-2 developer: Pgpool-2 does not split connections; it handles load balancing for queries only, not for connections.
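What can be configured is query-level balancing. With equal backend_weight values the read traffic ends up roughly evenly split between the two replicas, since pgpool picks each session's load-balance node according to the weights (hostnames below are placeholders):

    # pgpool.conf (sketch) - two read replicas with equal query weights
    load_balance_mode = on

    backend_hostname0 = 'replica-1.xxxxx.rds.amazonaws.com'   # placeholder endpoint
    backend_port0 = 5432
    backend_weight0 = 1

    backend_hostname1 = 'replica-2.xxxxx.rds.amazonaws.com'   # placeholder endpoint
    backend_port1 = 5432
    backend_weight1 = 1

Each client still holds a single connection to pgpool; it is the queries (or rather, the sessions they run in) that get distributed according to the weights, not the connections themselves.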

MongoDB Multiple Connections to Replica Set

Why does the MongoDB C# Client 2.0 create a connection to each member of the replica set when Read Preference is Primary (default)?
I have an application with MaxPoolSize set to 100, however it creates 300 connections, one to each node in the replica set. Surely it should just connect to the Primary, once it has identified which node is the Primary from the data received from the seed list?
I have two data nodes and one arbiter. The two data nodes are geographically close to the consuming application, with the arbiter a longer ping time away. Whilst I recognize the need to connect to each node at the MongoClient level at least once, why does the pool need to connect to all nodes for each pooled connection?
I only allow a Read Preference of Primary, so it is writing to and reading from a single server. The issue is that I am getting lots of connection errors (hence my looking into this and discovering it).
I think the client should connect to a single server per pooled connection: the Primary when writing, and a pooled Secondary or the Primary when reading, depending on Read Preference. It should not connect more than once to the Arbiter.
Am I missing something here? It is causing an issue when I am bursting up my pooled connections and the connections are getting throttled by the Azure load balancer.
My connection string:
mongodb://user:pass@mongo1.domain.com:27000,mongo2.domain.com:27000
Note I do not specify the arbiter; the client discovers it after querying the replica set and proceeds to open 100 connections to it, which are useless as it holds no data.