Which parameters in postgresql.conf should be changed, and how, to increase the maximum number of connections to 3000 on a server with 60 GB of RAM?
To increase the connection limit, you should change max_connections.
But I suggest you use https://pgtune.leopard.in.ua/#/; this website helps you configure a Postgres server.
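As a sketch, the relevant lines in postgresql.conf might look like this (the values are illustrative, not recommendations):

```
# postgresql.conf - illustrative values only
max_connections = 3000        # changing this requires a server restart
shared_buffers = 15GB         # a common starting point is ~25% of RAM
```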
PS:
In my opinion, don't raise the connection limit to 3000: the more connections you allow, the less working memory is left for each client backend. You can use PgBouncer instead.
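A rough back-of-envelope calculation (a sketch; the shared_buffers figure is an assumption) shows why 3000 backends on 60 GB is tight:

```python
# Back-of-envelope: per-backend memory budget at 3000 connections on 60 GB RAM.
ram_gb = 60
shared_buffers_gb = 15          # assumption: ~25% of RAM reserved for shared_buffers
max_connections = 3000

available_mb = (ram_gb - shared_buffers_gb) * 1024
per_backend_mb = available_mb / max_connections
print(f"{per_backend_mb:.1f} MB per backend")   # -> 15.4 MB per backend
```

Each backend can additionally use work_mem per sort or hash operation, so a budget of roughly 15 MB per backend leaves very little headroom.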
PgBouncer is a proxy and connection manager for situations where too many clients connect to the database; it handles the clients and maintains a pool of server connections.
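A minimal PgBouncer setup might look like this (a sketch; the database name, port, and pool sizes are assumptions):

```ini
; pgbouncer.ini - minimal sketch; database name, port, and pool sizes are assumptions
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction pooling: a server connection is held only for the duration of a transaction
pool_mode = transaction
; clients may open up to 3000 connections to PgBouncer...
max_client_conn = 3000
; ...but only this many server connections per database/user pair are kept to Postgres
default_pool_size = 50
```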
I am trying to use Hikari for Postgres connection. Here is the setting:
config.setMinimumIdle(20);
config.setMaximumPoolSize(100);
However, it only seems to have 2 connections. I got this after running netstat -ant | grep 5432:
tcp4 0 0 127.0.0.1.5432 127.0.0.1.53183 ESTABLISHED
tcp4 0 0 127.0.0.1.53183 127.0.0.1.5432 ESTABLISHED
and also, in the Postgres console, SELECT sum(numbackends) FROM pg_stat_database; returns:
sum
-----
2
(1 row)
I am not sure what's going on and I appreciate your help!
The recommended setup is to not set minimumIdle at all:
This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value and total connections in the pool are less than maximumPoolSize, HikariCP will make a best effort to add additional connections quickly and efficiently. However, for maximum performance and responsiveness to spike demands, we recommend not setting this value and instead allowing HikariCP to act as a fixed size connection pool. Default: same as maximumPoolSize.
It seems HikariCP does not treat minimumIdle as a strict guarantee:
If the idle connections dip below this value ... HikariCP will make a best effort to add additional connections
A good test for your value would be to open more than 20 connections and verify that they are maintained in the pool for the duration of idleTimeout (10 minutes by default). After that, the pool should settle at the configured value (20 in this case).
That minimum will be maintained once the number of connections reaches that value
Idle connections will not be retired once the pool reaches minimumIdle connections.
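Under that recommendation, a fixed-size pool can be configured with HikariCP's standard properties (a sketch; the JDBC URL and pool size are assumptions):

```properties
# hikari.properties - sketch: only maximumPoolSize is set, so the pool is fixed-size
# jdbcUrl is an assumption for illustration
jdbcUrl=jdbc:postgresql://localhost:5432/mydb
maximumPoolSize=100
# minimumIdle is deliberately not set: it defaults to maximumPoolSize
```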
I run a JEE application on Payara 4.1 which uses PostgreSQL 9.5.8. The connection pool is configured in following way.
<jdbc-resource poolName="<poolName>" jndiName="<jndiName>" isConnectionValidationRequired="true"
connectionValidationMethod="table" validationTableName="version()" maxPoolSize="30"
validateAtmostOncePeriodInSeconds="30" statementTimeoutInSeconds="30" isTimerPool="true" steadyPoolSize="5"
idleTimeoutInSeconds="0" connectionCreationRetryAttempts="100000" connectionCreationRetryIntervalInSeconds="30"
maxWaitTimeInMillis="2000">
From what the monitors say, the application needs 1-3 DB connections to Postgres when running. The steady pool size is set to 5, the max pool size to 30.
I see that about 4 times a day the application opens all connections to the database, hitting the max-pool-size limit. Some requests to the server then fail with the exception: java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
After some seconds all issues are gone, and the server runs fine till the next hiccup.
I have requested some TCP dumps to be performed to look closely into what happens exactly. I see that:
After 30 connections (sockets) have been opened, most of the connections are rarely used.
After some time (1 h or so) the server tries to use some of these pooled connections, only to find that the socket is closed (the DB responds immediately with a TCP RST).
As the pooled connection count decreases and hits the steady pool size, the connection pool opens 25 connections (sockets), which takes some time (about 0.5 to 1 second per connection; I don't know why it takes this long, as the TCP handshakes are immediate). At this point some server transactions fail.
The loop repeats.
This issue is driving me mad. I was wondering, whether I am missing some crucial pool configuration to revalidate the connections more often but could not find anything that would help.
EDIT:
What does not help (we have already tested it):
Making the pool size bigger (same issues)
Removing idleTimeoutInSeconds="0" (when we did that, we had issues with the connection pool every 10 minutes).
Context
I am running a Spring Boot application on Cloud Run which connects to a Postgres 11 Cloud SQL database using a Hikari connection pool. I am using the smallest PSQL instance (1 vCPU / 614 MB / 25-connection limit). For the setup, I have followed these resources:
Connecting to Cloud SQL from Cloud Run
Managing database connections
Problem
After deploying the third revision, I get the following error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
What I found out
The default connection pool size is 10, hence why it fails on the third deployment (3 × 10 = 30 > 25).
When deleting an old revision, active connections shown in the Cloud SQL admin panel drop by 10, and the next deployment succeeds.
Question
It seems that old Cloud Run revisions are being kept in a "cold" state, maintaining their connection pools. Is there a way to close these connections without deleting the revisions?
In the best practices section it says:
"...we recommend that you use a client library that supports connection pools that automatically reconnect broken client connections."
What is the recommended way of managing connection pools in Cloud Run, given that it seems old revisions somehow manage to maintain their connections?
Thanks!
Currently, Cloud Run doesn't provide any guarantees on how long an instance will remain warm after it has started up. When not in use, the instance is severely throttled, but not necessarily shut down. Thus, you have some revisions that are holding on to connections even when they are not receiving traffic.
Even in this situation, I disagree with the idea that you should avoid using connection pooling. Connection pooling can lower latency, improve stability, and help put an upper limit on the number of open connections. Alternatively, you can use some of the following configuration options to help keep your pool in check:
minimumIdle - This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value and total connections in the pool are less than maximumPoolSize, HikariCP will make a best effort to add additional connections quickly and efficiently.
maximumPoolSize - This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections.
idleTimeout - This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize. Idle connections will not be retired once the pool reaches minimumIdle connections.
If you set minimumIdle to 0, your application will still be able to use up to maximumPoolSize connections at once. However, once a connection has sat idle in the pool for longer than idleTimeout (which is specified in milliseconds), it will be closed. If you set idleTimeout to something small like 1 minute, the number of connections your pool holds can scale down to 0 when not in use.
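The combination described above can be sketched with HikariCP's standard property names (the pool size is an assumption):

```properties
# Sketch: allow the pool to scale down to 0 connections when idle
maximumPoolSize=10
minimumIdle=0
# idleTimeout is in milliseconds: 60000 ms = 1 minute
idleTimeout=60000
```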
Hope this helps!
The issue here is that HikariCP does not close the connections once they have been opened. I don't know much about Hikari, but I found this, which explains how connections should be handled through Hikari. I hope that helps!
We are using an Amazon r3.8xlarge Postgres RDS instance for our production server. I checked the max connections limit of the RDS instance; it happens to be 8192.
I have a service deployed in ECS, and each ECS task takes one database connection. The number of tasks goes up to 2000 during peak load, which means we will have 2000 concurrent connections to the database.
I want to check whether it is OK to have 2000 concurrent connections to the database. Secondly, will it impact the performance of Amazon Postgres RDS?
Having 2000 connections at a time should not cause any performance issues, since AWS manages the performance part. There are many DB load-testing tools available if you want to be completely sure about this.
I have three PostgreSQL databases (one master and two slaves) behind pgpool. Each database can handle 200 connections, and I want to be able to get 600 active connections through pgpool.
My problem is that if I set pgpool to 600 child processes, it can open all 600 connections against a single database (the master, for example, if every connection makes a write query), but with 200 child processes I only use about 70 connections on each database.
So is there a way to configure pgpool so that load balancing scales with the number of databases?
Thanks.
Having 600 connections available in each DB is not an ideal solution. I would really look into the application before setting such a high connection value.
The load-balancing scalability of pgpool can be increased by setting equal backend_weight parameters, so that SQL queries get distributed evenly among the PostgreSQL nodes.
pgpool also manages its database connection pool using the num_init_children and max_pool parameters.
The num_init_children parameter is used to spawn the pgpool processes that will connect to each PostgreSQL backend.
The num_init_children value is also the allowed number of concurrent clients connecting to pgpool.
pgpool roughly tries to make max_pool * num_init_children connections to each PostgreSQL backend.
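As a sketch, the parameters above map to pgpool.conf entries like these (values are illustrative, assuming three backends):

```ini
# pgpool.conf - illustrative sketch for 3 backends
num_init_children = 200      # max concurrent client connections to pgpool
max_pool = 1                 # cached connections per child, per user/database pair
# equal weights: read queries are distributed evenly across the nodes
backend_weight0 = 1
backend_weight1 = 1
backend_weight2 = 1
```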