JDBC connection pool never shrinks - postgresql

I run 3 processes at the same time, all of them using the same DB (RDS Postgres).
All of them are Java applications that use JDBC to connect to the DB.
I am using PGPoolingDataSource in every process as a connection pool for the DB.
Every request is handled by the book and ends with:
finally {
    connection.close();
}
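For reference, the same idea written with try-with-resources (a sketch, assuming a javax.sql.DataSource; with a pooled DataSource, close() just returns the connection to the pool rather than closing the physical socket):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

void handleRequest(DataSource pool) throws SQLException {
    // the connection is returned to the pool even if the work throws
    try (Connection connection = pool.getConnection()) {
        // ... work with the connection ...
    }
}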
Main problems:
1. I run out of connections really fast because I do massive work with the DB at the beginning (which is OK), but the pool never shrinks.
2. I get some exceptions in the code because there are not enough connections, and I wish I could extend the timeout when requesting a connection.
My insights:
The PGPoolingDataSource never shrinks by definition! I couldn't find any documentation about it, but I assume this is the case. So I tried the Apache DBCP pool, and I am having the same problem.
I think there must be a timeout when waiting for a connection - I would guess that this timeout can be configured, but I couldn't find this configuration in either pool.
My questions:
Why does the pool never shrink?!
How do I determine how many connections to allocate to each pool/process (here every process has one pool)?
What happens if I don't close the pool (not the connections) and the app dies - are the connections in the pool still alive? This happens a lot when I update the application: I stop and start it, so I never close the pool.
What would be a good JDBC connection pool that works well with Postgres and has an option to set the timeout for getConnection?
Thanks
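For illustration, the knobs asked about do exist in Apache Commons DBCP2 - a sketch with a placeholder URL and credentials, where maxWaitMillis bounds getConnection() and the idle/eviction settings let the pool shrink:

import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setUrl("jdbc:postgresql://localhost:5432/mydb");   // placeholder URL
ds.setUsername("user");                               // placeholder credentials
ds.setPassword("secret");
ds.setMaxTotal(20);                             // hard cap on connections for this process
ds.setMaxWaitMillis(10_000);                    // getConnection() gives up after waiting 10 s
ds.setMinIdle(0);                               // allow the pool to shrink all the way down
ds.setMaxIdle(5);                               // idle connections above this are closed on return
ds.setTimeBetweenEvictionRunsMillis(30_000);    // the evictor wakes up every 30 s
ds.setMinEvictableIdleTimeMillis(60_000);       // and closes connections idle for 60 s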

Related

Issues with Postgres connection pool on Payara/Glassfish

I run a JEE application on Payara 4.1 which uses PostgreSQL 9.5.8. The connection pool is configured in the following way:
<jdbc-resource poolName="<poolName>" jndiName="<jndiName>" isConnectionValidationRequired="true"
connectionValidationMethod="table" validationTableName="version()" maxPoolSize="30"
validateAtmostOncePeriodInSeconds="30" statementTimeoutInSeconds="30" isTimerPool="true" steadyPoolSize="5"
idleTimeoutInSeconds="0" connectionCreationRetryAttempts="100000" connectionCreationRetryIntervalInSeconds="30"
maxWaitTimeInMillis="2000">
From what the monitors say, the application needs 1-3 DB connections to Postgres when running. The steady pool size is set to 5, the max pool size to 30.
I see that about 4 times a day the application opens all connections to the database, hitting the max pool size limit. Some requests to the server fail at this point with the exception: java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
After some seconds all issues are gone, and the server runs fine till the next hiccup.
I have requested some TCP dumps to be performed to look closely into what happens exactly. I see that:
After 30 connections (sockets) have been opened, most of the connections are rarely used.
After some time (1 h or so) the server tries to use some of these pooled connections, only to find that the socket is closed (the DB responds immediately with a TCP RST).
As the pooled connection count decreases and hits the steady pool size, the connection pool opens 25 connections (sockets), which takes some time (about 0.5 to 1 second per connection - I don't know why it takes this long, as the TCP handshakes are immediate). At this point some server transactions fail.
The loop repeats.
This issue is driving me mad. I was wondering whether I am missing some crucial pool configuration to revalidate the connections more often, but could not find anything that would help.
EDIT:
What does not help, as we have tested it already:
Making the pool size bigger (same issues)
Removing idleTimeoutInSeconds="0" (we had issues with the connection pool every 10 minutes when we did that).

Why does my mongoDB account have 292 connections?

I only write data into my MongoDB database once a day, and I am not currently writing any data into it, but there have been a consistent 292 connections to my database for the past three hours. No reads or writes, just connections and a consistent 29 commands per second since this started.
Concerned by this, I adjusted settings to only allow access from one specific IP and changed all my passwords, but the number hasn't changed: still 292 connections and 29 commands per second. Any idea what is causing this, or perhaps how I can dig in further?
The number of connections depends on the cluster setup. A connection can be external (e.g. your app or monitoring tools) or internal (e.g. to replicate your data to secondary nodes or a backup process).
You can use db.currentOp() to list the active connections.
Consider that your app instance(s) may not open just 1 connection, but several, depending on the driver that connects to the DB and how it handles connection pooling. The connection pool size can be thought of as the max number of concurrent requests that your driver can service. For example, the default connection pool size for the Node.js MongoDB driver is 5. If you have set a high pool size, either in the driver or in the connection string, your app may open many connections to concurrently process the write commands.
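As an illustration of capping the pool, a sketch assuming the MongoDB Java driver 4.x (the URI is a placeholder, and maxPoolSize can equally be set in the connection string):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import java.util.concurrent.TimeUnit;

MongoClientSettings settings = MongoClientSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb+srv://cluster0.example.net"))  // placeholder URI
        .applyToConnectionPoolSettings(pool -> pool
                .maxSize(10)                                   // at most 10 connections per server
                .minSize(0)                                    // let the pool drain when idle
                .maxConnectionIdleTime(60, TimeUnit.SECONDS))  // close connections idle for 60 s
        .build();
MongoClient client = MongoClients.create(settings);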
You can start by process of elimination:
Completely cut your app off from the DB. There is a keep-alive time, so connections won't close immediately unless the driver closes them formally. You may have to wait some time, depending on the keep-alive setting. You can also restart your cluster and see how many connections there are initially.
Connect your app to the DB and check how the connection number changes with each request. Check whether your app properly closes connections to the DB at some point after opening them.

How to close SQL connections of old Cloud Run revisions?

Context
I am running a Spring Boot application on Cloud Run which connects to a Postgres 11 Cloud SQL database using a Hikari connection pool. I am using the smallest PSQL instance (1 vCPU / 614 MB / 25-connection limit). For the setup, I have followed these resources:
Connecting to Cloud SQL from Cloud Run
Managing database connections
Problem
After deploying the third revision, I get the following error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
What I found out
The default connection pool size is 10, which is why it fails on the third deployment (30 > 25).
When deleting an old revision, active connections shown in the Cloud SQL admin panel drop by 10, and the next deployment succeeds.
Question
It seems that old Cloud Run revisions are being kept in a "cold" state, maintaining their connection pools. Is there a way to close these connections without deleting the revisions?
In the best practices section it says:
...we recommend that you use a client library that supports connection pools that automatically reconnect broken client connections."
What is the recommended way of managing connection pools in Cloud Run, given that it seems old revisions somehow manage to maintain their connections?
Thanks!
Currently, Cloud Run doesn't provide any guarantees on how long it will remain warm after it's started up. When not in use, the instance is severely throttled but not necessarily shut down. Thus, you have some revisions that are holding on to connections even when they are not being directed traffic.
Even in this situation, I disagree with the idea that you should avoid using connection pooling. Connection pooling can lower latency, improve stability, and help put an upper limit on the number of open connections. Instead, you can use some of the following configuration options to help keep your pool in check:
minimumIdle - This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value and total connections in the pool are less than maximumPoolSize, HikariCP will make a best effort to add additional connections quickly and efficiently.
maximumPoolSize - This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections.
idleTimeout - This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize. Idle connections will not be retired once the pool reaches minimumIdle connections.
If you set minimumIdle to 0, your application will still be able to use up to maximumPoolSize connections at once. However, once a connection is idle in the pool for idleTimeout seconds, it will be closed. If you set idleTimeout to something small like 1 minute, it will allow the number of connections your pool is using to scale down to 0 when not in use.
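A minimal sketch of that combination, assuming HikariCP is configured programmatically (the JDBC URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");  // placeholder URL
config.setUsername("user");                                  // placeholder credentials
config.setPassword("secret");
config.setMaximumPoolSize(8);         // hard upper bound per instance/revision
config.setMinimumIdle(0);             // let the pool scale down to zero when idle
config.setIdleTimeout(60_000);        // close connections that have been idle for 1 minute
config.setConnectionTimeout(10_000);  // getConnection() fails after waiting 10 s
HikariDataSource dataSource = new HikariDataSource(config);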
Hope this helps!
The issue here is that the connections opened by HikariCP don't get closed. I don't know much about Hikari, but I found this, which explains how connections should be handled through Hikari. I hope that helps!

PostgreSQL queries not killed on app server shutdown

I have a WildFly server which hosts an app that issues some long-running SQL queries (say, queries or SP calls which take 10-20 minutes or more).
Previously this WildFly was pointing to SQL Server 2008, now to Postgres 11.
Previously when I killed/rebooted WildFly, I had noticed that pretty quickly (if not instantaneously) the long-running SP/query calls that were triggered from the Java code (running in WildFly) were being killed too.
Now... with Postgres I noticed that these long running queries remain there and keep running in Postgres even after the app server has been shut down.
What is causing this? Who was killing them in SQL Server (the server side itself, the JDBC driver, or ...)? Is there some setting/parameter in Postgres which controls this behavior, i.e. what the DB server will do with the query when the client that triggered it has shut down?
EDIT: We do a graceful WildFly shutdown by sending a command to WF to shutdown itself. Still the behavior seems different between SQL Server and Postgres.
If shutting down the application server causes JDBC calls that terminate the database session, this should not happen. If it doesn't close the JDBC connection properly, I'd call that a bug in the application server. If it does, but the queries on the backend are not canceled, I'd call that a bug in the JDBC driver.
Anyway, a workaround is to set tcp_keepalives_idle to a low value so that the server detects dead TCP connections quickly and terminates the query.
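For illustration, a sketch of that workaround applied over JDBC with superuser rights (the connection details are placeholders; the same setting can also be put directly in postgresql.conf):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection(
             "jdbc:postgresql://localhost:5432/mydb", "postgres", "secret");  // placeholder connection details
     Statement stmt = conn.createStatement()) {
    // have the server send a TCP keepalive probe after 60 s of idle time,
    // so dead client connections are detected and their queries terminated sooner
    stmt.execute("ALTER SYSTEM SET tcp_keepalives_idle = 60");
    stmt.execute("SELECT pg_reload_conf()");
}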

Broken connection after Mongod restart

I'm using the Mongo Java driver (2.8.0) to connect to a Mongo instance.
I noticed that if I restart mongod, then the first operation after the restart (even a simple count()) always fails with EOFException or a Broken pipe.
I'm using the following Mongo options:
opts.autoConnectRetry = true;
opts.maxAutoConnectRetryTime = 2000L;
opts.connectTimeout = 30000;
opts.socketTimeout = 60000;
Is there a way to tell the driver to try to re-establish the connections? I thought that "autoConnectRetry" would do that, but that only works after the connection is "discovered" (through a single failed operation) to be broken.
The AutoConnectRetry option will retry when opening a connection to the server, but doesn't guarantee you won't get a read exception. You still need to handle exceptions in your application and retry if appropriate.
Blurb from the docs:
If true, the driver will keep trying to connect to the same server in case the socket cannot be established. There is a maximum amount of time to keep retrying, which is 15s by default. This can be useful to avoid some exceptions being thrown when a server is down temporarily by blocking the operations. It also can be useful to smooth the transition to a new master (so that a new master is elected within the retry time). Note that when using this flag:
- for a replica set, the driver will try to connect to the old master for that time, instead of failing over to the new one right away
- this does not prevent exceptions from being thrown in read/write operations on the socket, which must be handled by the application
Even if this flag is false, the driver already has mechanisms to automatically recreate broken connections and retry the read operations. Default is false.
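A rough sketch of that application-level retry, assuming the legacy 2.x driver API (DBCollection and MongoException from com.mongodb):

import com.mongodb.DBCollection;
import com.mongodb.MongoException;

long countWithRetry(DBCollection collection, int maxAttempts) {
    MongoException lastError = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return collection.count();  // the first call after a mongod restart may hit a broken socket
        } catch (MongoException e) {
            lastError = e;              // e.g. a network error surfaced by the driver
            // optionally sleep/back off here before retrying
        }
    }
    throw lastError;
}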