Vert.x connection pooling issue, high value of vertx_sql_queue_pending - PostgreSQL

I am very new to Vert.x, so please excuse me if this is a very basic question. I am facing issues with the Vert.x connection pool; I am using io.vertx:vertx-pg-client with Vert.x version 4.2.7.
After some time, the application is no longer able to get a connection from the pool. We looked at a couple of metrics and found that a high value of vertx_sql_queue_pending aligns with the connection pool issue.
Why are there pending jobs?
Do I need to increase the connection count?
Is there any other resource I need to increase to fix this issue?
Basically, I am not sure what the exact issue is or how to solve it.
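In case it helps while investigating: the requests counted by vertx_sql_queue_pending are callers waiting for a free connection, and both the pool size and the bound on that wait queue can be set through PoolOptions when the pool is created. A minimal sketch, assuming a local PostgreSQL instance and purely illustrative host, credentials, and sizes:

import io.vertx.core.Vertx;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;

public class PoolSetup {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Connection details are placeholders.
        PgConnectOptions connectOptions = new PgConnectOptions()
            .setHost("localhost")
            .setPort(5432)
            .setDatabase("mydb")
            .setUser("app")
            .setPassword("secret");

        // Callers that ask for a connection while all maxSize connections
        // are busy end up in the wait queue; setMaxWaitQueueSize bounds that
        // queue so waiters fail fast instead of piling up indefinitely.
        PoolOptions poolOptions = new PoolOptions()
            .setMaxSize(10)            // illustrative, not a recommendation
            .setMaxWaitQueueSize(50);  // illustrative

        PgPool pool = PgPool.pool(vertx, connectOptions, poolOptions);

        pool.query("SELECT 1").execute(ar -> {
            // The connection used by this query is returned to the pool automatically.
            System.out.println(ar.succeeded() ? "ok" : ar.cause().getMessage());
        });
    }
}

A persistently growing pending count usually means connections are acquired faster than they are released, so besides raising setMaxSize it is worth checking for slow queries or for code paths that acquire a SqlConnection manually and never return it.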

Related

Why am I experiencing endless connection timeouts using the Quarkus MicroProfile reactive REST client

At some point in my Quarkus app's life (under Kubernetes), it begins getting endless connection timeouts from multiple different hosts (the timeout is configured to be 1 second). From that point on, the app never recovers until I restart the k8s pod.
These endless connection timeouts are not caused by the hosts, since other apps in the cluster do not suffer from this; also, a restart of my app fixes the problem.
I am declaring multiple hosts (base-uri) through the Quarkus application.properties. (Maybe it's using a single Vert.x/Netty event loop and that's wrong?)

How to close SQL connections of old Cloud Run revisions?

Context
I am running a Spring Boot application on Cloud Run which connects to a Postgres 11 Cloud SQL database using a Hikari connection pool. I am using the smallest PSQL instance (1 vCPU / 614 MB / 25-connection limit). For the setup, I have followed these resources:
Connecting to Cloud SQL from Cloud Run
Managing database connections
Problem
After deploying the third revision, I get the following error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
What I found out
The default connection pool size is 10, which is why it fails on the third deployment (30 > 25).
When deleting an old revision, active connections shown in the Cloud SQL admin panel drop by 10, and the next deployment succeeds.
Question
It seems that old Cloud Run revisions are being kept in a "cold" state, maintaining their connection pools. Is there a way to close these connections without deleting the revisions?
In the best practices section it says:
"...we recommend that you use a client library that supports connection pools that automatically reconnect broken client connections."
What is the recommended way of managing connection pools in Cloud Run, given that it seems old revisions somehow manage to maintain their connections?
Thanks!
Currently, Cloud Run doesn't provide any guarantees on how long an instance will remain warm after it starts up. When not in use, the instance is severely throttled but not necessarily shut down. Thus, you end up with some revisions that are holding onto connections even when they are not being directed any traffic.
Even in this situation, I disagree with the idea that you should avoid using connection pooling. Connection pooling can lower latency, improve stability, and help put an upper limit on the number of open connections. Instead, you can use some of the following configuration options to help keep your pool in check:
minimumIdle - This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value and total connections in the pool are less than maximumPoolSize, HikariCP will make a best effort to add additional connections quickly and efficiently.
maximumPoolSize - This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections.
idleTimeout - This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize. Idle connections will not be retired once the pool reaches minimumIdle connections.
If you set minimumIdle to 0, your application will still be able to use up to maximumPoolSize connections at once. However, once a connection has been idle in the pool for idleTimeout (which is specified in milliseconds), it will be closed. If you set idleTimeout to something small, like 1 minute, the number of connections your pool holds can scale down to 0 when the app is not in use.
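To make those three settings concrete, here is a minimal sketch using the plain HikariCP API; the JDBC URL, credentials, and numbers are placeholders, and in a Spring Boot app the equivalent spring.datasource.hikari.* properties would normally be set in application.properties instead:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolFactory {
    public static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(5);  // hard ceiling on open connections per instance
        config.setMinimumIdle(0);      // allow the pool to drain completely when idle
        config.setIdleTimeout(60_000); // retire connections idle for 1 minute (milliseconds)
        return new HikariDataSource(config);
    }
}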
Hope this helps!
The issue here is that the connections opened by HikariCP don't get closed. I don't know much about Hikari, but I found this, which explains how connections should be handled through Hikari. I hope that helps!
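For completeness, with any pooled DataSource (Hikari included) the borrowing pattern is that closing the Connection hands it back to the pool rather than tearing down the socket, so every acquisition should be paired with a close. A small sketch, assuming a DataSource built roughly like the one above:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class QueryRunner {
    // try-with-resources guarantees the connection goes back to the pool,
    // even if the query throws.
    static void ping(DataSource dataSource) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        } // closing conn here returns it to the pool; it does not close the physical socket
    }
}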

JDBC connection pool never shrinks

I run 3 processes at the same time, and all of them use the same DB (RDS Postgres).
All of them are Java applications that use JDBC to connect to the DB.
I am using PGPoolingDataSource in every process as a connection pool for the DB.
Every request is handled by the book, ending with:
finally {
    connection.close();
}
Main problems:
1. I ran out of connections really fast because I do massive work with the DB at the beginning (which is OK), but the pool never shrinks.
2. I get some exceptions in the code because there are not enough connections, and I wish I could extend the timeout when requesting a connection.
My insights:
The PGPoolingDataSource never shrinks, by definition! I couldn't find any documentation about this, but I assume it is the case. So I tried the Apache DBCP pool, and I am having the same problem again.
I think there must be a timeout when waiting for a connection. I would guess that this timeout can be configured, but I couldn't find such a configuration in either pool.
My questions:
Why does the pool never shrink?!
How do I determine how many connections to allocate for each pool/process (here, every process has one pool)?
What happens if I don't close the pool (not the connections) and the app dies? Are the connections in the pool still alive? This happens a lot when I update the application: I stop and start it, so I never close the pool.
What would be a good JDBC connection pool that works best with Postgres and has an option to set a timeout for getConnection?
Thanks
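As a side note on the shrinking and timeout questions: Apache Commons DBCP2, the successor of the DBCP pool mentioned above, exposes both a checkout timeout for getConnection and idle-eviction settings that let the pool shrink; its idle-object evictor is disabled by default (timeBetweenEvictionRunsMillis is negative), which would match a pool that never retires idle connections. A sketch with placeholder URL, credentials, and thresholds:

import org.apache.commons.dbcp2.BasicDataSource;

public class Dbcp2Pool {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
        ds.setUsername("app");
        ds.setPassword("secret");
        ds.setMaxTotal(20);                           // upper bound on connections
        ds.setMaxWaitMillis(5_000);                   // getConnection() fails after 5 s instead of hanging
        ds.setMinIdle(0);                             // allow the pool to shrink to nothing
        ds.setMaxIdle(5);                             // keep at most 5 idle connections around
        ds.setTimeBetweenEvictionRunsMillis(30_000);  // run the idle evictor every 30 s
        ds.setMinEvictableIdleTimeMillis(60_000);     // retire connections idle for 1 minute
        return ds;
    }
}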

Creating a Cassandra Connection Pool with JBoss

I'm new to Cassandra and JBoss, and am trying to create a connection pool. I've searched everywhere and found bits and pieces of information, but I'm still missing something.
I'm not clear on what I need in my standalone file, within the driver element. What should I specify for driver-class and xa-datasource-class?
And, in module.xml, what path should I be using in the resource-root element?
I have these 2 jar files - are they correct?
cassandra-driver-core-2.0.2.jar
cassandra-driver-dse-2.0.2.jar
I'm able to open a connection and execute cql queries from a standalone Java class, but now I need to create a connection pool in JBoss. Any help would be appreciated. Thanks.
The Cassandra driver itself maintains a connection pool (at least with the DataStax jars), which is configurable at runtime and can also be configured while creating the session.
On top of that, the Cassandra driver even lets you read the connection pool status, if you choose to, so you can build your own monitoring service for the pool.
So I'm not sure what you are trying to achieve here: a pool on top of another pool?
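To illustrate what that driver-level pool looks like with the 2.0.x DataStax driver mentioned above (contact point, keyspace, and sizes are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.Session;

public class CassandraPool {
    public static void main(String[] args) {
        // Values are illustrative, not tuned recommendations.
        PoolingOptions pooling = new PoolingOptions()
            .setCoreConnectionsPerHost(HostDistance.LOCAL, 2)
            .setMaxConnectionsPerHost(HostDistance.LOCAL, 8);

        Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")   // placeholder contact point
            .withPoolingOptions(pooling)
            .build();

        Session session = cluster.connect("my_keyspace"); // placeholder keyspace
        session.execute("SELECT release_version FROM system.local");
        cluster.close();
    }
}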

Morphia connection pool?

Is there a way to configure the connection pool properties of Morphia? I see the connection count increase appropriately in the console as I run multiple, concurrent tests against my application. However, I have been unable to locate any documentation that explains how to configure the initial number of connections, timeouts, size of the pool, etc.
Any resources you can point me to that would explain how to do this?
You would tune that through the Mongo (or MongoClient) instance you use to connect; Morphia itself doesn't do any pooling. More documentation on the Java driver can be found here: http://docs.mongodb.org/ecosystem/drivers/java/
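A minimal sketch of that, assuming the legacy MongoClient API and the classic org.mongodb.morphia package (package names and all values are illustrative and depend on the Morphia/driver versions in use):

import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;

public class MorphiaSetup {
    public static Datastore createDatastore() {
        // Pool tuning happens on the driver, not on Morphia.
        MongoClientOptions options = MongoClientOptions.builder()
            .connectionsPerHost(50)    // max pool size, illustrative
            .minConnectionsPerHost(5)  // keep a few connections warm
            .maxWaitTime(10_000)       // ms to wait for a free connection
            .connectTimeout(5_000)     // ms to establish a new connection
            .build();

        MongoClient client = new MongoClient(new ServerAddress("localhost", 27017), options);
        return new Morphia().createDatastore(client, "mydb"); // "mydb" is a placeholder
    }
}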