Spring Boot HikariCP JDBC connection pool cache settings - hikaricp

I am using a Spring Boot application with the HikariCP JDBC connection pool. When the DB team points the DB hostname to different servers, my app keeps using the old IPs and does not reconnect to the new ones. What cache settings will refresh the connection pool?
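HikariCP itself does not cache IP addresses, but pooled connections stay open for maxLifetime (30 minutes by default) and the JVM caches DNS lookups, so existing connections keep pointing at the old servers until they are retired. A minimal sketch of the usual knobs, assuming a hypothetical JDBC URL and credentials:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class DnsAwarePool {

    public static HikariDataSource newDataSource() {
        // Shorten the JVM's positive DNS cache so re-resolved hostnames are
        // picked up quickly (the default can be "cache forever" when a
        // security manager is installed).
        java.security.Security.setProperty("networkaddress.cache.ttl", "30");

        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db.internal.example:5432/app"); // hypothetical URL
        config.setUsername("app_user");                                      // hypothetical credentials
        config.setPassword("app_password");
        // Retire pooled connections after 5 minutes so their replacements
        // re-resolve the hostname instead of reusing sockets to the old IPs.
        config.setMaxLifetime(300_000);
        config.setIdleTimeout(120_000);
        return new HikariDataSource(config);
    }
}

In a Spring Boot application the same pool settings can also be expressed as spring.datasource.hikari.max-lifetime and spring.datasource.hikari.idle-timeout.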

Related

How to handle RDS connection drops when downscaling?

I’m thinking of using autoscaling with my Amazon Aurora PostgreSQL, but I’m worried about what to do if a replica is downscaling and a client still holds a connection to that replica. How can I make sure that the client can handle this situation?
The connection is based on TCP, and once it's disconnected, the JDBC driver opens a new TCP connection to the RDS instance.
I can recommend using AWS RDS Proxy - it maintains the connection pool for the application and takes care of the connections to the backend database.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
Also, AWS has provided its own JDBC connector, which is recommended for faster recovery: https://github.com/pgjdbc/pgjdbc
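On the client side, a rough sketch (an assumed helper, not part of the answer above) of retrying a statement once, so that a connection broken by a downscaled replica is simply replaced by a fresh one from the pool:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class RetryOnceOnDisconnect {

    // Run a statement, retrying once if the borrowed connection turns out to be
    // broken (e.g. its replica was scaled down); the pool hands out a fresh
    // connection and the driver opens a new TCP connection for it.
    static void execute(DataSource dataSource, String sql) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection connection = dataSource.getConnection();
                 Statement statement = connection.createStatement()) {
                statement.execute(sql);
                return;
            } catch (SQLException e) {
                if (attempt >= 2) {
                    throw e; // still failing after one retry; give up
                }
            }
        }
    }
}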

SQLAlchemy with Aurora Serverless V2 PostgreSQL - many connections

I have an AWS Aurora Serverless V2 database setup (PostgreSQL) that is being accessed from a compute cluster. The cluster launches a large number of jobs (>1000) and each job independently puts/pulls some data from the database. The Serverless cluster is set up to autoscale from 2 to 32 units as needed.
The code being run by each cluster job uses SQLAlchemy (either the ORM or the core). I am setting up each database connection with a null pool and pessimistic disconnect handling (i.e., pool_pre_ping=True). From my reading of the docs, this should handle disconnects caused by a connection going stale while idle.
Code is also written to access the DB, get the results, close the connection (to avoid idle connections), and then reopen the connection after processing (5-30 minutes). This is working well because once processing is completed, the new connections are staggered and the DB has scaled up.
My logs show the standard all-connection-slots-taken error until the DB scales the available units high enough: psycopg2.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser and rds_superuser connections
Questions:
Should I be configuring the SQLAlchemy connection differently? It feels like an anti-pattern to put in a custom retry to grab a connection while waiting for the DB to scale up the number of available units, as this type of capability usually seems to be built into SQLAlchemy.
Should I be using an RDS Proxy in front of the database? This also seems like an anti-pattern, adding a proxy in front of an autoscaling DB.
PG version is 10.

create a connection pool for many DBs on the same DB server (Spring Boot)

I'm looking for a way to create a connection pool for many DBs on the same DB server (PostgreSQL Aurora).
This means that I need the ability of changing the target DB of a connection at run time.
Currently I'm using HikariCP for connection pooling, in a stack of Spring Boot and JHipster.
Background:
we need to deploy a multi-tenancy micro-service system with a single DB server (to be specific, a single AWS Aurora PostgreSQL instance)
our solution for multi-tenancy is that each tenant has its own DB, and in that DB there are separate schemas for each service. All the DBs are in the same AWS Aurora instance.
Our problem:
with this deployment, we have a connection pool for each (tenant x micro-service instance).
This leads to a huge number of connections.
I.e., with a pool size of 50 connections/pool, we need: 500 tenants x 20 micro-service instances x 50 connections/pool = 500,000 connections.
The maximum number of connections allowed on any Aurora DB is 16,000, and by default the "max_connections" parameter is typically set to something lower.
So now I'm looking for a way to make our pooling scope larger, so that many tenants can share the same pool. Since we use only 1 Aurora server instance, I think it's possible to create a connection pool that can be shared between many tenants.
Is there any way to have a connection pool that can switch the DB at run time?
Unless Aurora has done some customization on this, you cannot change the database of a connection once it is established in PostgreSQL. You can still use a pooler, but it will effectively be a separate pool for each database. This is pretty fundamental; there is nothing you can do about it.
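Given that constraint, one workaround is to keep a very small, lazily created HikariCP pool per tenant database, all pointing at the same Aurora endpoint, so the total connection count stays bounded. A sketch under assumptions (hypothetical endpoint and credentials, not a JHipster-specific solution):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

public class TenantDataSources {

    private final Map<String, HikariDataSource> pools = new ConcurrentHashMap<>();

    // One small pool per tenant database, created on first use; every pool
    // points at the same Aurora instance, only the database name differs.
    public DataSource forTenant(String tenantDatabase) {
        return pools.computeIfAbsent(tenantDatabase, db -> {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://aurora.internal.example:5432/" + db); // hypothetical endpoint
            config.setUsername("service_user");                                        // hypothetical credentials
            config.setPassword("service_password");
            config.setMaximumPoolSize(2); // keep each per-tenant pool tiny
            config.setMinimumIdle(0);     // let pools of inactive tenants drain
            config.setIdleTimeout(60_000);
            return new HikariDataSource(config);
        });
    }
}

This keeps the 50-connections-per-pool figure above down to 2, at the cost of a short warm-up whenever an inactive tenant's pool has drained.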

too many clients in postgresql in docker-swarm with spring boot

I set up a development environment in a Docker Swarm environment, which consists of 2 nodes, a few networks and a few microservices. The following gives an example of how it looks in the cluster.
Service               Network             Node     Image Version
nginx reverse proxy   backend, frontend   node 1   latest stable-alpine
Service A             backend, database   node 2   8-jre-alpine
Service B             backend, database   node 2   8-jre-alpine
Service C             backend, database   node 1   8-jre-alpine
Database postgresql   database            node 1   latest alpine
The services are Spring Boot 2.1.7 applications with spring-boot-starter-data-jpa. All of the services above hold a database connection to the PostgreSQL instance. For the database I configured only the following properties in application.properties:
spring.datasource.url
spring.datasource.username
spring.datasource.password
spring.datasource.driver-class-name
spring.jpa.hibernate.ddl-auto=
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
After some time I see that the connection limit in PostgreSQL is exceeded, so new connections can no longer be created.
2019-09-21 13:01:07.031 1 --- [onnection adder] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Cannot acquire connection from data source org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
A similar error is also shown when I try to connect to the database over ssh.
psql: FATAL: sorry, too many clients already
What I have tried so far:
spring.datasource.hikari.leak-detection-threshold=20000
which didn't help.
I found several answers to this problem, such as:
increase the connection limit in PostgreSQL
No, I don't want to do this. It is just a temporary solution; it would only pollute the connections again a bit later.
add an idle timeout to the HikariCP configuration
The default HikariCP configuration already uses a value of 10 minutes, which doesn't help.
add a max lifetime to the HikariCP configuration
The default HikariCP configuration already uses a value of 30 minutes, which doesn't help.
reduce the number of idle connections in the HikariCP configuration
The default HikariCP configuration already uses a value of 10, which doesn't help.
set min idle in the HikariCP configuration
The default is 10 and I am fine with it.
I am expecting around 30 connections for the services but I find nearly 100 connections. Restarting the services or stopping them does not close the idle connections either. What are your suggestions? Is it a Docker-specific problem? Did someone experience the same problem?
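One thing worth checking is the pool size: with no explicit Hikari settings, each Spring Boot service keeps a pool of up to 10 connections (and minimumIdle defaults to the same value), so a few services plus replicas and orphaned containers add up quickly. A hedged sketch of capping the pool programmatically; the environment-variable wiring is an assumption about how the existing spring.datasource.* values reach the container, and the equivalent properties are spring.datasource.hikari.maximum-pool-size and spring.datasource.hikari.minimum-idle:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PooledDataSourceConfig {

    // Build the DataSource explicitly so the pool size is capped per service.
    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(System.getenv("SPRING_DATASOURCE_URL"));           // assumed env wiring
        config.setUsername(System.getenv("SPRING_DATASOURCE_USERNAME"));
        config.setPassword(System.getenv("SPRING_DATASOURCE_PASSWORD"));
        config.setMaximumPoolSize(5); // default is 10 per service, which adds up quickly
        config.setMinimumIdle(1);     // don't keep a full pool of idle connections
        return new HikariDataSource(config);
    }
}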

Playframework scala slick database connections and aws

I have a problem with my database connections. Whenever my project runs a db.run from any of my microservices it creates a new connection to my db. I have a play-scala-slick project and I'm using Amazon Web Services (AWS).
Do I have to manually open and close connections e.g. for every query from any of my microservices or is there a clean, proper, smooth way to handle my connections? Thanks!
A solution is to set the timeout variables in AWS RDS.
First you have to set up a Parameter Group:
set global wait_timeout=3;
set global interactive_timeout=3;