Number of concurrent database connections - PostgreSQL

We are using an Amazon RDS PostgreSQL instance (r3.8xlarge) for our production server. I checked the instance's max connections limit, and it happens to be 8192.
I have a service deployed in ECS, and each ECS task takes one database connection. The number of tasks goes up to 2000 during peak load, which means we will have 2000 concurrent connections to the database.
I want to check whether it is OK to have 2000 concurrent connections to the database, and secondly, whether it will impact the performance of Amazon RDS PostgreSQL.

Having 2000 connections at a time should not by itself cause a performance issue, since AWS manages the performance side. There are many DB load-testing tools available if you want to be absolutely sure about this.
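As a quick sanity check before reaching for a load-testing tool, you can compare the current connection count against the server-side limit. A minimal sketch with psycopg2 (the connection string is a placeholder):
import psycopg2
# Placeholder DSN: point this at your RDS endpoint.
conn = psycopg2.connect("host=my-rds-endpoint dbname=postgres user=admin password=secret")
with conn.cursor() as cur:
    # Server-wide cap; RDS derives the default from instance memory.
    cur.execute("SHOW max_connections;")
    max_conn = int(cur.fetchone()[0])
    # Connections currently open across all databases.
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    in_use = cur.fetchone()[0]
    print(f"{in_use} of {max_conn} connection slots in use")
conn.close()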

Related

SQLAlchemy with Aurora Serverless V2 PostgreSQL - many connections

I have an AWS Aurora Serverless V2 database setup (PostgreSQL) that is being accessed from a compute cluster. The cluster launches a large number of jobs (>1000), and each job independently puts/pulls some data from the database. The Serverless cluster is set up to autoscale from 2 to 32 units as needed.
The code being run by each cluster job uses SQLAlchemy (either the ORM or the core). I am setting up each database connection with a null pool and pessimistic disconnect handling (i.e., pool_pre_ping=True). From my reading of the docs, this should handle disconnects caused by connections going idle.
The code is also written to access the DB, get the results, close the connection (to avoid idle connections), and then reopen the connection after processing (5-30 minutes). This works well because once processing is completed, the new connections are staggered and the DB has scaled up.
Until the DB scales the available units high enough, my logs show the standard all-connection-slots-are-taken error: psycopg2.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser and rds_superuser connections.
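For reference, a minimal sketch of the engine configuration described above (the URL is a placeholder):
from sqlalchemy import create_engine, text
from sqlalchemy.pool import NullPool
# Placeholder URL: point this at the Aurora Serverless v2 endpoint.
engine = create_engine(
    "postgresql+psycopg2://user:pass@aurora-endpoint/mydb",
    poolclass=NullPool,   # no client-side pooling; every checkout opens a fresh connection
    pool_pre_ping=True,   # pessimistic disconnect handling: test the connection on checkout
)
# Open, use, and fully close the connection, as the jobs described above do.
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1")).scalar()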
Questions:
Should I be configuring the SQLAlchemy connection differently? It feels like an anti-pattern to put in a custom retry to grab a connection while waiting for the DB to scale the number of available units, as this type of capability usually seems to be built into SQLAlchemy.
Should I be using an RDS Proxy in front of the database? This also seems like an anti-pattern, adding a proxy in front of an autoscaling DB.
PG version is 10.

1 million concurrent database connections

In https://cloud.google.com/sql/docs/quotas, it is mentioned that "Cloud Run services are limited to 100 connections to a Cloud SQL database." Assuming I deploy my service on Cloud Run, what is the right way to handle 1 million concurrent connections? Can Cloud Spanner enable this? I can't find documentation discussing the maximum number of concurrent connections on Cloud Spanner, or the maximum concurrent connections from Cloud Run.
Do you want Cloud Run to handle a million concurrent connections, or do you want Cloud SQL to handle a million concurrent connections?
If you want Cloud SQL to handle a million concurrent connections, you are probably approaching the problem wrong. Check out this article about pool sizing (it's in a Java repo, but it is general enough to apply to all connection pooling). If you are at the point where you need a million concurrent connections, you will need to invest in more advanced architectures (such as sharding).
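For a sense of scale, the pool-sizing article referenced above suggests a rule of thumb of roughly (core_count * 2) + effective_spindle_count connections. A toy calculation (the core and spindle counts here are made-up examples):
def suggested_pool_size(core_count: int, effective_spindle_count: int) -> int:
    # Rule of thumb from the pool-sizing article: far fewer connections
    # than clients, sized to the database server's hardware.
    return core_count * 2 + effective_spindle_count
# Hypothetical 16-core database server backed by a single SSD volume:
print(suggested_pool_size(16, 1))  # -> 33, nowhere near one million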

Create a connection pool for many DBs on the same DB server (Spring Boot)

I'm looking for a way to create a connection pool for many DBs on the same DB server (PostgreSQL Aurora).
This means I need the ability to change the target DB of a connection at run time.
Currently I'm using HikariCP for connection pooling, in a stack of Spring Boot and JHipster.
Background:
we need to deploy a multi-tenant micro-service system on a single DB server (to be specific, a single AWS Aurora PostgreSQL instance)
our multi-tenancy solution is that each tenant has its own DB, and within that DB there is a schema for each service. All the DBs are on the same AWS Aurora instance.
Our problem:
with this deployment, we have a connection pool for each (tenant x micro-service instance).
This leads to a huge number of connections.
That is: with a pool size of 50 connections/pool, we need 500 tenants x 20 micro-service instances x 50 connections/pool = 500,000 connections.
The maximum number of connections allowed on any Aurora DB is 16000, and by default the "max_connections" parameter is typically set to something lower.
So now I'm looking for a way to make our pooling scope larger, so that many tenants can share the same pool. Since we use only 1 Aurora server instance, I think it's possible to create a connection pool that can be shared between many tenants.
Is there any way to have a connection pool that can switch the DB at run time?
Unless Aurora has customized this behavior, you cannot change the database of a connection once it is established in PostgreSQL. You can still use a pooler, but it will effectively be a separate pool for each database. This is pretty fundamental; there is nothing you can do about it.
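The question is about HikariCP, but the shape of a pool-per-database registry is the same in any stack. A sketch in Python with psycopg2 (connection details are placeholders), with each per-database pool kept deliberately small so the total stays under the server's limit:
import psycopg2.pool
HOST, USER, PASSWORD = "aurora-endpoint", "app", "secret"  # placeholders
pools = {}  # one pool per tenant database, created lazily
def get_pool(dbname):
    # A PostgreSQL connection is bound to one database for its lifetime,
    # so each tenant DB needs its own pool; keep maxconn small.
    if dbname not in pools:
        pools[dbname] = psycopg2.pool.SimpleConnectionPool(
            minconn=0, maxconn=2,
            host=HOST, user=USER, password=PASSWORD, dbname=dbname,
        )
    return pools[dbname]
conn = get_pool("tenant_42").getconn()
try:
    with conn.cursor() as cur:
        cur.execute("SELECT current_database()")
        print(cur.fetchone()[0])
finally:
    get_pool("tenant_42").putconn(conn)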

High CPU Utilisation on AWS RDS - Postgres

I attempted to migrate my production environment from a native Postgres environment (hosted on AWS EC2) to RDS Postgres (9.4.4), but it failed miserably. The CPU utilisation of the RDS Postgres instances shot up drastically compared to that of the native Postgres instances.
My environment details are as follows:
Master: db.m3.2xlarge instance
Slave1: db.m3.2xlarge instance
Slave2: db.m3.2xlarge instance
Slave3: db.m3.xlarge instance
Slave4: db.m3.xlarge instance
[Note: All the slaves were at Level 1 replication]
I had configured the Master to receive only write requests, and this instance was fine. The write count was 50 to 80 per second, and the CPU utilisation was around 20 to 30%.
But apart from this instance, all my slaves performed very badly. The slaves were configured to receive only read requests, and I assume all the writes that were happening were due to replication.
The provisioned IOPS on these boxes was 1000.
On average there were 5 to 7 read requests hitting each slave, and the CPU utilisation was 60%.
Whereas on native Postgres, we stay well within 30% for this traffic.
I couldn't figure out what's going wrong with the RDS setup, and AWS support has not been able to provide good leads.
Did anyone face similar things with RDS Postgres?
There are many factors that can drive up CPU utilization on PostgreSQL, such as:
Free disk space
CPU usage
I/O usage, etc.
I came across the same issue a few days ago. For me, the reason was that some transactions were getting stuck and kept running for a long time, which increased CPU utilization. I found this out by running a PostgreSQL monitoring query:
SELECT max(now() - xact_start) FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active');
This query shows how long the oldest transaction has been running; that time should not be greater than one hour. So killing the transactions that had been running for a long time, or that were stuck, worked for me. I followed this post for monitoring and solving my issue; the post includes lots of useful commands for monitoring this situation.
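If you do find such transactions, here is a sketch of killing them from Python with pg_terminate_backend (the one-hour threshold mirrors the advice above; connection details are placeholders):
import psycopg2
conn = psycopg2.connect("host=my-rds-endpoint dbname=mydb user=admin password=secret")
conn.autocommit = True
with conn.cursor() as cur:
    # Terminate backends whose transaction has been open for over an hour.
    cur.execute("""
        SELECT pid, pg_terminate_backend(pid)
        FROM pg_stat_activity
        WHERE state IN ('idle in transaction', 'active')
          AND xact_start < now() - interval '1 hour'
          AND pid <> pg_backend_pid()
    """)
    for pid, ok in cur.fetchall():
        print(f"terminated backend {pid}: {ok}")
conn.close()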
I would suggest increasing your work_mem value, as it might be too low, and doing normal query optimization research to see if you're using queries without proper indexes.

Connection limit for SQLDB with small_plan (Bluemix)

What is the concurrent connection limit for the SQLDB Small plan? We have a Liberty application bound to a SQLDB service on the Small plan and got the following error: DB2 SQL Error: SQLCODE=-4712
The Free Beta plan features 100MB per instance and 10 concurrent connections.
The Small plan features 10GB max per instance and 20 concurrent connections.
The Premium plan features 500GB max storage per instance and 100 concurrent connections.
See the link below for more information:
https://console.ng.bluemix.net/?ace_base=true/#/store/cloudOEPaneId=store&serviceOfferingGuid=0d5a104d-d700-4315-9b7c-8f84a9c85ae3&fromCatalog=true
If you think you're close to exhausting your connections, you should use the Monitoring and Analytics Service to monitor connection pools:
https://www.ng.bluemix.net/docs/#services/monana/index.html#gettingstartedtemplate