What does max_connections mean in PostgreSQL?

Say I have max_connections = 500 and 1000 users come to my site. Is the database then unavailable for the other 500? Or have I misunderstood this?

If you set max_connections to 500 then the database will accept at most 500 connections.
The typical approach to support a (much) higher number of online users is to use a connection pool inside your application, especially for web applications. It's not unheard of for the ratio between the number of physical database connections actually used and the number of "online users" to be in the area of 1:100 (so for 1000 users you would only need a connection pool that maintains 10 connections). Depending on the application and workload, that ratio might be even higher.
How exactly you enable a connection pool in your application depends on the technology you are using. Java web applications typically get one by default (through a JNDI DataSource).
If your application (or technology) doesn't support a connection pool directly, you need to use an external pooler like pgPool or pgBouncer.
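For illustration, here is a minimal sketch of an application-side pool using node-postgres (assuming a Node.js app; the connection details and pool size are placeholders, not recommendations):

    import { Pool } from "pg";

    // One shared pool for the whole application. Even with thousands of
    // online users, a handful of physical connections is often enough,
    // because each request holds a connection only while a query runs.
    const pool = new Pool({
      host: "localhost",          // placeholder connection details
      database: "mydb",
      user: "app",
      password: "secret",
      max: 10,                    // at most 10 physical connections to PostgreSQL
      idleTimeoutMillis: 30000,   // close connections idle for more than 30s
    });

    export async function countUsers(): Promise<number> {
      // pool.query() borrows a connection, runs the query, and returns it
      // to the pool, so 1000 concurrent users share these 10 connections.
      const result = await pool.query("SELECT count(*) AS n FROM users");
      return Number(result.rows[0].n);
    }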

Related

knexfile settings when using PgBouncer

We have a setup where multiple Node processes write into the same database (different tables), and as a result, when using Knex, we end up with more connections to the database than desirable. So, I was thinking of using PgBouncer as a middleware for the Knex processes to connect to, but I'm unsure of how Knex's attempts at connection pooling will work with PgBouncer, which will set up its own pool of connections.
Please assume the following:
A 2vCPU database server
10+ Node processes interacting with the database
PgBouncer running with a pool size of 5
Questions:
If I set min/max size as 1/5 in each Knex setup, will I run out of connections or will PgBouncer somehow be able to "fool" each Knex setup into believing that it has its own pool?
It doesn't feel like I can use a Knex pool in this scenario. Even using min/max pool sizes of 1/1 will leave me out of options if the first five Knex setups I launch claim a connection each.
Is there a way to make Knex drop pooling and open/close connections as needed? This is the ideal setup for me because now PgBouncer won't actually be opening/closing connections but returning them to the pool (unless I'm mistaken about this?).
What strategy should I use? What should my knexfile look like? And would I need to code differently for this? Any help or ideas are welcome!
While it would be ridiculous to allow 32000 connections, it is also ridiculous to allow only 5. I think the lesson from your link should be not that there is a precisely defined magic number of connections, but that you need to look at the wait events of your database under load, or just do experiments, to see what is going on and whether you have too many connections.
While repeatedly connecting to pgbouncer (which reuses its internal connection to PostgreSQL) might be less expensive than repeatedly connecting all the way through to PostgreSQL, it will still be far more expensive than just re-using an existing connection from knex's internal connection pool. If your connection load is high enough to matter, then bypassing the internal connection pool to just use pgbouncer would be a mistake. Most likely using pgbouncer at all is a mistake, as it just introduces yet another moving piece for no good reason.
Using the knex pooler with min:1 and max:5 with 10 different knex app servers and a limit of 5 connections in pgbouncer would mean that only 5 of your app servers could have a connection. The rest would be forced to wait, but it isn't clear what they would be waiting for. Presumably they would wait forever, or until they caught a timeout error, or until one of the other app servers exited or shut down its pool. Pgbouncer would fool them all right, but not in a helpful way. It might make more sense to use min:0 (which is now the recommended setting, but still not the default), as that way an app server would at least release its final connection after idleTimeoutMillis, allowing another app to use it.
Using min:1 max:1 could be useful if pgbouncer were not used or used with a large enough pool size, but it could also break entirely. For example, if an app needs at least 2 simultaneous connections to work correctly. That would probably be a poorly written app, but poorly written apps are the rule, not the exception.
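To make the min:0 suggestion concrete, here is what a knexfile along those lines could look like (a sketch only; the connection details are placeholders, and whether you point it at PostgreSQL directly or at PgBouncer depends on the trade-offs discussed above):

    // knexfile.ts
    import type { Knex } from "knex";

    const config: Knex.Config = {
      client: "pg",
      connection: {
        host: "127.0.0.1",          // placeholder; PgBouncer or PostgreSQL
        database: "mydb",
        user: "app",
        password: "secret",
      },
      pool: {
        min: 0,                     // let the pool shrink to zero when idle
        max: 5,                     // never hold more than 5 physical connections
        idleTimeoutMillis: 30000,   // release idle connections after 30s, so
                                    // other app servers can claim them
      },
    };

    export default config;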

Connection Pool Capabilities in DigitalOcean PostgreSQL Managed Databases

I have connection pools setup for my system to handle concurrent connections for my managed database clusters in DigitalOcean.
Overall, each client I have has their own DB, and I create a pool for each connection to avoid the error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
Yesterday I ran into connection issues with a default database that my system also uses, I hadn't thought the connection pooling was needed for whatever dumb reason or another. No worries, I started getting flooded with error emails and then fixed the system to use the correct pooling mechanism.
This is where my question comes in: with the pooling on DigitalOcean they give you a specific "size" depending on your subscription; my subscription has an available "size" for the clusters of 97. As my clients grow I will be creating new pools and databases for them, so eventually I will run out of slots to assign a pool... what does this "size" dictate?
For example, one client of mine has an allotted size of 10 for their connection pool. Speaking to support:
The connection pool with a size of 1 will only allow 1 connection at a time. As for how you can estimate the number of simultaneous users, this is something you'll need to look over as your users and application grow. We don't have a way to give you that estimate from our back end.
So with that client that has a size of 10 allotted to their pool, they have 88 staff users that use the system simultaneously throughout the day, and then they have about 4,000 users that they manage who could all theoretically sign in at once.
This is a lot more than 10 connections, and I get no errors on connection size at least that I've seen so far.
Given that I have a limited amount, how do I determine the appropriate size to use? Does anybody have experience with this in production?
For example, with the connections listed above, is 10 too much, too little, just right?
Update 2/14/23
I have tested the capabilities a bit because I was curious and can't get any semi-logical answer. When I use a connection pool of size 1 for my 4,000 user client (although all users would not hit their DB/pool at the same time), I get connection errors (specifically when running background tasks from django-celery and Celery in the middle of the night).
Here are those errors; overall it is just connection already closed, raised from here:
File "/usr/local/lib/python3.11/site-packages/django/db/backends/postgresql/base.py", line 269, in create_cursor cursor = self.connection.cursor()
This issue happened concurrently on 2 nights, but never during the day during normal user activity.
Once I upped the connection pool for said 4,000 user client to 2 instead of 1, the connection already closed error never occurred again.
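A rough, back-of-the-envelope way to reason about the size (all the figures here are illustrative assumptions, not measurements from your system): if an average request holds a connection for about 25 ms, one pooled connection can serve roughly 40 requests per second, so a pool of 10 can serve on the order of 400 requests per second. 88 staff users plus occasional sign-ins from the 4,000 managed users generate far less than that, which is consistent with a size of 10 producing no errors during the day. A long-running night-time Celery task, on the other hand, can occupy a pool's single connection for its whole duration, which fits the "connection already closed" errors you saw with a size of 1 and their disappearance at a size of 2.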

Why does WSO2 APIm need 50+ DB connections at startup?

In our WSO2 setup, whenever APIm comes up, it creates close to 50+ DB connections towards the Postgres DB. In the stable phase, each APIm instance has only 4 DB connections. I would like to understand why it needs 50+ connections at startup. Is it a bug or by design?
We run WSO2 in a Kubernetes setup, Postgres has max_connections set to 100, and two instances of APIm are not able to come up due to this issue.
Within the WSO2 platform, Tomcat JDBC pooling is used as the default pooling framework due to its production-ready stability and high performance. The goal of tuning the pool properties is to maintain a pool that is large enough to handle peak load without unnecessarily utilizing resources. For production servers, these pooling configurations can generally be tuned in the <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml file. This applies if you are using an APIM version less than or equal to 2.6. If you are using APIM 3.x.x, then these configurations can be found in the <PRODUCT_HOME>/repository/conf/deployment.toml file.
The following parameters should be considered when tuning the connection pool:
The application's concurrency requirement.
The average time used for running a database query.
The maximum number of connections the database server can support.
The maxActive value is the maximum number of active connections that can be allocated from the connection pool at the same time. The default value is 100. The maximum latency is approximately (P / M) * T, where:
M = maxActive value
P = peak concurrency value
T = average time taken to process a query
Therefore, by increasing the maxActive value (up to the expected highest concurrency), the time that requests wait in the queue for a connection to be released will decrease. But before increasing the maxActive value, consult the database administrator, as it will create up to maxActive connections from a single node during peak times, and it may not be possible for the DBMS to handle the accumulated count of these active connections.
Note that this value should not exceed the maximum number of connections allowed for your database.
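As a quick worked example of that formula (the numbers are illustrative, not WSO2 defaults): with peak concurrency P = 200, maxActive M = 100, and an average query time T = 50 ms, the maximum latency is approximately (200 / 100) * 50 ms = 100 ms. Raising maxActive to 200 would cut that to roughly 50 ms, at the cost of allowing up to 200 connections per node against the database.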
For more details on this topic, please refer to the official documents [1, 2].
[1] https://docs.wso2.com/display/ADMIN44x/Performance+Tuning#PerformanceTuning-JDBCpoolconfiguration
[2] http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html

PostgreSQL + Apache connection pooling: MaxIdle and MaxActive vs RAM usage

We are running a PostgreSQL database in the Google Cloud. On top of this we have an app. In the app we can configure runtime connection pooling settings for the database.
Our Google Cloud SQL server has 30 GB of RAM, so the default max_connections is 500, as I read in the Google docs. In our app we have set the following connection pooling (Apache Commons pooling) settings:
MaxActive: 200
MaxIdle: 200
MinIdle: 50
We are experiencing issues with these settings. First of all, we often run into the MaxActive limit: I can see a flatline in the connections graph at 200 connections a couple of times a day. At those moments our logs are flooded with SQL connection errors.
The server is using around 28 GB of RAM at peak moments (with 200 active connections), so we are close to the RAM limit as well.
Instead of blindly increasing the RAM and MaxActive, I was hoping to get some insights on what would be best practice in our situation. I see 2 solutions to our problem:
Increase RAM, increase MaxActive and increase MaxIdle (not very cost efficient)
Increase MaxActive, keep MaxIdle the same (or even lower) and keep MinIdle the same (or even lower)
Option 1 would be more expensive, so I am wondering about option 2. Because I lower the idle connections, I would take up less RAM. However, will this have a noticeable impact on performance? I was taught to keep MaxIdle as close to MaxActive as possible, to minimize the overhead of creating new connections.
I have done quite some research, but I came to the conclusion that tuning these settings is very situation-specific and there is not really a general best practice. I could not find a definitive answer on the performance impact of option 1 vs option 2.
PS: we are also experiencing some slow queries in our app, so of course we can optimize things or change the design of our app to decrease the number of concurrent connections.
I really hope someone can give some helpful insights / advice / best practices. Thanks a lot in advance!
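One back-of-the-envelope way to compare the two options (the per-connection figure below is an assumption; measure it on your own instance, for example by watching RAM usage as the connection count changes): an idle PostgreSQL backend typically holds on the order of 10 MB. Lowering MaxIdle from 200 to 50 would then free at most about 150 * 10 MB ≈ 1.5 GB, and the price is a fresh TCP plus authentication handshake (usually a few tens of milliseconds) each time load climbs back above 50 connections. Since most of the 28 GB is likely shared buffers, per-query work memory, and OS cache rather than idle backends, option 2 tends to buy modest headroom rather than a large saving; if the flatlines at 200 come from slow queries holding connections, fixing those (per your PS) will likely help more than either option.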

What is connectionsPerHost in MongoDB?

I followed this tutorial and there is a configuration option called connectionsPerHost.
What is this?
connectionsPerHost is the number of physical connections a single Mongo client instance (it's a singleton, so you usually have one per application) can establish to a mongod/mongos process. At the time of writing, the Java driver will establish this number of connections eventually, even if the actual query throughput is low (in other words, you will see the "conn" statistic in mongostat rise until it hits this number per app server).
There is no need to set this higher than 100 in most cases, but this setting is one of those "test it and see" things. Do note that you will have to make sure you set it low enough that the total number of connections to your server does not exceed the maximum number of connections your MongoDB server allows.
Found here: How to configure MongoDB Java driver MongoOptions for production use?
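The question above is about the Java driver, but for illustration, the equivalent knob in the Node.js driver is maxPoolSize (a sketch; the URI is a placeholder):

    import { MongoClient } from "mongodb";

    // One client (and therefore one pool) per application, mirroring the
    // singleton advice above. maxPoolSize bounds the number of physical
    // connections per host, like connectionsPerHost in the Java driver.
    const client = new MongoClient("mongodb://localhost:27017", {
      maxPoolSize: 100, // upper bound on connections to each mongod/mongos
      minPoolSize: 0,   // allow the pool to shrink when idle
    });

    async function main(): Promise<void> {
      await client.connect();
      const count = await client.db("test").collection("items").countDocuments();
      console.log(count);
      await client.close();
    }

    main().catch(console.error);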