Forgive me if there's already been an answer to this somewhere, but I haven't seen anything definitive in the docs.
Is there a limit to the size of the connection pool?
I have a situation where there could be 100s or 1000s of connections open at once - should the connection pool be used for this or would that be an abuse of the feature?
Is there a limit to the size of the connection pool?
Probably not; however, a greater concern is that each connection takes up RAM.
I have a situation where there could be 100s or 1000s of connections open at once - should the connection pool be used for this or would that be an abuse of the feature?
I don't see this being an abuse. During the times you have 100 or 1000 clients connecting at once, a pool of that size should let the server handle the connections much better.
However, if there are only 10 clients connecting and you have a connection pool of 1000, the remaining 990 connections could be seen as wasted resources.
Source: Deep Dive into Connection Pooling
I do not have any experience with connection pools, so take my information with a grain of salt. I would like to hear from someone with experience on the topic.
It turns out that using the connection pool to manage traffic to more than one database isn't possible, which is what I was aiming to do. The solution I'll be using involves creating separate connections with createConnection() and closing any that go unused.
See the issue I opened in the Mongoose GitHub project for a fuller explanation: https://github.com/Automattic/mongoose/issues/6206
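For illustration, a minimal sketch of that createConnection() approach looks roughly like the following. The database names and pool size are made up, and the pool option name varies by Mongoose/driver version (maxPoolSize in recent releases, poolSize in older ones):

    import mongoose from "mongoose";

    async function main() {
      // Each createConnection() call gets its own connection pool.
      const orders = mongoose.createConnection("mongodb://localhost:27017/orders", {
        maxPoolSize: 10, // hypothetical cap; "poolSize" on older versions
      });
      const reports = mongoose.createConnection("mongodb://localhost:27017/reports", {
        maxPoolSize: 10,
      });

      // ... register models and run queries against each connection ...

      // Close a connection (and its pool) once it is no longer needed.
      await reports.close();
      await orders.close();
    }

    main().catch(console.error);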
I'm trying to get a sense of whether my connection pool sizes are big enough. I can't seem to find any way to see how many connections within the pool are available or in use; I would love to just graph this over time. Alternatively, is there a way to see the high-water mark for the maximum number of concurrent connections in use within the pool?
MongoDB 4.2
MongoDB Node.js driver (mongodb) 3.5.8
Mongoose 5.9.16
You can add a CMAP (Connection Monitoring and Pooling) event subscriber to receive notifications when connections are created, checked out, checked in, and closed.
I am unaware of standardized driver functionality for obtaining the current pool size, but you can figure it out yourself by tracking connection creation and closure.
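As a rough sketch of that idea with the Node.js driver (the connection string, pool size, and reporting interval are placeholders, and on older 3.x drivers the pool option is named poolSize and the CMAP events may require upgrading):

    import { MongoClient } from "mongodb";

    const client = new MongoClient("mongodb://localhost:27017", { maxPoolSize: 10 });

    // Track pool usage via the CMAP events emitted by the client.
    let open = 0;       // connections currently in the pool
    let inUse = 0;      // connections currently checked out
    let highWater = 0;  // most connections ever in use at the same time

    client.on("connectionCreated", () => { open++; });
    client.on("connectionClosed", () => { open--; });
    client.on("connectionCheckedOut", () => {
      inUse++;
      if (inUse > highWater) highWater = inUse;
    });
    client.on("connectionCheckedIn", () => { inUse--; });

    // Log (or push to your graphing/metrics system) every 10 seconds.
    setInterval(() => {
      console.log({ open, inUse, highWater, available: open - inUse });
    }, 10_000).unref();

    // Use `client` for your normal queries; the counters update in the background.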
We use MongoDB as our data store, and there is a MongoClient which we are using for connection pooling.
The question is whether or not to explicitly call MongoClient.close() to shut down the connection pool.
Here's what I have explored on this so far.
The documentation for the close() API says:
Closes all resources associated with this instance, in particular any open network connections. Once called, this instance and any databases obtained from it can no longer be used.
But other questions on this topic say you can just perform your operations and don't need to explicitly call MongoClient.close(), as this object manages connection pooling automatically.
Java MongoDB connection pool
These two answers contradict each other. If I were to follow the second, what would be the downsides?
Will the connections in the pool be closed when the MongoClient object is no longer referenced in the JVM?
Or will the connections stay open for a particular period of time and then expire?
I would like to know the actual downsides of this approach. Any pointers on this are highly appreciated.
IMO, calling close() on server shutdown seems to be the clean way to do it.
But I would like to get an expert opinion on this.
Update: there is no need to explicitly close the connection pool via the API; the MongoDB driver takes care of it.
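For completeness, the "call close() once, when the application shuts down" approach mentioned above is driver-agnostic; shown here with the Node.js driver only to keep this page's examples in one language (the connection string is a placeholder):

    import { MongoClient } from "mongodb";

    const client = new MongoClient("mongodb://localhost:27017");

    // Close the pool once, when the process is asked to shut down.
    process.on("SIGTERM", async () => {
      await client.close(); // releases every pooled connection
      process.exit(0);
    });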
I'm looking for more detailed guidance / other people's experience of using Npgsql in production with PgBouncer.
Basically, we have the following setup using GKE and Google Cloud SQL.
Right now I've got Npgsql configured as if PgBouncer weren't in place, using a local connection pool. I've added PgBouncer as a deployment in my GKE cluster because Cloud SQL has very low max-connection limits, and to be able to scale my application horizontally inside Kubernetes I need to protect against overwhelming it.
My problem is one of reliability when one of the PgBouncer pods dies (due to a node failure or as I'm scaling up/down).
When that happens, the existing open connections in the client-side connection pools in the application pods don't immediately close, and they result in exceptions in my application as it tries to execute commands. Not ideal!
As I see it (and looking at the advice at https://www.npgsql.org/doc/compatibility.html) I have three options:
1. Live with it, and handle retries of SQL commands within my application. Possible, but it seems like a lot of effort and creates lots of possible bugs if I get it wrong.
2. Turn on keepalives and let Npgsql itself fail out the bad connections relatively quickly when they break. I'm not even sure if this will work or if it will cause further problems.
3. Turn off client-side connection pooling entirely. This seems to be the official advice, but I am loath to do it for performance reasons; it seems very wasteful for Npgsql to have to open a connection to PgBouncer for each session, and it runs counter to all of my experience with other RDBMSs like SQL Server.
Am I on the right track with one of those options? Or am I missing something?
You are generally on the right track and your analysis seems accurate. Some comments:
Option 2 (turning on keepalives) will help remove idle connections in Npgsql's pool which have been broken. As you've written, your application will still see some failures (as some bad idle connections may not be removed in time). There is no particular reason to think this would cause further problems; it should be pretty safe to turn on.
Option 3 is indeed problematic for performance, as a TCP connection to PgBouncer would have to be established every single time a database connection is needed. It also doesn't provide a 100% fail-proof mechanism, since PgBouncer may still drop out while a connection is in use.
At the end of the day, you're asking about resiliency in the face of arbitrary network/server failure, which isn't an easy thing to achieve. The only 100% reliable way to deal with this is in your application, via a dedicated layer which retries operations when a transient exception occurs. You may want to look at Polly, and note that Npgsql helps out a bit by exposing an IsTransient property on its exceptions, which can be used as a trigger to retry (Entity Framework Core also includes a similar "retry strategy"). If you do go down this path, note that transactions are particularly difficult to handle correctly.
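For what it's worth, the general shape of such a retry layer is small; the sketch below is in TypeScript purely to keep this page's examples in one language (with Npgsql you would key the isTransient check on the driver's transient-error indication, or let Polly handle the policy for you):

    // Retry an operation a bounded number of times when the failure looks transient.
    async function withRetry<T>(
      operation: () => Promise<T>,
      isTransient: (err: unknown) => boolean,
      maxAttempts = 3,
      baseDelayMs = 200,
    ): Promise<T> {
      for (let attempt = 1; ; attempt++) {
        try {
          return await operation();
        } catch (err) {
          if (!isTransient(err) || attempt >= maxAttempts) throw err;
          // Back off a little more on each attempt before retrying the whole operation.
          await new Promise((resolve) => setTimeout(resolve, baseDelayMs * attempt));
        }
      }
    }

As noted above, transactions need extra care: retry the whole transaction body as one unit, never individual statements inside an already-aborted transaction.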
I am trying to write server-side code for sending push notifications for my applications. As per Apple's recommendation, I am planning to retain the connection and send push notifications as required.
Apple also allows opening and retaining multiple parallel connections for sending push notifications.
"You may establish multiple, parallel connections to the same gateway or to multiple gateway instances."
For this purpose I would like to maintain a connection pool.
My question is: what is the limit on the size of such a connection pool, i.e. how many persistent connections to APNS can I maintain?
Thanks in advance for any help.
Don't know if you're going to get a precise answer to this one. As large and dynamic a system as APNS is, it behooves Apple to be ambiguous about such a number; it gives them liberty to change it at will. I found a similar vagueness here.
From this discussion it appears a rule of thumb is 15 connections max.
One suggestion is to have an open-ended pool where new connections can be created until they start being refused. Just an idea.
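A very rough sketch of that open-ended idea follows (the gateway host/port are placeholders, real use also needs your push certificate and the APNS wire protocol on top, and the error handling is deliberately minimal):

    import * as tls from "tls";

    // Placeholder endpoint; substitute the real gateway host/port and TLS options.
    const GATEWAY = { host: "gateway.push.example.com", port: 2195 };

    class GrowingPool {
      private idle: tls.TLSSocket[] = [];

      // Reuse an idle connection if one exists, otherwise try to open a new one.
      // A refused or failed connect is the signal that the pool has hit its
      // practical ceiling, so the caller should queue or back off instead.
      acquire(): Promise<tls.TLSSocket> {
        const existing = this.idle.pop();
        if (existing && !existing.destroyed) return Promise.resolve(existing);

        return new Promise((resolve, reject) => {
          const socket = tls.connect(GATEWAY, () => resolve(socket));
          socket.once("error", reject);
        });
      }

      // Return a connection to the pool once the caller is done with it.
      release(socket: tls.TLSSocket): void {
        if (!socket.destroyed) this.idle.push(socket);
      }
    }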
I agree with #paislee; I don't think you'll get a precise number. I'm opening over 20 distinct connections simultaneously and they are fine.
To help with your testing, use TcpView, which lets you see every open connection.
Regards
I have asked this question on the Go mailing list, but I think it is general enough that I'll get a better response from SO.
When working with the Java/.NET platforms, I never had to manage database connections manually, as the drivers handle it. Now, trying to connect to a NoSQL database with very basic driver support, it is my responsibility to manage the connection. The driver lets me connect, close, and reconnect to a TCP port, but I'm not sure how I should manage it (see the link). Do I have to create a new connection for each database request? Can I use third-party connection pooling libraries?
Thanks.
I don't know enough about MongoDB to answer this question directly, but do you know how MongoDB handles requests over TCP? For example, one problem with a single TCP connection can be that the database handles each request serially, potentially causing high latency even though the server itself isn't at capacity and could handle more concurrent work.
Are the machines all running on a local network? If so, the cost of opening a new connection won't be too high, and might even be insignificant from a performance perspective regardless.
My two cents: Do one TCP connection per request and just profile it and see what happens. It is very easy to add pooling later if you're DoSing yourself, but it may never be a problem. That'll work right now, and you won't have to mess around with a third party library that may cause more problems than it solves.
Also, TCP programming is really easy. Don't be intimidated by it; detecting a closed socket and reconnecting, synchronously or asynchronously, is simple.
Most MongoDB drivers (clients) will create and use a connection pool when connecting to the server. Each socket (connection) can perform one operation at a time at the server; because of how data is read off the socket, you can issue many requests and the server will just process them one after another, returning data as each one completes.
There is a Go MongoDB driver, but it doesn't seem to do connection pooling: http://github.com/mikejs/gomongo
In addition to the answers here: if you find you do need to do some kind of connection pooling, redis.go is a decent example of a database driver that pools connections. Specifically, look at the Client.popCon and Client.pushCon methods in the source.