NodeJS+mongo: How many connections in a pool are actually being used?

I'm trying to get a sense if my connection pool sizes are big enough. I can't seem to find any hints on seeing how many connections within the pool are available or in use. I would love to just graph this over time. Alternatively, is there a way to see the high water mark for the maximum number of concurrent connections in use within the pool?
MongoDB 4.2
Node.js mongodb driver 3.5.8
Mongoose 5.9.16

You can add a CMAP event subscriber to receive notifications when connections are created and closed.
I am not aware of any standardized driver functionality for obtaining the current pool size, but you can work it out yourself by tracking connection creation and closure.
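Here is a minimal sketch of that idea, assuming a driver version that emits CMAP monitoring events (recent 3.x with the unified topology, or 4.x); the counters and reporting interval are placeholders for whatever metrics system you graph with. If you connect through Mongoose, the same listeners can usually be attached to the underlying driver client once connected.

```js
// Minimal sketch: tracking pool usage via CMAP monitoring events.
// Assumes a driver version that emits these events (recent 3.x with the
// unified topology, or 4.x); option and event names may differ in older releases.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017', {
  maxPoolSize: 10, // the option is called poolSize in some 3.x configurations
});

let open = 0;       // connections currently held by the pool
let checkedOut = 0; // connections currently in use
let highWater = 0;  // highest number of concurrently used connections seen

client.on('connectionCreated', () => { open += 1; });
client.on('connectionClosed', () => { open -= 1; });
client.on('connectionCheckedOut', () => {
  checkedOut += 1;
  highWater = Math.max(highWater, checkedOut);
});
client.on('connectionCheckedIn', () => { checkedOut -= 1; });

// Report periodically (or push to your metrics system) so you can graph it over time.
setInterval(() => console.log({ open, checkedOut, highWater }), 10000).unref();
```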

Related

Is there a way to Rate Limit or Throttle a user or a connection in PostgreSQL?

We have a setup wherein a database instance is shared between multiple users.
We are trying to implement some form of throttling or rate limiting for a shared PostgreSQL instance so that one user cannot starve the others by consuming all the resources.
One approach we can think of is adding connection pools and fixing the number of connections that we give each tenant.
But one user can still consume all the resources over a few connections. Is there a way to throttle resource usage per connection or per user in PostgreSQL?
No, the Postgres documentation makes it clear that this is not possible using Postgres alone.
It's usually a (very) bad sign if your application allows one user to starve resources from others - it suggests you've got a bottleneck in your application, and that bottleneck will appear when you least want it to.
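For what it's worth, the per-tenant pool idea from the question looks roughly like the sketch below (using node-postgres; tenant names, credentials, and pool sizes are made up). It caps how many connections each tenant can hold, but, as the answer says, it cannot stop one tenant from saturating CPU or IO over the connections it does get.

```js
// Illustrative sketch only: capping connections per tenant with node-postgres.
// This bounds how many connections each tenant can hold, but it does not stop
// one tenant from saturating CPU/IO over the connections it has.
// Tenant names, credentials, and pool sizes are hypothetical.
const { Pool } = require('pg');

const tenantPools = {
  tenantA: new Pool({ database: 'shared_db', user: 'tenant_a', max: 5 }),
  tenantB: new Pool({ database: 'shared_db', user: 'tenant_b', max: 5 }),
};

// Each tenant's queries queue behind its own small pool.
function queryForTenant(tenant, sql, params) {
  return tenantPools[tenant].query(sql, params);
}
```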

Do more instances mean more connections?

I am trying to build a messaging app such that each message will be inserted into the database. Also, my backend will hit the database every second to retrieve the latest messages. I am worried that if I have many users using the messaging feature, I will quickly hit the database's maximum number of connections and the app will stop working for other users. So, how can I make sure this problem will not happen?
I am just wondering: if I create two db-g1-small instances, will I have twice the number of connections (1,000 per instance)? Google's documentation says a db-g1-small instance allows a maximum of 1,000 connections.
How can I keep track of the number of connections? What will happen if the number of connections to the database reaches the maximum?
https://cloud.google.com/sql/pricing#2nd-gen-instance-pricing
You shouldn't have a unique connection per user to your database. Instead, your backend should use connection pooling to maintain a consistent number of connections to your instance. You can view some examples of how best to do this on the Managing Database Connections page.
It's incredibly unlikely that you'll need 1000 open connections. Most applications use far, far less for optimal performance. You can check out this article about benchmarking different connection pool sizes.
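As a rough illustration of that pooling approach (not Google's official sample), a single pool created at startup keeps the connection count bounded no matter how many users are online; the host, credentials, and table below are placeholders.

```js
// Rough sketch of the pooling approach: one pool created at startup and shared
// by every request, so connections stay bounded by `max` regardless of how many
// users are online. Host, credentials, and table names are placeholders.
const { Pool } = require('pg');

const pool = new Pool({
  host: '/cloudsql/PROJECT:REGION:INSTANCE', // Cloud SQL socket path (placeholder)
  user: 'app',
  database: 'messages',
  max: 20, // far below the 1,000-connection instance limit
});

// Every request handler reuses the same pool instead of opening a new connection.
async function latestMessages() {
  const { rows } = await pool.query(
    'SELECT * FROM messages ORDER BY created_at DESC LIMIT 50'
  );
  return rows;
}
```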

MongoDB Connection Pooling Shutdown

We have MongoDB as our data store, and we are using a MongoClient for connection pooling.
The question is whether or not to explicitly call MongoClient.close to shut down the connection pool.
Here's what I have explored on this so far.
The documentation for the close API says
Closes all resources associated with this instance, in particular any open network connections. Once called, this instance and any databases obtained from it can no longer be used.
But when I referred to other questions on this topic, they say you can just perform your operations and don't need to explicitly call MongoClient.close, as this object manages connection pooling automatically.
Java MongoDB connection pool
These two contradict each other. If I were to follow the second, what would be the downsides?
Will the connections in the pool be closed when the MongoClient object is dereferenced in the JVM,
or will the connections stay open for a particular period of time and then expire?
I would like to know what are the actual downsides of this approach. Any pointers on this is highly appreciated.
IMO, calling close on server shutdown seems to be the clean way to do it.
But I would like to get an expert opinion on this.
Update: There is no need to explicitly close the connection pool via the API; the Mongo driver takes care of it.
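In Node.js terms (this thread's original context), the "close on shutdown" approach amounts to something like the sketch below; the connection string and signal handling are placeholders, so treat it as an outline rather than a recipe.

```js
// Outline of "close the pool on shutdown": one MongoClient for the life of the
// process, closed when the process is asked to exit. Connection string and
// signal handling are placeholders; adapt to your deployment.
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');

async function shutdown() {
  // Drains the pool and closes all open sockets before the process exits.
  await client.close();
  process.exit(0);
}

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```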

mongoose poolsize best practice

Forgive me if there's already been an answer to this somewhere, but I haven't seen anything definitive in the docs.
Is there a limit to the size of the connection pool?
I have a situation where there could be 100s or 1000s of connections open at once - should the connection pool be used for this or would that be an abuse of the feature?
Is there a limit to the size of the connection pool?
Probably not; however, a greater concern is that each connection will take up RAM.
I have a situation where there could be 100s or 1000s of connections open at once - should the connection pool be used for this or would that be an abuse of the feature?
I don't see this being an abuse. I think that during the times when you have 100 or 1,000 clients connecting at once, the server will handle the connections much better with a pool.
However, if there are only 10 clients connecting and you have a connection pool of 1,000, the other 990 connections could be seen as wasted resources.
Source: Deep Dive into Connection Pooling
I do not have any experience with connection pools, so take this with a grain of salt. I would like to hear from someone with experience on the topic.
It turns out that using the connection pool to manage traffic to more than one database isn't possible - which is what I was aiming to do. The solution I'll be using involves creating numerous connections using createConnection(), closing any unused ones.
See an issue I opened in the Mongoose git project for a fuller explanation https://github.com/Automattic/mongoose/issues/6206
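For reference, the multiple-connection approach described above looks roughly like the sketch below; the option is poolSize with Mongoose 5 / driver 3.x and maxPoolSize in newer releases, and the URIs and sizes are placeholders.

```js
// Sketch of the approach described above: a bounded pool per database via
// mongoose.createConnection(). The option is poolSize on Mongoose 5 / driver 3.x
// and maxPoolSize on newer releases; the URIs and sizes are placeholders.
const mongoose = require('mongoose');

const usersDb = mongoose.createConnection('mongodb://localhost:27017/users', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  poolSize: 10,
});

const logsDb = mongoose.createConnection('mongodb://localhost:27017/logs', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  poolSize: 5,
});

// Models are registered on the connection they belong to.
const User = usersDb.model('User', new mongoose.Schema({ name: String }));
const LogEntry = logsDb.model('LogEntry', new mongoose.Schema({ msg: String }));

// Close connections you no longer need so their pooled sockets are released:
// logsDb.close();
```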

mongodb & max connections

I am going to have a website with 20k+ concurrent users.
I am going to use MongoDB with one management node and 3 or more nodes for data sharding.
Now my problem is maximum connections. If I have that many users accessing the database, how can I make sure they don't hit the maximum limit? Also, do I have to change anything, maybe on the kernel, to increase the number of connections?
Basically, the database will be used to keep track of users connected to the site, so there are going to be heavy read/write operations.
Thank you in advance.
You don't want to open a new database connection each time a new user connects. I don't know if you'll be able to scale to 20k+ concurrent users easily, since MongoDB uses a new thread for each connection. You want your web app backend to keep just one or a few database connections open and use those in a pool, particularly since web usage is very asynchronous and event driven.
see: http://www.mongodb.org/display/DOCS/Connections
The server will use one thread per TCP connection, therefore it is highly recommended that your application use some sort of connection pooling. Luckily, most drivers handle this for you behind the scenes. One notable exception is setups where your app spawns a new process for each request, such as CGI and some configurations of PHP.
Whatever driver you're using, you'll have to find out how it handles connections and whether it pools them or not. For instance, Node's Mongoose is non-blocking, so you usually use one connection per app. This is the kind of thing you probably want.
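A minimal sketch of that "one pooled connection per app" pattern with Mongoose and Express might look like the following; the connection string, pool size, schema, and route are illustrative only.

```js
// Sketch of the "one pooled connection per app" pattern: the connection (and
// the driver pool behind it) is opened once at startup and reused by every
// request, instead of opening one per user. Names and sizes are illustrative.
const express = require('express');
const mongoose = require('mongoose');

const OnlineUser = mongoose.model(
  'OnlineUser',
  new mongoose.Schema({ userId: String, lastSeen: Date })
);

async function main() {
  // One connection for the whole process; the driver pools sockets behind it.
  await mongoose.connect('mongodb://localhost:27017/presence', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    poolSize: 20, // maxPoolSize in newer driver/Mongoose versions
  });

  const app = express();
  app.post('/heartbeat/:userId', async (req, res) => {
    await OnlineUser.updateOne(
      { userId: req.params.userId },
      { lastSeen: new Date() },
      { upsert: true }
    );
    res.sendStatus(204);
  });
  app.listen(3000);
}

main();
```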