Why does my MongoDB account have 292 connections?

I only write data into my MongoDB database once a day, and I am not currently writing any data into it, but there have been a consistent 292 connections to my database for the past three hours. No reads or writes, just connections and a consistent 29 commands per second since this started.
Concerned by this, I adjusted the settings to only allow access from one specific IP and changed all my passwords, but the numbers haven't changed: still 292 connections and 29 commands per second. Any idea what is causing this, or perhaps how I can dig in further?

The number of connections depends on the cluster setup. A connection can be external (e.g. your app or monitoring tools) or internal (e.g. to replicate your data to secondary nodes or a backup process).
You can use db.currentOp(true) to list operations on all connections (including idle ones), or check db.serverStatus().connections for the current connection count.
Consider that your app instance(s) may not open just one connection, but several, depending on the driver that connects to the DB and how it handles connection pooling. The connection pool size can be thought of as the maximum number of concurrent requests that your driver can service. For example, older versions of the Node.js MongoDB driver defaulted to a pool size of 5 (newer versions default to a maxPoolSize of 100). If you have set a high pool size, either via driver options or the connection string, your app may open many connections to process the write commands concurrently.
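For illustration, a minimal sketch using the Java driver with a placeholder URI and pool size (the maxPoolSize connection-string option works the same way in other drivers):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class PoolSizeExample {
    public static void main(String[] args) {
        // Placeholder URI; maxPoolSize caps how many connections this one client may open.
        String uri = "mongodb://user:pass@cluster0.example.net/?maxPoolSize=10";
        try (MongoClient client = MongoClients.create(uri)) {
            // Each concurrent operation borrows a connection from the pool of at most 10.
            client.getDatabase("test").listCollectionNames().first();
        }
    }
}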
You can start by process of elimination:
Completely cut your app off from the DB. There is a keep-alive time, so connections won't close immediately unless the driver closes them explicitly. You may have to wait some time, depending on the keep-alive setting. You can also restart your cluster and see how many connections there are initially.
Connect your app to the DB and check how the connection count changes with each request (a sketch for polling the count follows below). Check whether your app properly closes connections to the DB at some point after opening them.
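To watch the effect of each step, here is a rough sketch (again with the Java driver and a placeholder URI) that polls the same counters you would get from db.serverStatus().connections:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class ConnectionWatcher {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder URI; run this while your app is cut off, then again once it is reconnected.
        try (MongoClient client = MongoClients.create("mongodb://user:pass@cluster0.example.net")) {
            for (int i = 0; i < 12; i++) {
                Document status = client.getDatabase("admin")
                        .runCommand(new Document("serverStatus", 1));
                Document conns = status.get("connections", Document.class);
                System.out.printf("current=%s available=%s%n",
                        conns.get("current"), conns.get("available"));
                Thread.sleep(10_000); // sample every 10 seconds
            }
        }
    }
}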

Related

Issues with Postgres connection pool on Payara/Glassfish

I run a JEE application on Payara 4.1 which uses PostgreSQL 9.5.8. The connection pool is configured in the following way:
<jdbc-resource poolName="<poolName>" jndiName="<jndiName>" isConnectionValidationRequired="true"
               connectionValidationMethod="table" validationTableName="version()" maxPoolSize="30"
               validateAtmostOncePeriodInSeconds="30" statementTimeoutInSeconds="30" isTimerPool="true"
               steadyPoolSize="5" idleTimeoutInSeconds="0" connectionCreationRetryAttempts="100000"
               connectionCreationRetryIntervalInSeconds="30" maxWaitTimeInMillis="2000">
From what the monitoring shows, the application needs 1-3 DB connections to Postgres when running. The steady pool size is set to 5, the max pool size to 30.
I see that about 4 times a day the application opens all connections to the database, hitting the max pool size limit. Some requests to the server then fail with the exception: java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
After some seconds all issues are gone, and the server runs fine till the next hiccup.
I have requested some TCP dumps to be performed to look more closely at what exactly happens. I see that:
After 30 connections (sockets) have been opened, most of the connections are rarely used.
After some time (1 h or so) the server tries to use some of these pooled connections, only to find that the socket has been closed (the DB responds immediately with a TCP RST).
As the pooled connection count decreases, hitting the steady pool size, the connection pool opens 25 connections (sockets), which takes some time (about 0.5 to 1 second per connection; I don't know why it takes this long, as the TCP handshakes complete immediately). At this point some server transactions fail.
The loop repeats.
This issue is driving me mad. I was wondering whether I am missing some crucial pool configuration to revalidate the connections more often, but I could not find anything that would help.
EDIT:
What does not help (we have already tested it):
Making the pool size bigger (same issues)
Removing idleTimeoutInSeconds="0". We had issues with the connection pool every 10 minutes when we did that.

How to close SQL connections of old Cloud Run revisions?

Context
I am running a Spring Boot application on Cloud Run which connects to a Postgres 11 Cloud SQL database using a Hikari connection pool. I am using the smallest PSQL instance (1 vCPU / 614 MB / 25-connection limit). For the setup, I have followed these resources:
Connecting to Cloud SQL from Cloud Run
Managing database connections
Problem
After deploying the third revision, I get the following error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
What I found out
The default connection pool size is 10, which is why it fails on the third deployment (3 revisions × 10 = 30 connections > the 25-connection limit).
When deleting an old revision, active connections shown in the Cloud SQL admin panel drop by 10, and the next deployment succeeds.
Question
It seems that old Cloud Run revisions are kept in a "cold" state, maintaining their connection pools. Is there a way to close these connections without deleting the revisions?
In the best practices section it says:
"...we recommend that you use a client library that supports connection pools that automatically reconnect broken client connections."
What is the recommended way of managing connection pools in Cloud Run, given that it seems old revisions somehow manage to maintain their connections?
Thanks!
Currently, Cloud Run doesn't provide any guarantees on how long it will remain warm after it's started up. When not in use, the instance is severely throttled but not necessarily shut down. Thus, you may have some revisions that are holding on to connections even when they're not being directed traffic.
Even in this situation, I disagree with the idea that you should avoid using connection pooling. Connection pooling can lower latency, improve stability, and help put an upper limit on the number of open connections. Instead, you can use some of the following configuration options to help keep your pool in check:
minimumIdle - This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. If the idle connections dip below this value and total connections in the pool are less than maximumPoolSize, HikariCP will make a best effort to add additional connections quickly and efficiently.
maximumPoolSize - This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections.
idleTimeout - This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize. Idle connections will not be retired once the pool reaches minimumIdle connections.
If you set minimumIdle to 0, your application will still be able to use up to maximumPoolSize connections at once. However, once a connection is idle in the pool for idleTimeout seconds, it will be closed. If you set idleTimeout to something small like 1 minute, it will allow the number of connections your pool is using to scale down to 0 when not in use.
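As a minimal sketch, assuming plain HikariCP with a placeholder JDBC URL and credentials (adapt it to however your Spring Boot app builds its DataSource), the three properties above map onto HikariConfig like this:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        // Placeholder connection details; replace with your Cloud SQL settings.
        config.setJdbcUrl("jdbc:postgresql://127.0.0.1:5432/mydb");
        config.setUsername("app_user");
        config.setPassword("secret");

        config.setMaximumPoolSize(10); // hard cap on idle + in-use connections
        config.setMinimumIdle(0);      // let the pool shrink all the way down when idle
        config.setIdleTimeout(60_000); // close connections idle for more than 1 minute (ms)

        return new HikariDataSource(config);
    }
}

With minimumIdle at 0 and a short idleTimeout, an idle revision can drop to zero open connections instead of permanently holding ten.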
Hope this helps!
The issue here is that the connections opened by HikariCP don't get closed. I don't know much about Hikari, but I found this, which explains how connections should be handled through Hikari. I hope that helps!
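In general (independent of that link), HikariCP only returns a connection to the pool when your code closes it, so the usual pattern is to borrow connections inside try-with-resources; a rough sketch, assuming a HikariDataSource named ds like the one above and a hypothetical items table:

import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class QueryExample {
    // Borrow a connection, use it, and let close() hand it back to the pool.
    static long countItems(HikariDataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT count(*) FROM items");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        } // closing conn returns it to the pool rather than closing the TCP socket
    }
}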

MongoDB connection fails on multiple app servers

We have MongoDB with the mgo driver for Go. There are two app servers connecting to MongoDB, which runs alongside the apps (Go binaries). MongoDB runs as a replica set, and each server connects to the primary or a secondary depending on the replica set's current state.
We experienced the SocketException handling request, closing client connection: 9001 socket exception on one of the mongo servers, which caused the connection to MongoDB from our apps to die. After that, the replica set continued to be functional, but on our second server (on which the error didn't happen) the connection died as well.
In the golang logs it was manifested as:
read tcp 10.10.0.5:37698->10.10.0.7:27017: i/o timeout
Why did this happen? How can this be prevented?
As I understand it, mgo connects to the whole replica set from the URL (it discovers the whole topology from a single instance's URL), but why did the connection dying on one of the servers kill it on the second one?
Edit:
The full package path used is "gopkg.in/mgo.v2".
Unfortunately I can't share the mongo files here. But besides the SocketException, the mongo logs don't contain anything useful. There is an indication of some degree of lock contention, where the lock acquisition time is quite high at times, but nothing beyond that.
MongoDB does some heavy indexing at times, but there weren't any unusual spikes recently, so it's nothing beyond normal.
First, the mgo driver you are using (gopkg.in/mgo.v2, developed by Gustavo Niemeyer and hosted at https://github.com/go-mgo/mgo) is not maintained anymore.
Instead use the community supported fork github.com/globalsign/mgo. This one continues to get patched and evolve.
Its changelog includes "Improved connection handling", which seems directly related to your issue.
The details can be read at https://github.com/globalsign/mgo/pull/5, which points to the original pull request https://github.com/go-mgo/mgo/pull/437:
If mongoServer fails to dial the server, it will close all sockets that are alive, whether they're currently in use or not.
There are two cons:
In-flight requests will be interrupted rudely.
All sockets are closed at the same time and are likely to dial the server at the same time. Any occasional failure in the massive dial requests (high-concurrency scenario) will make all sockets close again, and repeat... (It happened in our production environment.)
So I think sockets currently in use should be closed only after they become idle.
Note that github.com/globalsign/mgo has a backward-compatible API; it basically just adds a few new features (besides the fixes and patches), which means you should be able to just change the import paths and everything should work without further changes.

MongoDB Multiple Connections to Replica Set

Why does the MongoDB C# Client 2.0 create a connection to each member of the replica set when Read Preference is Primary (default)?
I have an application with MaxPoolSize set to 100; however, it creates 300 connections, one to each node in the replica set per pooled connection. Surely it should just connect to the Primary once it has identified which node is the Primary from the data received from the seed list?
I have two data nodes and one arbiter. The two data nodes are geographically close to the consuming application, with the arbiter a longer ping time away. While I recognize the need to connect to each node at least once at the MongoClient level, why does the pool need to connect to all nodes for each pooled connection?
I only allow a Read Preference of Primary, so it is writing to and reading from a single server. The issue is that I am getting lots of connection errors (hence my looking into this and discovering it).
I think the client should connect to a single server per pooled connection: the Primary when writing, and a pooled Secondary or the Primary when reading, depending on Read Preference. It should not connect to the Arbiter more than once.
Am I missing something here? It is causing an issue when I burst up my pooled connections and the connections get throttled by the Azure load balancer.
My connection string:
mongodb://user:pass@mongo1.domain.com:27000,mongo2.domain.com:27000
Note that I do not specify the arbiter; the client discovers it after querying the replica set and proceeds to open 100 connections to it, which are useless as it holds no data.

Too many open MongoDB connections when using Celery

I'm using Celery to download feeds and resize images. The feeds and image paths are then stored in MongoDB using MongoEngine. When I check current connections (db.serverStatus()["connections"]) after running the tasks, I have between 50 and 80 "current" connections, which remain open until I shut down celeryd. Has anyone experienced this issue and/or do you know what I can do to solve it?
Thanks,
Kenzic
This just means that there are between 50 and 80 connections open to the MongoDB server, and it isn't cause for concern. PyMongo (and therefore MongoEngine) maintains an internal pool of connections (that is, sockets) to mongod, so even when nothing is happening (no active queries, commands, etc.), the connections remain open to the database for the next time they will be used. By default, PyMongo attempts to retain no more than 10 open connections per Connection object.
Are you experiencing any specific problems due to the number of open connections?