Reconnection to a failed mongo server - mongodb

I'm connecting to MongoDB with reconnect options at startup and using the created db instance throughout the whole app.
var options = {
  "server": {
    "auto_reconnect": true,
    "poolSize": 10,
    "socketOptions": {
      "keepAlive": 1
    }
  },
  "db": {
    "numberOfRetries": 60,
    "retryMiliSeconds": 5000
  }
};
MongoClient.connect(dbName, options).then(useDb).catch(errorHandler)
When I restart the mongo server, the driver reconnects successfully. But if I stop the server and start it again after about 30 seconds, I get the MongoError "topology was destroyed" on every operation. That 30-second window looks like the default numberOfRetries = 5, as if my options have no effect. Am I doing something wrong? How can I keep reconnecting for a longer time?

According to this answer, to fix this error you should increase the connection timeout in the options:
var options = {
  "server": {
    "auto_reconnect": true,
    "poolSize": 10,
    "socketOptions": {
      "keepAlive": 1,
      "connectTimeoutMS": 30000 // increased connection timeout
    }
  },
  "db": {
    "numberOfRetries": 60,
    "retryMiliSeconds": 5000
  }
};
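If the retry window is still exhausted and you keep getting "topology was destroyed", another option is to open a fresh connection from the error handler. This is only a minimal sketch reusing the names from the question (dbName, options, useDb, errorHandler); the 5-second delay is my assumption, not something prescribed by the driver:
function errorHandler(err) {
  // Assumption: treat a destroyed topology as "start over" and call
  // MongoClient.connect again after a short delay.
  if (err && /topology was destroyed/i.test(err.message)) {
    setTimeout(function () {
      MongoClient.connect(dbName, options).then(useDb).catch(errorHandler);
    }, 5000);
  } else {
    console.error(err);
  }
}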

Related

OpsManager mongodb deployment issue adding PLAIN auth

I'm trying to enable PLAIN authentication security on a sharded MongoDB replica set managed with OpsManager, following their documentation https://docs.opsmanager.mongodb.com/v4.0/tutorial/enable-ldap-authentication-for-group/ .
The issue I'm facing is that the automation agent fails to get the mongos status while restarting after security is enabled. Please see the error output below:
<mongos_5> [09:18:19.711] Failed to compute states :
<mongos_5> [09:18:19.711] Error calling ComputeState : <mongos_5> [09:18:19.632] Error getting current config from running mongo using conn params = mongos01:27017 (local=false) :
<mongos_5> [09:18:19.632] Error getting pid for mongos01:27017 (local=false) :
<mongos_5> [09:18:19.632] Error running command for runCommandWithTimeout(dbName=admin, cmd=[{serverStatus 1} {locks false} {recordStats false}]) :
result={"$clusterTime":{"clusterTime":6808443558471663617,"signature": {"hash":"e44BxV30B7dTpampo4VZsVuio7E=","keyId":6808441655801151517}},"code":13,"codeName":"Unauthorized",
"errmsg":"command serverStatus requires authentication","ok":0,"operationTime":6808443558471663617} connection=&{mongos01:27017 (local=false) 2 true 0xc4207b21a0 2020-03-26 09:18:19.627337419 +0000 UTC 0xc4207bdef0 <nil> }
identityUsed= : command serverStatus requires authentication
I noticed that even though OpsManager is not able to get the status, security was enabled successfully and the PLAIN authentication mechanism works, but the status hangs at
Start the process ... Start MongoDB process
I tried this over the API following the mongodb-labs repo https://github.com/mongodb-labs/mms-api-examples/blob/master/automation/api_usage_example/configs/security_ldap_cluster.json and also manually following the MongoDB docs, but every time I face the same error.
In the end I enabled LDAP (PLAIN) only for mongod in the mongo config file (see the OpsManager API call snippet below), and avoided enabling it in OpsManager for the agents as well.
{
  "args2_6": {
    "net": {
      "port": 28001
    },
    "replication": {
      "replSetName": "rs0"
    },
    "storage": {
      "dbPath": "/data/mongo"
    },
    "systemLog": {
      "destination": "file",
      "path": "/data/mongo/mongodb.log"
    },
    "security": {
      "authorization": "enabled"
    },
    "setParameter": {
      "saslauthdPath": "",
      "authenticationMechanisms": "PLAIN,MONGO-CR,SCRAM-SHA-256"
    }
  }, ...
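A quick way to confirm that PLAIN authentication actually works once it is enabled is to authenticate from the mongo shell against the $external database. This is just a sketch with placeholder credentials, not part of the OpsManager flow above:
// ldapUser/ldapPassword are placeholders; PLAIN (LDAP) users live in $external.
db.getSiblingDB('$external').auth({
  mechanism: 'PLAIN',
  user: 'ldapUser',
  pwd: 'ldapPassword',
  digestPassword: false
})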

A question about mcrouter: WarmUpRoute handle cannot set multiple cold servers

I want to use WarmUpRoute to warm up two cold memcached data nodes, and I use this config:
{
  "pools": {
    "cold": {
      "servers": ["xxxxx:11212", "xxxx:11213"]
    },
    "warm": {
      "servers": ["xxxxx:11211"]
    }
  },
  "route": {
    "type": "WarmUpRoute",
    "cold": "PoolRoute|cold",
    "warm": "PoolRoute|warm"
  }
}
But when I test by connecting to mcrouter and setting some data, I find that only one cold server and the warm server save the data successfully; the other cold node never gets the data. I'm confused: is there a problem with my config, or is this a bug?
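One possible explanation (an assumption on my part, not a confirmed answer): "PoolRoute|cold" hashes each key to a single host in the cold pool, so any given set lands on only one of the two cold servers. If the intent is to send every write to both cold nodes, wrapping the cold side in AllSyncRoute over the pool's hosts might be closer to what is wanted, e.g.:
{
  "pools": {
    "cold": { "servers": ["xxxxx:11212", "xxxx:11213"] },
    "warm": { "servers": ["xxxxx:11211"] }
  },
  "route": {
    "type": "WarmUpRoute",
    "cold": { "type": "AllSyncRoute", "children": "Pool|cold" },
    "warm": "PoolRoute|warm"
  }
}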

Node and Mongoose - Not reconnecting if mongod was not running when it first tried to connect

I'm using docker-compose and I'm finding issues with the execution order of services. The main issue happens when my Express app tries to connect to mongod before it is ready.
The issue can be reproduced easily by first running the Node.js application without mongod (manually forcing this case).
My app uses mongoose and tries to establish a connection to mongod. Because mongod is not up and running, the app throws an error.
$ nodemon server/app.js
24 Apr 21:42:05 - [nodemon] v1.7.0
24 Apr 21:42:05 - [nodemon] to restart at any time, enter `rs`
24 Apr 21:42:05 - [nodemon] watching: *.*
24 Apr 21:42:05 - [nodemon] starting `node server/app.js`
Listening on port 8000
disconnected
connection error: { [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
Starting mongod later, the app seems to reconnect:
24 Apr 21:51:28 - [nodemon] v1.7.0
24 Apr 21:51:28 - [nodemon] to restart at any time, enter `rs`
24 Apr 21:51:28 - [nodemon] watching: *.*
24 Apr 21:51:28 - [nodemon] starting `node server/app.js`
Listening on port 8000
disconnected
connection error: { [MongoError: connect ECONNREFUSED] name: 'MongoError', message: 'connect ECONNREFUSED' }
connected
reconnected
Despite that, operations that require access to mongo do not go through... and no error is shown.
This is the code to connect to mongo using mongoose:
// Starting mongo
mongoose.connect(config.database, {
  server: {
    auto_reconnect: true,
    reconnectTries: 10,
    reconnectInterval: 5000,
  }
});
// Listening for connection
var mongo = {};
var db = mongoose.connection;
db.on('connected', console.error.bind(console, 'connected'));
db.on('error', console.error.bind(console, 'connection error:'));
db.on('close', console.error.bind(console, 'connection close.'));
db.once('open', function() {
  console.log("We are alive");
});
db.on('reconnected', function(){
  console.error('reconnected');
});
db.on('disconnected', console.error.bind(console, 'disconnected'));
And here is the route that will try to get data from mongo but fail.
router.post('/auth', function(req, res){
  User.findOne({name: req.body.name})
    .then(function(user){
      if(!user)
      {
        res.status(401).send({ success: false, message: 'Authentication failed. User not found.' });
      }
      ...
How can I recover from running Node.js before mongo is ready?
In my case, I created a separate function just for the mongoose connect call:
const connect = () => {
  mongoose.connect('mongodb://localhost:27017/myapp', {
    useNewUrlParser: true,
    reconnectTries: Number.MAX_VALUE,
    reconnectInterval: 500,
    poolSize: 10,
  });
};
I'm calling it right at startup. I also added an event handler for the error event:
mongoose.connection.on('error', (e) => {
  console.log('[MongoDB] Something went super wrong!', e);
  setTimeout(() => {
    connect();
  }, 10000);
});
If mongoose fails to connect because MongoDB is not running, the error event handler fires and setTimeout schedules a "custom" reconnect.
Hope it helps.
How long does it take before mongod is ready? This seems like an edge-case issue, where mongod might take a couple of seconds to get ready; and once mongoose is connected it serves requests as expected. I'm just trying to understand why the slight delay (probably only a few seconds) needs to be handled at all.
But here is a solution anyway:
You could set up an Express middleware to check whether mongoose is ready and respond with an error if not:
app.use(function(req, res, next){
  if (mongoose.Connection.STATES.connected === mongoose.connection.readyState){
    next();
  } else {
    res.status(503).send({ success: false, message: 'DB not ready' });
  }
});
This should go before you inject your router.
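A small ordering sketch, assuming an Express app with the router mounted at /api (the mount path and the dbReadyCheck name are mine, not from the answer):
// Register the readiness middleware first, then the routes it should protect.
app.use(dbReadyCheck);   // the middleware shown above, extracted into a named function
app.use('/api', router);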
I had the same issue with Mongoose 5+. I was able to get this working by creating a retry function using setTimeout.
const mongoose = require('mongoose');

const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_DEBUG,
  MONGO_RECONNECT_TRIES,
  MONGO_RECONNECT_INTERVAL,
  MONGO_TIMEOUT_MS,
} = process.env;

if (MONGO_DEBUG) {
  console.log(`********* MongoDB DEBUG MODE *********`);
  mongoose.set('debug', true);
}

const DB_OPTIONS = {
  useNewUrlParser: true,
  reconnectTries: MONGO_RECONNECT_TRIES,
  reconnectInterval: MONGO_RECONNECT_INTERVAL,
  connectTimeoutMS: MONGO_TIMEOUT_MS,
};

const DB_URL = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

// Initialize connection retry counter
let reconnectTriesAlready = 1;

// Connect to database with timeout and retry
const connectWithRetry = () => {
  mongoose.connect(DB_URL, DB_OPTIONS).then(() => {
    // Connected successfully
    console.log('********* MongoDB connected successfully *********');
    // Reset retry counter
    reconnectTriesAlready = 1;
  }).catch(err => {
    // Connection failed
    console.error(`********* ERROR: MongoDB connection failed ${err} *********`);
    // Compare retries made already to the maximum retry count
    if (reconnectTriesAlready <= DB_OPTIONS.reconnectTries) {
      // Increment retry counter
      reconnectTriesAlready = reconnectTriesAlready + 1;
      // Maximum retry count not reached yet, schedule another attempt
      console.log(`********* MongoDB connection retry after ${MONGO_RECONNECT_INTERVAL / 1000} seconds *********`);
      // Connection retry
      setTimeout(connectWithRetry, MONGO_RECONNECT_INTERVAL);
    } else {
      // Maximum retry count reached, stop retrying
      console.error(`********* ERROR: MongoDB maximum connection retry attempts (${DB_OPTIONS.reconnectTries}) have already been made, stopping *********`);
    }
  });
};

connectWithRetry();
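One caveat with this approach (my observation, not part of the original answer): everything read from process.env is a string, so the numeric options may be worth parsing explicitly, for example:
// Hypothetical tweak: parse the numeric environment variables so the
// interval arithmetic and the retry comparison operate on real numbers.
const DB_OPTIONS = {
  useNewUrlParser: true,
  reconnectTries: parseInt(MONGO_RECONNECT_TRIES, 10),
  reconnectInterval: parseInt(MONGO_RECONNECT_INTERVAL, 10),
  connectTimeoutMS: parseInt(MONGO_TIMEOUT_MS, 10),
};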

node-mysql pool experiences ETIMEDOUT

I have a node-mysql pool configuration of
var db_init = {
  host              : 'ip_address_of_GCS_SQL',
  user              : 'user_name_of_GCS_SQL',
  password          : 'password here',
  database          : 'db here',
  supportBigNumbers : true,
  connectionLimit   : 100
};
Pool was created using
GLOBAL.db_foobar = mysql.createPool(db_init);
I basically just left the connection open for a couple of hours and then saw this error reported by my connection.query request (after getConnection, of course):
prodAPI-104 (out): { status: 'Error',
prodAPI-104 (out): details: '[foobar_function]Error in query',
prodAPI-104 (out): err: '{ [Error: read ETIMEDOUT]\n code: \'ETIMEDOUT\',\n errno: \'ETIMEDOUT\',\n syscall: \'read\',\n fatal: true }',
prodAPI-104 (out): query: 'SELECT * FROM `foobar_table`;' }
Why is this happening? MySQL in Google Cloud SQL didn't report any query taking too long, so I don't know why this happened.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
  connectionLimit : 100,
  host            : '123.123.123.123',
  user            : 'foo',
  password        : 'bar',
  database        : 'baz',
  stream          : function(opts) {
    var socket = net.connect(opts.config.port, opts.config.host);
    socket.setKeepAlive(true);
    return socket;
  }
});
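Usage of the pool is unchanged; here is a minimal sketch of a query against it (the query itself is just a placeholder):
// Queries go through the pool as usual; keepalive only affects the underlying sockets.
pool.query('SELECT 1 AS ok', function (err, rows) {
  if (err) throw err;
  console.log(rows);
});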

Mongoose: Odd behaviour with socketTimeoutMS in connection options

I'm trying to define custom timeout values when first establishing a connection with mongoose.connect(), but am seeing some strange results:
If I use basic options (without any timeouts specified), then everything works fine:
options = { server:{ auto_reconnect: true, } }
However, if I try to specify socketTimeoutMS (e.g. 5000ms), then the connection repeatedly times out.
options = {
  server: {
    auto_reconnect: true,
    socketOptions: {
      connectTimeoutMS: 30000,
      socketTimeoutMS: 5000,
      keepAlive: 1
    }
  }
}
However, despite the [Error: connection to xxx timed out] errors that I get, the application still works!
Can anyone explain this behaviour?
Other info:
Mongoose v3.8.12 (Native driver 1.4.5)
MongoDb Server v2.4.5
Connecting to server on localhost (Windows 7 64bit)
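A possible reading of this behaviour (an assumption, not a confirmed answer): socketTimeoutMS is an inactivity timeout, so a 5000 ms value closes any socket that sits idle for 5 seconds, and with auto_reconnect the driver immediately reopens it, which would explain why the timeout errors appear while the application keeps working. A sketch of options that avoids the short inactivity window (treating 0 as "no socket timeout" is an assumption about this driver version):
options = {
  server: {
    auto_reconnect: true,
    socketOptions: {
      connectTimeoutMS: 30000,
      socketTimeoutMS: 0,   // assumption: 0 disables the inactivity timeout
      keepAlive: 1
    }
  }
}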