Why is MongoDB Atlas using 25 connections per 1 active client - mongodb

I am currently connecting to MongoDB Atlas using the following code:
const mongoose = require('mongoose');

exports.connectToMongoose = async () => {
    try {
        const result = await mongoose.connect(process.env.mongodbURL, {
            useNewUrlParser: true,
            useUnifiedTopology: true,
            useCreateIndex: true,
            useFindAndModify: false
        });
        return result;
    } catch (err) {
        console.error('Failed to connect to MongoDB Atlas', err);
    }
};

// called elsewhere, e.g. in app.js:
const connectionResult = await mongooseConnection.connectToMongoose();
Although I am the only user logged in, I can see that MongoDB Atlas is using 25 of the 500 available connections. Why does each client use 25 connections, and does that mean MongoDB Atlas can only handle 20 concurrent clients?

db.serverStatus().connections may be helpful for seeing the total number of connections from your clients to your server, or to the primary node in your cluster.
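For example, running it in the mongo shell returns a document like the one below (the numbers here are illustrative; the fields are what the command reports):

db.serverStatus().connections
// e.g. { "current" : 25, "available" : 475, "totalCreated" : 312 }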

Are you sure that's all from your one connection? Each MongoDB server will have multiple connections from other members of the cluster, such as mongos routers and arbiters, as well as from health checks and monitoring.
I found this a while back; it gives a snapshot of the connections and the IPs they came from, if that helps:
db.currentOp(true).inprog.reduce((accumulator, connection) => {
    ipaddress = connection.client ? connection.client.split(":")[0] : "Internal";
    if (typeof accumulator[ipaddress] == "undefined") {
        accumulator[ipaddress] = {"active": 0, "inactive": 0};
    }
    if (connection.active == true) {
        accumulator[ipaddress]["active"] = (accumulator[ipaddress]["active"] || 0) + 1;
        accumulator["ACTIVE"] = (accumulator["ACTIVE"] || 0) + 1;
    } else {
        accumulator[ipaddress]["inactive"] = (accumulator[ipaddress]["inactive"] || 0) + 1;
        accumulator["INACTIVE"] = (accumulator["INACTIVE"] || 0) + 1;
    }
    accumulator["TOTAL_CONNECTION_COUNT"]++;
    return accumulator;
},
{ TOTAL_CONNECTION_COUNT: 0 }
)
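If the goal is to limit how many of those connections come from the application itself, the driver's connection pool can be capped when connecting. A minimal sketch using the Mongoose 5 options from the question (the option is poolSize there; newer drivers call it maxPoolSize, and the value 5 is only illustrative):

await mongoose.connect(process.env.mongodbURL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    poolSize: 5 // cap the application's pool; cluster monitoring connections are separate
});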

Related

Intermittent DB connection timeout in .NET 6 Console Application connecting to Azure SQL

We have a .NET Core console application accessing Azure SQL (Gen5, 4 vCores), deployed as a web job in Azure.
We recently upgraded this small console application to EF Core 6 (6.0.11).
For quite some time now, the application has been intermittently throwing the exception below for READ operations (shown in the code below):
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)
We are clueless about the root cause of this issue. Any hints on where to start looking for the root cause?
Any pointers would be highly appreciated.
NOTE: The connection string has the following settings in Azure:
"ConnectionStrings": { "DBContext": "Server=Trusted_Connection=False;Encrypt=False;" }
The overall code looks something like this:
var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .Build();
var builder = new SqlConnectionStringBuilder(config.GetConnectionString("DBContext"));
builder.Password = "";
builder.UserID = "";
builder.DataSource = "";
builder.InitialCatalog = "";
string _connection = builder.ConnectionString;
var sp = new ServiceCollection()
    .AddDbContext<DBContext>(x => x.UseSqlServer(_connection, providerOptions => providerOptions.EnableRetryOnFailure()))
    .BuildServiceProvider();
var db = sp.GetService<DBContext>();
lock (db)
{
    var NewTriggers = db.Triggers.Where(x => x.IsSubmitted == false && x.Error == null).OrderBy(x => x.CreateOn).ToList();
}
We tried migrating from EF Core 3.1 to EF Core 6.0.11 and were expecting a smooth transition.

Permission level for setting Log level without being root

I have a user with the roles userAdminAnyDatabase and readWriteAnyDatabase, but this does not seem to be enough to set the log level for my database. What permissions do I need to grant without having to make my user root?
This is the error I get:
[thread1] Error: setLogLevel failed:{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { setParameter: 1.0, logComponentVerbosity: { verbosity: 3.0 } }",
"code" : 13
}
TL;DR: Before using an administrative command you need to select the admin database:
use admin
You also need the hostManager role, or a role that grants the setParameter action on the cluster resource.
Read more about administrative commands, the admin database, and setParameter.
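For example, a user administrator could grant the built-in hostManager role, or create a narrower custom role carrying only the setParameter action. A minimal sketch; "myUser" is a placeholder for your own user name:

use admin
// Option 1: grant the built-in role
db.grantRolesToUser("myUser", [{ role: "hostManager", db: "admin" }])

// Option 2: a minimal custom role with only the setParameter action
db.createRole({
    role: "setParameterRole",
    privileges: [{ resource: { cluster: true }, actions: ["setParameter"] }],
    roles: []
})
db.grantRolesToUser("myUser", [{ role: "setParameterRole", db: "admin" }])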
setLogLevel uses the administrative command setParameter under the hood:
function (logLevel, component) {
    componentNames = [];
    if (typeof component === "string") {
        componentNames = component.split(".");
    } else if (component !== undefined) {
        throw Error("setLogLevel component must be a string:" + tojson(component));
    }
    var vDoc = {verbosity: logLevel};
    // nest vDoc
    for (var key, obj; componentNames.length > 0;) {
        obj = {};
        key = componentNames.pop();
        obj[key] = vDoc;
        vDoc = obj;
    }
    var res = this.adminCommand({setParameter: 1, logComponentVerbosity: vDoc});
    if (!res.ok)
        throw _getErrorWithCode(res, "setLogLevel failed:" + tojson(res));
    return res;
}
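With the role in place, the helper can be called as usual, for example:

db.setLogLevel(3)           // set the global verbosity
db.setLogLevel(1, "query")  // set verbosity for the query component only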

Google Cloud SQL or node-mysql takes a long time to answer

We have this project using Polymer as the front end and Node.js as the API consumed by Polymer, and our Node API takes a really long time to reply, especially if you leave the page alone for about 10 minutes. Upon further investigation, by inserting a date calculation into the MySQL query, I found out that MySQL itself takes a really long time to respond. The query code looks like this:
var query = dataStruct['formed_query'];
console.log(query);
var now = Date.now();
console.log("Getting Data for Foobar Query============ " + Date());
console.log(query);
GLOBAL.db_foobar.getConnection(function(err1, connection) {
    ////console.log("requesting MySQL connection");
    if (err1 == null)
    {
        connection.query(query, function(err, rows, fields) {
            console.log("response from MySQL Foobar Query============= " + Date());
            console.log("MySQL response Foobar Query=========> " + (Date.now() - now) + " ms");
            if (err == null)
            {
                //respond.respondJSON is just a res.json(msg); but I've added a similar calculation for response time starting from express router.route until res.json occurs
                respond.respondJSON(dataJSON['resVal'], res, req);
            } else {
                var msg = {
                    "status": "Error",
                    "desc": "[Foobar Query]Error Getting Connection",
                    "err": err1,
                    "db_name": "common",
                    "query": query
                };
                respond.respondError(msg, res, req);
            }
            connection.release();
        });
    } else {
        var msg = {
            "status": "Error",
            "desc": "[Foobar Query]Error Getting Connection",
            "err": err1,
            "db_name": "common",
            "query": query
        };
        respond.respondJSON(msg, res, req);
        respond.emailError(msg);
        try {
            connection.release();
        } catch (err_release) {
            respond.LogInConsole(err_release);
            respond.LogInConsole(err_release.stack);
        }
    }
});
}
When Chrome Developer Tools reports a long pending time for the API, this is what shows up in my log:
SELECT * FROM `foobar_table` LIMIT 0,20;
MySQL response Foobar Query=========> 10006 ms
I'm dumbfounded as to why this is happening.
We have our system hosted on Google Cloud. Our MySQL is a Google Cloud SQL instance with an activation policy of ALWAYS. We've also configured our Node server, which runs on Google Compute Engine, to keep TCP4 connections alive via:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
I'm using a mysql pool from node-mysql:
db_init.database = 'foobar_dbname';
db_init = ssl_set(db_init);
//GLOBAL.db_foobar = mysql.createConnection(db_init);
GLOBAL.db_foobar = mysql.createPool(db_init);
GLOBAL.db_foobar.on('connection', function (connection) {
    setTimeout(tryForceRelease, mysqlForceTimeOut, connection);
});
db_init looks like this:
db_init = {
    host : 'ip_address_of_GCS_SQL',
    user : 'user_name_of_GCS_SQL',
    password : '',
    database : '',
    supportBigNumbers: true,
    connectionLimit: 100
};
I'm also forcing connections to be released if they haven't been released within 2 minutes, just to make sure:
function tryForceRelease(connection)
{
    try {
        //console.log("force releasing connection");
        connection.release();
    } catch (err) {
        //do nothing
        //console.log("connection already released");
    }
}
This is really racking my brain. If anyone can help, please do.
I'll post the same answer here as I posted in node-mysql pool experiences ETIMEDOUT.
The questions are sufficiently different that I'm not sure it's worth duping them.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets, which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
    connectionLimit : 100,
    host            : '123.123.123.123',
    user            : 'foo',
    password        : 'bar',
    database        : 'baz',
    stream          : function(opts) {
        var socket = net.connect(opts.config.port, opts.config.host);
        socket.setKeepAlive(true);
        return socket;
    }
});
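The pool is then used exactly like a node-mysql pool; a trivial illustrative query to confirm the keepalive-enabled connections are healthy:

pool.query('SELECT 1 AS ok', function (err, rows) {
    if (err) throw err;
    console.log(rows[0].ok); // prints 1 if the connection is healthy
});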

one node of a cluster does not show up in Ganglia web portal

In Ganglia, I have configured 2 clusters. Cluster A has 2 nodes and cluster B has 13 nodes. Cluster B works well, while cluster A only shows 1 node. The other node has exactly the same gmond.conf file, which is shown below:
globals {
  daemonize = yes
  setuid = yes
  user = ganglia
  debug_level = 0
  max_udp_msg_len = 1472
  mute = no
  deaf = no
  host_dmax = 0 /*secs */
  cleanup_threshold = 300 /*secs */
  gexec = no
  send_metadata_interval = 0
}
cluster {
  #name = "unspecified"
  name = "rpt"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
host {
  location = "unspecified"
}
udp_send_channel {
  #mcast_join = 239.2.11.71
  host = qt-dw-master
  port = 8557
  ttl = 1
}
/*
udp_recv_channel {
  #mcast_join = 239.2.11.71
  port = 8557
  #bind = 239.2.11.71
  #bind = qt-dw-master
}
*/
tcp_accept_channel {
  port = 8557
}
gmetad.conf on qt-dw-master is shown below:
data_source "rpt" 60 rpt0:8557 rpt1-db:8557
I have tried using multicast, but it does not work. I also tried to find the gmond log files, but failed. Can anyone help with this problem?
Are all gmond daemons running in cluster A? Use the command service gmond status to confirm it.

Can't bind mongodb service on Appfog

Hi, I'm trying to bind the mongodb service to my expressjs app on Appfog.
I have a config file like this:
config.js
var config = {}
config.dev = {};
config.prod = {};
//DEV
config.dev.host = "localhost";
config.dev.port = 3000;
config.dev.mdbhost = "localhost";
config.dev.mdbport = 27017;
config.dev.db = "detysi";
//PROD
config.prod.service_type = "mongo-1.8";
config.prod.json = process.env.VCAP_SERVICES ? JSON.parse(process.env.VCAP_SERVICES) : '';
config.prod.credentials = process.env.VCAP_SERVICES ? config.prod.json[config.prod.service_type][0]["credentials"] : null;
config.prod.mdbhost = config.prod.credentials["host"];
config.prod.mdbport = config.prod.credentials["port"];
config.prod.db = config.prod.credentials["db"];
config.prod.port = process.env.VCAP_APP_PORT || process.env.PORT;
module.exports = config;
And this is my mongodb configuration depending on the environment:
app.js
if ( process.env.VCAP_SERVICES ) {
    server = new Server(config.prod.mdbhost, config.prod.mdbport, { auto_reconnect: true });
    db = new Db(config.prod.db, server);
} else {
    server = new Server(config.dev.mdbhost, config.dev.mdbport, { auto_reconnect: true });
    db = new Db(config.dev.db, server);
}
I bound the service manually from https://console.appfog.com; my app is using the AWS Virginia infra. I also used the MongoHQ addon to create one collection with two documents.
When I go to the Windows console and run af update myapp, it throws the following error:
Cannot read property '0' of undefined
This is because process.env.VCAP_SERVICES is undefined.
I was investigating that, and it may be that my mongodb service is incorrectly bound.
After that I tried to bind the mongodb service from the Windows console like this:
af bind-service mongodb myapp
But it throws the following error:
Service mongodb and App myapp are not on the same infra
At this point I don't know what I can do.
I had the same problem.
What fixed it for me:
Go to: console>services>(re)start mongodb service
Run the same command again.
Push again.
It should work.
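If it still fails, a quick way to confirm from inside the app whether the service actually got bound is to dump VCAP_SERVICES before indexing into it. A minimal sketch, assuming the same mongo-1.8 service key used in the question:

var services = process.env.VCAP_SERVICES ? JSON.parse(process.env.VCAP_SERVICES) : {};
console.log(Object.keys(services)); // e.g. [] when no service is bound yet
if (!services["mongo-1.8"]) {
    throw new Error("mongodb service is not bound to this app");
}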