Google Cloud SQL or node-mysql takes a long time to answer - google-cloud-sql

We have a project using Polymer as the front end and a Node.js API that Polymer consumes. Our Node API takes a really long time to reply, especially if you leave the page alone for about 10 minutes. On further investigation, by inserting a date calculation around the MySQL query, I found out that it is MySQL that responds slowly. The query code looks like this:
var query = dataStruct['formed_query'];
console.log(query);
var now = Date.now();
console.log("Getting Data for Foobar Query============ " + Date());
GLOBAL.db_foobar.getConnection(function (err1, connection) {
  if (err1 == null) {
    connection.query(query, function (err, rows, fields) {
      console.log("response from MySQL Foobar Query============= " + Date());
      console.log("MySQL response Foobar Query=========> " + (Date.now() - now) + " ms");
      if (err == null) {
        // respond.respondJSON is just a res.json(msg); I've added a similar
        // response-time calculation starting from express router.route until res.json occurs
        respond.respondJSON(dataJSON['resVal'], res, req);
      } else {
        var msg = {
          "status": "Error",
          "desc": "[Foobar Query]Error Executing Query",
          "err": err, // the query error, not the connection error
          "db_name": "common",
          "query": query
        };
        respond.respondError(msg, res, req);
      }
      connection.release();
    });
  } else {
    var msg = {
      "status": "Error",
      "desc": "[Foobar Query]Error Getting Connection",
      "err": err1,
      "db_name": "common",
      "query": query
    };
    respond.respondJSON(msg, res, req);
    respond.emailError(msg);
    try {
      connection.release();
    } catch (err_release) {
      respond.LogInConsole(err_release);
      respond.LogInConsole(err_release.stack);
    }
  }
});
When Chrome Developer Tools reports a long pending time for the API, this is what shows up in my log:
SELECT * FROM `foobar_table` LIMIT 0,20;
MySQL response Foobar Query=========> 10006 ms
I'm dumbfounded as to why this is happening.
We have our system hosted on Google Cloud. Our MySQL is a Google Cloud SQL instance with an activation policy of ALWAYS. We've also set our Node server, a Google Compute Engine instance, to keep TCP4 connections alive via:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
I'm using a mysql pool from node-mysql:
db_init.database = 'foobar_dbname';
db_init = ssl_set(db_init);
GLOBAL.db_foobar = mysql.createPool(db_init);
GLOBAL.db_foobar.on('connection', function (connection) {
  setTimeout(tryForceRelease, mysqlForceTimeOut, connection);
});
db_init looks like this:
db_init = {
  host              : 'ip_address_of_GCS_SQL',
  user              : 'user_name_of_GCS_SQL',
  password          : '',
  database          : '',
  supportBigNumbers : true,
  connectionLimit   : 100
};
I'm also forcing connections to be released if they haven't been released within 2 minutes, just to make sure:
function tryForceRelease(connection) {
  try {
    connection.release();
  } catch (err) {
    // do nothing; the connection was already released
  }
}
This is really racking my brain. If anyone can help, please do.

I'll post the same answer here as I posted in node-mysql pool experiences ETIMEDOUT.
The questions are sufficiently different that I'm not sure it's worth duping them.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive, and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets, which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
  connectionLimit : 100,
  host            : '123.123.123.123',
  user            : 'foo',
  password        : 'bar',
  database        : 'baz',
  stream          : function (opts) {
    var socket = net.connect(opts.config.port, opts.config.host);
    socket.setKeepAlive(true);
    return socket;
  }
});
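Separately from TCP keepalive, Cloud SQL (or an intermediate firewall) may silently drop connections that sit idle for around 10 minutes, which matches the symptom above. A lightweight application-level ping keeps pooled connections exercised. This is a sketch under that assumption; the helper name and interval are illustrative, and `pool` is assumed to expose `query(sql, cb)` like the node-mysql/mysql2 pools do:

```javascript
// Sketch: periodically run a trivial query on the pool so idle connections
// are exercised before an idle-timeout closes them silently.
function startKeepAlivePing(pool, intervalMs) {
  var timer = setInterval(function () {
    pool.query('SELECT 1', function (err) {
      if (err) console.error('keep-alive ping failed:', err.message);
    });
  }, intervalMs);
  if (timer.unref) timer.unref(); // don't keep the process alive just for pings
  return timer;
}
```

For example, `startKeepAlivePing(pool, 60 * 1000)` would ping once a minute, and `clearInterval(timer)` stops it.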

Related

Intermittent DB connection timeout in .NET 6 Console Application connecting to Azure SQL

We have a .NET Core console application accessing Azure SQL (Gen5, 4 vCores), deployed as a web job in Azure.
We recently upgraded our small console application to EF Core 6 (6.0.11).
For quite some time now, the application has been intermittently throwing the exception below for READ operations (the query shown in the code below):
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)
We are clueless about the root cause of this issue. Any hints on where to start looking would be highly appreciated.
NOTE: The connection string has the following settings in Azure:
"ConnectionStrings": { "DBContext": "Server=Trusted_Connection=False;Encrypt=False;" }
Overall code looks something like below:
var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .Build();
var builder = new SqlConnectionStringBuilder(config.GetConnectionString("DBContext"));
builder.Password = "";
builder.UserID = "";
builder.DataSource = "";
builder.InitialCatalog = "";
string _connection = builder.ConnectionString;

var sp = new ServiceCollection()
    .AddDbContext<DBContext>(x => x.UseSqlServer(_connection,
        providerOptions => providerOptions.EnableRetryOnFailure()))
    .BuildServiceProvider();

var db = sp.GetService<DBContext>();
lock (db)
{
    var NewTriggers = db.Triggers
        .Where(x => x.IsSubmitted == false && x.Error == null)
        .OrderBy(x => x.CreateOn)
        .ToList();
}
We migrated from EF Core 3.1 to EF Core 6.0.11 and were expecting a smooth transition.

"EADDRINUSE: address already in use" on every change, even when the port is changed

I have a basic server with socket.io, but every time I make a change to the server, it throws the error:
Error: listen EADDRINUSE: address already in use :::3000
But even if I change the port, it still throws the error. I used sudo kill -9, which let me run the server with no problems the first time, but if I make a change or restart it, it throws the error again.
SERVER CODE
var express = require('express');
var app = express();
var http = require('http').createServer(app);
var io = require('socket.io')(http);

app.get("/", function (request, result) {
  result.send("== SERVIDOR ==");
});

var port = 3000;
http.listen(port, function () {
  console.log("-- Servidor iniciado");
  console.log("-- Escuchando puerto " + port + "...");
});
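The usual cause is that the previous server process is still alive and holding the port (e.g. the old listener hasn't closed before a restart, or an earlier process was never killed). A minimal sketch of closing the listener on a shutdown signal so the port is freed before the process exits; the helper name is illustrative:

```javascript
// Sketch: close the HTTP server when the process receives a shutdown signal,
// so the port is released and the next start doesn't hit EADDRINUSE.
function shutdownOn(signals, server, exit) {
  signals.forEach(function (sig) {
    process.on(sig, function () {
      server.close(function () {
        exit(0); // exit only after the listener has released the port
      });
    });
  });
}
```

For example, `shutdownOn(['SIGINT', 'SIGTERM'], http, process.exit)` after `http.listen(...)` would release port 3000 on Ctrl-C.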

How can I check the connection in Mongoid

Does Mongoid have a method like ActiveRecord::Base.connected??
I want to check whether the connection is accessible.
We wanted to implement a health check for our running Mongoid client that tells us whether the established connection is still alive. This is what we came up with:
Mongoid.default_client.database_names.present?
Basically it takes your current client and tries to query the databases on its connected server. If this server is down, you will run into a timeout, which you can catch.
My solution:
def check_mongoid_connection
  mongoid_config = File.read("#{Rails.root}/config/mongoid.yml")
  config = YAML.load(mongoid_config)[Rails.env].symbolize_keys
  host, db_name, user_name, password = config[:host], config[:database], config[:username], config[:password]
  port = config[:port] || Mongo::Connection::DEFAULT_PORT
  db_connection = Mongo::Connection.new(host, port).db(db_name)
  db_connection.authenticate(user_name, password) unless user_name.nil? || password.nil?
  db_connection.collection_names
  { status: :ok }
rescue Exception => e
  { status: :error, data: { message: e.to_s } }
end
snrlx's answer is great.
FYI, I use the following in my puma config file:
before_fork do
  begin
    # load configuration
    Mongoid.load!(File.expand_path('../../mongoid.yml', __dir__), :development)
    fail('Default client db check failed, is db connective?') unless Mongoid.default_client.database_names.present?
  rescue => exception
    # raise runtime error
    fail("connect to database failed: #{exception.message}")
  end
end
One thing to note: the default server_selection_timeout is 30 seconds, which is too long for a db status check, at least in development; you can modify this in your mongoid.yml.
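For reference, a sketch of where that option lives in mongoid.yml (the database name and host are placeholders; server_selection_timeout is in seconds):

```yaml
development:
  clients:
    default:
      database: my_app_development
      hosts:
        - localhost:27017
      options:
        server_selection_timeout: 5   # default is 30s; fail fast in development
```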

Can't bind mongodb service on Appfog

Hi, I'm trying to bind the mongodb service to my expressjs app on Appfog.
I have a config file like this:
I have a config file like this:
config.js
var config = {};
config.dev = {};
config.prod = {};

// DEV
config.dev.host = "localhost";
config.dev.port = 3000;
config.dev.mdbhost = "localhost";
config.dev.mdbport = 27017;
config.dev.db = "detysi";

// PROD
config.prod.service_type = "mongo-1.8";
config.prod.json = process.env.VCAP_SERVICES ? JSON.parse(process.env.VCAP_SERVICES) : '';
config.prod.credentials = process.env.VCAP_SERVICES ? config.prod.json[config.prod.service_type][0]["credentials"] : null;
config.prod.mdbhost = config.prod.credentials["host"];
config.prod.mdbport = config.prod.credentials["port"];
config.prod.db = config.prod.credentials["db"];
config.prod.port = process.env.VCAP_APP_PORT || process.env.PORT;

module.exports = config;
And this is my mongodb config depending on the environment:
app.js
if (process.env.VCAP_SERVICES) {
  server = new Server(config.prod.mdbhost, config.prod.mdbport, { auto_reconnect: true });
  db = new Db(config.prod.db, server);
} else {
  server = new Server(config.dev.mdbhost, config.dev.mdbport, { auto_reconnect: true });
  db = new Db(config.dev.db, server);
}
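The "Cannot read property '0' of undefined" error reported below comes from indexing config.prod.json[config.prod.service_type] when that key is absent from VCAP_SERVICES. A minimal sketch of a defensive lookup, assuming the usual VCAP_SERVICES shape ({ "service-name": [ { credentials: {...} } ] }); the function name and prefix matching are illustrative:

```javascript
// Sketch: look up the mongo service credentials by key prefix instead of a
// hard-coded "mongo-1.8", and return null rather than crashing when the
// service isn't bound (VCAP_SERVICES missing or lacking the key).
function mongoCredentials(vcapJson, prefix) {
  if (!vcapJson) return null;
  var services = JSON.parse(vcapJson);
  var key = Object.keys(services).filter(function (k) {
    return k.indexOf(prefix) === 0;
  })[0];
  if (!key || !services[key].length) return null;
  return services[key][0].credentials;
}
```

A null return from `mongoCredentials(process.env.VCAP_SERVICES, 'mongo')` then signals that the binding is missing rather than throwing.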
I bind the service manually from https://console.appfog.com; my app is using the AWS Virginia infra. I also use the MongoHQ addon to create one collection with two documents.
When I go to the Windows console and run af update myapp, it throws the following error:
Cannot read property '0' of undefined
This is because process.env.VCAP_SERVICES is undefined.
Investigating this, I suspect my mongodb service is incorrectly bound.
After that I tried to bind the mongodb service from the Windows console like this:
af bind-service mongodb myapp
But it throws this error:
Service mongodb and App myapp are not on the same infra
At this point I don't know what else I can do.
I had the same problem.
What fixed it for me:
Go to: console>services>(re)start mongodb service
Run the same command again.
Push again.
It should work.

fulljid is empty after connection to BOSH service with XMPPHP

I am trying to pre-bind an XMPP session via XMPPHP and pass the rid/sid/jid to a Strophe client to attach to the session.
Connection code here:
$conn = new CIRCUIT_BOSH('server.com', 7070, $username, $pass, $resource, 'server.com', $printlog = true, $loglevel = XMPPHP_Log::LEVEL_VERBOSE);
$conn->autoSubscribe();
try {
    $conn->connect('http://xmpp.server.com/http-bind', 1, true);
    $log->lwrite('Connected!');
} catch (XMPPHP_Exception $e) {
    die($e->getMessage());
}
I am getting the rid and sid, but the fulljid in the $conn object stays empty, and I can't see a session started on my Openfire admin console.
If I create the jid manually using the given resource and pass jid/rid/sid to Strophe to use in attach, I get the ATTACHED status and I see calls from the client to the BOSH IP, but I still don't see a session and I can't use the connection.
Strophe Client Code:
Called on document ready:
var sid = $.cookie('sid');
var rid = $.cookie('rid');
var jid = $.cookie('jid');

$(document).trigger('attach', {
  sid: sid,
  rid: rid,
  jid: jid
});

$(document).bind('attach', function (ev, data) {
  var conn = new Strophe.Connection("http://xmpp.server.com/http-bind");
  conn.attach(data.jid, data.sid, data.rid, function (status) {
    if (status === Strophe.Status.CONNECTED) {
      $(document).trigger('connected');
    } else if (status === Strophe.Status.DISCONNECTED) {
      $(document).trigger('disconnected');
    } else if (status === Strophe.Status.ATTACHED) {
      $(document).trigger('attached');
    }
  });
  Object.connection = conn;
});
I think the problem starts on the XMPPHP side, which is not creating the session properly.
'attached' is triggered but never 'connected'; is a 'connected' status supposed to be sent?
What am I missing?
Ok, solved. I saw that the XMPPHP lib didn't create a session at all on the Openfire server, so I wrote a simple test for the XMPP class, which worked and created the session, and for the XMPP_BOSH class, which didn't manage to create one. Then I saw the issue report here: http://code.google.com/p/xmpphp/issues/detail?id=47. Comment no. 9 worked; it fixed the issue by copying the processUntil() function from XMLStream.php to BOSH.php, though I still can't figure out why this works. Then I found I also had an overlapping bug with some of the passwords set for users on the Openfire server. These passwords contained the characters ! # % ^, and for some reason XMPP_BOSH sends such passwords corrupted or changed, so I got an Auth Failed exception. Changing the password fixed that, and I can now attach to the session XMPPHP created with the Strophe.js library.