Make promise calls with a fixed timeout - mongodb

I am currently trying to check for a database connection, but it seems like the result, i.e. the connection, is still pending.
I am looking to implement a system where I can pass a timeout value so that the promise is rejected after a fixed timeout.
Something like:
try {
  start(timeout: 6000) // 6 second timeout on the promise; default (no params): 3 seconds
} catch (e) {
  // failed due to, in this case, the timeout, since there is no connection running and the database promise is still pending
}
How can I accomplish such a timeout?
My currently running example gives the following output:
connected to mongo Promise { <pending> }
Server startup code:
const mongoose = require('mongoose');

const start = async () => {
  console.log("connecting to database");
  try {
    console.log("Attempting to establish connection....");
    var result = await mongoose.connect('mongodb://localhost/db', { useNewUrlParser: true });
    console.log("connected to mongo", result);
    return result;
  } catch (error) {
    console.log("failed to connect to mongo", error);
  }
}
try {
  start();
} catch (e) {
  console.log("failed to start server", e);
  throw new Error("failed to start server", e);
}
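
One possible approach, shown here only as a rough sketch, is to race the connect promise against a timer; the withTimeout helper below is illustrative and not part of the original code. (Recent Mongoose versions also accept a serverSelectionTimeoutMS connection option that limits how long connect() waits before rejecting.)

const mongoose = require('mongoose');

// Illustrative helper: reject if the wrapped promise has not settled within `ms` milliseconds.
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms)
    ),
  ]);

const start = async (timeout = 3000) => { // default: 3 second timeout
  console.log("Attempting to establish connection....");
  // mongoose.connect() returns a promise, so it can be raced against the timer.
  const result = await withTimeout(
    mongoose.connect('mongodb://localhost/db', { useNewUrlParser: true }),
    timeout
  );
  console.log("connected to mongo");
  return result;
};

// The caller must await start() (or chain .catch()); otherwise a plain
// try/catch around a non-awaited call will never see the rejection.
start(6000).catch((e) => console.log("failed to start server", e));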

Related

How to solve 504 Timeout Error on Netlify and Vercel with MongoDB Connection in Next.js

Anytime my site is idle for a while and I load it up, it shows a 504 timeout error from Netlify and Vercel. I'm aware this is due to the timeout limit set by both platforms; on reload it works fine.
I know this is due to the database connection; it takes a while to connect on the initial connection.
Is there a way to keep it connected always, or how do you suggest I handle this in my Next.js project to prevent it from taking so long to connect after being idle for a while?
This is my db connect function:
import mongoose from 'mongoose'

const MONGODB_URI = process.env.MONGO_URL

if (!MONGODB_URI) {
  throw new Error(
    'Please define the MONGODB_URI environment variable inside .env.local'
  )
}

let cached = global.mongoose

if (!cached) {
  cached = global.mongoose = { conn: null, promise: null }
}

async function db() {
  if (cached.conn) {
    return cached.conn
  }
  if (!cached.promise) {
    const opts = {
      bufferCommands: false,
    }
    cached.promise = mongoose.connect(MONGODB_URI, opts).then((mongoose) => {
      return mongoose
    })
  }
  try {
    cached.conn = await cached.promise
  } catch (e) {
    cached.promise = null
    throw e
  }
  return cached.conn
}

export default db
and I call it this way: await db()
and this is what the db URL looks like: mongodb+srv://*********#cluster0.7xbw7v5.mongodb.net/?retryWrites=true&w=majority
If your issue is caused by cold starts, then a good fix would be to warm up the server with a periodic ping at your preferred time interval.
I used Checkly at https://www.checklyhq.com/
So you can create a separate endpoint for the warmup, and Checkly should run it at your set time interval.
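
A rough sketch of what such a warmup endpoint could look like in a Next.js pages/api route, reusing the db() helper above (the file path and route name are just examples; point the external monitor at this route):

// pages/api/warmup.js (illustrative location)
import db from '../../lib/db' // the cached connect helper shown above; adjust the import path to your project

export default async function handler(req, res) {
  // Opening (or reusing) the cached connection keeps the serverless function warm,
  // so the next real request does not pay the initial connection cost.
  await db()
  res.status(200).json({ ok: true })
}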

How to deal with HTTP connection timed out crashes in Flutter

So I have a method that uses the Flutter HTTP library and is responsible for making HTTP requests to the server, with code like this:
Future<List<DataModel>> fetchData() async {
  try {
    var url = Uri.parse('${baseUrlParse}myapipath');
    var request = await http.get(url);
    var data = jsonDecode(request.body);
    return data;
  } catch (e) {
    print('Catch ${e}');
    rethrow;
  }
}
This code runs fine and has no issues.
The problem is that when I have no internet connection or the server connection fails, the app freezes and an error file called http_impl.dart appears (if you're debugging in VS Code), with an error snippet that goes something like this:
onError: (error) {
  // When there is a timeout, there is a race in which the connectionTask
  // Future won't be completed with an error before the socketFuture here
  // is completed with a TimeoutException by the onTimeout callback above.
  // In this case, propagate a SocketException as specified by the
  // HttpClient.connectionTimeout docs.
  if (error is TimeoutException) {
    assert(connectionTimeout != null);
    _connecting--;
    _socketTasks.remove(task);
    task.cancel();
    throw SocketException(
        "HTTP connection timed out after $connectionTimeout, "
        "host: $host, port: $port");
  }
  _socketTasks.remove(task);
  _checkPending();
  throw error;
});
I have tried to implement from this source and this, but when I make a request while there is no connection, this error still occurs.
How do I deal with this problem?
What I want is: if there is a problem with HTTP, either because there is no connection or because it fails to contact the server, then I can show a notification.
Is there something wrong with my code?
Please help, thank you
You rethrow the exception in your code, so you need to catch the exception where you call this method, like this:
try {
  await fetchData();
} catch (e) {
  // TODO: handle exception
}
You can stop VS Code from catching unhandled exceptions this way:
https://superuser.com/a/1609472

How do I gracefully disconnect MongoDB in Google Functions? Behavior of "normal" Cloud Run and "Functions Cloud Run" seems to be different

In a normal Cloud Run service, something like the following seems to properly close a Mongoose/MongoDB connection:
const cleanup = async () => {
  await mongoose.disconnect()
  console.log('database | disconnected from db')
  process.exit()
}

const shutdownSignals = ['SIGTERM', 'SIGINT']
shutdownSignals.forEach((sig) => process.once(sig, cleanup))
But for a Cloud-Functions-managed Cloud Run this seems not to be the case. The instances shut down without waiting the usual 10s that "normal" Cloud Run services give after the SIGTERM is sent, so I never see the database | disconnected from db log.
How would one go about this? I don't want to create a connection for every single Cloud Functions call (very wasteful in my case).
Well, here is what I went with for now:
import mongoose from 'mongoose'
import { Sema } from 'async-sema'
import functions from '@google-cloud/functions-framework' // provides cloudEvent()

// MONGO_DB_URL is assumed to be defined elsewhere (e.g. via environment configuration)

functions.cloudEvent('someCloudFunction', async (event) => {
  await connect()
  // actual computation here
  await disconnect()
})

const state = {
  num: 0,
  sema: new Sema(1),
}

export async function connect() {
  await state.sema.acquire()
  if (state.num === 0) {
    try {
      await mongoose.connect(MONGO_DB_URL)
    } catch (e) {
      process.exit(1)
    }
  }
  state.num += 1
  state.sema.release()
}

export async function disconnect() {
  await state.sema.acquire()
  state.num -= 1
  if (state.num === 0) {
    await mongoose.disconnect()
  }
  state.sema.release()
}
As one can see, I used a kind of "reference counting" of the handlers that want to use the connection, and ensured proper concurrency with async-sema.
I should note that this works well with the setup I have; I allow many concurrent requests to one of my Cloud Functions instances. In other cases this solution might not improve over just opening up (and closing) a connection every single time the function is called. But as stuff like https://cloud.google.com/functions/docs/writing/write-event-driven-functions#termination seems to imply, everything has to be handled inside the cloudEvent function.

Streaming from mongodb in AWS lambda times out

I have a lambda function which connects to a mongodb database and streams some records from the database.
const MongoClient = require('mongodb').MongoClient;

exports.handler = (event, context, callback) => {
  let url = event.mongodbUrl;
  let collectionName = event.collectionName;
  MongoClient.connect(url, (error, db) => {
    if (error) {
      console.log(`Error connecting to mongodb: ${error}`);
      callback(error);
    } else {
      console.log("Connected to mongodb");
      let events = [];
      console.log("Streaming data from mongodb...");
      let mongoStream = db.collection(collectionName).find().sort({ _id: -1 }).limit(500).stream();
      mongoStream.on("data", data => {
        events.push(data);
      });
      mongoStream.once("end", () => {
        console.log("Stream ended");
        db.close(() => {
          console.log("Database connection closed");
          callback(null, "Lambda function succeeded!!");
        });
      });
    }
  });
};
When the stream has ended I close the database connection and call the callback, which should end the Lambda function. This works locally using node-lambda, but when I try to run it in AWS Lambda I get all of the logs, including console.log("Database connection closed"); coming through, yet the callback doesn't seem to take effect, so the function always times out, despite the last log occurring a few seconds before the timeout.
I can force it to end using context.succeed(), but that seems to be deprecated when using Node version 4, so I want to avoid using it. How can I stop this function from timing out in AWS Lambda?
Add the following line at the beginning of your handler function:
context.callbackWaitsForEmptyEventLoop = false
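
For context, a minimal sketch of where that line sits in the handler from the question (the flag tells Lambda not to wait for the event loop, e.g. the still-open MongoDB sockets, to drain before returning the callback result):

exports.handler = (event, context, callback) => {
  // Return as soon as callback() is invoked instead of waiting for open
  // sockets (such as the MongoDB connection pool) to be released.
  context.callbackWaitsForEmptyEventLoop = false;

  // ... connect, stream, and finally call callback(null, "Lambda function succeeded!!")
};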
Try the following:
mongoStream.once("end", callback);
This also calls back with err and result, but will not lose the context.

NodeJS Node-apn implementation as daemon

I have a node-apn Node.js script running as a daemon on Amazon AWS. The daemon runs fine, and the script stays up and comes back when it goes down, but I believe I am having a synchronous execution and exiting issue with Node.js. When I release the process with process.exit(), even though all the console.logs say my messages have been sent, they are never received on the phone. I decided to remove the exit and let the process "hang" after execution, and all messages were sent successfully. This led me to the following implementation using an async function, but the same result seems to be happening. Can anyone provide insight into this? There are no errors being thrown from APN or anywhere else.
function closeDB()
{
  connection.end(function(err) {
    if (err) {
      console.log("ERROR: " + util.inspect(err, false, 5));
      process.exit(1);
    }
    console.log("APNS-PUSH: COMPLETED.");
  });
  setTimeout(function(){ process.exit(); }, 50);
} // End of closeDB()

function apnsError(err, notification)
{
  console.log(err);
  console.log(notification);
  closeDB();
}

function async(arg, callback)
{
  apnsConnection.sendNotification(arg);
  console.log(arg);
  setTimeout(function() { callback(1); }, 100);
}

/**
 * Our MySQL query callback.
 */
function queryCB(err, results)
{
  // error in our call, report and exit
  if (err) {
    console.log("ERROR: " + util.inspect(err, false, 5));
    closeDB();
  }
  if (results.length == 0)
  {
    closeDB();
  }
  var notes = [];
  var count = 0;
  try {
    for (var i = 0; i < results.length; i++) {
      var myDevice = new apns.Device(results[i]['udid']);
      var note = new apns.Notification();
      note.expiry = Math.floor(Date.now() / 1000) + 3600; // Expires 1 hour from now.
      note.badge = results[i]["notification_count"];
      note.sound = "ping.aiff";
      note.alert = results[i]["message"];
      note.device = myDevice;
      connection.query('UPDATE `tbl_notifications` SET `sent`=1 WHERE `id`=' + results[i]["id"], function(err, results) {
        if (err)
        {
          console.log("ERROR: " + util.inspect(err, false, 5));
        }
      });
      notes.push(note);
    }
  } catch (err) {
    console.log('error: ' + err)
  }
  console.log(notes.length);
  notes.forEach(function(nNode) {
    async(nNode, function(result) {
      count++;
      if (count == notes.length) {
        closeDB();
      }
    })
  });
} // End of queryCB()
I had the same problem where killing the process also killed the open socket connections and didn't allow the notifications to be sent. The solution I came up with isn't an ideal solution, but it will work in your situation as well. I looked into the node-apn code and found that the Connection object inherits from EventEmitter, so you can monitor events on the object like so:
var apnsConnection = new apn.Connection(options)
apnsConnection.sendNotification(notification)

apnsConnection.on('transmitted', function() {
  console.log("Transmitted")
  callback()
})

apnsConnection.on('error', function() {
  console.log("Error")
  callback()
})
This monitors the socket that the notification is sent through, so I don't know how accurate it is at determining when a notification has successfully been passed off to Apple's APNS servers, but it has worked pretty well for me.
The reason you are seeing this problem is that when you use #pushNotification it buffers the notification inside the module and handles sending it asynchronously.
Listening for "transmitted" is valid and this is emitted when the notification has been written to the socket. However, if your objective is to close the socket after all notifications have been sent then the easiest way to accomplish this is using the connectionTimeout property when creating your connection.
Simply set connectionTimeout to something around 1000 (milliseconds) and, assuming you have no other connections open, the process will exit automatically. Or you can set an event listener on the timeout event and call process.exit() from there.
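
A rough sketch of that suggestion, with the option and event names taken from the answer above (certificate paths are placeholders, and exact behaviour depends on the node-apn version in use):

var apn = require('apn');

// Close the connection after ~1 second of inactivity so the process can exit on its own.
var apnsConnection = new apn.Connection({
  cert: 'cert.pem',          // placeholder credentials
  key: 'key.pem',
  connectionTimeout: 1000
});

// `notification` built as in the question above.
apnsConnection.sendNotification(notification);

// Alternatively, listen for the timeout event and exit explicitly.
apnsConnection.on('timeout', function() {
  console.log('APNS connection timed out, exiting');
  process.exit();
});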