Keep MongoDB connection open in serverless framework

I want to access MongoDB from within an AWS Lambda function deployed with the Serverless Framework (serverless.com).
The framework provides an example of how to open the connection (https://github.com/serverless/examples/blob/master/aws-node-rest-api-mongodb/handler.js), but if I understand the code correctly, it opens and closes the connection for every request. Here is the relevant part of the example:
const mongoose = require('mongoose');
const mongoString = ''; // MongoDB URL

// Opens a connection, runs `fn`, and always closes the connection afterwards.
const dbExecute = (db, fn) => db.then(fn).finally(() => db.close());

function dbConnectAndExecute(dbUrl, fn) {
  return dbExecute(mongoose.connect(dbUrl, { useMongoClient: true }), fn);
}

module.exports.createUser = (event, context, callback) => {
  dbConnectAndExecute(mongoString, () => (
    user
      .save()
      .then(() => callback(null, {
        statusCode: 200,
        body: JSON.stringify({ id: user.id }),
      }))
      .catch(err => callback(null, createErrorResponse(err.statusCode, err.message)))
  ));
};
Is my assumption wrong, and does the connection stay alive? If not, what would a correct pattern for keeping the connection open look like? I know that an AWS Lambda function can hold global state, but apparently the Serverless Framework removes everything after a single run, as no state I set globally is persisted.

If you want the connection to remain open for subsequent requests to the warmed Lambda function, you should open the connection outside of the Lambda handler itself, in the same scope as the consts you declare at the top of your example. The Serverless Framework is not closing anything; this is a feature of Lambda itself.
In addition, you are using a function called dbConnectAndExecute where it would probably be more useful to connect outside the function and then execute within it; i.e. break that function up. This way your connection remains open, but the execution context is discarded after the Lambda finishes, as in the sketch below.
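A minimal sketch of that split, assuming the same mongoose 4.x setup as the question; mongoString, user, and createErrorResponse are carried over from the example above:

const mongoose = require('mongoose');
const mongoString = ''; // MongoDB URL

// Created once per container, at module scope, and reused by every
// invocation that is routed to this warm container.
const connection = mongoose.connect(mongoString, { useMongoClient: true });

module.exports.createUser = (event, context, callback) => {
  // Tell Lambda not to wait for the open socket to drain before
  // returning the response.
  context.callbackWaitsForEmptyEventLoop = false;
  connection
    .then(() => user.save())
    .then(() => callback(null, {
      statusCode: 200,
      body: JSON.stringify({ id: user.id }),
    }))
    .catch(err => callback(null, createErrorResponse(err.statusCode, err.message)));
};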
One word of caution: watch out for too many open connections on your MongoDB cluster. This is one of the reasons I prefer a service like DynamoDB, where connections simply don't exist and everything is performed over an API call. If you have 1,000 simultaneously executing Lambda functions, each will hold its own connection and may cause queries to fail later.
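One possible mitigation (my assumption, not part of the original answer) is to cap the driver's connection pool so each warm container holds at most one connection:

// Hypothetical: with useMongoClient, mongoose 4.11+ passes options
// such as poolSize through to the underlying MongoClient.
mongoose.connect(mongoString, { useMongoClient: true, poolSize: 1 });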

Related

How to do actions when MongoDB Realm Web SDK change stream closes or times out?

I want to delete all of a user's inserts in a collection when they stop watching a change stream from a React client. I'm using the Realm Web SDK for this.
Here's a summary of my code with what I want to do at the end of it:
import * as Realm from "realm-web";

const realmApp: Realm.App = new Realm.App({ id: realmAppId });
const credentials = Realm.Credentials.anonymous();
const user: Realm.User = await realmApp.logIn(credentials);
const mongodb = realmApp?.currentUser?.mongoClient("mongodb-atlas");
const users = mongodb?.db("users").collection("users");
const changeStream = users.watch();

for await (const change of changeStream) {
  switch (change.operationType) {
    case "insert": {
      ...
      break;
    }
    case ...
  }
}
// This pseudo-code shows what I want to do
changeStream.on("close", () => /* delete all user's inserts */)
changeStream.on("timeout", () => /* delete all user's inserts */)
changeStream.on("user closes app thus also closing stream", () => ...)
Realm Web SDK patterns seem rather different from the Node.js ones and do not seem to include a method for closing a stream or for running a callback when it closes. In any case, they don't fit my use case.
These MongoDB Realm Web docs lead to more docs about Realm. Unless I'm missing it, neither set talks about how to monitor a change stream watcher instantiated from the Realm Web SDK for closing or timing out, and how to do something when that happens.
I thought another way to do this would be with Realm Triggers, but it doesn't seem possible judging from their docs.
Can this even be done from a front-end client? Is there a way to do this on MongoDB itself in a "serverless" way?
If you want to delete the inserts specifically when a (client-side) listener of a change stream stops listening, you have to implement some logic on the client side. There is currently no way to get notified of such an event within MongoDB Realm.
Since a watcher could be closed because the app / browser is closed, I would recommend against running the deletion logic on your client. Instead, notify a server (or call a MongoDB Realm function / HTTP endpoint) to perform the deletions.
You can use the Beacon API to reliably send a request to trigger the delete, even when the window unloads.
Client side
const inserts = [];
for await (const change of changeStream) {
  switch (change.operationType) {
    case 'insert':
      inserts.push(change);
      break;
  }
}

// Send the collected inserts via the Beacon API; sendBeacon must be
// invoked on navigator with the payload, so wrap it in a helper.
const sendInserts = () =>
  navigator.sendBeacon('url/to/endpoint', JSON.stringify(inserts));

// This point is only reached if the generator returns / the stream closes.
sendInserts();
// Might also add a handler to catch users closing the app.
window.addEventListener('unload', sendInserts);
Note that the unload event is not reliable (see the MDN notes), but there are some alternatives which may be good enough for your use case.
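For example, MDN suggests listening for visibilitychange instead; a sketch reusing the sendInserts helper from the snippet above:

document.addEventListener('visibilitychange', () => {
  // 'hidden' also fires before most tab closes, catching cases the
  // unload event misses; it fires on every tab switch too, so the
  // endpoint should tolerate duplicate beacons.
  if (document.visibilityState === 'hidden') sendInserts();
});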
Inside a Realm function you could then delete the documents, as sketched below.
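A hedged sketch of such a Realm (Atlas App Services) function behind an HTTPS endpoint; the data source name, database, collection, and payload shape are assumptions carried over from the snippets above:

// Realm function invoked by the beacon request from the client.
exports = async function (payload) {
  // The beacon body is the JSON array of insert events sent above.
  const inserts = JSON.parse(payload.body.text());
  const ids = inserts.map((change) => change.documentKey._id);
  const collection = context.services
    .get('mongodb-atlas') // linked data source name (assumption)
    .db('users')
    .collection('users');
  return collection.deleteMany({ _id: { $in: ids } });
};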
That being said, maybe there is a better way to achieve what you want. Is it really the timeout of the change stream listener that has to trigger the delete, or some other user event?

What's a good way to use MongoDB Atlas in Firebase functions?

I'm using MongoDB Atlas connected from Firebase Functions. Currently I am using it in the following way:
const functions = require("firebase-functions")
const { MongoClient } = require('mongodb')

const uri = "mongodb+srv://-----"
const mongodb = new MongoClient(uri)

exports.myfunc = functions.https.onCall(() => {
  return mongodb.connect().then(() => {
    const collection = mongodb.db("db_name").collection("col_name")
    return collection
      .find({/* query */}).toArray()
      .finally(() => mongodb.close())
  })
})
Is it a good approach to connect to and close MongoDB every time myfunc is called?
I am concerned that this method is putting unnecessary load on the server.
I tried to find a better way, but couldn't find one.
It is good practice to close the connection, releasing resources after they are used and before exiting the application. The MongoClient#close() method's API documentation says:
Close the client, which will close all underlying cached resources,
including, for example, sockets and background monitoring threads.
Close the connection before exiting the application. See the MongoClient documentation for more details.
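A minimal async/await sketch of the same connect-then-close pattern from the question, with the close in a finally block so it runs even if the query throws (uri, db_name, and col_name are carried over from the question's snippet):

exports.myfunc = functions.https.onCall(async () => {
  // A fresh client per invocation, closed before the function returns.
  const client = new MongoClient(uri)
  try {
    await client.connect()
    return await client.db("db_name").collection("col_name")
      .find({/* query */}).toArray()
  } finally {
    // Releases sockets and background monitoring threads.
    await client.close()
  }
})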

[ 'Parse error: Can't wait without a fiber' ] when trying to do a find within Meteor

When receiving JSON data via websockets, I'm trying to feed this data into MongoDB within Meteor. I'm getting the JSON data fine, but when trying to find out whether the data already exists in the database, I keep getting the error: [ 'Parse error: Can't wait without a fiber' ].
binance.websockets.miniTicker(markets => {
  // we've got the live information from binance
  if (db.Coins.find({}).count() === 0) {
    // if there's nothing in the database right now
    markets.forEach(function (coin) {
      // for each coin in the JSON file, create a new document
      db.Coins.insert(coin);
    });
  }
});
Can anyone point me in the right direction to get this cleared up?
Many thanks,
Rufus
You are executing a Mongo operation inside an async function's callback. That callback is no longer bound to the running fiber. In order to connect the callback to a fiber, you need to wrap it with Meteor.bindEnvironment, which binds the fiber to the callback:
binance.websockets.miniTicker(Meteor.bindEnvironment((markets) => {
  // we've got the live information from binance
  if (db.Coins.find({}).count() === 0) {
    // if there's nothing in the database right now
    markets.forEach(function (coin) {
      // for each coin in the JSON file, create a new document
      db.Coins.insert(coin);
    });
  }
}));
You should not need to bind the callback inside the forEach, as it is not async.
Related posts on SO:
Meteor.Collection with Meteor.bindEnvironment
Meteor: Calling an asynchronous function inside a Meteor.method and returning the result
Meteor wrapAsync or bindEnvironment without standard callback signature
What's going on with Meteor and Fibers/bindEnvironment()?

Sails pubsub: how to subscribe to a model instance?

I am struggling to receive pubsub events in my client. The client store (Reflux) gets the data from a project using its id. As I understand it, this automatically subscribes the Sails socket to realtime events (since version 0.10), but I don't see it happening.
Here's my client store getting data from Sails (this is ES6 syntax):
onLoadProject(id) {
  var url = '/api/projects/' + id;
  io.socket.get(url, (p, jwres) => {
    console.log('loaded project', id);
    this.project = p;
    this.trigger(p);
  });
  io.socket.on("project", function (event) {
    console.log('realtime event', event);
  });
},
Then I created a test "touch" action in my project controller, just to have the modifiedAt field updated.
touch: function (req, res) {
  var id = req.param('id');
  Project.findOne(id)
    .then(function (project) {
      if (!project) throw new Error('No project with id ' + id);
      return Project.update({ id: id }, { touched: project.touched + 1 });
    })
    .then(function () {
      // this should not be required, right?
      return Project.publishUpdate(id);
    })
    .done(function () {
      sails.log('touched ok');
      res.ok();
    }, function (e) {
      sails.log("touch failed", e.message, e.stack);
      res.serverError(e.message);
    });
}
This doesn't trigger any realtime event in my client code. I also added a manual Project.publishUpdate(), but that shouldn't be required, right?
What am I missing?
-------- edit ----------
There was a complication as a result of my model's touched attribute: I had set its type to 'number' instead of 'integer', and the ORM exception wasn't caught because the promise error handling lacked a catch() part. So the code above works, hurray! But the realtime events are received for every instance of Project.
So let me rephrase my question:
How can I subscribe the client socket to an instance instead of a model? I could check the id on the client side and retrieve the updated instance data, but that seems inefficient, since every client receives a notification about every project even though each should only care about a single one.
----- edit again ------
So, to answer my own question: the reason I was getting updates for every instance is simply that at the start of my application I triggered a findAll to get the list of available projects. As a result my socket got subscribed to all of them.
The workaround is to either initiate that call via plain HTTP instead of a socket, or use a separate controller action for retrieving the list (thereby bypassing the blueprint route). I picked the second option, because in my case it's silly to fetch all resource data prior to selecting one.
Here's the function I used to list all resources, where I filter out the parts of the data that are not relevant when initially browsing the list.
list: function (req, res) {
  Project.find()
    .then(function (projects) {
      var keys = [
        'id',
        'name',
        'createdAt',
        'updatedAt',
        'author',
        'description',
      ];
      return projects.map(function (project) {
        return _.pick(project, keys);
      });
    })
    // Handle success and failure in one place so the response
    // is sent exactly once.
    .done(function (list) {
      res.json(list);
    }, function (e) {
      res.serverError(e.message);
    });
},
Note that when the user loads a resource (a project in my case) and then switches to another resource, the client will be subscribed to both. I believe preventing this requires a request to an action that unsubscribes the socket explicitly. In my case this isn't a big problem, but I plan to solve it later; a sketch of such an action follows.
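A hedged sketch of such an action, assuming the resourceful pubsub API of Sails v0.10 (Model.unsubscribe(socket, records)); the action name and route are illustrative, not from the original post:

// Hypothetical controller action: unsubscribe the requesting socket
// from a project it no longer displays.
leave: function (req, res) {
  if (!req.isSocket) return res.badRequest('Expected a socket request');
  Project.findOne(req.param('id')).exec(function (err, project) {
    if (err) return res.serverError(err.message);
    if (!project) return res.notFound();
    Project.unsubscribe(req.socket, project);
    res.ok();
  });
},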
I hope this is helpful to someone.

node.js and socket.io: different connections for different "sessions"

I've got a node.js application that 'streams' tweets to users. At the moment, it just searches Twitter for a hard-coded string, but I'd like to allow users to configure this in the URL (e.g. by visiting /?q=stackoverflow).
At the moment, my code looks a bit like this:
app.get('/', function (req, res) {
  // page rendering skipped
  io.sockets.on('connection', function (socket) {
    twit.stream('user', { track: 'stackoverflow' }, function (stream) {
      stream.on('data', function (data) {
        socket.volatile.emit('tweet', data);
      });
    });
  });
});
The question is: how do I make it so that each user can see a different stream of tweets simultaneously? At the moment it works fine in a single browser tab, but it falls over as soon as a second one is opened, and the error comes from fairly deep inside socket.io. Am I misusing it?
I haven't fully got my head around socket.io yet, so that could be the issue.
Thanks in advance!
Every time a new request comes in, you are redefining the connection callback with io.sockets.on. You should move that block of code outside of app.get, after the initialization statement of the io object.
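A minimal sketch of that restructuring, keeping the callback-style twit.stream call from the question. Reading the search term from the socket handshake's query string is my assumption about how to wire up the /?q=... requirement:

// Registered once, outside any route handler.
io.sockets.on('connection', function (socket) {
  // e.g. the client connects with io.connect('http://host/?q=stackoverflow')
  var track = socket.handshake.query.q || 'stackoverflow';
  twit.stream('user', { track: track }, function (stream) {
    stream.on('data', function (data) {
      socket.volatile.emit('tweet', data);
    });
    // Stop streaming for this client when it disconnects.
    socket.on('disconnect', function () {
      stream.destroy();
    });
  });
});

app.get('/', function (req, res) {
  // page rendering only; no socket wiring here
});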