I'm using MongoDB Atlas from Firebase Functions.
Currently I am using it in the following way:
const functions = require("firebase-functions")
const { MongoClient } = require("mongodb")

const uri = "mongodb+srv://-----"
const mongodb = new MongoClient(uri)

exports.myfunc = functions.https.onCall(() => {
  return mongodb.connect().then(() => {
    const collection = mongodb.db("db_name").collection("col_name")
    return collection
      .find({ /* query */ }).toArray()
      .finally(() => mongodb.close())
  })
})
Is it a good way to connect to and close MongoDB every time myfunc is called?
I am concerned that this approach puts unnecessary load on the server.
I tried to find a better way, but I couldn't find one.
It is good practice to close the connection, i.e. to release resources after they are used and before exiting the application. The API documentation for the MongoClient#close() method says:
Close the client, which will close all underlying cached resources,
including, for example, sockets and background monitoring threads.
So closing the connection before exiting the application is the right thing to do. See the API documentation for more details.
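For reference, here is the same connect/use/close flow from the question rewritten with async/await and try/finally, so the client is closed even when the query throws (a minimal sketch; the uri, db and collection names are taken from the question):

exports.myfunc = functions.https.onCall(async () => {
  const client = new MongoClient(uri)
  try {
    // Connect, run the query, and return the results.
    await client.connect()
    return await client
      .db("db_name")
      .collection("col_name")
      .find({ /* query */ })
      .toArray()
  } finally {
    // Always release sockets and monitoring resources, even on error.
    await client.close()
  }
})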
I want to delete all of a user's inserts in a collection when they stop watching a change stream from a React client. I'm using the Realm Web SDK for this.
Here's a summary of my code with what I want to do at the end of it:
import * as Realm from "realm-web";
const realmApp: Realm.App = new Realm.App({ id: realmAppId });
const credentials = Realm.Credentials.anonymous();
const user: Realm.User = await realmApp.logIn(credentials);
const mongodb = realmApp?.currentUser?.mongoClient("mongodb-atlas");
const users = mongodb?.db("users").collection("users");
const changeStream = users.watch();
for await (const change of changeStream) {
  switch (change.operationType) {
    case "insert": {
      ...
      break;
    }
    case ...
  }
}
// This pseudo-code shows what I want to do
changeStream.on("close", () => // delete all user's inserts)
changeStream.on("timeout", () => // delete all user's inserts)
changeStream.on("user closes app thus also closing stream", () => ... )
Realm Web SDK patterns seem rather different from the Node.js ones and do not seem to include a method for closing a stream or for running a callback when it closes. In any case, they don't fit my use case.
These MongoDB Realm Web docs lead to more docs about Realm. Unless I'm missing it, neither set talks about how to monitor for closing and timing out of a change stream watcher instantiated from the Realm Web SDK, or how to do something when that happens.
I thought another way to do this would be in Realm's Triggers. But it doesn't seem likely from their docs.
Can this even be done from a front end client? Is there a way to do this on MongoDB itself in a "serverless" way?
If you want to delete the inserts specifically when a (client-side) listener of a change stream stops listening, you have to implement some logic on the client side. There is currently no way to get notified of such an event within MongoDB Realm.
Since a watcher could be closed because the app or browser is closed, I would recommend against running the deletion logic on your client. Instead, notify a server (or call a MongoDB Realm function / HTTP endpoint) to perform the deletions.
You can use the Beacon API to reliably send a request to trigger the delete, even when the window unloads.
Client side
const inserts = [];

// Reliably send the collected inserts to the endpoint, even during unload.
const sendInserts = () =>
  navigator.sendBeacon('url/to/endpoint', JSON.stringify(inserts));

// Register before consuming the stream, to also catch users closing the app.
window.addEventListener('unload', sendInserts);

for await (const change of changeStream) {
  switch (change.operationType) {
    case 'insert':
      inserts.push(change);
      break;
  }
}

// This point is only reached if the generator returns / the stream closes.
sendInserts();
Note that the unload event is not reliable (see MDN), but there are alternatives that may be good enough for your use case.
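One alternative MDN suggests is the visibilitychange event; a sketch reusing the sendInserts helper from above:

document.addEventListener('visibilitychange', () => {
  // 'hidden' fires when the tab is backgrounded or about to be closed.
  if (document.visibilityState === 'hidden') {
    sendInserts();
  }
});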
Inside a Realm function you could then delete the documents.
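Such a Realm function might look roughly like this (a sketch: the "mongodb-atlas" service name and the users db/collection come from the question, while the endpoint payload shape is an assumption):

// Hypothetical Realm HTTPS endpoint / webhook function.
exports = async function (payload) {
  const inserts = JSON.parse(payload.body.text());
  const ids = inserts.map((change) => change.documentKey._id);

  const users = context.services
    .get("mongodb-atlas")
    .db("users")
    .collection("users");

  // Delete every document the client reported as inserted.
  await users.deleteMany({ _id: { $in: ids } });
};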
That being said, maybe there is a better way to do what you want to achieve. Is it really the timeout of the change stream listener that has to trigger the delete, or some other user event?
I want to access MongoDB from within an AWS Lambda function deployed with the Serverless Framework (serverless.com).
There is an example by the framework on how they open it (https://github.com/serverless/examples/blob/master/aws-node-rest-api-mongodb/handler.js).
But if I understand the code correctly, it opens and closes the connection for every request (relevant part of the example):
const mongoString = ''; // MongoDB Url

const dbExecute = (db, fn) => db.then(fn).finally(() => db.close());

function dbConnectAndExecute(dbUrl, fn) {
  return dbExecute(mongoose.connect(dbUrl, { useMongoClient: true }), fn);
}

module.exports.createUser = (event, context, callback) => {
  dbConnectAndExecute(mongoString, () => (
    user
      .save()
      .then(() => callback(null, {
        statusCode: 200,
        body: JSON.stringify({ id: user.id }),
      }))
      .catch(err => callback(null, createErrorResponse(err.statusCode, err.message)))
  ));
};
Is my assumption wrong and the connection stays alive? If not, what would a correct pattern for keeping the connection open look like? I know that in AWS Lambda there can be global state, but apparently the Serverless Framework removes everything after a single run, as no state I set globally is persisted.
If you want the connection to remain open for subsequent requests to the warmed Lambda function, you should open the connection outside of the Lambda handler itself, within the same scope as the consts you declare at the top of your example. The Serverless Framework is not closing anything; this is a behaviour of Lambda itself.
In addition, you are using a function called dbConnectAndExecute, where it would probably be more useful to connect outside the function and then execute within, i.e. break up that function. This way your connection remains open, but the execution context is discarded after the Lambda executes.
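A minimal sketch of that split, sticking to the Mongoose setup from the example (the User model and request shape are placeholders):

const mongoose = require('mongoose');

const mongoString = ''; // MongoDB Url

// Opened once per container, outside the handler,
// and reused across warm invocations.
const connecting = mongoose.connect(mongoString, { useMongoClient: true });

module.exports.createUser = (event, context, callback) => {
  connecting
    .then(() => {
      const user = new User(JSON.parse(event.body)); // hypothetical model
      return user.save();
    })
    .then(user => callback(null, {
      statusCode: 200,
      body: JSON.stringify({ id: user.id }),
    }))
    .catch(err => callback(null, createErrorResponse(err.statusCode, err.message)));
};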
One word of caution: watch out for too many open connections on your MongoDB cluster. This is one of the reasons I prefer a service like DynamoDB, where connections just don't exist; everything is performed over an API call. If you have 1000 simultaneously executing Lambda functions, each will have its own connection, which may cause queries to fail later.
Yes, I know I should call it from the server side. But the purpose is to invoke MongoDB straight from the react-redux app, like Firebase serverless apps do.
I write:
import mongoose from 'mongoose';
let mongoDB = 'mongodb://127.0.0.1/my_database';
mongoose.connect(mongoDB);
mongoose.Promise = global.Promise;
let db = mongoose.connection;
db.on('error', console.error.bind(console, 'MongoDB connection error:'));
And I get:
TypeError: __WEBPACK_IMPORTED_MODULE_6_mongoose___default.a.connect is not a function
How to solve this problem?
From the comment here:
Mongoose won't work in the frontend because it relies on functionality from Node which isn't present in browser JS implementations. You can't import Mongoose into frontend code.
Try importing mongoose in your react app
import mongoose from "mongoose";
and iterating through its properties:
Object.getOwnPropertyNames(mongoose).forEach(prop => {
  console.log(prop);
});
You'll get
Promise
PromiseProvider
Error
Schema
Types
VirtualType
SchemaType
utils
Document
The methods that you need to work with MongoDB, such as connect, are not being imported.
MongoDB has to be accessed from the server; it is not like Firebase, where you can connect directly from the client. If you want to do an operation from the client, you have to define an endpoint on the server, make a request from the client to that endpoint, and handle the request there. Since you are not clear about what exactly you are trying to do, I will give you a simple example.
Let's say in one component you are fetching songs from MongoDB and rendering them to the screen, and you want to add a clear button to clear the list.
<a href="/delete">
<button>Clear the Screen</button>
</a>
Let's say I have a Song model defined in Mongoose. On the server:
app.use("/delete", async (req, res) => {
await Song.deleteMany();
res.redirect("/");
});
I sent a request from the client, and the server handled this CRUD operation.
NOTE that since you are making a request from the client to the server, you have to set up a proxy. Or you can use webpack-dev-middleware so Express will serve the webpack files.
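For example, with webpack-dev-server the proxy can be configured like this (the Express port is an assumption):

// webpack.config.js
module.exports = {
  // ...the rest of your config...
  devServer: {
    // Forward the endpoint to the Express server.
    proxy: { '/delete': 'http://localhost:5000' },
  },
};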
You need MongoDB Realm.
Pseudo-code example:
import * as Realm from "realm-web";
const REALM_APP_ID = "<Your App ID>"; // e.g. myapp-abcde
const app = new Realm.App({ id: REALM_APP_ID });
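From there, the app can authenticate and query Atlas directly from the browser, much like the Realm Web snippet in the change stream question above (db and collection names are placeholders):

// Continuing the pseudo-code: log in anonymously, then query a collection.
const credentials = Realm.Credentials.anonymous();
const user = await app.logIn(credentials);

const mongodb = app.currentUser.mongoClient("mongodb-atlas");
const songs = mongodb.db("db_name").collection("songs");
const allSongs = await songs.find({});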
Just to make it clear, I'm using the MongoDB, Express, React and Node stack.
I'm trying to learn react.js right now. I got the basics right and I am able to code a simple react app with a router. I've also tried server-side rendering a simple react app and it also works perfectly. However, I'm kind of stuck now that I want to make a full app with a rest api and server-side rendering.
1) I don't know how I should go about separating the API and the React code in the server file. Would starting off by listing the API calls and then doing the server-side rendering work?
Like so:
app.get('/api/whatever', function(req, res) {
  //get whatever
});

app.get('*', function(req, res) {
  //match routes and renderToString React
});
2) Also, the reason I couldn't even test the above is that when I try to run the server with nodemon, it throws an error because it doesn't understand the React code. How should I go about this? Should I somehow configure nodemon to read ES6 or ignore it, or configure webpack to run the Express server?
3) The final question, which could clear this whole story up quite easily. I've tried finding an answer but got many conflicting ones instead. Are the Google crawlers capable of crawling a React app? I'm learning server-side rendering for SEO; is that all really necessary?
Sorry for the long question, looking forward to reading your answers.
I do it the same way you do in your code example in the project I'm currently working on – I match * and then use React Router to render different pages. I wrote a blog article about this, with code examples.
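A rough sketch of that layout (API routes first, the catch-all last; this assumes React Router v4+, a placeholder App component, and that the server code is compiled so it understands JSX, which the webpack setup described next takes care of):

const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const { StaticRouter } = require('react-router-dom');
const App = require('./App'); // hypothetical root component

const app = express();

// API routes are declared first so they match before the catch-all.
app.get('/api/whatever', (req, res) => {
  res.json({ /* whatever */ });
});

// Everything else is rendered by React Router on the server.
app.get('*', (req, res) => {
  const html = renderToString(
    <StaticRouter location={req.url} context={{}}>
      <App />
    </StaticRouter>
  );
  res.send(`<!DOCTYPE html><html><body><div id="root">${html}</div></body></html>`);
});

app.listen(3000);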
In the setup I have, I use webpack to compile my backend code, just like I do with the frontend code. I use the watch mechanism to listen for code changes and automatically restart the Node server after recompiling. No need for nodemon.
#!/usr/bin/env node
const path = require('path');
const webpack = require('webpack');
const spawn = require('child_process').spawn;

const serverConfig = require('./webpack.config.server');
const compiler = webpack(serverConfig);

const watchConfig = {
  aggregateTimeout: 300,
  poll: 1000,
  ignored: '**/*.scss'
};

let serverControl;

compiler.watch(watchConfig, (err, stats) => {
  if (err) {
    console.error(err.stack || err);
    if (err.details) {
      console.error(err.details);
    }
    return;
  }

  const info = stats.toJson();

  if (stats.hasErrors()) {
    info.errors.forEach(message => console.log(message));
    return;
  }

  if (stats.hasWarnings()) {
    info.warnings.forEach(message => console.log(message));
  }

  // Kill the previous server instance before starting the recompiled one.
  if (serverControl) {
    serverControl.kill();
  }

  serverControl = spawn('node', [path.resolve(__dirname, '../../dist/polly-server.js')]);

  serverControl.stdout.on('data', data => console.log(`${new Date().toISOString()} [LOG] ${data}`));
  serverControl.stderr.on('data', data => console.error(`${new Date().toISOString()} [ERROR] ${data}`));
});
Yes, Google crawls client-side React code, but server-side rendering is still a good idea, because crawl results may be inconsistent, especially if you load parts of the page dynamically after Ajax calls.
Do I have a wrong understanding of "reactive", or is something wrong in my example?
I made a small code sample in Vert.x: in a REST service I read data from MongoDB and return it as JSON.
...........
Router router = Router.router(vertx);
router.route().handler(BodyHandler.create());
router.get("/gilders").handler(this::listAll);
vertx.createHttpServer().requestHandler(router::accept).listen(8080);
}

private void listAll(RoutingContext routingContext) {
    mongoClient.find("gliders", new JsonObject(), results -> {
        List<JsonObject> objects = results.result();
        /* Is this non-blocking?!
           mongoClient.find returns immediately, but the REST client only
           gets results after Mongo has delivered all of them. */
        List<Glider> gilder = objects.stream()
            .map(res -> {
                Glider g = new Glider();
                g.setName(res.getString("name"));
                g.setPrice(res.getString("price"));
                return g;
            })
            .collect(Collectors.toList());
        routingContext.response()
            .putHeader("content-type", "application/json; charset=utf-8")
            .end(Json.encodePrettily(gilder));
    });
}
OK, it's not blocking; I could compute something else while waiting for Mongo.
But somehow I thought "reactive" meant that the REST client would already get the first chunks of the Mongo results even while Mongo has not yet finished finding all of them (HTTP streaming). But like this, the callback is only invoked once Mongo has found all results.
Reactive is not the same as streaming. Reactive is a concept around data flows: your application reacts to events, e.g. data returned from MongoDB. You can then implement streaming on top of it by asking the Mongo client to start pumping data as soon as it arrives from the network. Conversely, even with a blocking API you could implement streaming by blocking the application until data arrives and then passing it one by one to a consumer.