I have a database of activities in MongoDB. I had all the basic CRUD operations working fine, but now, at the point in developing the front end of the app where I need to do a GET request for a single activity, I've hit a problem. I have working PUT and DELETE requests for single activities, but for some reason the GET one just isn't playing ball - it returns an array of objects rather than a single object with that ID.
I'm currently using Postman to make the requests while I iron this problem out. My Mongoose version is 5.12.13.
router.get('/:id', async (req, res) => {
  try {
    const activity = await Activities.findById(req.params.id)
    res.json(activity).send()
  } catch (error) {
    res.send(error.message)
  }
})
Then I make a request using Postman to http://localhost:5000/api/activities?id=60968e3369052d084cb6abbf (the id here is just one I've copied and pasted from an entry in the database for troubleshooting purposes).
I'm really stumped by this because I can't understand why it's not working! The response I get in Postman is an array of objects which, like I said, seems to be the entire contents of the database rather than just the one with the queried ID...
Try calling exec on your findById. findById returns a query; you need to call exec to execute it.
Without the call to exec, your activity variable is a mongoose query object.
router.get('/:id', async (req, res) => {
  try {
    const activity = await Activities.findById(req.params.id).exec();
    res.json(activity); // res.json() already sends the response; no extra .send() needed
  } catch (error) {
    res.send(error.message)
  }
});
Docs for findById
https://mongoosejs.com/docs/api.html#model_Model.findById
Edit:
As rightly spotted by Dang, given your code is inspecting req.params, the URL you're calling needs updating to:
http://localhost:5000/api/activities/60968e3369052d084cb6abbf
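To illustrate the difference (a small sketch, not part of the original answer): a value in the path segment arrives in req.params, while a ?id=... query string arrives in req.query, which a route defined as /:id never reads.
router.get('/:id', (req, res) => {
  // GET /api/activities/60968e3369052d084cb6abbf
  console.log(req.params.id); // '60968e3369052d084cb6abbf'
});

router.get('/', (req, res) => {
  // GET /api/activities?id=60968e3369052d084cb6abbf
  console.log(req.query.id); // '60968e3369052d084cb6abbf'
});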
I am using json-server to mock the API for the front-end team.
We would like a feature to create multiple objects (e.g. products) in one call.
In WebApi2 or actual REST APIs, it can be done like the following:
POST api/products //For Single Creation
POST api/productCollections //For Multiple Creation
I don't know how to achieve this using json-server. I tried to POST the following data to api/products using Postman, but it does not split the array and create the items individually.
[
  {
    "id": "ff00feb6-b1f7-4bb0-b09c-7b88d984625d",
    "code": "MM",
    "name": "Product 2"
  },
  {
    "id": "1f4492ab-85eb-4b2f-897a-a2a2b69b43a5",
    "code": "MK",
    "name": "Product 3"
  }
]
It treats the whole array as a single item and appends it to the existing JSON.
Could you please suggest how I could mock bulk inserts in json-server? Or should a RESTful API always manipulate a single object at a time?
This is not something that json-server supports natively, as far as I know, but it can be accomplished through a workaround.
I am assuming that you have some prior knowledge of node.js
You will have to create a server.js file which you will then run using node.js.
The server.js file will then make use of the json-server module.
I have included the code for the server.js file in the code snippet below.
I made use of lodash for the duplicate check, so you will need to install it. You can also replace it with your own code if you prefer, but lodash worked pretty well in my opinion.
The server.js file includes a custom post request function which accesses the lowdb instance used in the json-server instance. The data from the POST request is checked for duplicates and only new records are added to the DB where the id does not already exist. The write() function of lowdb persists the data to the db.json file. The data in memory and in the file will thus always match.
Please note that the API endpoints generated by json-server (or the rewritten endpoints) will still exist. You can thus use the custom function in conjunction with the default endpoints.
Feel free to add error handling where needed.
const jsonServer = require('json-server');
const server = jsonServer.create();
const _ = require('lodash');
const router = jsonServer.router('./db.json');
const middlewares = jsonServer.defaults();
const port = process.env.PORT || 3000;

server.use(middlewares);
server.use(jsonServer.bodyParser);
server.use(jsonServer.rewriter({
  '/api/products': '/products'
}));

server.post('/api/productcollection', (req, res) => {
  const db = router.db; // Assign the lowdb instance
  if (Array.isArray(req.body)) {
    req.body.forEach(element => {
      insert(db, 'products', element); // Add a product
    });
  } else {
    insert(db, 'products', req.body); // Add a product
  }
  res.sendStatus(200);

  /**
   * Checks whether the id of the new data already exists in the DB
   * @param {*} db - DB object
   * @param {String} collection - Name of the array / collection in the DB / JSON file
   * @param {*} data - New record
   */
  function insert(db, collection, data) {
    const table = db.get(collection);
    if (_.isEmpty(table.find(data).value())) {
      table.push(data).write();
    }
  }
});

server.use(router);
server.listen(port);
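For reference, posting an array to the custom endpoint could then look something like this (a sketch; it assumes the server above is running on its default port 3000 and uses axios purely for illustration, with the product data taken from the question):
const axios = require('axios');

// POST an array of products; the custom handler inserts each one individually
axios.post('http://localhost:3000/api/productcollection', [
  { id: 'ff00feb6-b1f7-4bb0-b09c-7b88d984625d', code: 'MM', name: 'Product 2' },
  { id: '1f4492ab-85eb-4b2f-897a-a2a2b69b43a5', code: 'MK', name: 'Product 3' }
]).then(response => console.log(response.status)); // 200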
If you have any questions, feel free to ask.
The answer marked as correct didn't actually work for me. Due to the way the insert function is written, it will always generate new documents instead of updating existing docs. The "rewriting" didn't work for me either (maybe I did something wrong), but creating an entirely separate endpoint helped.
This is my code, in case it helps others trying to do bulk inserts (and modifying existing data if it exists).
const jsonServer = require('json-server');
const server = jsonServer.create();
const _ = require('lodash');
const router = jsonServer.router('./db.json');
const middlewares = jsonServer.defaults();

server.use(middlewares);
server.use(jsonServer.bodyParser);

server.post('/addtasks', (req, res) => {
  const db = router.db; // Assign the lowdb instance
  if (Array.isArray(req.body)) {
    req.body.forEach(element => {
      insert(db, 'tasks', element);
    });
  } else {
    insert(db, 'tasks', req.body);
  }
  res.sendStatus(200);

  function insert(db, collection, data) {
    const table = db.get(collection);
    // Create a new doc if this ID does not exist
    if (_.isEmpty(table.find({ _id: data._id }).value())) {
      table.push(data).write();
    } else {
      // Update the existing data
      table.find({ _id: data._id })
        .assign(_.omit(data, ['_id']))
        .write();
    }
  }
});

server.use(router);
server.listen(3100, () => {
  console.log('JSON Server is running');
});
On the frontend, the call will look something like this:
axios.post('http://localhost:3100/addtasks', tasks)
This probably didn't work at the time the question was posted, but now it does: call the /products endpoint with an array for a bulk insert.
I am trying to use Mongoose pre and post hooks in my MongoDB backend in order to compare the document in its pre- and post-save states, so I can trigger other events depending on what's changed. So far, however, I'm having trouble getting the document via the Mongoose pre hook.
According to the docs, "pre hooks work for both doc.save() and doc.update(). In both cases this refers to the document itself...". So here's what I tried. First, in my model/schema I have the following code:
let Schema = mongoose
  .Schema(CustomerSchema, {
    timestamps: true
  })
  .pre("findOneAndUpdate", function(next) {
    trigger.preSave(next);
  });
  // other hooks
... And then in my triggers file I have the following code:
exports.preSave = function(next) {
  console.log("this: ", this);
};
But this is what logs to the console:
this: { preSave: [Function], postSave: [AsyncFunction] }
So clearly this didn't work. This didn't log out the document as I was hoping for. Why is this not the document itself here, as the docs themselves appear to indicate? And is there a way I can get a hold of the document with a pre hook? If not, is there another approach people have used to accomplish this?
You can't retrieve the document in the pre hook.
According to the documentation, pre("findOneAndUpdate") is query middleware, and this refers to the query, not the document being updated.
The confusion arises due to the difference in the this context within each of the kinds of middleware functions. During document pre or post middleware, you can use this to access the document model, but not in the other hooks.
There are three kinds of middleware functions, all of which have pre and post stages.
In document middleware functions, this refers to the document (model).
init, validate, save, remove
In query middleware functions, this refers to the query.
count, find, findOne, findOneAndRemove, findOneAndUpdate, update
In aggregate middleware, this refers to the aggregation object.
aggregate
It's explained here https://mongoosejs.com/docs/middleware.html#types-of-middleware
Therefore you can simply access the document during pre("init"), post("init"), pre("validate"), post("validate"), pre("save"), post("save"), pre("remove"), post("remove"), but not in any of the others.
I've seen examples of people doing additional queries within the other middleware hooks to find the model again, but that sounds pretty dangerous to me.
The short answer seems to be: you need to change the original query to be document-oriented, not query- or aggregate-style. It does seem like an odd limitation.
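For example, a findOneAndUpdate call could be rewritten roughly like this so that document middleware fires (a sketch; the Customer model name and the field being updated are made up for illustration):
// Inside an async function; instead of:
// await Customer.findOneAndUpdate({ _id: id }, { name: 'New name' })
const doc = await Customer.findById(id); // load the document first
doc.name = 'New name';
await doc.save(); // pre('save') / post('save') now run with `this` as the document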
As per the documentation, your pre hook cannot get the document directly, but it can get the query, as follows:
schema.pre('findOneAndUpdate', async function() {
  const docToUpdate = await this.model.findOne(this.getQuery());
  console.log(docToUpdate); // The document that findOneAndUpdate() will modify
});
If you really want to access the document (or its id) in query middleware functions:
UserSchema.pre<User>(/^(updateOne|save|findOneAndUpdate)/, async function (next) {
  const user: any = this
  if (!user.password) {
    const userID = user._conditions?._id
    const foundUser = await user.model.findById(userID)
    ...
  }
})
If someone needs the function to hash the password when the user's password changes:
UserSchema.pre<User>(/^(updateOne|save|findOneAndUpdate)/, async function (next) {
  const user: any = this
  if (user.password) {
    if (user.isModified('password')) {
      user.password = await getHashedPassword(user.password)
    }
    return next()
  }
  const { password } = user.getUpdate()?.$set
  if (password) {
    user._update.password = await getHashedPassword(password)
  }
  next()
})
user.password exists when "save" is the trigger
user.getUpdate() will return the props that changed in "update" triggers
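getHashedPassword itself isn't shown in the answer; a minimal sketch of what it could look like, assuming bcrypt is the hashing library (in plain JavaScript):
const bcrypt = require('bcrypt')

// Hash a plaintext password; 10 salt rounds is a common default
async function getHashedPassword(plain) {
  return bcrypt.hash(plain, 10)
}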
In Mongoose, I can use a query populate to populate additional fields after a query. I can also populate multiple paths, such as
Person.find({})
  .populate('books movie', 'title pages director')
  .exec()
However, this would generate a lookup on books gathering the fields title, pages and director - and also a lookup on movie gathering those same fields. What I want is to get title and pages from books only, and director from movie. I could do something like this:
Person.find({})
  .populate('books', 'title pages')
  .populate('movie', 'director')
  .exec()
which gives me the expected result and queries.
But is there any way to get the behavior of the second snippet using a "single call" syntax like the first snippet? The reason is that I want to programmatically determine the arguments for the populate function and feed them in, which I cannot do with multiple populate calls.
After looking into the source code of mongoose, I solved this with:
var populateQuery = [{ path: 'books', select: 'title pages' }, { path: 'movie', select: 'director' }];

Person.find({})
  .populate(populateQuery)
  .exec()
You can also do something like below:
{ path: 'user', select: ['key1', 'key2'] }
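Since the goal was to determine the populate arguments programmatically, the array form lends itself to being built at runtime; a small sketch (the field configuration here is made up):
// Build the populate array from some runtime configuration
var fieldConfig = { books: 'title pages', movie: 'director' };

var populateQuery = Object.keys(fieldConfig).map(function (path) {
  return { path: path, select: fieldConfig[path] };
});

Person.find({}).populate(populateQuery).exec();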
You achieve that by simply passing an object or an array of objects to the populate() method.
const query = [
  {
    path: 'books',
    select: 'title pages'
  },
  {
    path: 'movie',
    select: 'director'
  }
];
const result = await Person.find().populate(query).lean();
Consider that the lean() method is optional; it just returns raw JSON rather than a mongoose document and makes code execution a little bit faster. Don't forget to make your function (callback) async!
This is how it's done based on the Mongoose JS documentation http://mongoosejs.com/docs/populate.html
Let's say you have a BookCollection schema which contains users and books
In order to perform a query and get all the BookCollections with its related users and books you would do this
models.BookCollection
  .find({})
  .populate('user')
  .populate('books')
  .lean()
  .exec(function (err, bookcollection) {
    if (err) return console.error(err);
    try {
      mongoose.connection.close();
      res.render('viewbookcollection', { content: bookcollection });
    } catch (e) {
      console.log("error getting bookcollection" + e);
    }
  });
// Your Schema must include the path
let createdData = await Person.create(dataYouWant)
await createdData.populate([{ path: 'books', select: 'title pages' }, { path: 'movie', select: 'director' }])
I'm using the request library to make calls from one sails app to another one which exposes the default blueprint endpoints. It works fine when I query by non-id fields, but I need to run some queries by passing id arrays. The problem is that the moment you provide an id, only the first id is considered, effectively not allowing this kind of query.
Is there a way to get around this? I could switch over to another attribute if all else fails but I need to know if there is a proper way around this.
Here's how I'm querying:
var idArr = []; // array of ids
var queryParams = { id: idArr };
var options = {
  // headers, method and url here
  json: queryParams
};

request(options, function (err, response, body) {
  if (err) return next(err);
  return next(null, body);
});
Thanks in advance.
Sails blueprint APIs allow you to use the same waterline query language that you would otherwise use in code.
You can directly pass the array of ids in the GET call to receive the objects, as follows:
GET /city?where={"id":[1, 2]}
Refer here for more.
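Applied to the request snippet from the question, that could look like this (a sketch; the URL and port are placeholders, and note the where value has to be JSON-encoded):
var request = require('request');

var idArr = ['id1', 'id2']; // array of ids

request({
  url: 'http://localhost:1337/city',
  method: 'GET',
  // `qs` is serialized into the query string: ?where={"id":["id1","id2"]}
  qs: { where: JSON.stringify({ id: idArr }) },
  json: true
}, function (err, response, body) {
  if (err) return next(err);
  return next(null, body);
});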
Have fun!
Alright, I switched to a hacky solution to get moving.
For all models that needed querying by id arrays, I added a secondary attribute to the model. Let's call it code. Then, in afterCreate(), I updated code and set it equal to the id. This incurs an additional database call, but it's fine since it's called just once - when the object is created.
Here's the code.
module.exports = {
  attributes: {
    code: {
      type: 'string' // the secondary attribute
    },
    // other attributes
  },
  afterCreate: function (newObj, next) {
    Model.update({ id: newObj.id }, { code: newObj.id }, next);
  }
}
Note that newObj isn't a Model object as even I was led to believe. So we cannot simply update its code and call newObj.save().
After this, in the queries having id arrays, substituting id with code makes them work as expected!
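For example, the blueprint query from the other answer would then be issued against code instead (the model name and ids here are placeholders):
GET /model?where={"code":["id1","id2"]}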
I'm looking to learn node.js and mongodb, which look suitable for something I'd like to make. As a little project to help me learn, I thought I'd copy the "posts" table from a phpbb3 forum I have into a mongodb collection, so I did something like this, where db is a mongodb database connection and client is a mysql database connection:
db.collection('posts', function (err, data) {
  client.query('select * from phpbb_posts', function (err, rs) {
    data.insert(rs);
  });
});
This works OK when I do it on small tables, but my posts table has about 100,000 rows, and the query doesn't return even when I leave it running for an hour. I suspect that it's trying to load the entire database table into memory and then insert it.
So what I would like to do is read a chunk of rows at a time and insert them. However, I can't see how to read a subset of the rows in node.js and, even more of a problem, I can't understand how I can iterate through the queries one at a time when I only get notified via a callback that the previous one has finished.
Any ideas how I can best do this? (I'm looking for solutions using node.js as I'd like to know how to solve this kind of problem; I could no doubt do it easily some other way.)
You could try using the async library by caolan. The library implements some async flow-control methods to handle the caveats of the callback-oriented programming style in node.js.
For your case, using the whilst method could work out, running LIMIT queries against mysql and inserting the results into mongodb.
Example (not tested, as I have no test data available, but I think you'll get the idea):
var insertCount = 0;
// set this to the overall record count from mysql
var recordCount = 0;

async.whilst(
  // test condition callback
  function () { return insertCount < recordCount; },
  // actual worker callback
  function (callback) {
    db.collection('posts', function (err, data) {
      client.query('select * from phpbb_posts LIMIT ' + insertCount + ',1000', function (err, rs) {
        data.insert(rs);
        // increment by the actually fetched record count
        insertCount += rs.length;
        // trigger flow callback
        callback();
      });
    });
  },
  // finished callback
  function (err) {
    // finished inserting data, maybe check record count in mongodb here
  }
);
As I already mentioned, this code is just adapted from an example in the async library readme, but maybe it is an option for moving such amounts of database records from mysql to mongo.
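One way to fill in recordCount before starting the loop could be a count query against the same mysql client (a sketch, untested like the rest):
// Fetch the total row count first, then kick off the paged copy
client.query('select count(*) as total from phpbb_posts', function (err, rows) {
  if (err) return console.error(err);
  recordCount = rows[0].total;
  // start the async.whilst loop from here
});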