Query sailsjs blueprint endpoints by id array using request - rest

I'm using the request library to make calls from one sails app to another one which exposes the default blueprint endpoints. It works fine when I query by non-id fields, but I need to run some queries by passing id arrays. The problem is that the moment you provide an id, only the first id is considered, effectively not allowing this kind of query.
Is there a way to get around this? I could switch over to another attribute if all else fails but I need to know if there is a proper way around this.
Here's how I'm querying:
var idArr = []; // array of ids
var queryParams = { id: idArr };
var options = {
  // headers, method and url here
  json: queryParams
};
request(options, function (err, response, body) {
  if (err) return next(err);
  return next(null, body);
});
Thanks in advance.

Sails blueprint APIs allow you to use the same Waterline query language that you would otherwise use in code.
You can pass the array of ids directly in the GET call to receive the matching objects, as follows:
GET /city?where={"id":[1, 2]}
Refer here for more.
Have fun!
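If you are using the request library for this, a minimal sketch (untested; the localhost URL and city endpoint are placeholders) would serialise the Waterline criteria into the where query parameter:
var request = require('request');

var idArr = [1, 2, 3]; // the array of ids to match

var options = {
  method: 'GET',
  url: 'http://localhost:1337/city',
  // serialise the criteria into the blueprint `where` parameter
  qs: { where: JSON.stringify({ id: idArr }) },
  json: true
};

request(options, function (err, response, body) {
  if (err) return console.error(err);
  console.log(body); // the array of matching records
});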

Alright, I switched to a hacky solution to get moving.
For all models that needed querying by id arrays, I added a secondary attribute to the model. Let's call it code. Then, in afterCreate(), I updated code and set it equal to the id. This incurs an additional database call, but it's fine since it's called just once - when the object is created.
Here's the code.
module.exports = {
  attributes: {
    code: {
      type: 'string' // the secondary attribute
    },
    // other attributes
  },
  afterCreate: function (newObj, next) {
    Model.update({ id: newObj.id }, { code: newObj.id }, next);
  }
};
Note that newObj isn't a model instance, contrary to what I was led to believe, so we cannot simply set its code and call newObj.save().
After this, in the queries having id arrays, substituting id with code makes them work as expected!
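For instance, the query parameters from the original snippet would simply become:
var queryParams = { code: idArr }; // query by the secondary attribute instead of id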

Using Mongoose `pre` hook to get document before findOneAndUpdate()

I am trying to use Mongoose pre and post hooks in my MongoDB backend in order to compare the document in its pre and post-saved states, in order to trigger some other events depending on what's changed. So far however I'm having trouble getting the document via the Mongoose pre hook.
According to the docs, "pre hooks work for both doc.save() and doc.update(). In both cases this refers to the document itself...". So here's what I tried. First, in my model/schema I have the following code:
let Schema = mongoose
  .Schema(CustomerSchema, {
    timestamps: true
  })
  .pre("findOneAndUpdate", function(next) {
    trigger.preSave(next);
  });
// other hooks
... And then in my triggers file I have the following code:
exports.preSave = function(next) {
  console.log("this: ", this);
};
But this is what logs to the console:
this: { preSave: [Function], postSave: [AsyncFunction] }
So clearly this didn't work. This didn't log out the document as I was hoping for. Why is this not the document itself here, as the docs themselves appear to indicate? And is there a way I can get a hold of the document with a pre hook? If not, is there another approach people have used to accomplish this?
You can't retrieve the document in the pre hook.
According to the documentation pre is a query middleware and this refers to the query and not the document being updated.
The confusion arises due to the difference in the this context within each of the kinds of middleware functions. During document pre or post middleware, you can use this to access the document model, but not in the other hooks.
There are three kinds of middleware functions, all of which have pre and post stages.
In document middleware functions, this refers to the document (model).
init, validate, save, remove
In query middleware functions, this refers to the query.
count, find, findOne, findOneAndRemove, findOneAndUpdate, update
In aggregate middleware, this refers to the aggregation object.
aggregate
It's explained here https://mongoosejs.com/docs/middleware.html#types-of-middleware
Therefore you can simply access the document during pre("init"), post("init"), pre("validate"), post("validate"), pre("save"), post("save"), pre("remove"), and post("remove"), but not in any of the others.
I've seen examples of people doing more queries within the other middleware hooks, to find the model again, but that sounds pretty dangerous to me.
The short answer seems to be that you need to change the original operation to be document-oriented, not query- or aggregate-style. It does seem like an odd limitation.
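As a rough sketch of that document-oriented rewrite (untested; Customer and the status field are placeholder names, assuming the model is compiled from CustomerSchema after the hook is registered):
CustomerSchema.pre("save", function(next) {
  // `this` is the document here, so changed fields can be inspected directly
  if (this.isModified("status")) {
    console.log("status changed to", this.status);
  }
  next();
});

// instead of Customer.findOneAndUpdate(filter, update):
async function updateStatus(id, newStatus) {
  const doc = await Customer.findOne({ _id: id });
  doc.status = newStatus;
  return doc.save(); // runs the document middleware above
}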
As per the documentation, your pre hook cannot get the document itself, but it can access the query, as follows:
schema.pre('findOneAndUpdate', async function() {
  const docToUpdate = await this.model.findOne(this.getQuery());
  console.log(docToUpdate); // The document that findOneAndUpdate() will modify
});
If you really want to access the document (or its id) in query middleware functions:
UserSchema.pre<User>(/^(updateOne|save|findOneAndUpdate)/, async function (next) {
  const user: any = this
  if (!user.password) {
    const userID = user._conditions?._id
    const foundUser = await user.model.findById(userID)
    ...
  }
})
If someone needs a function that hashes the password when the user's password changes:
UserSchema.pre<User>(/^(updateOne|save|findOneAndUpdate)/, async function (next) {
  const user: any = this
  if (user.password) {
    if (user.isModified('password')) {
      user.password = await getHashedPassword(user.password)
    }
    return next()
  }
  const { password } = user.getUpdate()?.$set
  if (password) {
    user._update.password = await getHashedPassword(password)
  }
  next()
})
user.password exists when "save" is the trigger
user.getUpdate() will return the props that change in "update" triggers
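A brief usage sketch (the User model name and the field values here are assumptions for illustration; run inside an async context) showing both paths into the hook above:
async function demo() {
  const u = new User({ email: 'someone@example.com', password: 'plainText' })
  await u.save() // "save" trigger: user.password is set and isModified('password') is true

  await User.findOneAndUpdate(
    { email: 'someone@example.com' },
    { $set: { password: 'newPlainText' } } // query trigger: read via getUpdate()?.$set
  )
}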

Mongoose - populate return _id only instead of a Object [duplicate]

In Mongoose, I can use a query populate to populate additional fields after a query. I can also populate multiple paths, such as
Person.find({})
.populate('books movie', 'title pages director')
.exec()
However, this would generate a lookup on book gathering the fields for title, pages and director - and also a lookup on movie gathering the fields for title, pages and director as well. What I want is to get title and pages from books only, and director from movie. I could do something like this:
Person.find({})
.populate('books', 'title pages')
.populate('movie', 'director')
.exec()
which gives me the expected result and queries.
But is there any way to get the behavior of the second snippet using a similar "single line" syntax like the first snippet? The reason for that is that I want to programmatically determine the arguments for the populate function and feed them in. I cannot do that with multiple populate calls.
After looking into the source code of Mongoose, I solved this with:
var populateQuery = [{path: 'books', select: 'title pages'}, {path: 'movie', select: 'director'}];
Person.find({})
  .populate(populateQuery)
  .exec()
you can also do something like below:
{path:'user',select:['key1','key2']}
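Since the original motivation was to determine the populate arguments programmatically, here is a small sketch (the populateConfig object and its fields are just illustrative assumptions) that derives the array from a config object:
var populateConfig = {
  books: 'title pages',
  movie: 'director'
};

var populateQuery = Object.keys(populateConfig).map(function (path) {
  return { path: path, select: populateConfig[path] };
});

Person.find({}).populate(populateQuery).exec(function (err, people) {
  // each person now has books (title, pages) and movie (director) populated
});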
You can achieve that by simply passing an object or an array of objects to the populate() method.
const query = [
  {
    path: 'books',
    select: 'title pages'
  },
  {
    path: 'movie',
    select: 'director'
  }
];
const result = await Person.find().populate(query).lean();
Note that the lean() method is optional; it just returns plain JSON rather than Mongoose documents and makes execution a little faster. Don't forget to make the enclosing function async!
This is how it's done based on the Mongoose JS documentation http://mongoosejs.com/docs/populate.html
Let's say you have a BookCollection schema which contains users and books
In order to perform a query and get all the BookCollections with its related users and books you would do this
models.BookCollection
  .find({})
  .populate('user')
  .populate('books')
  .lean()
  .exec(function (err, bookcollection) {
    if (err) return console.error(err);
    try {
      mongoose.connection.close();
      res.render('viewbookcollection', { content: bookcollection });
    } catch (e) {
      console.log("error getting bookcollection" + e);
    }
  });
// Your schema must include the path
let createdData = await Person.create(dataYouWant)
await createdData.populate([{ path: 'books', select: 'title pages' }, { path: 'movie', select: 'director' }])

Meteor: Increment DB value server side when client views page

I'm trying to do something seemingly simple: update a views counter in MongoDB every time the value is fetched.
For example, I've tried it with this method:
Meteor.methods({
  'messages.get'(messageId) {
    check(messageId, String);
    if (Meteor.isServer) {
      var message = Messages.findOne(
        {_id: messageId}
      );
      var views = message.views;
      // Increment views value
      Messages.update(
        messageId,
        { $set: { views: views++ }}
      );
    }
    return Messages.findOne(
      {_id: messageId}
    );
  },
});
But I can't get it to work the way I intend. For example the if(Meteor.isServer) code is useless because it's not actually executed on the server.
Also the value doesn't seem to be available after findOne is called, so it's likely async but findOne has no callback feature.
I don't want clients to control this part, which is why I'm trying to do it server side, but it needs to execute everytime the client fetches the value. Which sounds hard since the client has subscribed to the data already.
Edit: This is the updated method after reading the answers here.
'messages.get'(messageId) {
  check(messageId, String);
  Messages.update(
    messageId,
    { $inc: { views: 1 }}
  );
  return Messages.findOne(
    {_id: messageId}
  );
},
For example the if(Meteor.isServer) code is useless because it's not
actually executed on the server.
Meteor methods are always executed on the server. You can call them from the client (with callback) but the execution happens server side.
Also the value doesn't seem to be available after findOne is called,
so it's likely async but findOne has no callback feature.
You don't need to call it twice. See the code below:
Meteor.methods({
  'messages.get'(messageId) {
    check(messageId, String);
    var message = Messages.findOne({_id: messageId});
    if (message) {
      // Increment views value on current doc
      message.views++;
      // Update by current doc
      Messages.update(messageId, { $set: { views: message.views }});
    }
    // return current doc or null if not found
    return message;
  },
});
You can call that by your client like:
Meteor.call('messages.get', 'myMessageId01234', function(err, res) {
  if (err || !res) {
    // handle err; if res is empty, there is no message found
  }
  console.log(res); // your message
});
Two additions here:
You may want to split messages and views into separate collections for the sake of scalability and encapsulation of data. If your publication method does not restrict the published fields, then the client who asks for messages also receives the view count. This may work for now, but on a larger scale it may violate some (future) access rules.
views++ means:
use the current value of views, i.e. build the modifier with the current (unmodified) value, and then
increment the variable views, which is no longer useful in your case because you do not use that variable for anything else afterwards.
Avoid the post-increment operator if you are not clear on how exactly it works.
Why not just use Mongo's $inc operator, which avoids having to retrieve the previous value in the first place?

Subscribing to Meteor.Users Collection

// in server.js
Meteor.publish("directory", function () {
return Meteor.users.find({}, {fields: {emails: 1, profile: 1}});
});
// in client.js
Meteor.subscribe("directory");
I now want to query the directory listings from the client, e.g. directory.findOne() from the browser's console. // Testing purposes
Doing directory = Meteor.subscribe('directory') or directory = Meteor.Collection('directory') and then performing directory.findOne() doesn't work. When I do directory = new Meteor.Collection('directory'), it runs but findOne() returns undefined, and I bet it CREATES a Mongo collection on the server, which I don't like because the users collection already exists and this handle points to a new collection rather than to the users collection.
NOTE: I don't want to mess with how the Meteor.users collection normally behaves... I just want to retrieve some specific data from it through a different handle that returns only the specified fields, without overriding its default behavior...
Ex:
Meteor.users.findOne() // will return the currentLoggedIn users data
directory.findOne() // will return different fields taken from Meteor.users collection.
If you want this setup to work, you need to do the following:
Meteor.publish('thisNameDoesNotMatter', function () {
  var self = this;
  var handle = Meteor.users.find({}, {
    fields: {emails: 1, profile: 1}
  }).observeChanges({
    added: function (id, fields) {
      self.added('thisNameMatters', id, fields);
    },
    changed: function (id, fields) {
      self.changed('thisNameMatters', id, fields);
    },
    removed: function (id) {
      self.removed('thisNameMatters', id);
    }
  });
  self.ready();
  self.onStop(function () {
    handle.stop();
  });
});
Now on the client side you need to define a client-side-only collection:
directories = new Meteor.Collection('thisNameMatters');
and subscribe to the corresponding data set:
Meteor.subscribe('thisNameDoesNotMatter');
This should work now. Let me know if you think this explanation is not clear enough.
EDIT
Here, the self.added/changed/removed methods act more or less as an event dispatcher. Briefly speaking they give instructions to every client who called
Meteor.subscribe('thisNameDoesNotMatter');
about the updates that should be applied on the client's collection named thisNameMatters assuming that this collection exists. The name - passed as the first parameter - can be chosen almost arbitrarily, but if there's no corresponding collection on the client side all the updates will be ignored. Note that this collection can be client-side-only, so it does not necessarily have to correspond to a "real" collection in your database.
Returning a cursor from your publish method is only a shortcut for the above code, with the only difference being that the name of the actual collection is used instead of our thisNameMatters. This mechanism actually allows you to create as many "mirrors" of your datasets as you wish. In some situations this might be quite useful. The only problem is that these "collections" will be read-only (which totally makes sense, BTW), because if they're not defined on the server, the corresponding insert/update/remove methods do not exist.
The collection is called Meteor.users and there is no need to declare a new one on neither the server nor the client.
Your publish/subscribe code is correct:
// in server.js
Meteor.publish("directory", function () {
return Meteor.users.find({}, {fields: {emails: 1, profile: 1}});
});
// in client.js
Meteor.subscribe("directory");
To access documents in the users collection that have been published by the server you need to do something like this:
var usersArray = Meteor.users.find().fetch();
or
var oneUser = Meteor.users.findOne();
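Putting the two answers together, a small untested sketch that waits for the subscription to be ready before querying, reusing the names from the question:
Meteor.subscribe('directory', {
  onReady: function () {
    var oneUser = Meteor.users.findOne();          // any published user
    var usersArray = Meteor.users.find().fetch();  // all published users
    console.log(oneUser, usersArray);
  }
});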

Mongoose - update after populate (Cast Exception)

I am not able to update my mongoose schema because of a CastError, which makes sense, but I don't know how to solve it.
Trip Schema:
var TripSchema = new Schema({
  name: String,
  _users: [{type: Schema.Types.ObjectId, ref: 'User'}]
});
User Schema:
var UserSchema = new Schema({
  name: String,
  email: String,
});
In my HTML page I render a trip with the possibility to add new users to it. I retrieve the data by calling the findById method on the model:
exports.readById = function (request, result) {
  Trip.findById(request.params.tripId).populate('_users').exec(function (error, trip) {
    if (error) {
      console.log('error getting trips');
    } else {
      console.log('found single trip: ' + trip);
      result.json(trip);
    }
  });
};
This works fine. In my UI I can add new users to the trip; here is the code:
var user = new UserService();
user.email = $scope.newMail;
user.$save(function(response){
  trip._users.push(user._id);
  trip.$update(function (response) {
    console.log('OK - user ' + user.email + ' was linked to trip ' + trip.name);
    // call for the updated document in database
    this.readOne();
  });
});
The problem is that when I update, the existing users in the trip are populated, meaning they are stored as objects rather than ids on the trip, while the new user is stored as an ObjectId.
How can I make sure the populated users go back to ObjectIds before I update? Otherwise the update will fail with a CastError.
see here for error
I've been searching around for a graceful way to handle this without finding a satisfactory solution, or at least one I feel confident is what the mongoosejs folks had in mind when using populate. Nonetheless, here's the route I took:
First, I tried to separate adding to the list from saving. So in your example, move trip._users.push(user._id); out of the $save function. I put actions like this on the client side of things, since I want the UI to show the changes before I persist them.
Second, when adding the user, I kept working with the populated model -- that is, I don't push(user._id) but instead add the full user: push(user). This keeps the _users list consistent, since the ids of other users have already been replaced with their corresponding objects during population.
So now you should be working with a consistent list of populated users. In the server code, just before calling $update, I replace trip._users with a list of ObjectIds. In other words, "un-populate" _users:
var user_ids = [];
for (var i in trip._users) {
  /* it might be a good idea to do more validation here if you like, to make
   * sure you don't have any naked userIds in this array already, as you would
   * in your original code. */
  user_ids.push(trip._users[i]._id);
}
trip._users = user_ids;
trip.$update(....
As I read through your example code again, it looks like the user you are adding to the trip might be a new user? I'm not sure if that's just a relic of your simplification for question purposes, but if not, you'll need to save the user first so mongo can assign an ObjectId before you can save the trip.
I have written a function which accepts an array and, in its callback, returns an array of ObjectIds. To do it asynchronously in Node.js, I am using async.js. The function looks like this:
const async = require('async');

let converter = function(array, callback) {
  let idArray = [];
  async.each(array, function(item, itemCallback) {
    idArray.push(item._id);
    itemCallback();
  }, function(err) {
    callback(idArray);
  });
};
This works fine for me, and I hope it works for you as well.
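As an alternative sketch (not part of the answer above): since extracting the _id values is synchronous, a plain Array.prototype.map achieves the same without async.js:
var userIds = trip._users.map(function (u) {
  return u._id ? u._id : u; // tolerate entries that are already plain ObjectIds
});
trip._users = userIds;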