MongoDB $in operator in a find query

I have a query as mentioned below.
var projstat = ['A', 'B'];
Post.native(function(err, collection) {
  if (err)
    console.log(err);
  collection.find({
    'status': {
      "$in": projstat
    }
  }, {multi: true}, function(err, result) {
    console.log(result);
    if (req.isSocket) {
      return res.json(result);
    }
  });
});
Please correct me if I am wrong, as it does not return any results. Please help.

You're not using the native find correctly; rather than using a callback as an argument (like Waterline does), you chain the call to toArray and use the callback as its argument:
collection.find({
  'status': {
    "$in": projstat
  }
}).toArray(function(err, results) {...});
Docs for the native Mongo driver are here.
However, the more important point in this case is that you don't need native at all. You can use the regular Waterline find, which automatically does an $in query when an attribute is set to an array:
Post.find({status: projstat}).exec(function(err, results) {...});
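For reference, a minimal sketch of the corrected native call in the context of the original handler (the variable names, the req.isSocket check, and the simple error logging are carried over from the question):
var projstat = ['A', 'B'];
Post.native(function(err, collection) {
  if (err) return console.log(err);
  // Chain toArray to receive the matching documents as an array
  collection.find({ 'status': { $in: projstat } }).toArray(function(err, results) {
    if (err) return console.log(err);
    if (req.isSocket) {
      return res.json(results);
    }
  });
});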

Related

Why should $search be the first pipeline stage in MongoDB?

This is my code, which searches the whole collection and returns the documents whose name field is either Dexter, Prison Break, or Breaking Bad.
Why does $search have to be the first stage? Otherwise I get an error. Also, I read in the MongoDB docs that "The $match stage that includes a $text must be the first stage in the pipeline."
Here is the code:
app.get('/', (req, res) => {
  db.collection('subs')
    .aggregate([
      { $match: { $text: { $search: 'honey' } } },
      { $match: { name: { $in: ['Dexter', 'Prison Break', 'Breaking Bad'] } } },
    ])
    .toArray((err, result) => {
      if (err) {
        throw err;
      }
      res.json({
        length: result.length,
        body: { result },
      });
    });
});
I would have supposed that the second $match, which filters the documents by name, should come first in order to reduce the work and get a quick result: MongoDB would then not have to search the whole collection, only the few documents that already matched.
Why is that not the case? Is there any way to optimize this?
I think it's because text search requires a special "text" index. Moving the $search stage to second place would make it impossible to use this index.
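To illustrate the point, a hedged sketch (the index and field choice below are assumptions based on the question's data): $text can only run against a text index, and that index is only usable when the $text $match is the very first stage.
// One-time setup: create the text index that $search relies on
db.collection('subs').createIndex({ name: 'text' });

// $text must be in the first $match; further filters come afterwards
db.collection('subs').aggregate([
  { $match: { $text: { $search: 'honey' } } },
  { $match: { name: { $in: ['Dexter', 'Prison Break', 'Breaking Bad'] } } },
]);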

Use lean() in Mongoose with a callback

I am trying to use lean() in my MongoDB query, but the problem I am facing is that I don't use the .exec() method. I use the callback style, like below:
model.find(
  { user_id: mobile_no },
  { '_id': 0, 'type': 1 },
  { sort: { dateTime: -1 }, skip: page * page_size, limit: page_size + 1 },
  function(err, docs) {
    if (err) {
    } else {
    }
  }
);
but in most documentation everyone uses lean() with .exec(), like below:
.lean().exec()
Can anyone please tell me how I can use lean() with my callback style, or whether I would have to switch to the .exec() style in order to use it?
You can use both .exec() and callbacks:
model.find(...).lean().exec(function(err, docs) {
...
});
See also the documentation, where one of the examples does the same.
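Applied to the original query, a sketch (keeping the asker's variables, projection, and options as-is):
model.find(
  { user_id: mobile_no },
  { '_id': 0, 'type': 1 },
  { sort: { dateTime: -1 }, skip: page * page_size, limit: page_size + 1 }
)
  .lean()   // return plain JavaScript objects instead of Mongoose documents
  .exec(function(err, docs) {
    if (err) {
      // handle the error
    } else {
      // docs are plain objects here
    }
  });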
model.find(
  { user_id: mobile_no },
  { '_id': 0, 'type': 1 },
  { sort: { dateTime: -1 }, skip: page * page_size, limit: page_size + 1 },
  function(err, docs) {
    if (err) {
    } else {
    }
  }
).lean();
You have to write .lean() at the end of the query :3
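If you would rather keep the plain callback style, Mongoose also accepts lean as a query option passed in the options object. This is only a hedged sketch; whether the option is honoured this way depends on your Mongoose version:
model.find(
  { user_id: mobile_no },
  { '_id': 0, 'type': 1 },
  // lean passed as an option (assumption: your Mongoose version applies it here)
  { lean: true, sort: { dateTime: -1 }, skip: page * page_size, limit: page_size + 1 },
  function(err, docs) {
    // docs should be plain objects when the lean option is applied
  }
);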

StrongLoop LoopBack group_by aggregation

I have searched a lot for a way to do aggregation with LoopBack and MongoDB, but unfortunately found no complete solution. One of them is here.
I can't implement this; can anyone help me solve the problem, either with a new solution or by explaining the approach in the link above?
Loopback doesn't provide a way to do an aggregation query, but you can find another solution in: https://github.com/strongloop/loopback/issues/890
// Using the datasource we make a direct request to MongoDB instead of using Loopback's PersistedModel
var bookCollection = Book.getDataSource().connector.collection(Book.modelName);
bookCollection.aggregate({
  $group: {
    _id: { category: "$category", author: "$author" },
    total: { $sum: 1 }
  }
}, function(err, groupByRecords) {
  if (err) {
    next(err);
  } else {
    // groupByRecords holds the grouped results
    next();
  }
});
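For context, a hedged sketch of how this could be wrapped in a LoopBack remote method so the grouped results are actually returned to the client. The method name and route are assumptions, and the callback-style aggregate matches older drivers; newer MongoDB drivers return a cursor instead, in which case you would call .toArray():
module.exports = function(Book) {
  Book.groupByCategory = function(cb) {
    var collection = Book.getDataSource().connector.collection(Book.modelName);
    collection.aggregate([
      { $group: { _id: { category: '$category', author: '$author' }, total: { $sum: 1 } } }
    ], function(err, groupByRecords) {
      if (err) return cb(err);
      cb(null, groupByRecords);
    });
  };

  Book.remoteMethod('groupByCategory', {
    returns: { arg: 'data', type: 'array', root: true },
    http: { verb: 'get', path: '/group-by-category' }
  });
};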

Mongo aggregation and MongoError: exception: BufBuilder attempted to grow() to 134217728 bytes, past the 64MB limit

I'm trying to aggregate data from my Mongo collection to produce some statistics for FreeCodeCamp by making a large json file of the data to use later.
I'm running into the error in the title. There doesn't seem to be a lot of information about this, and the other posts here on SO don't have an answer. I'm using the latest version of MongoDB and drivers.
I suspect there is probably a better way to run this aggregation, but it runs fine on a subset of my collection. My full collection is ~7GB.
I'm running the script via node aggScript.js > ~/Desktop/output.json
Here is the relevant code:
MongoClient.connect(secrets.db, function(err, database) {
  if (err) {
    throw err;
  }
  database.collection('user').aggregate([
    {
      $match: {
        'completedChallenges': {
          $exists: true
        }
      }
    },
    {
      $match: {
        'completedChallenges': {
          $ne: ''
        }
      }
    },
    {
      $match: {
        'completedChallenges': {
          $ne: null
        }
      }
    },
    {
      $group: {
        '_id': 1,
        'completedChallenges': {
          $addToSet: '$completedChallenges'
        }
      }
    }
  ], {
    allowDiskUse: true
  }, function(err, results) {
    if (err) { throw err; }
    var aggData = results.map(function(camper) {
      return _.flatten(camper.completedChallenges.map(function(challenges) {
        return challenges.map(function(challenge) {
          return {
            name: challenge.name,
            completedDate: challenge.completedDate,
            solution: challenge.solution
          };
        });
      }), true);
    });
    console.log(JSON.stringify(aggData));
    process.exit(0);
  });
});
Aggregate returns a single document containing all the result data, which limits how much data can be returned to the maximum BSON document size.
Assuming that you do actually want all this data, there are two options (a sketch of both follows below):
1. Use the cursor form of aggregate (aggregateCursor) instead of plain aggregate. This returns a cursor rather than a single document, which you can then iterate over.
2. Add a $out stage as the last stage of your pipeline. This tells MongoDB to write your aggregation output to the specified collection; the aggregate command itself returns no data, and you then query that collection as you would any other.
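A hedged sketch of both options with the Node driver (cursor option syntax varies by driver version; the pipeline variable and the aggregated_challenges output collection are placeholders standing in for the stages and a collection name of your choosing):
// Option 1: consume the results through a cursor instead of one huge document
var cursor = database.collection('user').aggregate(pipeline, {
  allowDiskUse: true,
  cursor: { batchSize: 100 }
});
cursor.forEach(function(doc) {
  console.log(JSON.stringify(doc));   // process each result document as it arrives
}, function(err) {
  if (err) throw err;
  process.exit(0);
});

// Option 2: write the results to a collection with $out, then query it normally
database.collection('user').aggregate(
  pipeline.concat([{ $out: 'aggregated_challenges' }]),
  { allowDiskUse: true },
  function(err) {
    if (err) throw err;
    database.collection('aggregated_challenges').find({}).toArray(function(err, docs) {
      // work with docs here
    });
  }
);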
It just means that the result object you are building became too large. This kind of issue should not be affected by the version; the fix implemented for 2.5.0 only prevents the crash from occurring.
You need to filter ($match) properly so that only the data you actually need ends up in the result, and group on the proper fields. The results are put into a 64MB buffer, so reduce your data: $project only the fields you require in the result, not whole documents.
You can also combine your three $match stages into a single one to reduce the number of pipeline stages.
{
  $match: {
    'completedChallenges': {
      $exists: true,
      $nin: [null, '']
    }
  }
}
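Putting both suggestions together, a sketch of a slimmer pipeline (field names are taken from the question; the $project stage is an illustration of the advice above, not part of the original answer):
database.collection('user').aggregate([
  {
    $match: {
      completedChallenges: { $exists: true, $nin: [null, ''] }
    }
  },
  {
    // Keep only the field that is actually needed downstream
    $project: { _id: 0, completedChallenges: 1 }
  },
  {
    $group: {
      _id: 1,
      completedChallenges: { $addToSet: '$completedChallenges' }
    }
  }
], {
  allowDiskUse: true
}, function(err, results) {
  // handle err / results as in the original script
});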
I had this issue and I couldn't debug the problem so I ended up abandoning the aggregation approach. Instead I just iterated through each entry and created a new collection. Here's a stripped down shell script which might help you see what I mean:
db.new_collection.ensureIndex({my_key:1}); //for performance, not a necessity
db.old_collection.find({}).noCursorTimeout().forEach(function(doc) {
  db.new_collection.update(
    { my_key: doc.my_key },
    {
      $push: { stuff: doc.stuff, other_stuff: doc.other_stuff },
      $inc: { thing: doc.thing },
    },
    { upsert: true }
  );
});
I don't imagine that this approach would suit everyone, but hopefully that helps anyone who was in my particular situation.

How to get a string value inside a MongoDB document?

I have this document stored in MongoDB:
{
  "_id": ObjectId("4eb7642ba899edcc31000001"),
  "hash": "abcd123",
  "value": "some_text_here"
}
I am using NodeJS to access the database:
collection.findOne({ 'hash': req.param('query') },
  function(err, result) {
    console.log(result);
  });
The result of this query is the whole document; however, I need only the "value" text: "some_text_here".
How can this be done?
You can specify the fields that you are interested in (_id will always be returned, though):
collection.findOne({ 'hash': req.param('query') },
  { fields: ['value'] },
  callbackFunction);
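(A hedged side note: in newer versions of the Node driver the fields option was replaced by a projection option, roughly as sketched below; the exact form depends on your driver version.)
collection.findOne(
  { hash: req.param('query') },
  { projection: { _id: 0, value: 1 } },   // 3.x+ driver syntax (assumption about the driver in use)
  function(err, result) {
    console.log(result.value);
  }
);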
You can do it this way:
collection.findOne({ 'hash': req.param('query') },
  function(err, result) {
    console.log(result.value);
  });