Result from "aggregate with unwind" is different from the "find with count"? - mongodb

Here are a few documents from my collection:
{"make":"Lenovo", "model":"Thinkpad T430"},
{"make":"Lenovo", "model":"Thinkpad T430", "problems":["Battery"]},
{"make":"Lenovo", "model":"Thinkpad T430", "problems":["Battery","Brakes"]}
As you can see, some documents have no problems, some have only one problem, and some have several problems in a list.
I want to calculate how many reviews have a specific problem (like "Battery") in the problems list.
I have tried to use the following aggregate command:
db.reviews.aggregate([
  { $match : { model : "Thinkpad T430" } },
  { $unwind : "$problems" },
  { $group: {
      _id: '$problems',
      count: { $sum: 1 }
  }}
])
For the "Battery" problem the count was 382. I also decided to double-check this result with find() and count():
db.reviews.find({model:"Thinkpad T430",problems:"Battery"}).count()
Result was 362.
Why do I have this difference? And what is the right way to calculate it?

You likely have documents in the collection where problems contains more than one "Battery" string in the array.
When you $unwind, each of those array entries becomes its own document, so the subsequent $group stage counts them separately.
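If that is the case, one way to count documents rather than array entries is to match on the array value and count, or to deduplicate the array before unwinding. A rough sketch (assuming the collection is named reviews, as in your find() call):

// Count documents (not array entries) that mention "Battery" at least once
db.reviews.aggregate([
  { $match: { model: "Thinkpad T430", problems: "Battery" } },
  { $group: { _id: null, count: { $sum: 1 } } }
])

// Or keep the per-problem breakdown, but count each document only once per problem
// by reducing the array to its distinct values before $unwind:
db.reviews.aggregate([
  { $match: { model: "Thinkpad T430" } },
  { $project: { problems: { $setUnion: [ { $ifNull: ["$problems", []] }, [] ] } } },
  { $unwind: "$problems" },
  { $group: { _id: "$problems", count: { $sum: 1 } } }
])

The deduplicated pipeline should then agree with the find().count() result for "Battery".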

How to retrieve the total cases in my data set on a MongoDB query?

I want the total number of cases across all my documents.
This is the query I tried to use:
db.coviddatajson.aggregate([
{ $group: { _id: null, total: { $sum: "$total_cases"} } }
])
For some reason the result is 0, which does not make sense; it should be at least in the thousands.
This is the dataset I am using:
https://covid.ourworldindata.org/data/owid-covid-data.json
What am I doing wrong here?
Any ideas on how to fix this query?
The total_cases field is inside the data array, and $sum in a $group stage needs a numeric field, so we first need to compute the per-document total of data.total_cases (with $sum in a $project stage) and then pass that to the $group stage to sum across all documents:
db.coviddatajson.aggregate([
  {
    $project: { total_cases: { $sum: "$data.total_cases" } }
  },
  {
    $group: {
      _id: null,
      total: { $sum: "$total_cases" }
    }
  }
])
The data set has some issues.
The document is bigger than 16 MiB, and you cannot load documents larger than 16 MiB into MongoDB; this is an internal limitation. You would need to split the document into sub-documents.
The document contains data for each country but also summarized data for "World". Do you have to exclude the "World" data, or can you use it instead of summing manually?
The data is not consistent. For example, some countries do not provide the number of male/female smokers or the median age, and not all countries provide all data for every date, so you may have missing values. How will you deal with them?
Do you just want a simple sum of all total_cases? If so, the query would be easy, but the result would be pointless (15,773,189,214 total cases, twice the population of the world).
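If you do want that raw sum while skipping the pre-summarized "World" entry, a sketch might look like the following (this assumes each country was loaded as its own document with a location field, as in the data set above):

db.coviddatajson.aggregate([
  // skip the pre-summarized "World" document (assumed to carry location: "World")
  { $match: { location: { $ne: "World" } } },
  // per-document sum over the data array
  { $project: { total_cases: { $sum: "$data.total_cases" } } },
  // sum across all countries
  { $group: { _id: null, total: { $sum: "$total_cases" } } }
])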

Getting the latest xx records with Mongoose: how to order them?

I'm trying to get the last 20 records of the user collection with Mongoose:
User.find({'owner': req.params.id})
  .sort({date: -1})
  .limit(20)
  .exec(.....)
This works well and shows the last 20 items.
But the items inside the returned array are sorted from the most recent to the oldest. Is there any way to reverse this with Mongoose?
Thanks
You can certainly do this with an aggregation, such as this:
db.user.aggregate([
{ $match : {"owner" : req.params.id}},
{ $sort : {"date" : -1}},
{ $limit : 20},
{ $sort : {"date" : 1}}
])
Notes on this aggregation:
The first three parts do the same job as the Find in your question
The fourth part applies a further sort, which re-orders the returned 20 records from oldest to most recent
I have written it in native MongoDB aggregation syntax; you will need to adjust the code to generate the same aggregation from Mongoose.
Update: I think this is not possible with find() and cursor methods, because you would need two different sort() operations and MongoDB does not treat cursor methods as a sequence of independent operations. The docs give an example where sort().limit() is equivalent to limit().sort(), which shows that the order of the methods cannot be relied upon.
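For reference, a rough Mongoose equivalent of the aggregation above might look like this (a sketch only; Model.aggregate() does not cast types, so you may need to convert req.params.id yourself):

User.aggregate([
  // note: unlike find(), aggregate() does not cast strings to ObjectId,
  // so you may need mongoose.Types.ObjectId(req.params.id) here
  { $match: { owner: req.params.id } },
  { $sort: { date: -1 } },
  { $limit: 20 },
  { $sort: { date: 1 } }
]).exec(function (err, docs) {
  // docs holds the 20 most recent records, ordered oldest to newest
});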
Find the total count and select only the latest 20. This may not be the most efficient way, but it will solve your problem.
User.count({'owner': req.params.id}, function(err, count){
  if(count){
    var skipItem = count - 20;
    User.find({'owner': req.params.id})
      .skip(skipItem)
      .limit(20)
      .sort({date: 1})
      .exec(.....)
  }
});
db.users.aggregate([
  { $match: {
    'owner': req.params.id
  }},
  { $unwind: '$arrayFieldName' },
  { $sort: {
    'arrayFieldName': -1, // or 1
    'date': -1
  }}
])

Meteor collection get last document of each selection

Currently I use the following find query to get the latest document for a certain ID:
Conditions.find({
caveId: caveId
},
{
sort: {diveDate:-1},
limit: 1,
fields: {caveId: 1, "visibility.visibility":1, diveDate: 1}
});
How can I do the same for multiple ids, with $in for example?
I tried the following query, but the problem is that it limits the result to 1 document across all the matched caveIds, whereas the limit should apply to each caveId separately.
Conditions.find({
caveId: {$in: caveIds}
},
{
sort: {diveDate:-1},
limit: 1,
fields: {caveId: 1, "visibility.visibility":1, diveDate: 1}
});
One solution I came up with is using the aggregate functionality.
var conditionIds = Conditions.aggregate(
[
{"$match": { caveId: {"$in": caveIds}}},
{
$group:
{
_id: "$caveId",
conditionId: {$last: "$_id"},
diveDate: { $last: "$diveDate" }
}
}
]
).map(function(child) { return child.conditionId});
var conditions = Conditions.find({
_id: {$in: conditionIds}
},
{
fields: {caveId: 1, "visibility.visibility":1, diveDate: 1}
});
You don't want to use $in here as noted. You could solve this problem by looping through the caveIds and running the query on each caveId individually.
You're basically looking at a join here: you need all the caveIds and then a lookup of the last dive for each.
In my opinion (and it is only an opinion!), this is a question of database schema/denormalization:
You could, as mentioned above, look up all the caveIds and then run a single query for each, every time you need the last dives.
However, I think you are much better off recording/updating the last dive inside your cave document, and then looking up all the caveIds of interest, pulling only the lastDive field.
That will give you what you need immediately, rather than going through expensive search/sort queries, at the expense of maintaining that field in the document. It sounds like that should be fairly trivial, as you only need to update the one field when a new event occurs.
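As a sketch of that idea (the Caves collection and lastDive field names are assumptions, not part of your existing schema):

// When a new condition is recorded, also update the cave's denormalized lastDive field
Caves.update(
  { _id: caveId },
  { $set: { lastDive: { conditionId: newConditionId, diveDate: newDiveDate } } }
);

// Reading the latest dive for many caves is then a single indexed find
Caves.find(
  { _id: { $in: caveIds } },
  { fields: { lastDive: 1 } }
);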

MongoDB - Aggregation Framework (Total Count)

When running a normal "find" query on MongoDB I can get the total result count (regardless of limit) by running "count" on the returned cursor. So, even if I limit to result set to 10 (for example) I can still know that the total number of results was 53 (again, for example).
If I understand it correctly, the aggregation framework, however, doesn't return a cursor but simply the results. And so, if I used the $limit pipeline operator, how can I know the total number of results regardless of said limit?
I guess I could run the aggregation twice (once to count the results via $group, and once with $limit for the actual limited results), but this seems inefficient.
An alternative approach could be to attach the total number of results to the documents (via $group) prior to the $limit operation, but this also seems inefficient as this number will be attached to every document (instead of just returned once for the set).
Am I missing something here? Any ideas? Thanks!
For example, if this is the query:
db.article.aggregate(
{ $group : {
_id : "$author",
posts : { $sum : 1 }
}},
{ $sort : { posts: -1 } },
{ $limit : 5 }
);
How would I know how many results are available (before $limit)? The result isn't a cursor, so I can't just run count on it.
There is a solution using push and slice: https://stackoverflow.com/a/39784851/4752635 (#emaniacs mentions it here as well).
But I prefer using two queries. The solution that pushes $$ROOT and uses $slice runs into the 16MB document size limit for large collections, and for large collections the two queries together seem to run faster than the single query that pushes $$ROOT. You can also run them in parallel, so you are limited only by the slower of the two queries (probably the one which sorts).
The first query filters and then groups to get the number of filtered elements; there is no need to sort here.
The second query filters, sorts and paginates.
I have settled on this solution using two queries and the aggregation framework (note: I use Node.js in this example):
var aggregation = [
{
// If you can match fields at the beginning, match as many as possible, as early as possible.
$match: {...}
},
{
// Projection.
$project: {...}
},
{
// Some things you can match only after projection or grouping, so do it now.
$match: {...}
}
];
// Copy the filtering stages of the pipeline - this part is shared by the counting query and the pagination query.
var aggregationPaginated = aggregation.slice(0);
// Count filtered elements.
aggregation.push(
{
$group: {
_id: null,
count: { $sum: 1 }
}
}
);
// Sort in pagination query.
aggregationPaginated.push(
{
$sort: sorting
}
);
// Paginate.
aggregationPaginated.push(
{
$limit: skip + length
},
{
$skip: skip
}
);
// I use mongoose.
// Get total count.
model.count(function(errCount, totalCount) {
// Count filtered.
model.aggregate(aggregation)
.allowDiskUse(true)
.exec(
function(errFind, documents) {
if (errFind) {
// Errors.
res.status(503);
return res.json({
'success': false,
'response': 'err_counting'
});
}
else {
// Number of filtered elements.
var numFiltered = documents[0].count;
// Filter, sort and paginate.
model.aggregate(aggregationPaginated)
.allowDiskUse(true)
.exec(
function(errFindP, documentsP) {
if (errFindP) {
// Errors.
res.status(503);
return res.json({
'success': false,
'response': 'err_pagination'
});
}
else {
return res.json({
'success': true,
'recordsTotal': totalCount,
'recordsFiltered': numFiltered,
'response': documentsP
});
}
});
}
});
});
Assaf, there are going to be some enhancements to the aggregation framework in the near future that may allow you to do your calculations in one pass, but right now it is best to run two queries in parallel: one to aggregate the number of posts for your top authors, and another to calculate the total posts for all authors. Also note that if all you need is a count of documents, the count function is a very efficient way of getting it; MongoDB caches counts within B-tree indexes, allowing very quick counts on queries.
If these aggregations turn out to be slow, there are a couple of strategies. First, keep in mind that you want to start the pipeline with a $match, if applicable, to reduce the result set; a $match can also be sped up by indexes. Second, you can perform these calculations as pre-aggregations. Instead of possibly running these aggregations every time a user accesses some part of your app, run them periodically in the background and store the results in a collection of pre-aggregated values. Your pages can then simply query the pre-calculated values from this collection.
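A sketch of the pre-aggregation idea (the authorPostCounts collection name is an assumption; $out replaces the target collection with the pipeline's output):

// Run periodically in the background (e.g. from a scheduled job)
db.article.aggregate([
  { $group: { _id: "$author", posts: { $sum: 1 } } },
  { $out: "authorPostCounts" }
]);

// Page requests then read the pre-calculated values directly
db.authorPostCounts.find().sort({ posts: -1 }).limit(5);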
The $facet aggregation stage can be used with MongoDB versions >= 3.4.
It lets you fork the pipeline at a particular stage into multiple sub-pipelines: in this case, one sub-pipeline counts the documents and another sorts, skips and limits.
This avoids running the same stages multiple times across multiple requests.
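As a sketch for the example in the question (assuming MongoDB 3.4+ for $facet and $count):

db.article.aggregate([
  { $group: { _id: "$author", posts: { $sum: 1 } } },
  { $facet: {
      // one sub-pipeline for the page of results
      results: [
        { $sort: { posts: -1 } },
        { $limit: 5 }
      ],
      // another sub-pipeline for the total number of authors before $limit
      totalCount: [
        { $count: "count" }
      ]
  }}
]);
// Returns a single document shaped like:
// { results: [ /* up to 5 docs */ ], totalCount: [ { count: <number of authors> } ] }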
If you don't want to run two queries in parallel (one to aggregate the number of posts for your top authors, and another to calculate the total posts for all authors), you can just remove the $limit from the pipeline and work on the full result set:
totalCount = results.length;
results.slice(yourSkip, yourSkip + yourLimit);
ex:
db.article.aggregate([
{ $group : {
_id : "$author",
posts : { $sum : 1 }
}},
{ $sort : { posts: -1 } }
//{$skip : yourSkip}, //--remove this
//{ $limit : yourLimit }, // remove this too
]).exec(function(err, results){
  var totalCount = results.length; // get the total count here
  var page = results.slice(yourSkip, yourSkip + yourLimit);
});
I had the same problem and solved it with $project, $slice and $$ROOT.
db.article.aggregate([
  { $group : {
    _id : '$author',
    posts : { $sum : 1 },
    articles: { $push: '$$ROOT' }
  }},
  { $sort : { posts: -1 } },
  { $project: { total: '$posts', articles: { $slice: ['$articles', from, to] } } }
]).toArray(function(err, result){
  var articles = result[0].articles;
  var total = result[0].total;
});
You need to declare the from and to variables.
https://docs.mongodb.com/manual/reference/operator/aggregation/slice/
In my case, we use an $out stage to dump the result set from the aggregation into a temporary/cache collection and then count it. Since we need to sort and paginate the results, we add an index on the temporary collection, save its name in the session, and drop the collection when the session closes or the cache times out.
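A rough sketch of that approach (the temporary collection name is a placeholder, and the pipeline stands in for whatever aggregation you actually run):

// Dump the aggregation result into a temporary collection
db.article.aggregate([
  { $group: { _id: "$author", posts: { $sum: 1 } } },
  { $out: "tmp_author_counts" } // store this name in the session and drop the collection later
]);

// Cheap count plus indexed sort/pagination against the temporary collection
db.tmp_author_counts.createIndex({ posts: -1 });
db.tmp_author_counts.count();
db.tmp_author_counts.find().sort({ posts: -1 }).skip(20).limit(20);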
I get the total count with aggregate().toArray().length.

Mongo: Ensuring latest nested attribute has a value between given arguments

I have a mongo collection 'books'. Here's a typical book:
BOOK
name: 'Test Book'
author: 'Joe Bloggs'
print_runs: [
{publisher: 'OUP', year: 1981},
{publisher: 'Penguin', year: 1987},
{publisher: 'Harper-Collins', year: 1992}
]
I'd like to be able to filter books to return only books whose last print run was after a given date, and/or before a given date...and I've been struggling to find a feasible query. Any suggestions appreciated.
There are a few options, as getting at the "last" element in the array and filtering on only that is difficult/impossible with the normal find options in MongoDB queries. (Unfortunately, you can't filter on a $slice in a find.)
Store the most recent print run's publisher and year both in the print_runs array and in a denormalized copy directly on the book object, e.g. Book.last_published_by and Book.last_published_date. Queries would be simple and super fast.
MapReduce. This would be simple enough to emit the last element in the array and then "reduce" it to just that. You'd need to do incremental updates on the MapReduce to keep it accurate.
Write a relatively complex aggregation framework expression
The aggregation might look like:
db.so.aggregate([
  { $project : { _id: 1, "print_run_year" : "$print_runs.year" } },
  { $unwind: "$print_run_year" },
  { $group : { _id : "$_id", "newest" : { $max : "$print_run_year" } } },
  { $match : { "newest" : { $gt : 1991, $lt: 2000 } } }
])
As it may require a bit of explanation:
It projects and unwinds the year of the print runs for each book.
Then, it groups on the _id of the book and creates a computed field called newest, which contains the highest print run year (from the projection).
Finally, it filters on newest using $gt and $lt.
I'd suggest option #1 above would be the best from an efficiency perspective, followed by the MapReduce, and then a distant third, option #3.
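A sketch of option #1 (the field name last_published_year and the example values are assumptions, not part of the existing schema):

// When a new print run is added, also update the denormalized field
db.books.update(
  { name: 'Test Book' },
  {
    $push: { print_runs: { publisher: 'Vintage', year: 2001 } },
    $set: { last_published_year: 2001 }
  }
);

// Filtering on the latest run then becomes a simple, indexable query
db.books.find({ last_published_year: { $gt: 1991, $lt: 2000 } });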