Mongoose Aggregate pagination and total number [duplicate]

I am interested in optimizing a "pagination" solution I'm working on with MongoDB. My problem is straightforward. I usually limit the number of documents returned using the limit() functionality. This forces me to issue a redundant query without the limit() in order to also capture the total number of documents, so I can pass that to the client, letting them know they'll have to issue additional request(s) to retrieve the rest of the documents.
Is there a way to condense this into one query: get the total number of documents but at the same time only retrieve a subset using limit()? Or is there a different way to think about this problem than the way I am approaching it?
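For reference, a minimal sketch of the two-query pattern described above, using the Node.js driver (the collection name and filter are placeholders):
const query = { category: 'books' }; // hypothetical filter
const pageDocs = await db.collection('items').find(query).skip(20).limit(10).toArray();
const total = await db.collection('items').countDocuments(query); // the redundant second round trip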

MongoDB 3.4 introduced the $facet aggregation stage, which processes multiple aggregation pipelines within a single stage on the same set of input documents.
Using $facet and $group you can fetch a page of documents with $limit and get the total count at the same time.
You can use the below aggregation in MongoDB 3.4:
db.collection.aggregate([
  { "$facet": {
    "totalData": [
      { "$match": { }},
      { "$skip": 10 },
      { "$limit": 10 }
    ],
    "totalCount": [
      { "$group": {
        "_id": null,
        "count": { "$sum": 1 }
      }}
    ]
  }}
])
You can also use the $count aggregation stage, which was introduced in MongoDB 3.6.
You can use the below aggregation in MongoDB 3.6:
db.collection.aggregate([
  { "$facet": {
    "totalData": [
      { "$match": { }},
      { "$skip": 10 },
      { "$limit": 10 }
    ],
    "totalCount": [
      { "$count": "count" }
    ]
  }}
])
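Note that $facet returns a single document whose facet fields are arrays, so the count has to be unpacked. A minimal sketch, assuming the Node.js driver and the 3.6 pipeline above:
const [result] = await collection.aggregate(pipeline).toArray(); // pipeline = the $facet pipeline above
const data = result.totalData;
const total = result.totalCount[0]?.count ?? 0; // totalCount is an empty array when nothing matches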

No, there is no other way. Two queries - one for the count, one with the limit. Or you have to use a different database. Apache Solr, for instance, works the way you want: every query there is limited and returns a totalCount.

MongoDB allows you to use cursor.count() even when you pass limit() or skip().
Let's say you have a db.collection with 10 items.
You can do:
async function getQuery() {
  let query = await db.collection.find({}).skip(5).limit(5); // cursor over the last 5 items in the db
  let countTotal = await query.count(); // returns 10 -- does not take `skip` or `limit` into consideration
  let countWithConstraints = await query.count(true); // returns 5 -- takes `skip` and `limit` into consideration
  return { query, countTotal };
}
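Note that cursor.count() is deprecated in recent Node.js drivers; the same totals can be had with countDocuments(), which accepts skip and limit as options. A sketch, same 10-item collection assumed:
const countTotal = await db.collection.countDocuments({}); // 10
const countWithConstraints = await db.collection.countDocuments({}, { skip: 5, limit: 5 }); // 5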

Here's how to do this with MongoDB 3.4+ (with Mongoose) using $facet. This example returns a $count based on the documents after they have been matched.
const facetedPipeline = [
  { "$match": { "dateCreated": { $gte: new Date('2021-01-01') } } },
  { "$project": { 'exclude.some.field': 0 } },
  {
    "$facet": {
      "data": [
        { "$skip": 10 },
        { "$limit": 10 }
      ],
      "pagination": [
        { "$count": "total" }
      ]
    }
  }
];
const results = await Model.aggregate(facetedPipeline);
This pattern is useful for getting pagination information to return from a REST API.
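For example, the single $facet result can be unpacked into the usual REST metadata. A sketch, where pageSize is an assumed constant:
const [{ data, pagination }] = results;
const total = pagination[0]?.total ?? 0; // the pagination facet is empty when nothing matches
const totalPages = Math.ceil(total / pageSize);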
Reference: MongoDB $facet

Times have changed, and I believe you can achieve what the OP is asking by using aggregation with $sort, $group and $project. For my system, I needed to also grab some user info from my users collection. Hopefully this answers any questions around that as well. Below is the aggregation pipeline; the last three stages ($sort, $group and $project) are what handle getting the total count and then providing pagination capabilities.
db.posts.aggregate([
  { $match: { public: true } },
  { $lookup: {
    from: 'users',
    localField: 'userId',
    foreignField: 'userId',
    as: 'userInfo'
  } },
  { $project: {
    postId: 1,
    title: 1,
    description: 1,
    updated: 1,
    userInfo: {
      $let: {
        vars: {
          firstUser: {
            $arrayElemAt: ['$userInfo', 0]
          }
        },
        in: {
          username: '$$firstUser.username'
        }
      }
    }
  } },
  { $sort: { updated: -1 } },
  { $group: {
    _id: null,
    postCount: { $sum: 1 },
    posts: {
      $push: '$$ROOT'
    }
  } },
  { $project: {
    _id: 0,
    postCount: 1,
    posts: {
      $slice: [
        '$posts',
        currentPage ? (currentPage - 1) * RESULTS_PER_PAGE : 0,
        RESULTS_PER_PAGE
      ]
    }
  } }
])
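Here currentPage and RESULTS_PER_PAGE are assumed to be defined in the surrounding scope, for example (hypothetical Express request values):
const RESULTS_PER_PAGE = 10;
const currentPage = parseInt(req.query.page, 10) || 1; // 1-based page number from the request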

There is a way in MongoDB 3.4: $facet.
You can do:
db.collection.aggregate([
  {
    $facet: {
      data: [{ $match: {} }],
      total: [{ $count: 'total' }] // each facet field must be a pipeline (an array of stages)
    }
  }
])
This lets you run the two aggregations at the same time.

By default, the count() method ignores the effects of cursor.skip() and cursor.limit() (MongoDB docs).
As the count method excludes the effects of limit and skip, you can use cursor.count() to get the total count:
const cursor = await database.collection(collectionName).find(query).skip(offset).limit(limit);
return {
  data: await cursor.toArray(),
  count: await cursor.count() // gives the count of all matching documents, ignoring .skip() and .limit()
};

It all depends on the pagination experience you need as to whether or not you need to do two queries.
Do you need to list every single page, or even a range of pages? Does anyone even go to page 1051 - conceptually, what does that actually mean?
There's been lots of UX work on patterns of pagination - "Avoid the pains of pagination" covers various types of pagination and their scenarios, and many don't need a count query to know if there's a next page. For example, if you display 10 items on a page and you limit the query to 13, you'll know if there's another page; a sketch of that trick follows below.
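A minimal sketch of that trick, fetching one extra document per page so no count query is needed (pageSize and page are assumed inputs):
const pageSize = 10;
const docs = await collection.find(query).skip(page * pageSize).limit(pageSize + 1).toArray();
const hasNextPage = docs.length > pageSize; // the extra document only signals that a next page exists
const items = docs.slice(0, pageSize); // return just the page itself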

MongoDB has introduced a new method for getting only the count of the documents matching a given query and it goes as follows:
const result = await db.collection('foo').count({name: 'bar'});
console.log('result:', result) // prints the matching doc count
Recipe for usage in pagination:
const query = { name: 'bar' };
const skip = (pageNo - 1) * pageSize; // assuming pageNo starts from 1
const limit = pageSize;
const [listResult, countResult] = await Promise.all([
  db.collection('foo')
    .find(query)
    .skip(skip)
    .limit(limit)
    .toArray(), // toArray() so the resolved value is the documents, not a cursor
  db.collection('foo').count(query)
]);
return {
  totalCount: countResult,
  list: listResult
};
For more details on db.collection.count visit this page

It is possible to get the total result size without the effect of limit() using count() as answered here:
Limiting results in MongoDB but still getting the full count?
According to the documentation you can even control whether limit/pagination is taken into account when calling count():
https://docs.mongodb.com/manual/reference/method/cursor.count/#cursor.count
Edit: in contrast to what is written elsewhere - the docs clearly state that "The operation does not perform the query but instead counts the results that would be returned by the query". Which - from my understanding - means that only one query is executed.
Example:
> db.createCollection("test")
{ "ok" : 1 }
> db.test.insert([{name: "first"}, {name: "second"}, {name: "third"},
{name: "forth"}, {name: "fifth"}])
BulkWriteResult({
  "writeErrors" : [ ],
  "writeConcernErrors" : [ ],
  "nInserted" : 5,
  "nUpserted" : 0,
  "nMatched" : 0,
  "nModified" : 0,
  "nRemoved" : 0,
  "upserted" : [ ]
})
> db.test.find()
{ "_id" : ObjectId("58ff00918f5e60ff211521c5"), "name" : "first" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c6"), "name" : "second" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c7"), "name" : "third" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c8"), "name" : "forth" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c9"), "name" : "fifth" }
> db.test.count()
5
> var result = db.test.find().limit(3)
> result
{ "_id" : ObjectId("58ff00918f5e60ff211521c5"), "name" : "first" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c6"), "name" : "second" }
{ "_id" : ObjectId("58ff00918f5e60ff211521c7"), "name" : "third" }
> result.count()
5 (total result size of the query without limit)
> result.count(1)
3 (result size with limit(3) taken into account)

Try as below:
cursor.count(false, function(err, total){ console.log("total", total) })

core.db.users.find(query, {}, {skip: 0, limit: 1}, function(err, cursor){
  if(err)
    return callback(err);
  cursor.toArray(function(err, items){
    if(err)
      return callback(err);
    cursor.count(false, function(err, total){
      if(err)
        return callback(err);
      console.log("cursor", total);
      callback(null, {items: items, total: total});
    });
  });
});

Thought of providing a caution while using aggregation for pagination. It's better to use two queries for this if the API is used frequently to fetch data by users. This is at least 50 times faster than getting the data using aggregation on a production server when more users are accessing the system online. Aggregation and $facet are more suited for dashboards, reports and cron jobs that are called less frequently.

We can do it using two queries.
const limit = parseInt(req.query.limit || 50, 10);
let page = parseInt(req.query.page || 0, 10);
if (page > 0) { page = page - 1; }
let doc = await req.db.collection('bookings').find().sort({ _id: -1 }).skip(page * limit).limit(limit).toArray(); // skip whole pages, not single documents
let count = await req.db.collection('bookings').find().count();
res.json({ data: [...doc], count: count });

I took the two-queries approach, and the following code has been taken straight out of a project I'm working on, using MongoDB Atlas and a full-text search index:
return new Promise(async (resolve, reject) => {
  try {
    const search = {
      $search: {
        index: 'assets',
        compound: {
          should: [{
            text: {
              query: args.phraseToSearch,
              path: [
                'title', 'note'
              ]
            }
          }]
        }
      }
    }
    const project = {
      $project: {
        _id: 0,
        id: '$_id',
        userId: 1,
        title: 1,
        note: 1,
        score: {
          $meta: 'searchScore'
        }
      }
    }
    const match = {
      $match: {
        userId: args.userId
      }
    }
    const skip = {
      $skip: args.skip
    }
    const limit = {
      $limit: args.first
    }
    const group = {
      $group: {
        _id: null,
        count: { $sum: 1 }
      }
    }
    const searchAllAssets = await Models.Assets.schema.aggregate([
      search, project, match, skip, limit
    ])
    const [ totalNumberOfAssets ] = await Models.Assets.schema.aggregate([
      search, project, match, group
    ])
    return resolve({
      searchAllAssets: searchAllAssets,
      totalNumberOfAssets: totalNumberOfAssets.count
    })
  } catch (exception) {
    return reject(new Error(exception))
  }
})

I had the same problem and came across this question. The correct solution to this problem is posted here.

You can do this with a single cursor: first run count() on it, and within that callback run limit() and toArray().
In Node.js and Express.js, you will have to use it like this to be able to use the "count" function along with the toArray's "result".
var curFind = db.collection('tasks').find({query});

Then you can run two functions after it like this (one nested in the other):

curFind.count(function (e, count) {
  // Use count here
  curFind.skip(0).limit(10).toArray(function(err, result) {
    // Use result and count here
  });
});

Related

optimize indexes in MongoDB

I have an Order collection with records looking like this:
{
  "_id": ObjectId,
  "status": String Enum,
  "products": [{
    "sku": String UUID,
    ...
  }, ...],
  ...
},
My goal is to find out which products users buy together. Given an SKU, I would like to browse past orders and find, for orders that contain more than one product AND (of course) the product with the looked-up SKU, what other products were bought along with it.
So I created an aggregation pipeline that works:
[
  // exclude cancelled orders
  {
    '$match': {
      'status': {
        '$nin': [
          'CANCELLED', 'CHECK_OUT'
        ]
      }
    }
  },
  // add fields with the products array size and just the product skus
  {
    '$addFields': {
      'size': {
        '$size': '$products'
      },
      'skus': '$products.sku'
    }
  },
  // limit to orders with 2 products or more including the looked up SKU
  {
    '$match': {
      'size': {
        '$gte': 2
      },
      'skus': {
        '$elemMatch': {
          '$eq': '3516215049767'
        }
      }
    }
  },
  // group by skus
  {
    '$unwind': {
      'path': '$skus'
    }
  }, {
    '$group': {
      '_id': '$skus',
      'count': {
        '$sum': 1
      }
    }
  },
  // sort by count, exclude the looked up sku, limit to 4 results
  {
    '$sort': {
      'count': -1
    }
  }, {
    '$match': {
      '_id': {
        '$ne': '3516215049767'
      }
    }
  }, {
    '$limit': 4
  }
]
Although this works, this collection contains more than 10K docs and I have an alert on my MongoDB instance telling me that the ratio of Scanned Objects / Returned has gone above 1000.
So my question is: how can my query be improved, and what indexes can I add to improve it?
db.Orders.stats();
{
  size: 14329835,
  count: 10571,
  avgObjSize: 1355,
  storageSize: 4952064,
  freeStorageSize: 307200,
  capped: false,
  nindexes: 2,
  indexBuilds: [],
  totalIndexSize: 466944,
  totalSize: 5419008,
  indexSizes: { _id_: 299008, status_1__created_at_1: 167936 },
  scaleFactor: 1,
  ok: 1,
  operationTime: Timestamp({ t: 1635415716, i: 1 })
}
Let's start with rewriting the query a little bit to make it more efficient.
Currently you're matching all the orders with a certain status, and only after that do you start with data manipulation; this means every single stage is doing work on a larger-than-needed data set.
What we can do is move all the query conditions into the first stage. This is made possible using Mongo's dot notation, like so:
{
  '$match': {
    'status': {
      '$nin': [
        'CANCELLED', 'CHECK_OUT',
      ],
    },
    'products.sku': '3516215049767', // mongo allows you to do this using the dot notation.
    'products.1': { $exists: true }, // this requires the array to have at least two elements.
  },
},
Now this achieves two things:
We start the pipeline with only relevant results; there is no need to calculate the $size of the array for many irrelevant documents anymore. This alone will boost your performance greatly.
Now we can create a compound index that will support this specific query. Before, we couldn't do that, as index usage is limited to the first stage, and that stage only included the status field. (Just as an anecdote: Mongo actually does optimize pipelines, but in this specific case no optimization was possible due to the usage of $addFields.)
The index that I recommend building is:
{ status: 1, "products.sku": 1 }
This will allow the best match to start off your pipeline.
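In the mongosh shell, that index would be created with:
db.Orders.createIndex({ status: 1, "products.sku": 1 })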

Fetch immediate next and previous documents based on conditions in MongoDB

Background
I have the following collection:
article {
  title: String,
  slug: String,
  published_at: Date,
  ...
}
MongoDB version: 4.4.10
The problem
Given an article, I want to fetch the immediate next and previous articles depending on the published_at field of that article.
Let's say I have an article with published_at as 100. And there are a lot of articles with published_at less than 100 and a lot having published_at more than 100. I want the pipeline/query to fetch only the articles with published_at values of 99 or 101 or the nearest possible.
Attempts
Here's my aggregation pipeline:
const article = await db.article.findOne({ ... });
const nextAndPrev = db.article.aggregate([
  {
    $match: {
      $or: [
        {
          published_at: { $lt: article.published_at },
          published_at: { $gt: article.published_at },
        },
      ],
    },
  },
  {
    $project: { slug: 1, title: 1 },
  },
  {
    $limit: 2,
  },
]);
It gives the wrong result (two articles after the provided article), which is expected as I know it's incorrect.
Possible solutions
I can do this easily using two separate findOne queries like the following:
const next = await db.article.findOne({ published_at: { $gt: article.published_at } });
const prev = await db.article.findOne({ published_at: { $lt: article.published_at } });
But I was curious to know of any available methods to do it in a single trip to the database.
If I sort all the articles, offset it to the timestamp, and pull out the previous and next entries, that might work. I don't know the syntax.
Starting from MongoDB v5.0, you can use $setWindowFields to fetch the immediate prev/next documents according to a certain sorting/ranking.
You can gather the _ids of the nearby documents through the documents: [<prev offset>, <next offset>] window. For the OP's scenario, it would be [-1, 1] to get the previous, current and next documents at once. Then perform a $lookup to fetch the documents back through the _ids stored in the nearIds array.
{
  "$setWindowFields": {
    "partitionBy": null,
    "sortBy": {
      "published_at": 1
    },
    "output": {
      nearIds: {
        $addToSet: "$_id",
        window: {
          documents: [
            -1,
            1
          ]
        }
      }
    }
  }
}
Here is the Mongo playground for your reference.
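A sketch of the full pipeline built around that stage (the collection name article and the $match on slug are assumptions):
db.article.aggregate([
  {
    $setWindowFields: {
      partitionBy: null,
      sortBy: { published_at: 1 },
      output: {
        nearIds: { $addToSet: "$_id", window: { documents: [-1, 1] } }
      }
    }
  },
  { $match: { slug: "current-article-slug" } }, // keep only the current article's row
  {
    $lookup: { // fetch the prev/current/next documents back by their _id
      from: "article",
      localField: "nearIds",
      foreignField: "_id",
      as: "nearArticles"
    }
  },
  { $project: { "nearArticles.title": 1, "nearArticles.slug": 1, "nearArticles.published_at": 1 } }
])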

MongoDB - count by field, and sort by count

I am new to MongoDB, and new to making more than super basic queries, and I didn't succeed in creating a query that does as follows:
I have this collection, where each document represents one "use" of a benefit (e.g. the first document states the benefit "123" was used once):
[
  {
    "id": "1111",
    "benefit_id": "123"
  },
  {
    "id": "2222",
    "benefit_id": "456"
  },
  {
    "id": "3333",
    "benefit_id": "456"
  },
  {
    "id": "4444",
    "benefit_id": "789"
  }
]
I need to create a query that outputs an array, with the most used benefit at the top along with how many times it was used.
For the above example, the query should output:
[
  {
    "benefit_id": "456",
    "cnt": 2
  },
  {
    "benefit_id": "123",
    "cnt": 1
  },
  {
    "benefit_id": "789",
    "cnt": 1
  }
]
I have tried to work with the documentation and with $sortByCount, but without success.
$group
$group by benefit_id and get the count using $sum
$sort by count in descending order
db.collection.aggregate([
  {
    $group: {
      _id: "$benefit_id",
      count: { $sum: 1 }
    }
  },
  { $sort: { count: -1 } }
])
Playground
$sortByCount
The same operation using the $sortByCount operator:
db.collection.aggregate([
{ $sortByCount: "$benefit_id" }
])
Playground
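If the output must match the exact field names in the question (benefit_id and cnt), a $project rename can follow; a sketch:
db.collection.aggregate([
  { $sortByCount: "$benefit_id" },
  { $project: { _id: 0, benefit_id: "$_id", cnt: "$count" } }
])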

Group by, count and stream individual results from mongodb query

In MongoDB, having a collection with sessionIds and labels, I would like to group by the sessionId where label equals 'view_item' and accomplish:
Get the count of sessionId groups.
Be able to stream each sessionId to the consumer (assuming I have limited memory resources and a large number of individual sessionIds).
Assume following documents in a collection:
{ "label" : "view_item", "sessionId" : "01e5dnnpsczgfq58rmp0cjtjm0" }
{ "label" : "view_category", "sessionId" : "01e5dnnpsczgfq58rmp0cjtjm0" }
{ "label" : "view_item", "sessionId" : "01e5dnnpsczgfq58rmp0cjtjm0" }
{ "label" : "view_item", "sessionId" : "01e5g7vzx5dh0mv8m6g1zbdrnj" }
{ "label" : "view_item", "sessionId" : "01e5g7vzx5dh0mv8m6g1zbdrnj" }
{ "label" : "view_category", "sessionId" : "01e5g7vzx5dh0mv8m6g1zbdrnj" }
{ "label" : "view_item", "sessionId" : "01e5g7vzx5dh0mv8m6g1zbdrnj" }
The expected result would be something like this:
Get results somehow and...
result.count() // 2 (or some other way of getting the count)
await result.next() // { sessionId: '01e5dnnpsczgfq58rmp0cjtjm0' }
await result.next() // { sessionId: '01e5g7vzx5dh0mv8m6g1zbdrnj' }
await result.next() // null
I've been fiddling with the aggregation framework and managed to group and count. In theory I could do two queries, first to get the count and then the groups, but in a frequent-write scenario I'm worried that doing two separate queries could lead to inconsistencies, especially since I haven't figured out how to include any start/end ids in the result from the count query, which could be used to confine the results from the groups query.
What I have so far is:
const result = collection.aggregate([
  { $match: { label: 'view_item' } },
  { $group: { _id: { sessionId: '$sessionId' } } },
]);
await result.next() // { _id: { sessionId: '01e5g7vzx5dh0mv8m6g1zbdrnj' } }
await result.next() // { _id: { sessionId: '01e5dnnpsczgfq58rmp0cjtjm0' } }
await result.next() // null
and
const result = collection.aggregate([
  { $match: { label: 'view_item' } },
  { $group: { _id: { sessionId: '$sessionId' } } },
  { $facet: { count: [{ $count: 'count' }] } }
]);
await result.next() // { count: [ { count: 2 } ] }
await result.next() // null
Question
How can the two queries above be combined to reliably get the count and a result with the grouped sessionIds that can be streamed? (I assume any solution relying on result.toArray().length needs to load the whole result into memory, which is ruled out.)
Is it possible to do this in one single query, or is it more realistic to get the count and start/end ids in one query and then do a second query to get the groups confined by those start/end ids?
Thanks!
If I understand your requirements correctly, you need to gather all the sessions that have been assigned to each label into one array, and count those sessions.
If so, we may use $group to group the sessions assigned to each label, and $size to calculate that array's length.
We may do something like this:
db.collection.aggregate([
  {
    $match: {} // if you need the 'view_item' labels only, then add the condition here
  },
  {
    $group: {
      _id: "$label", // make the _id of the results the label
      sessionsIds: { // array of sessions
        $push: "$sessionId"
      }
    }
  },
  {
    $project: { // use $project to compute the array length with $size
      _id: 1,
      sessionsIds: 1,
      sessionsCount: {
        $size: "$sessionsIds"
      }
    }
  }
])
You could try that here in Mongo Playground.
Update: if you need the number of unique session ids, with no duplication in the sessionsIds array, use $addToSet instead of $push, as in the sketch below.
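Only the accumulator changes; a sketch of the modified $group stage:
{
  $group: {
    _id: "$label",
    sessionsIds: {
      $addToSet: "$sessionId" // $addToSet keeps each sessionId only once
    }
  }
}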
Update 2: if we need to group by the sessionId and count how many documents have this sessionId, we can do something like:
db.collection.aggregate([
  {
    $match: {} // if you need the 'view_item' labels only, then add the condition here
  },
  {
    $group: {
      _id: "$sessionId",
      count: {
        $sum: 1
      }
    }
  }
])
This will return a result like:
[
  {
    "_id": "01e5dnnpsczgfq58rmp0cjtjm0",
    "count": 3
  },
  {
    "_id": "01e5g7vzx5dh0mv8m6g1zbdrnj",
    "count": 4
  }
]
If you need the _id of the result to be an object rather than a plain value, we could do something like:
db.collection.aggregate([
  {
    $match: {} // if you need the 'view_item' labels only, then add the condition here
  },
  {
    $group: {
      _id: {
        sessionId: "$sessionId"
      },
      count: {
        $sum: 1
      }
    }
  }
])
This will result in:
[
  {
    "_id": {
      "sessionId": "01e5dnnpsczgfq58rmp0cjtjm0"
    },
    "count": 3
  },
  {
    "_id": {
      "sessionId": "01e5g7vzx5dh0mv8m6g1zbdrnj"
    },
    "count": 4
  }
]
You can try all of that here: Mongo_Playground 2.

mongodb - aggregate failed with memory error

I'm trying to find duplicates in my sharded collection using the id field, which is of this pattern -
"id" : {
"idInner" : {
"k1" : "v1",
"k2" : "v2",
"k3" : "v3",
"k4" : "v4"
}
}
I used the below query, but received the "exception: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in." error, even though I used "allowDiskUse : true" in my query.
db.collection.aggregate([
  { $group: {
    _id: { id: "$id" },
    uniqueIds: { $addToSet: "$_id" },
    count: { $sum: 1 }
  } },
  { $match: {
    count: { $gte: 2 }
  } },
  { $sort: { count: -1 } },
  { $limit: 10 }
],
{
  allowDiskUse: true
});
Is there another way to get what I want, or something else I should pass in the above query? Thanks.
Please use allowDiskUse: true in a runCommand call.
db.runCommand(
  { aggregate: "collection",
    pipeline: [
      { $group: {
        _id: { id: "$id" },
        uniqueIds: { $addToSet: "$_id" },
        count: { $sum: 1 }
      } },
      { $match: {
        count: { $gte: 2 }
      } },
      { $sort: { count: -1 } },
      { $limit: 10 }
    ],
    allowDiskUse: true
  }
)
Let me know if this works for you.
Run a $match first in the pipeline to keep only documents where, let's say, id.idInner.k1 is within a certain range, so that you take results for that range only. Since you are interested in duplicates on the id key, all the duplicated documents will satisfy this criterion. See how much you need to narrow down that range, then run it again for the next range, etc., until you cover all documents.
If it is something you must do frequently, automate it: declare the ranges, feed them into a loop, keep the duplicates of every run, and merge the results in the end (a sketch follows at the end of this answer).
Another fast hack/trick would be to bypass the mongos and run the aggregation directly on each shard. Doing so will limit your docs roughly (assuming well-balanced shards) to docs/number_of_shards, and you may overcome the memory limit. In this second approach I assume that your shard key is the id key; if it is not, then this approach will not work, since the same duplicated documents will be scattered among the shards.
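A sketch of that automation with the Node.js driver (the range boundaries and the collection name are illustrative assumptions):
// Illustrative: split the key space of id.idInner.k1 into ranges and scan each one.
const ranges = [['a', 'h'], ['h', 'p'], ['p', '~']]; // made-up boundaries; tune them to your data
let duplicates = [];
for (const [lo, hi] of ranges) {
  const batch = await db.collection('collection').aggregate([
    { $match: { 'id.idInner.k1': { $gte: lo, $lt: hi } } }, // confine this run to one range
    { $group: { _id: { id: '$id' }, uniqueIds: { $addToSet: '$_id' }, count: { $sum: 1 } } },
    { $match: { count: { $gte: 2 } } }
  ], { allowDiskUse: true }).toArray();
  duplicates = duplicates.concat(batch); // merge the duplicates found in each range
}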
Another fast hack/trick would be to bypass the mongos and run the aggregation directly in each shard. Doing so will limit your docs roughly (assuming well balanced shards) to docs/number_of_shards and you may overcome the memory limit. In this second approach I assume that your shard key is the id key, however if it is not then this approach will not work since the same duplicated documents will be scattered among the shards.