I have a schema with subdocs.
// Schema
var company = {
_id: ObjectId,
publish: Boolean,
divisions: {
employees: [ObjectId]
}
};
I need to find all the subdocs (divisions) that match my query. It appears that I have to use two matches: one to filter the initial docs and a second one to select the matching subdocs from the result of the $unwind operation. Is there a more efficient way?
// Query
this.aggregate({
$match: {
'publish': 1,
'divisions.employees': new ObjectId(userid)
}
}, {
$unwind: '$divisions'
}, {
$match: {
'divisions.employees': new ObjectId(userid)
}
});
I found this ticket but I am unsure this does what I need.
Doing both matches is the right thing here. You could eliminate the first match stage and just unwind, but having an initial $match allows you to narrow down the pipeline to exactly those documents that will produce at least one output document (i.e. those documents for which publish : true and some employees ObjectId matches the given ObjectId). You will be able to use indexes, like an index on { publish : 1, divisions.employees : 1 }, to perform the first match stage quickly.
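For example (assuming the model is backed by a companies collection; adjust the name to yours), that index could be created like this:
// Compound index supporting the initial $match on publish + employee ObjectId
db.companies.createIndex({ "publish": 1, "divisions.employees": 1 });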
However, you should ask yourself why you are using the aggregation pipeline here and if it is appropriate. Will you commonly be querying for a given employee that's part of a company with publish : 1? Is this one of the main queries for your use case? If it's infrequent or not critical then the aggregation is a good way to do it. Otherwise, you should reconsider the schema. You could make this query easy with a schema like
{
"_id" : ObjectId,
"publish" : Boolean,
"company" : (unique identifier, possibly a String or ObjectId)
}
that models employees as documents and denormalizes company information into the employee document. Then your query is as easy as
db.employees.find({ "_id" : ObjectId(userid), "publish" : true })
which will be much quicker than the aggregation (not that the aggregation will be slow; this is just relatively quicker). I'm not telling you to do it this way - use your own knowledge of your use case to make the right call.
Related
I am building a website using Next.js and MongoDB. On one of my website pages, I have implemented filters to help search for products. To retrieve and update the filters (updating the item count each time a filter changes), I have an API endpoint which queries my MongoDB collection. This specific collection contains ~200,000 items. Each item has several fields such as brand, model, place, etc.
I have 9 fields which I use to filter and thus must fetch through my API each time there's a change. Therefore I have 9 queries running through my API, one for each field/filter, and the query on MongoDB looks like:
var models = await db_collection
.aggregate([
{
$match: {
$and: [filter],
},
},
{
$group: { _id: '$model', count: { $sum: 1 } },
},
{ $sort: { _id: 1 } },
])
.toArray();
The problem is that, as 9 queries are running, the update of the page (mainly due to the queries) takes ~4 secs, which is too long. I would like to reach <1 sec. I would like to know if there is a good practice I am missing, such as doing one query instead of one per filter, or maybe an optimization on my database.
Thank you,
I have tried using a $project stage before $group in the aggregation pipeline to reduce the number of fields returned, and using distinct and then sorting instead of aggregate, but none of these solutions seem to improve efficiency.
EDIT :
As suggested by R2D2, I am posting the structure of a document in my collection:
{
    _id : ObjectId('example_id'),
    source : string,
    date : date,
    brand : string,
    family : string,
    model : string,
    size : string,
    color : string,
    condition : string,
    contact : string,
    SKU : string
}
Depending on the page, I query the unique values of each field of interest (source, date, brand, family, model, size, color, condition, contact) and their counts depending on the filters (e.g. the number of items for each unique value of model for the selected brands). I also query documents based on specific values of these fields.
As mentioned, your indexes are important, and if you are querying by those fields I recommend creating compound indexes. See here for index optimisation: https://learnmongodbthehardway.com/schema/indexes/
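For example (illustrative field choice; create one compound index per filter combination you actually query, once, e.g. from the shell or a setup script):
// e.g. a compound index that supports filtering by brand and grouping/counting by model
await db_collection.createIndex({ brand: 1, model: 1 });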
As far as the aggregation pipeline goes, nothing is out of the ordinary, but this specific aggregation just returns the number of items per model matching the criteria, not the matching documents. If that is all the data you need, you might find it useful to create a new collection where you pre-calculate the common searches daily (how many items have the color black, ...). This way, when the page loads, you don't have to look through your 200k+ items, but just through your pre-calculated statistics collection. Schedule a cron task, or use a lambda function to invoke a route on your API, that will calculate all your stats once a day and upsert them into a new collection.
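A rough sketch of that pre-calculation, reusing db_collection from the question; statsCollection, the field list and the document shape are assumptions, not an existing API:
// Run once a day from the cron task / lambda function
const fields = ['source', 'brand', 'family', 'model', 'size', 'color', 'condition', 'contact'];
for (const field of fields) {
    const counts = await db_collection
        .aggregate([
            { $group: { _id: `$${field}`, count: { $sum: 1 } } },
            { $sort: { _id: 1 } },
        ])
        .toArray();
    // One pre-computed document per field; the page reads these instead of scanning 200k items
    await statsCollection.updateOne(
        { field: field },
        { $set: { values: counts, updatedAt: new Date() } },
        { upsert: true }
    );
}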
Also, I believe the explicit "$and" is unnecessary here, since you can use the implicit $and. You can look for an object like:
{
color : {$in : ['BLACK', 'BLUE']},
size : 3
}
rather than :
{ $and: [ { color: { $in: ['BLACK', 'BLUE'] } }, { size: 3 } ] }
Reserve the explicit $and for when you really need it.
select records using aggregate:
db.getCollection('stock_records').aggregate(
[
{
"$project": {
"info.created_date": 1,
"info.store_id": 1,
"info.store_name": 1,
"_id": 1
}
},
{
"$match": {
"$and": [
{
"info.store_id": "563dcf3465512285781608802a"
},
{
"info.created_date": {
$gt: ISODate("2021-07-18T21:07:42.313+00:00")
}
}
]
}
}
])
select records using find:
db.getCollection('stock_records').find(
{
'info.store_id':'563dcf3465512285781608802a',
'info.created_date':{ $gt:ISODate('2021-07-18T21:07:42.313+00:00')}
})
What is the difference between these queries, and which is best for selecting by id and date condition?
I think your question should be rephrased to "what's the difference between find and aggregate".
Before I dive into that, I will say that both commands are similar and will generally perform the same at scale. One specific difference is that you did not add a projection to your find query, so it will return the full document.
Regarding which is better: generally speaking, unless you need a specific aggregation operator, it's best to use find instead, as it performs better.
Now why is the aggregation framework's performance "worse"? It's simple: it just does "more".
For any pipeline stage, aggregation needs to fetch the BSON for the documents and convert them to internal objects in the pipeline for processing; then, at the end of the pipeline, they are converted back to BSON and sent to the client.
This, especially for large queries, has a very significant overhead compared to a find, where the BSON is just sent back to the client.
Because of this, if you could execute your aggregation as a find query, you should.
Aggregation is slower than find.
In your example, Aggregation
In the first stage, you are returning all the documents with the projected fields.
For example, if your collection has 1000 documents, you are returning all 1000 documents, each having the specified projection fields. This will impact the performance of your query.
Now in the second stage, you are filtering the documents that match the query filter.
For example, out of the 1000 documents from stage 1, you select only a few documents.
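As a minimal sketch, the same aggregation with the $match stage placed first, so the filter (and any supporting index) is applied before the projection:
db.getCollection('stock_records').aggregate([
    {
        "$match": {
            "info.store_id": "563dcf3465512285781608802a",
            "info.created_date": { $gt: ISODate("2021-07-18T21:07:42.313+00:00") }
        }
    },
    {
        "$project": {
            "info.created_date": 1,
            "info.store_id": 1,
            "info.store_name": 1,
            "_id": 1
        }
    }
])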
In your example, find
First, you are filtering the documents that match the query filter.
For example, if your collection has 1000 documents, you are returning only the documents that match the query condition.
Here you did not specify the fields to return in the documents that match the query filter. Therefore, the returned documents will have all fields.
You can use projection in find instead of using aggregation:
db.getCollection('stock_records').find(
{
'info.store_id': '563dcf3465512285781608802a',
'info.created_date': {
$gt: ISODate('2021-07-18T21:07:42.313+00:00')
}
},
{
"info.created_date": 1,
"info.store_id": 1,
"info.store_name": 1,
"_id": 1
}
)
I have a single collection named assets that contains documents in 2+ formats, ParentObject and ChildObject. I am currently associating ParentObject to ChildObject with two queries. Can this be done with an aggregate query?
ParentObject
{
"_id" : {
"oid" : "ParentFooABC",
"brand" : "acme"
},
"type": "com.ParentClass",
"title": "Title1234",
"availableDate": Date,
"expirationDate": Date
}
ChildObject
{
"_id" : {
"oid" : "ChildFoo",
"brand" : "acme"
},
"type": "com.ChildClass",
"parentObject": "ParentFooABC",
"title": "Title1234",
"modelNumber": "8HE56",
"modelLine": "Metro",
"availableDate": Date,
"expirationDate": Date,
"IDRequired": true
}
Currently I filter data like this
val assets = db("asset")
val parent = assets.findOne(MongoDBObject("_id.brand" -> "acme", "type" -> "com.ParentClass"))
parent.foreach { p =>
  val parentOid = p.as[DBObject]("_id").as[String]("oid")
  val children = assets.find(
    MongoDBObject("_id.brand" -> "acme", "type" -> "com.ChildClass", "parentObject" -> parentOid)
  ).toList
  if (children.nonEmpty) {
    // I have confirmed this parent has a child associated and should be returned
    val childModelNumbers = children.map(child => child.as[String]("modelNumber"))
    val response = ResponseObject(p, childModelNumbers)
  }
}
Can I do this in an aggregate query?
Updated:
Mongo Version: db version v2.6.11
Language: Scala
Driver: Casbah 2.8.1
Technically yes; however, what you're doing now is standard practice with MongoDB. If you need to join collections of data frequently, you probably should use an RDBMS. However, if you occasionally need to aggregate data from 2 separate collections, then there is $lookup (available from MongoDB 3.2). In general you'll find yourself populating data into your document from another document pretty frequently, but that's typically how a document database works. Using $lookup when you're really looking for an equivalent to a SQL JOIN isn't something I would recommend. It's not really the intended purpose, and you'd be better off doing exactly what you're doing now or moving to an RDBMS.
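For illustration only, a minimal sketch of what $lookup could look like if parents and children lived in separate (hypothetical) parents and children collections; note it requires MongoDB 3.2+, so it would not run on the 2.6.11 version mentioned above:
db.parents.aggregate([
    { $match: { "_id.brand": "acme" } },
    {
        $lookup: {
            from: "children",              // hypothetical collection name
            localField: "_id.oid",         // the parent's oid
            foreignField: "parentObject",  // the child's reference back to its parent
            as: "children"
        }
    }
]);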
Addition:
Since the data is in the same collection you can use an aggregation to $group that data together. You just have to group them based on _id.brand.
A quick example (in the mongodb shell):
db.asset.aggregate([
{ //start with a match to narrow down
$match: {"_id.brand": "acme"}
},
{ //group by `_id.brand`
$group: {
_id: "$_id.brand",
"parentObject": {$first: "$parentObject"},
title: {$first: "$title"},
modelNumber: {$addToSet: "$modelNumber"}, //accumulate child model # as array
modelLine: {$addToSet: "$modelLine"}, //accumulate child model line as array
availableDate: {$first: "$availableDate"},
expirationDate: {$first: "$expirationDate"}
}
}
]);
This should return documents where all the children's properties have been grouped into the parent document as arrays (modelNumber, modelLine). It's probably better to do what you're doing above; it might be better yet to put the children into their own collection instead of keeping track of their type with a type field in the document. That way you know that the documents in each collection represent a given data structure. However, if you were to do that, you would not be able to perform the aggregation in this example.
I've got a question on the design of documents in order to be able to efficiently perform aggregation. I will take a dummy example of document :
{
product: "Name of the product",
description: "A new product",
comments: [ObjectId(xxxxx), ObjectId(yyyy),....]
}
As you can see, I have a simple document which describes a product and wraps some comments on it. Imagine this product is very popular, so that it has millions of comments. A comment is a simple document with a date, a text and possibly some other features. The problem is that such a product can easily grow larger than 16MB, so I can't embed the comments in the product and must store them in a separate collection.
What I would like to do now is to perform aggregation on the product collection; a first step could be, for example, to select various products and sort their comments by date. That is quite an easy operation with embedded documents, but how can I do it with this design? I only have the ObjectIds of the comments and not their content. Of course, I'd like to perform this aggregation in a single operation, i.e. I don't want to have to perform the first part of the aggregation, then query the results and perform another aggregation.
I don't know if that's clear enough? ^^
I would go about it this way: create a temp collection that is an exact copy of the product collection, with the only exception being a change to the schema of the comments array, which would hold a small comment object instead of just the ObjectId. The comment object will only have the _id and the date fields. All of this can be done in one step:
db.product.find().forEach(function (doc) {
    var comments = []; // reset per product so comments don't accumulate across documents
    doc.comments.forEach(function (x) {
        var obj = { "_id": x };
        var comment = db.comment.findOne(obj);
        obj["date"] = comment.date;
        comments.push(obj);
    });
    doc.comments = comments;
    db.temp.insert(doc);
});
You can then run your aggregation query against the temp collection:
db.temp.aggregate([
{
$match: {
// your match query
}
},
{
$unwind: "$comments"
},
{
$sort: { "comments.date": 1 } // sort the pipeline by comments date
}
]);
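If you want one document per product with its comments as a date-sorted array (rather than one document per comment), a sketch of an extra $group stage after the $sort could look like this:
db.temp.aggregate([
    { $match: { /* your match query */ } },
    { $unwind: "$comments" },
    { $sort: { "comments.date": 1 } },
    {
        // $push preserves the sorted pipeline order, rebuilding one document per product
        $group: {
            _id: "$_id",
            product: { $first: "$product" },
            description: { $first: "$description" },
            comments: { $push: "$comments" }
        }
    }
]);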
I have a mongo collection 'books'. Here's a typical book:
BOOK
{
    name: 'Test Book',
    author: 'Joe Bloggs',
    print_runs: [
        {publisher: 'OUP', year: 1981},
        {publisher: 'Penguin', year: 1987},
        {publisher: 'Harper-Collins', year: 1992}
    ]
}
I'd like to be able to filter books to return only books whose last print run was after a given date, and/or before a given date...and I've been struggling to find a feasible query. Any suggestions appreciated.
There are a few options, as getting access to the "last" element in the array and filtering only on that is difficult/impossible with the normal find options in MongoDB queries. (Unfortunately, you can't use $slice in the query filter of a find.)
1. Store the most recent publisher and year both in the print_runs array and in denormalized copies directly on the book object, for example Book.last_published_by and Book.last_published_date. Queries would be simple and super fast.
2. MapReduce. This would be simple enough: emit the last element in the array and then "reduce" it to just that. You'd need to do incremental updates on the MapReduce to keep it accurate.
3. Write a relatively complex aggregation framework expression.
The aggregation might look like:
db.so.aggregate([
    { $project: { _id: 1, "print_run_year": "$print_runs.year" } },
    { $unwind: "$print_run_year" },
    { $group: { _id: "$_id", "newest": { $max: "$print_run_year" } } },
    { $match: { "newest": { $gt: 1991, $lt: 2000 } } }
])
As it may require a bit of explanation:
It projects and unwinds the year of the print runs for each book.
Then, it groups on the _id of the book and creates a new computed field called newest, which contains the highest print run year (from the projection).
Then, it filters on newest using $gt and $lt.
I'd suggest option #1 above would be the best from an efficiency perspective, followed by the MapReduce, and then a distant third, option #3.
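For completeness, a rough sketch of option #1, assuming hypothetical denormalized fields last_published_by and last_published_date that are refreshed whenever a new print run is pushed (the publisher/year values are illustrative):
// When adding a print run, also refresh the denormalized "last" fields
db.books.update(
    { name: "Test Book" },
    {
        $push: { print_runs: { publisher: "Vintage", year: 1999 } },
        $set: { last_published_by: "Vintage", last_published_date: 1999 }
    }
);
// Filtering on the last print run is then a plain, indexable find
db.books.find({ last_published_date: { $gt: 1991, $lt: 2000 } });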