I ran into a pretty big issue in my project, which uses MongoDB as the database. I have a growing collection of 6 million documents in which I would like to find duplicates from the past 90 days. The issue is that the optimal, logical solution I came up with doesn't work because of a bottleneck, and everything I have tried since is suboptimal.
The documents: Each document contains 26 fields, but the relevant ones are created_at and fingerprint. The first is self-explanatory; the second is composed from several other fields, and it is what I use to detect duplicate documents.
The problem: I need a function that removes duplicate documents created in the past 90 days.
Solutions that I've tried:
The aggregation pipeline: first a $search stage using the range operator, with gt: today minus 90 days; second, a $project stage that keeps the document's _id, fingerprint, and posted_at fields; third, a $group stage by fingerprint that counts the documents and pushes the items into an array; last, a $match stage that keeps documents with a count greater than 1.
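Roughly, the pipeline looks like this (sketched in shell syntax rather than the C# driver I actually use; the collection name and the Atlas Search index name are placeholders I made up for illustration):

db.documents.aggregate([
  // Atlas Search range filter for the last 90 days (index name is a placeholder)
  { $search: {
      index: "default",
      range: { path: "posted_at", gt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
  } },
  { $project: { _id: 1, fingerprint: 1, posted_at: 1 } },
  // This $group, and especially the $push, is the slow part
  { $group: { _id: "$fingerprint", count: { $sum: 1 }, items: { $push: "$$ROOT" } } },
  { $match: { count: { $gt: 1 } } }
])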
Here I found out that the $group stage is really slow, and I've read that it can be improved by sorting on the grouped field beforehand. The problem is that sorting after the search stage is a bottleneck, and if I remove the search stage I get even more data, which makes the calculation even worse. I have also noticed that not pushing the items in the $group stage significantly improves performance, but I need those items.
If someone had this problem before, or knows the solution to it, please advise me on how to handle it.
I'm using the MongoDB C# driver, but that shouldn't matter, I believe.
First, you should know that $group is not very fast, but an index can help it by using a COUNT_SCAN.
I think you need to create a compound index on created_at and fingerprint:
schema.index({created_at: 1, fingerprint: 1});
Then you need to arrange your aggregation pipeline like this:
const time = new Date(new Date().getTime() - (90 * 24 * 60 * 60 * 1000));
db.collection.aggregate([
  { $match: { created_at: { $gt: time } } }, // the index will be used
  { $group: { _id: "$fingerprint", count: { $sum: 1 } } }, // the index will be used
  { $match: { count: { $gt: 1 } } },
])
After getting the fingerprint values that have duplicates, you can issue separate requests to delete the duplicates and keep only one, because using $push inside $group will slow the $group stage down.
Note: the code here is JavaScript.
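For that follow-up deletion, a minimal sketch along these lines should work (this is my assumption of how you would wire it up, reusing the pipeline and the time variable above; adjust the collection name to yours):

// For each duplicate fingerprint, keep the first matching document and delete the rest.
db.collection.aggregate([
  { $match: { created_at: { $gt: time } } },
  { $group: { _id: "$fingerprint", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } },
]).forEach(function (dup) {
  var ids = db.collection.find(
    { fingerprint: dup._id, created_at: { $gt: time } },
    { _id: 1 }
  ).toArray().map(function (doc) { return doc._id; });
  db.collection.deleteMany({ _id: { $in: ids.slice(1) } }); // keep ids[0]
});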
Related
I've been wrestling with this issue for some time in my app and was hoping someone could lend a hand finding the problem with this aggregation query.
I'm using a docker container running MongoDB shell version v4.2.8. The app uses an Express.js backend with Mongoose middleware to interface with the database.
I want to build an aggregation pipeline that first matches on an indexed field called 'platform_number'. We then sort by the indexed field 'date' (stored as an ISODate type). The remaining pipeline does not seem to influence the performance; it's just some projections and filtering.
{$sort: {date: -1}} bottlenecks the entire aggregation, even though only around 250 documents are returned. I do have an unindexed key called 'cycle_number' that correlates directly with the 'date' field. Replacing {date: -1} with {cycle_number: -1} speeds up the query, but then I get an out-of-memory error: sorting has a 100MB RAM cap, and this sort fails even with just 250 documents.
A possible solution would be to include the additional option { "allowDiskUse": true }. But before I do, I want to know why 'date' isn't sorting efficiently in the first place. Another option would be to index 'cycle_number', but again, why does 'date' throw up its hands?
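For reference, that option would be something like this with Mongoose (just a sketch of what I would try, not something I've committed to):

// Allow the sort stage to spill to disk instead of failing at the 100MB limit.
Profile.aggregate(agg).allowDiskUse(true).exec(function (err, profiles) {
    // ...same handling as below
})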
The aggregation pipeline is provided below. It is first a match, followed by the sort and so on. I'm happy to explain what the other functions are doing, but they don't make much difference when I comment them out.
let agg = [ {$match: {platform_number: platform_number}} ] // indexed number
agg.push({$sort: {date: -1}}) // date is indexed in descending order
if (xaxis && yaxis) {
    agg.push(helper.drop_missing_bgc_keys([xaxis, yaxis]))
    agg.push(helper.reduce_bgc_meas([xaxis, yaxis]))
}
const query = Profile.aggregate(agg)
query.exec(function (err, profiles) {
    if (err) return next(err)
    if (profiles.length === 0) {
        res.send('platform not found')
    } else {
        res.json(profiles)
    }
})
Once again, I've been tiptoeing around this issue for some time. Solving the issue would be great, but understanding it better would also be awesome. Thank you for your help!
The query executor is not able to use a different index for the second stage. MongoDB indexes map the key values to the location of documents in the data files.
Once the $match stage has completed, the documents are in the pipeline, so no further index use is possible.
However, if you create a compound index on {platform_number:1, date:-1} the query planner can combine the $match and $sort stages into a single stage that will not require a blocking sort, which should greatly improve the performance of this pipeline.
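A minimal sketch of that index in shell syntax (the collection name 'profiles' is my assumption based on the Mongoose model Profile in the question):

// Compound index: the planner can use it to satisfy the $match on platform_number
// and return documents already ordered by date descending, avoiding a blocking sort.
db.profiles.createIndex({ platform_number: 1, date: -1 })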
I have a collection named "questions" with around 15 fields. There is an indexed field called "user". Another field is "response_api" which is a subdocument of around 60KB. And there are around 40000 documents in this collection.
When I run an aggregate query with only a $match stage on the user field, it is very slow and takes around 11 seconds to complete.
The query is:
db.questions.aggregate([{$match: {user: ObjectId("5c9a19abc89b2d09740ccd1d")} }])
But when I run the same query with a $project stage, it returns pretty fast, in less than 10 milliseconds. The query is:
db.questions.aggregate([{$match: {user: ObjectId("5c9a19abc89b2d09740ccd1d")} }, {$project: {_id: 1, user: 1, subject: 1}}])
Note: This particular user has 5000 documents in questions collection.
When I copied this collection without the "response_api" field and created an index on the user field, both queries on the copied collection were pretty fast and took less than 10 milliseconds.
Can somebody explain what's going on here? Is it because of that large field?
According to this documentation provided by MongoDB, you should keep a check on the size of your indexes. Basically, if the index size is more than what your RAM can accommodate, MongoDB will be reading indexes from disk and your queries will be a lot slower.
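A quick way to check this from the shell is something like the following sketch (run it against your own database and compare the numbers with your available RAM / WiredTiger cache size):

// Total size of all indexes on the collection, in bytes.
db.questions.totalIndexSize()
// Per-index sizes, in bytes.
db.questions.stats().indexSizes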
I have a collection with 100 million documents. I want to safely update a number of the documents (by safely I mean update a document only if it hasn't already been updated). Is there an efficient way to do it in Mongo?
I was planning to use the $isolated operator with a limit clause but it appears mongo doesn't support limiting on updates.
This seems simple but I'm stuck. Any help would be appreciated.
Per Sammaye, it doesn't look like there is a "proper" way to do this.
My workaround was to create a sequence as outlined on the mongo site and simply add a 'seq' field to every record in my collection. Now I have a unique field which is reliably sortable to update on.
Reliably sortable is important here. I was going to just sort on the auto-generated _id, but I quickly realized that natural order is NOT the same as ascending order for ObjectIds (from this page it looks like the string value takes precedence over the object value, which matches the behavior I observed in testing). Also, it is entirely possible for a record to be relocated on disk, which makes the natural order unreliable for sorting.
So now I can query for the record with the smallest 'seq' which has NOT already been updated to get an inclusive starting point. Next, I query for records with 'seq' greater than my starting point and skip the number of records I want to update (it is important to skip, since 'seq' may be sparse if you remove documents, etc.). Put a limit of 1 on that query and you've got a non-inclusive endpoint. Now I can issue an update with a query of 'updated' = 0 and 'seq' >= my starting point and < my endpoint. Assuming no other thread has beaten me to the punch, the update should give me what I want.
Here are the steps again:
create an auto-increment sequence using findAndModify (a sketch of this counter pattern follows after the steps)
add a field to your collection which uses the auto-increment sequence
query to find a suitable starting point: db.xx.find({ updated: 0 }).sort({ seq: 1 }).limit(1)
query to find a suitable endpoint: db.xx.find({ seq: { $gt: startSeq }}).sort({ seq: 1 }).skip(updateCount).limit(1)
update the collection using the starting and ending points: db.xx.update({ updated: 0, seq: { $gte: startSeq, $lt: endSeq }, $isolated: 1 }, { $set: { updated: 1 } }, { multi: true })
Pretty painful but it gets the job done.
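For step 1, here is a rough sketch of the counter pattern from the mongo docs, plus the backfill for step 2 (collection and counter names are just illustrative):

// Counter document that hands out monotonically increasing sequence numbers.
db.counters.insert({ _id: "xx_seq", seq: 0 })

function getNextSeq(name) {
    var ret = db.counters.findAndModify({
        query: { _id: name },
        update: { $inc: { seq: 1 } },
        new: true
    })
    return ret.seq
}

// Backfill: give every existing record a 'seq' and an 'updated' flag.
db.xx.find({ seq: { $exists: false } }).forEach(function (doc) {
    db.xx.update({ _id: doc._id }, { $set: { seq: getNextSeq("xx_seq"), updated: 0 } })
})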
Given: Connection is Safe=True so Update's return will contain update information.
Say I have documents that look like:
[{'a': [1]}, {'a': [2]}, {'a': [1,2]}]
And I issue:
coll.update({}, {'$addToSet': {'a':1}}, multi=True)
The result would be:
{u'connectionId': 28,
u'err': None,
u'n': 3,
u'ok': 1.0,
u'updatedExisting': True
}
even when some documents already have that value. To avoid this I could issue a command like this instead:
coll.update({'a': {'$ne': 1}}, {'$push': {'a':1}}, multi=True)
What's the time complexity comparison for $addToSet vs. $push with a $ne check?
Looks like $addToSet is doing the same thing as your command ($push with a $ne check). Both would be O(N):
https://github.com/mongodb/mongo/blob/master/src/mongo/db/ops/update_internal.cpp
If speed is really important, then why not use a hash?
Instead of:
{'$addToSet': {'a': 1}}
{'$addToSet': {'a': 10}}
use:
{'$set': {'a.1': 1}}
{'$set': {'a.10': 1}}
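To make the trade-off concrete, here is a rough illustration of the hash layout in shell syntax (collection name is made up); membership becomes a keyed lookup instead of an array scan:

// Array layout: { _id: ..., a: [1, 2] }             -- membership requires scanning the array.
// Hash layout:  { _id: ..., a: { "1": 1, "2": 1 } }  -- membership is a direct key lookup.
db.coll.update({}, { $set: { "a.1": 1 } }, { multi: true })  // mark value 1 as present
db.coll.find({ "a.1": { $exists: true } })                   // documents that contain value 1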
Edit
OK, since I read your question wrong all along, it turns out you are actually comparing two different queries and judging the time complexity between them.
The first query being:
coll.update({}, {'$addToSet': {'a':1}}, multi=True)
And the second being:
coll.update({'a': {'$ne': 1}}, {'$push': {'a':1}}, multi=True)
The first problem that springs to mind here: no indexes. $addToSet, being an update modifier, does not (I believe) use an index, so you are doing a full table scan to accomplish what you need.
In reality you are looking for all documents that do not already have 1 in a, and you want to $push the value 1 onto that a array.
So two points go to the second query even before we get into time complexity, because the first query:
Does not use indexes
Would be a full table scan
Would then do a full array scan (with no index) to $addToSet
So I have pretty much made my mind up that the second query is what you're looking for, before any of the Big O notation stuff.
There is a problem with using Big O notation to explain the time complexity of each query here:
I am unsure of what perspective you want, whether it is per document or for the whole collection.
I am unsure about indexes as such. Using indexes would actually give a logarithmic algorithm on a; not using them would not.
However the first query would look something like: O(n) per document since:
The $addToSet would need to iterate over each element
The $addToSet would then need to do an O(1) op to insert into the set if the value does not exist. I should note I am unsure whether the O(1) is cancelled out or not (light reading suggests my version); I have cancelled it out here.
Per collection, without the index, it would be O(2n²), since the complexity of iterating a will increase exponentially with every new document.
The second query, without indexes, would look something like O(2n²) as well (O(n) per document), I believe, since $ne would have the same problems as $addToSet without indexes. However, with indexes, I believe this would actually be O(log n log n) (O(log n) per document), since it would first find all documents with a, then all documents without 1 in their set, based on the b-tree.
So based upon time complexity and the notes at the beginning I would say query 2 is better.
If I am honest, I am not used to explaining in Big O notation, so this is experimental.
Hope it helps,
Adding my observation of the difference between $addToSet and $push from a bulk update of 100k documents.
When you are doing a bulk update, the $addToSet part will be executed separately.
For example,
bulkInsert.find({x: y}).upsert().update({ "$set": {..}, "$addToSet": { "a": "b" }, "$setOnInsert": {} })
will first insert and $set the document, and then it executes the $addToSet part.
I saw a clear difference of about 10k between
db.collection_name.count() # gives around 40k
db.collection_name.count({"a": {$in: ["b"]}}) # gives only around 30k
But when I replaced $addToSet with $push, both count queries returned the same value.
Note: when you're not concerned about duplicate entries in the array, you can go with $push.
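A quick shell illustration of the behavioral difference (collection name is made up):

db.demo.insert({ _id: 1, a: ["b"] })
db.demo.update({ _id: 1 }, { $addToSet: { a: "b" } })  // a stays ["b"]
db.demo.update({ _id: 1 }, { $push: { a: "b" } })      // a becomes ["b", "b"]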
I have an interesting problem. I have a working M/R version of this, but it's not really a viable solution in a small-scale environment since it's too slow and the query needs to be executed in real time.
I would like to iterate over each element in a collection, score it, sort by score descending, limit to the top 10, and return the results to the application.
Here is the function I'd like applied to each document, in pseudocode:
function scoreDocument(doc, someMap) {
    var score = 0;
    doc.tags.forEach(function (tag) {
        score += someMap[tag] || 0;  // tags missing from someMap contribute nothing
    });
    return score;
}
Since your someMap is changing each time, I don't see any alternative other than to score all the documents and return the highest-scoring ones. Whatever method you adopt for this type of operation, you'll have to consider all the documents in the collection, which is going to be slow, and will become more and more costly as the collection you're scanning grows.
One issue with map reduce is that each mongod instance can only run one concurrent map reduce. This is a limitation of the javascript engine, which is single-threaded. Multiple map reduces will be interleaved, but they cannot run concurrently with one another. This means that if you're relying on map reduce for "real-time" uses, that is, if your web page has to run a map reduce to render, you'll eventually hit a limit where page load times become unacceptably slow.
You can work around this by querying all the documents into your application, and doing the scoring, sorting, and limiting in your application code. Queries in MongoDB can run concurrently, unlike map reduce, though of course this means that your application servers will have to do a lot of work.
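A rough sketch of that client-side approach in JavaScript (the collection name "foo" follows the aggregation example further down; the someMap weights are illustrative):

// Fetch the tags of every document, score them in application code,
// then sort descending and keep the top 10.
var someMap = { a: 5, b: 2 };  // example weights
var top10 = db.foo.find({}, { tags: 1 }).toArray().map(function (doc) {
    var score = (doc.tags || []).reduce(function (s, t) { return s + (someMap[t] || 0); }, 0);
    return { _id: doc._id, score: score };
}).sort(function (x, y) { return y.score - x.score; }).slice(0, 10);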
Finally, if you are willing to wait for MongoDB 2.2 to be released (which should be within a few months), you can use the new aggregation framework in place of map reduce. You'll have to massage the someMap to generate the correct pipeline steps. Here's an example of what this might look like if someMap were {"a": 5, "b": 2}:
db.runCommand({aggregate: "foo",
    pipeline: [
        {$unwind: "$tags"},
        {$project: {
            tag1score: {$cond: [{$eq: ["$tags", "a"]}, 5, 0]},
            tag2score: {$cond: [{$eq: ["$tags", "b"]}, 2, 0]}}
        },
        {$project: {score: {$add: ["$tag1score", "$tag2score"]}}},
        {$group: {_id: "$_id", score: {$sum: "$score"}}},
        {$sort: {score: -1}},
        {$limit: 10}
    ]})
This is a little complicated, and bears explaining:
First, we "unwind" the tags array, so that the following steps in the pipeline process documents where "tags" is a scalar -- the value of the tag from the array -- and all the other document fields (notably _id) are duplicated for each unwound element.
We use a projection operator to convert from tags to named score fields. The $cond/$eq expression for each roughly means (for the tag1score example) "if the value of the 'tags' field in this document is equal to 'a', then return 5 and assign that value to a new field tag1score, else return 0 and assign that". This expression would be repeated for each tag/score combination in your someMap. At this point in the pipeline, each document will have N tagNscore fields, but at most one of them will have a non-zero value.
Next we use another projection operator to create a score field whose value is the sum of the tagNscore fields in the document.
Next we group the documents by their _id, and sum up the value of the score field from the previous step across all documents in each group.
We sort by score, descending (i.e. greatest scores first)
We limit to only the top 10 scores.
I'll leave it as an exercise to the reader how to convert someMap into the correct set of projections in step 2, and the correct set of fields to add in step 3.
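For what it's worth, here is a rough sketch of generating those projections from someMap in JavaScript (building on the example above; all names are illustrative):

// Build the step-2 $project stage and the step-3 $add expression from someMap.
var someMap = { a: 5, b: 2 };
var tagProjection = {};
var scoreFields = [];
Object.keys(someMap).forEach(function (tag, i) {
    var field = "tag" + (i + 1) + "score";
    tagProjection[field] = { $cond: [{ $eq: ["$tags", tag] }, someMap[tag], 0] };
    scoreFields.push("$" + field);
});
db.runCommand({
    aggregate: "foo",
    pipeline: [
        { $unwind: "$tags" },
        { $project: tagProjection },
        { $project: { score: { $add: scoreFields } } },
        { $group: { _id: "$_id", score: { $sum: "$score" } } },
        { $sort: { score: -1 } },
        { $limit: 10 }
    ]
});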
This is essentially the same set of steps that your application code or map reduce would go through, but it has the following distinct advantages: unlike map reduce, the aggregation framework is fully implemented in C++ and is faster and more concurrent; and unlike querying all the documents into your application, the aggregation framework works with the data on the server side, saving network load. But like the other two approaches, it still has to consider every document, and it can only limit the result set once the score has been calculated for all of them.