Robomongo : Exceeded memory limit for $group - mongodb

I'm using a script to remove duplicates on mongo. It worked on a test collection with 10 items, but when I ran it on the real collection with 6 million documents, I get an error.
This is the script which I ran in Robomongo (now known as Robo 3T):
var bulk = db.getCollection('RAW_COLLECTION').initializeOrderedBulkOp();
var count = 0;
db.getCollection('RAW_COLLECTION').aggregate([
// Group on unique value storing _id values to array and count
{ "$group": {
"_id": { RegisterNumber: "$RegisterNumber", Region: "$Region" },
"ids": { "$push": "$_id" },
"count": { "$sum": 1 }
}},
// Only return things that matched more than once, i.e. a duplicate
{ "$match": { "count": { "$gt": 1 } } }
]).forEach(function(doc) {
var keep = doc.ids.shift(); // takes the first _id from the array
bulk.find({ "_id": { "$in": doc.ids }}).remove(); // remove all remaining _id matches
count++;
if ( count % 500 == 0 ) { // only actually write per 500 operations
bulk.execute();
bulk = db.getCollection('RAW_COLLECTION').initializeOrderedBulkOp(); // re-init after execute
}
});
// Clear any queued operations
if ( count % 500 != 0 )
bulk.execute();
This is the error message:
Error: command failed: {
"errmsg" : "exception: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in.",
"code" : 16945,
"ok" : 0
} : aggregate failed :
_getErrorWithCode#src/mongo/shell/utils.js:23:13
doassert#src/mongo/shell/assert.js:13:14
assert.commandWorked#src/mongo/shell/assert.js:266:5
DBCollection.prototype.aggregate#src/mongo/shell/collection.js:1215:5
#(shell):1:1
So I need to set allowDiskUse: true to make this work? Where do I do that in the script, and is there any problem with doing this?

{ allowDiskUse: true }
should be passed as the options document right after the aggregation pipeline array.
In your code this goes like this:
db.getCollection('RAW_COLLECTION').aggregate([
// Group on unique value storing _id values to array and count
{ "$group": {
"_id": { RegisterNumber: "$RegisterNumber", Region: "$Region" },
"ids": { "$push": "$_id" },
"count": { "$sum": 1 }
}},
// Only return things that matched more than once, i.e. a duplicate
{ "$match": { "count": { "$gt": 1 } } }
], { allowDiskUse: true } )
Note: Using { allowDiskUse: true } may introduce performance issues, since the aggregation pipeline will access data from temporary files on disk. It also depends on disk performance and the size of your working set. Test the performance for your use case.
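If you want to check what the server actually does with the pipeline and options, the shell's explain helper accepts the same arguments; a minimal sketch (the "executionStats" verbosity level is optional):
db.getCollection('RAW_COLLECTION').explain("executionStats").aggregate([
{ "$group": {
"_id": { RegisterNumber: "$RegisterNumber", Region: "$Region" },
"count": { "$sum": 1 }
}}
], { allowDiskUse: true })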

It is always better to use $match before $group when you have large data.
A $match before $group reduces the number of documents that reach the group stage, so you are much less likely to run into this problem.
db.getCollection('sample').aggregate([
{$match:{State:'TAMIL NADU'}},
{$group:{
_id:{DiseCode:"$code", State:"$State"},
totalCount:{$sum:1}
}},
{
$project:{
Code:"$_id.code",
totalCount:"$totalCount",
_id:0
}
}
])
If you really cannot narrow things down with a $match, then the solution is { allowDiskUse: true }.
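For example, a sketch only (it assumes you can process one Region at a time, which the question does not state), restricting the input before grouping keeps the grouped set small:
db.getCollection('RAW_COLLECTION').aggregate([
// hypothetical filter: process one Region at a time
{ "$match": { "Region": "SomeRegion" } },
{ "$group": {
"_id": { RegisterNumber: "$RegisterNumber", Region: "$Region" },
"ids": { "$push": "$_id" },
"count": { "$sum": 1 }
}},
{ "$match": { "count": { "$gt": 1 } } }
], { allowDiskUse: true })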

Here is a simple, undocumented trick that can help in a lot of cases to avoid disk usage.
You can use an intermediate $project stage to reduce the size of the records passed into the $group stage.
In this example it leads to:
var bulk = db.getCollection('RAW_COLLECTION').initializeOrderedBulkOp();
var count = 0;
db.getCollection('RAW_COLLECTION').aggregate([
// here is the important stage
{ "$project": { "_id": 1, "RegisterNumber": 1, "Region": 1 } }, // this will reduce the records size
{ "$group": {
"_id": { RegisterNumber: "$RegisterNumber", Region: "$Region" },
"ids": { "$push": "$_id" },
"count": { "$sum": 1 }
}},
{ "$match": { "count": { "$gt": 1 } } }
]).forEach(function(doc) {
var keep = doc.ids.shift(); // takes the first _id from the array
bulk.find({ "_id": { "$in": doc.ids }}).remove(); // remove all remaining _id matches
count++;
if ( count % 500 == 0 ) { // only actually write per 500 operations
bulk.execute();
bulk = db.getCollection('RAW_COLLECTION').initializeOrderedBulkOp(); // re-init after execute
}
});
// Clear any queued operations
if ( count % 500 != 0 )
bulk.execute();
See the first $project stage, which is there only to avoid the disk usage.
This is especially useful for collections with large records where most of the data is unused in the aggregation.
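If you want to see how much the $project actually trims, MongoDB 4.4+ has a $bsonSize operator you can use to compare average record sizes before and after; a rough sketch (not part of the original answer):
// average size of the full documents (MongoDB 4.4+)
db.getCollection('RAW_COLLECTION').aggregate([
{ "$group": { "_id": null, "avgBytes": { "$avg": { "$bsonSize": "$$ROOT" } } } }
])
// average size after the trimming $project
db.getCollection('RAW_COLLECTION').aggregate([
{ "$project": { "_id": 1, "RegisterNumber": 1, "Region": 1 } },
{ "$group": { "_id": null, "avgBytes": { "$avg": { "$bsonSize": "$$ROOT" } } } }
])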

From MongoDB Docs
The $group stage has a limit of 100 megabytes of RAM. By default, if
the stage exceeds this limit, $group will produce an error. However,
to allow for the handling of large datasets, set the allowDiskUse
option to true to enable $group operations to write to temporary
files. See db.collection.aggregate() method and the aggregate command
for details.
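If your client does not expose an options argument, the same option can be passed in the command form; a sketch against the OP's collection (the cursor document is required by the aggregate command):
db.runCommand({
aggregate: "RAW_COLLECTION",
pipeline: [
{ "$group": {
"_id": { RegisterNumber: "$RegisterNumber", Region: "$Region" },
"count": { "$sum": 1 }
}},
{ "$match": { "count": { "$gt": 1 } } }
],
allowDiskUse: true,
cursor: {} // required by the aggregate command
})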

Related

MongoDB: Aggregation ($sort) on a union of collections very slow

I have a few collections that I need to union and then query. However, I've realised this is very slow for some reason. The explain output is not that helpful, as it only tells me whether the 1st $match stage is indexed. I am using a pipeline like:
[
{
"$match": {
"$and": [
{ ... }
]
}
},
// repeat this chunk for each collection
{
"$unionWith": {
"coll": "anotherCollection",
"pipeline": [
{
"$match": {
"$and": [
{ ... }
]
}
},
]
}
},
// Then an overall limit/handle pagination for all the unioned results
// UPDATE: Realised the sort is the culprit
{ "$sort": { "createdAt": -1 } },
{ "$skip": 0},
{ "$limit": 50 }
]
Is there a better way to do such a query? Does mongo do the unions in parallel maybe? Is there a "DB View" I can use to obtain a union of all the collections?
UPDATE: Just realised the runtime increases once I add the sort. I suspect it cannot use indexes because it's on a union?
Yes, there is a way, but it's not that trivial: you need to change how you do pagination. It requires more engineering, as you have to keep track of the page not only by number, but also by the last elements found.
If you paginate by filtering by a unique identifier (usually _id) with a cursor you can do early filtering.
!!! Important !!!
You will need to keep track of the last item found instead of skipping a number of elements. If you don't do so, you will lose track of the pagination, and may never return some data, or return some twice, which is far worse than being slow.
[
{
"$match": {
"$and": [
{ ... }
],
"_id":{"$gt": lastKnownIdOfCollectionA} // this will filter out everything you already saw, so no skip needed
}
},
{ "$sort": { "createdAt": -1 } }, // this sorting is indexed!
{ "$limit": 50 } // maybe you will take 0 but max 50, you don't care about the rest
// repeat this chunk for each collection
{
"$unionWith": {
"coll": "anotherCollection",
"pipeline": [
{
"$match": {
"$and": [
{ ... }
],
"_id":{"$gt": lastKnownIdOfCollectionB} // this will filter out everything you already saw, so no skip needed
}
},
{ "$sort": { "createdAt": -1 } }, // this sorting is indexed!
{ "$limit": 50 } // maybe you will take 0 but max 50, you don't care about the rest
]
}
},
// At this point you have MAX 100 elements, an index is not needed for sorting :)
{ "$sort": { "createdAt": -1 } },
{ "$skip": 0},
{ "$limit": 50 }
]
In this example, I do the early filter by _id, which also encodes the createdAt timestamp. If the filtering is not about the creation date, you may have to decide which identifier suits best. Remember the identifier must be unique, but you can combine more than one value (e.g. createdAt + randomizedId).
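To make the bookkeeping concrete, here is a minimal shell sketch of one page fetch. The collection name collA, the src tag added with $addFields, and the fetchPage helper are illustrative assumptions, not part of the original answer; the point is that each branch is tagged with its source so the client knows which "last seen" _id to advance after rendering the page.
function fetchPage(lastKnownIdOfCollectionA, lastKnownIdOfCollectionB) {
return db.collA.aggregate([
{ "$match": { "_id": { "$gt": lastKnownIdOfCollectionA } } },
{ "$addFields": { "src": "collA" } }, // remember where each doc came from
{ "$sort": { "createdAt": -1 } },
{ "$limit": 50 },
{ "$unionWith": {
"coll": "anotherCollection",
"pipeline": [
{ "$match": { "_id": { "$gt": lastKnownIdOfCollectionB } } },
{ "$addFields": { "src": "anotherCollection" } },
{ "$sort": { "createdAt": -1 } },
{ "$limit": 50 }
]
}},
{ "$sort": { "createdAt": -1 } },
{ "$limit": 50 }
]).toArray();
}
// After rendering a page, advance lastKnownIdOfCollectionA to the last _id seen
// among results with src === "collA", and likewise for anotherCollection.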

MongoDB - count by field, and sort by count

I am new to MongoDB and to writing more than very basic queries, and I haven't managed to create a query that does the following:
I have this collection; each document represents one "use" of a benefit (e.g. the first document states that benefit "123" was used once):
[
{
"id" : "1111",
"benefit_id":"123"
},
{
"id":"2222",
"benefit_id":"456"
},
{
"id":"3333",
"benefit_id":"456"
},
{
"id":"4444",
"benefit_id":"789"
}
]
I need to create a query that outputs an array, with the most-used benefit at the top along with how many times it was used.
For the above example the query should output:
[
{
"benefit_id":"456",
"cnt":2
},
{
"benefit_id":"123",
"cnt": 1
},
{
"benefit_id":"789",
"cnt":1
}
]
I have tried to work with the documentation and with $sortByCount but with no success.
$group
$group by benefit_id and get the count using $sum
$sort by count in descending order
db.collection.aggregate([
{
$group: {
_id: "$benefit_id",
count: { $sum: 1 }
}
},
{ $sort: { count: -1 } }
])
$sortByCount
The same operation using the $sortByCount operator:
db.collection.aggregate([
{ $sortByCount: "$benefit_id" }
])
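If you need the exact output shape from the question (benefit_id and cnt keys), a trailing $project can rename the grouped fields; a small sketch building on the $sortByCount version:
db.collection.aggregate([
{ $sortByCount: "$benefit_id" },
{ $project: { _id: 0, benefit_id: "$_id", cnt: "$count" } }
])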

Remove all Duplicates except the most recent document

I would like to clear all duplicates of a specific field in a collection, leaving only the earliest entry of the duplicates.
Here is my aggregate query which works great for finding the duplicates:
db.History.aggregate([
{ $group: {
_id: { name: "$sessionId" },
uniqueIds: { $addToSet: "$_id" },
count: { $sum: 1 }
} },
{ $match: {
count: { $gte: 2 }
} },
{ $sort : { count : -1} }
],{ allowDiskUse:true,
cursor:{}});
The only problem is that I need to execute a remove query as well and, for each set of duplicates, keep the youngest entry (determined by the field 'timeCreated'):
"timeCreated" : ISODate("2016-03-07T10:48:43.251+02:00")
How exactly do I do that?
Personally I would take advantage of the fact that the ObjectId values themselves are "monotonic" or therefore "ever increasing in value" which means that the "youngest" or "most recent" would come at the end of a naturally sorted list.
So rather than forcing the aggregation pipeline to do the sorting, the most logical and efficient thing to do is simply sort the list of unique _id values returned per document as you process each response.
So basically working with the listing that you must have found:
Remove Duplicates from MongoDB
And that is actually my answer (and you're the second person to reference it this week, and yet no votes received for useful! Hmm!), where it's just a simple .sort() applied within the cursor iteration for the returned array:
Using the _id Value
var bulk = db.History.initializeOrderedBulkOp(),
count = 0;
// List "all" fields that make a document "unique" in the `_id`
// I am only listing some for example purposes to follow
db.History.aggregate([
{ "$group": {
"_id": "$sessionId",
"ids": { "$push": "$_id" }, // _id values are already unique, so $addToSet adds nothing
"count": { "$sum": 1 }
}},
{ "$match": { "count": { "$gt": 1 } } }
],{ "allowDiskUse": true}).forEach(function(doc) {
doc.ids.sort().reverse(); // <-- this is the only real change
doc.ids.shift(); // drop the first (youngest) _id from the list so it is kept
bulk.find({ "_id": { "$in": doc.ids } }).remove(); // removes all $in list
count++;
// Execute 1 in 1000 and re-init
if ( count % 1000 == 0 ) {
bulk.execute();
bulk = db.History.initializeOrderedBulkOp();
}
});
if ( count % 1000 != 0 )
bulk.execute();
Using a specific field
If you "really" are set on adding another date value on which to determine which is youngest then just add to the array in $push first, then apply the client side sort function. Again just a really simple change:
var bulk = db.History.initializeOrderedBulkOp(),
count = 0;
// List "all" fields that make a document "unique" in the `_id`
// I am only listing some for example purposes to follow
db.History.aggregate([
{ "$group": {
"_id": "$sessionId",
"ids": { "$push": {
"_id": "$_id",
"created": "$timeCreated"
}},
"count": { "$sum": 1 }
}},
{ "$match": { "count": { "$gt": 1 } } }
],{ "allowDiskUse": true}).forEach(function(doc) {
doc.ids = doc.ids.sort(function(a,b) { // sort by date descending, then just return _id
return b.created.valueOf() - a.created.valueOf();
}).map(function(el) { return el._id });
doc.ids.shift(); // drop the first (youngest) _id from the list so it is kept
bulk.find({ "_id": { "$in": doc.ids } }).remove(); // removes all $in list
count++;
// Execute 1 in 1000 and re-init
if ( count % 1000 == 0 ) {
bulk.execute();
bulk = db.History.initializeOrderedBulkOp();
}
});
if ( count % 1000 != 0 )
bulk.execute();
So it's a really simple process with no "real" alteration to the original process used to identify the duplicates and then remove all but one of them.
The best approach here is to let the server do the job of finding the duplicates; then, client side, while iterating the cursor, you can work out from the returned array which document will be kept and which ones you are going to remove.
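If you are on a newer shell or driver, the legacy Bulk API above can be replaced by db.collection.bulkWrite(); a minimal sketch of the same batching (an adaptation, not part of the original answer):
var ops = [];
db.History.aggregate([
{ "$group": { "_id": "$sessionId", "ids": { "$push": "$_id" }, "count": { "$sum": 1 } } },
{ "$match": { "count": { "$gt": 1 } } }
], { "allowDiskUse": true }).forEach(function(doc) {
doc.ids.sort().reverse();
doc.ids.shift(); // keep the youngest
ops.push({ deleteMany: { filter: { "_id": { "$in": doc.ids } } } });
if ( ops.length === 1000 ) { // flush per 1000 groups
db.History.bulkWrite(ops);
ops = [];
}
});
if ( ops.length > 0 )
db.History.bulkWrite(ops);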

Insert field with array size in mongo

I have documents in mongodb containing an array. Now I need a field containing the number of items in this array, so I need to update the documents to add this field.
I simply thought this would work:
db.myDocument.update({
"itemsTotal": {
$exists: false
},
"items": {
$exists: true
}
}, {
$set: {
itemsTotal: {
$size: "$items"
}
}
}, {
multi: true
})
But it fails with "not okForStorage".
I also tried an aggregation, but it throws an exception:
"errmsg" : "exception: invalid operator '$size'",
"code" : 15999,
"ok" : 0
What is the best solution, and what am I doing wrong? I'm starting to think about writing a Java tool to calculate the totals and update the documents with it.
You can use the .aggregate() method to $project your documents and return the $size of the items array. After that you will need to loop through your aggregation result using .forEach and $set the itemsTotal field for each document, using a "Bulk" operation for maximum efficiency.
var bulkOp = db.myDocument.initializeUnorderedBulkOp();
var count = 0;
db.myDocument.aggregate([
{ "$match": {
"itemsTotal": { "$exists": false } ,
"items": { "$exists": true }
}},
{ "$project": { "itemsTotal": { "$size": "$items" } } }
]).forEach(function(doc) {
bulkOp.find({ "_id": doc._id }).updateOne({
"$set": { "itemsTotal": doc.itemsTotal }
});
count++;
if (count % 200 === 0) {
// Execute per 200 operations and re-init
bulkOp.execute();
bulkOp = db.myDocument.initializeUnorderedBulkOp();
}
})
// Clean up queues
if (count > 0) {
bulkOp.execute();
}
You could initialise a Bulk() operations builder to update the documents in a loop as follows:
var bulk = db.collection.initializeOrderedBulkOp(),
count = 0;
db.collection.find({
"itemsTotal": { "$exists": false },
"items": { "$exists": true }
}).forEach(function(doc) {
var items_size = doc.items.length;
bulk.find({ "_id": doc._id }).updateOne({
"$set": { "itemsTotal": items_size }
});
count++;
if (count % 100 == 0) {
bulk.execute();
bulk = db.collection.initializeOrderedBulkOp(); // re-init with the same (ordered) builder type
}
});
if (count % 100 != 0) { bulk.execute(); }
This is much easier starting with MongoDB v3.4, which introduced the $addFields aggregation pipeline operator. We'll also use the $out operator to output the result of the aggregation to the same collection (replacing the existing collection is atomic).
db.myDocuments.aggregate( [
{
$addFields: {
itemsTotal: { $size: "$items" } ,
},
},
{
$out: "myDocuments"
}
] )
WARNING: this solution requires all documents to have the items field. If some documents don't have it, aggregate will fail with
"The argument to $size must be an array, but was of type: missing"
You might think you could add a $match to the aggregation to filter only documents containing items, but that means all documents not containing items will not be output back to the myDocuments collection, so you'll lose those permanently.
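If you are on MongoDB 4.2 or newer, an update with an aggregation pipeline avoids the $out pitfall entirely, because it only touches the documents it matches; a minimal sketch:
db.myDocuments.updateMany(
{ items: { $exists: true }, itemsTotal: { $exists: false } },
[ { $set: { itemsTotal: { $size: "$items" } } } ]
)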

Fastest way to remove duplicate documents in mongodb

I have approximately 1.7M documents in mongodb (10M+ in future). Some of them represent duplicate entries that I do not want. The structure of a document is something like this:
{
_id: 14124412,
nodes: [
12345,
54321
],
name: "Some beauty"
}
A document is a duplicate if it has at least one node the same as another document with the same name. What is the fastest way to remove the duplicates?
The dropDups: true option is not available in 3.0.
I have a solution using the aggregation framework for collecting duplicates and then removing them in one go.
It might be somewhat slower than system-level "index" changes, but it is good considering the way you want to remove the duplicate documents.
a. Remove all documents in one go
var duplicates = [];
db.collectionName.aggregate([
{ $match: {
name: { "$ne": '' } // discard selection criteria
}},
{ $group: {
_id: { name: "$name"}, // can be grouped on multiple properties
dups: { "$addToSet": "$_id" },
count: { "$sum": 1 }
}},
{ $match: {
count: { "$gt": 1 } // Duplicates considered as count greater than one
}}
],
{allowDiskUse: true} // For faster processing if set is larger
) // You can display result until this and check duplicates
.forEach(function(doc) {
doc.dups.shift(); // First element skipped for deleting
doc.dups.forEach( function(dupId){
duplicates.push(dupId); // Getting all duplicate ids
}
)
})
// If you want to Check all "_id" which you are deleting else print statement not needed
printjson(duplicates);
// Remove all duplicates in one go
db.collectionName.remove({_id:{$in:duplicates}})
b. You can delete documents one by one.
db.collectionName.aggregate([
// discard selection criteria; you can remove the "$match" section if you want
{ $match: {
"source_references.key": { "$ne": '' }
}},
{ $group: {
_id: { key: "$source_references.key" }, // can be grouped on multiple properties
dups: { "$addToSet": "$_id" },
count: { "$sum": 1 }
}},
{ $match: {
count: { "$gt": 1 } // Duplicates considered as count greater than one
}}
],
{allowDiskUse: true} // For faster processing if set is larger
) // You can display result until this and check duplicates
.forEach(function(doc) {
doc.dups.shift(); // First element skipped for deleting
db.collectionName.remove({_id : {$in: doc.dups }}); // Delete remaining duplicates
})
Assuming you want to permanently delete docs that contain a duplicate name + nodes entry from the collection, you can add a unique index with the dropDups: true option:
db.test.ensureIndex({name: 1, nodes: 1}, {unique: true, dropDups: true})
As the docs say, use extreme caution with this as it will delete data from your database. Back up your database first in case it doesn't do exactly as you're expecting.
UPDATE
This solution is only valid through MongoDB 2.x as the dropDups option is no longer available in 3.0 (docs).
1. Create a collection dump with mongodump
2. Clear the collection
3. Add a unique index
4. Restore the collection with mongorestore
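A sketch of those four steps from the command line and shell (the database name mydb is an assumption):
mongodump --db=mydb --collection=test --out=dump/
mongo mydb --eval 'db.test.drop()'
mongo mydb --eval 'db.test.createIndex({ name: 1, nodes: 1 }, { unique: true })'
mongorestore --db=mydb --collection=test dump/mydb/test.bson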
I found this solution that works with MongoDB 3.4:
I'll assume the field with duplicates is called fieldX
db.collection.aggregate([
{
// only match documents that have this field
// you can omit this stage if you don't have missing fieldX
$match: {"fieldX": {$nin:[null]}}
},
{
$group: { "_id": "$fieldX", "doc" : {"$first": "$$ROOT"}}
},
{
$replaceRoot: { "newRoot": "$doc"}
}
],
{allowDiskUse:true})
Being new to mongoDB, I spent a lot of time and used other lengthy solutions to find and delete duplicates. However, I think this solution is neat and easy to understand.
It works by first matching documents that contain fieldX (I had some documents without this field, and I got one extra empty result).
The next stage groups documents by fieldX, and only inserts the $first document in each group using $$ROOT. Finally, it replaces the whole aggregated group by the document found using $first and $$ROOT.
I had to add allowDiskUse because my collection is large.
You can add this after any number of pipeline stages. Although the documentation for $first mentions a sort stage prior to using $first, it worked for me without it.
You can save the results to a new collection by adding an $out stage...
Alternatively, if you are only interested in a few fields, e.g. field1 and field2, and not the whole document, you can do it in the group stage without $replaceRoot:
db.collection.aggregate([
{
// only match documents that have this field
$match: {"fieldX": {$nin:[null]}}
},
{
$group: { "_id": "$fieldX", "field1": {"$first": "$$ROOT.field1"}, "field2": { "$first": "$field2" }}
}
],
{allowDiskUse:true})
The following Mongo aggregation pipeline does the deduplication and outputs it back to the same or different collection.
collection.aggregate([
{ $group: {
_id: '$field_to_dedup',
doc: { $first: '$$ROOT' }
} },
{ $replaceRoot: {
newRoot: '$doc'
} },
{ $out: 'collection' }
], { allowDiskUse: true })
My DB had millions of duplicate records. @somnath's answer did not work as-is, so I'm writing the solution that worked for me, for people looking to delete millions of duplicate records.
/** Create a array to store all duplicate records ids*/
var duplicates = [];
/** Start Aggregation pipeline*/
db.collection.aggregate([
{
$match: { /** Add any filter here. Add index for filter keys*/
filterKey: {
$exists: false
}
}
},
{
$sort: { /** Sort it in such a way that you want to retain first element*/
createdAt: -1
}
},
{
$group: {
_id: {
key1: "$key1", key2:"$key2" /** These are the keys which define the duplicate. Here document with same value for key1 and key2 will be considered duplicate*/
},
dups: {
$push: {
_id: "$_id"
}
},
count: {
$sum: 1
}
}
},
{
$match: {
count: {
"$gt": 1
}
}
}
],
{
allowDiskUse: true
}).forEach(function(doc){
doc.dups.shift();
doc.dups.forEach(function(dupId){
duplicates.push(dupId._id);
})
})
/** Delete the duplicates*/
var i,j,temparray,chunk = 100000;
for (i=0,j=duplicates.length; i<j; i+=chunk) {
temparray = duplicates.slice(i,i+chunk);
db.collection.bulkWrite([{deleteMany:{"filter":{"_id":{"$in":temparray}}}}])
}
Here is a slightly more 'manual' way of doing it:
Essentially, first get a list of all the unique keys you are interested in.
Then perform a search using each of those keys, and delete if that search returns more than one document.
db.collection.distinct("key").forEach((num)=>{
var i = 0;
db.collection.find({key: num}).forEach((doc)=>{
if (i) db.collection.remove({key: num}, { justOne: true })
i++
})
});
Tips to speed things up when only a small portion of your documents are duplicated:
1. You need an index on the field used to detect duplicates.
2. $group does not use the index, but it can take advantage of $sort, and $sort does use the index, so put a $sort step at the beginning.
3. Do an in-place delete_many() instead of $out to a new collection; this saves a lot of IO time and disk space.
If you use pymongo you can do:
import itertools
import pymongo
from pymongo import IndexModel

# col is assumed to be the pymongo Collection being deduplicated
index_uuid = IndexModel(
[
('uuid', pymongo.ASCENDING)
],
)
col.create_indexes([index_uuid])
pipeline = [
{"$sort": {"uuid":1}},
{
"$group": {
"_id": "$uuid",
"dups": {"$addToSet": "$_id"},
"count": {"$sum": 1}
}
},
{
"$match": {"count": {"$gt": 1}}
},
]
it_cursor = col.aggregate(
pipeline, allowDiskUse=True
)
# skip 1st dup of each dups group
dups = list(itertools.chain.from_iterable(map(lambda x: x["dups"][1:], it_cursor)))
col.delete_many({"_id":{"$in": dups}})
Performance
I tested it on a database containing 30M documents and 1TB of data.
Without the index/sort it takes more than an hour just to get the cursor (I did not even have the patience to wait for it).
With the index/sort but using $out to a new collection: this is safer if your filesystem does not support snapshots, but it requires lots of disk space and takes more than 40 minutes to finish even though we are using SSDs. It will be much slower on HDD RAID.
With the index/sort and an in-place delete_many, it takes around 5 minutes in total.
The following method merges documents with the same name while keeping only the unique nodes, without duplicating them.
I found using the $out operator to be a simple way. I unwind the array and then group it, adding to a set. The $out operator allows the aggregation result to persist (see the docs).
If you put in the name of the collection itself, it will replace the collection with the new data. If the name does not exist, it will create a new collection.
Hope this helps.
allowDiskUse may have to be added to the pipeline.
db.collectionName.aggregate([
{
$unwind: { path: "$nodes" }
},
{
$group: {
_id: "$name",
nodes: {
$addToSet: "$nodes"
}
}
},
{
$project: {
_id: 0,
name: "$_id",
nodes: 1
}
},
{
$out: "collectionNameWithoutDuplicates"
}
])
Using pymongo this should work.
Add the fields that need to be unique for the collection in unique_field:
unique_field = {"field1": "$field1", "field2": "$field2"}
cursor = DB.COL.aggregate([
{"$group": {"_id": unique_field, "dups": {"$push": "$uuid"}, "count": {"$sum": 1}}},
{"$match": {"count": {"$gt": 1}}},
{"$group": {"_id": None, "dups": {"$addToSet": {"$arrayElemAt": ["$dups", 1]}}}}
], allowDiskUse=True)
Slice the dups array depending on the duplication count (here I had only one extra duplicate for each):
items = list(cursor)
removeIds = items[0]['dups']
DB.COL.delete_many({"uuid": {"$in": removeIds}})
I don't know whether this is going to answer the main question, but it may be useful for others.
1. Query the duplicate row using the findOne() method and store it as an object.
const User = db.User.findOne({_id: "duplicateid"});
2. Execute deleteMany() to remove all the rows with the id "duplicateid".
db.User.deleteMany({_id: "duplicateid"});
3. Insert the values stored in the User object.
db.User.insertOne(User);
Easy and fast!!!!
First, you can find all the duplicates and remove them from the DB. Here we use the id field to check for and remove duplicates.
db.collection.aggregate([
{ "$group": { "_id": "$id", "count": { "$sum": 1 } } },
{ "$match": { "_id": { "$ne": null }, "count": { "$gt": 1 } } },
{ "$sort": { "count": -1 } },
{ "$project": { "name": "$_id", "_id": 0 } }
]).toArray().then(data => {
var dr = data.map(d => d.name);
console.log("duplicate Records:: ", dr);
db.collection.deleteMany({ id: { $in: dr } }).then(removedD => {
console.log("Removed duplicate Data:: ", removedD);
})
})
The general idea is to use findOne (https://docs.mongodb.com/manual/reference/method/db.collection.findOne/)
to retrieve one random _id from the duplicate records in the collection,
and then delete all the records in the collection other than the one with that random _id retrieved from findOne.
You can do something like this if you are trying to do it in pymongo.
def _run_query():
    try:
        for record in aggregate_based_on_field(collection):
            if not record:
                continue
            _logger.info("Working on Record %s", record)
            try:
                retain = db.collection.find_one({'field1': 'x', 'field2': 'y'}, {'_id': 1})
                _logger.info("_id to retain from duplicates %s", retain['_id'])
                db.collection.remove({'field1': 'x', 'field2': 'y', '_id': {'$ne': retain['_id']}})
            except Exception as ex:
                _logger.error(" Error when retaining the record :%s Exception: %s", record, str(ex))
    except Exception as e:
        _logger.error("Mongo error when deleting duplicates %s", str(e))

def aggregate_based_on_field(collection):
    return collection.aggregate([{'$group': {'_id': "$fieldX"}}])
From the shell:
Replace find_one with findOne.
The same remove command should work.
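For reference, a minimal shell sketch of that same keep-one-delete-the-rest step (the field names field1/field2 and the values are placeholders carried over from the pymongo example above):
var keep = db.collection.findOne({ field1: "x", field2: "y" }, { _id: 1 });
db.collection.remove({ field1: "x", field2: "y", _id: { $ne: keep._id } });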