I'm trying to find duplicates in my sharded collection using the id field, which has this pattern:
"id" : {
"idInner" : {
"k1" : "v1",
"k2" : "v2",
"k3" : "v3",
"k4" : "v4"
}
}
I used the below query, but received the "exception: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in." error, even though I used "allowDiskUse : true" in my query.
db.collection.aggregate([
    { $group: {
        _id: { id: "$id" },
        uniqueIds: { $addToSet: "$_id" },
        count: { $sum: 1 }
    } },
    { $match: {
        count: { $gte: 2 }
    } },
    { $sort: { count: -1 } },
    { $limit: 10 }
],
{
    allowDiskUse: true
});
Is there another way to get what I want, or something else I should pass in the above query? Thanks.
Please pass allowDiskUse: true in the aggregate database command, via db.runCommand:
db.runCommand(
    {
        aggregate: "collection",
        pipeline: [
            { $group: {
                _id: { id: "$id" },
                uniqueIds: { $addToSet: "$_id" },
                count: { $sum: 1 }
            } },
            { $match: {
                count: { $gte: 2 }
            } },
            { $sort: { count: -1 } },
            { $limit: 10 }
        ],
        cursor: { },        // required by the aggregate command on MongoDB 3.6+
        allowDiskUse: true
    }
)
Let me know if this works for you.
Run a $match first in the pipeline to keep only documents where, say, id.idInner.k1 falls within a given range, so that each run produces results for that range only. Since you are interested in duplicates on the id key, all duplicated documents will fall into the same range and still satisfy the criteria. See how much you need to narrow the range, then run the pipeline for the next range, and so on until you have covered all documents.
If this is something you must do frequently, automate it: declare the ranges, feed them into a loop, keep the duplicates from every run, and merge the results at the end, as sketched below.
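For example, a minimal shell sketch of that loop (the range boundaries over id.idInner.k1 are assumptions you would adapt to your data):
var ranges = [ ["a", "g"], ["g", "n"], ["n", "t"], ["t", "{"] ];   // assumed example ranges
var allDuplicates = [];
ranges.forEach(function (r) {
    var dups = db.collection.aggregate([
        { $match: { "id.idInner.k1": { $gte: r[0], $lt: r[1] } } },
        { $group: { _id: { id: "$id" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
        { $match: { count: { $gte: 2 } } }
    ], { allowDiskUse: true }).toArray();
    allDuplicates = allDuplicates.concat(dups);
});
// allDuplicates now holds the duplicate groups from every range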
Another fast hack/trick would be to bypass the mongos and run the aggregation directly on each shard. Doing so limits the documents each pipeline sees to roughly docs/number_of_shards (assuming well-balanced shards), which may keep you under the memory limit. This second approach assumes that your shard key is the id key; if it is not, it will not work, because the same duplicated documents will be scattered across the shards.
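A hedged sketch of that per-shard approach from the shell (the shard host, port, and database name are placeholders): connect to each shard's mongod directly and run the same pipeline, then merge the results client-side.
var shardConn = new Mongo("shard1-host:27018");   // assumed shard address
var shardDb = shardConn.getDB("mydb");            // assumed database name
var dups = shardDb.getCollection("collection").aggregate([
    { $group: { _id: { id: "$id" }, uniqueIds: { $addToSet: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gte: 2 } } }
], { allowDiskUse: true }).toArray();
// repeat for the remaining shards and merge the arrays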
I am new to MongoDB and to writing more than very basic queries, and I haven't managed to create a query that does the following:
I have this collection, where each document represents one "use" of a benefit (e.g. the first document states that benefit "123" was used once):
[
{
"id" : "1111",
"benefit_id":"123"
},
{
"id":"2222",
"benefit_id":"456"
},
{
"id":"3333",
"benefit_id":"456"
},
{
"id":"4444",
"benefit_id":"789"
}
]
I need to create a query that outputs an array, with the most used benefit at the top along with how many times it was used.
For the above example the query should output:
[
{
"benefit_id":"456",
"cnt":2
},
{
"benefit_id":"123",
"cnt": 1
},
{
"benefit_id":"789",
"cnt":1
}
]
I have tried to work with the documentation and with $sortByCount but with no success.
$group
$group by benefit_id and get the count using $sum
$sort by count in descending order
db.collection.aggregate([
{
$group: {
_id: "$benefit_id",
count: { $sum: 1 }
}
},
{ $sort: { count: -1 } }
])
$sortByCount
The same operation using the $sortByCount stage:
db.collection.aggregate([
{ $sortByCount: "$benefit_id" }
])
In a MongoDB collection, I have the following documents:
{"id":"1234","name":"John","stateCode":"CA"}
{"id":"1234","name":"Smith","stateCode":"CA"}
{"id":"1234","name":"Tony","stateCode":"GA"}
{"id":"3323", "name":"Neo","stateCode":"OH"}
{"id":"3323", "name":"Sam","stateCode":"US"}
{"id":"4343","name":"Bruce","stateCode":"NV"}
I am trying to write a Mongo aggregation query that does the following:
Match on the id field.
Give more priority to documents whose stateCode has a value other than "NV" or "GA".
If all the documents have stateCode "NV" or "GA", ignore the priority.
If any documents have a stateCode other than "NV" or "GA", return only those documents.
Example 1:
id = "1234"
then return
{"id":"1234","name":"John","stateCode":"CA"}
{"id":"1234","name":"Smith","stateCode":"CA"}
Example 2:
id = "4343"
then return
{"id":"4343","name":"Bruce","stateCode":"NV"}
Could you please help with a query to achieve this?
I tried a query, but I am stuck with this error:
Failed to execute script.
Error: command failed: {
"ok" : 0,
"errmsg" : "input to $filter must be an array not string",
"code" : 28651,
"codeName" : "Location28651"
} : aggregate failed
Query:
db.getCollection('emp').aggregate([{$match:{
'id': "1234"
}
},
{
$project: {
"data": {
$filter: {
input: "$stateCode",
as: "data",
cond: { $ne: [ "$data", "GA" ],$ne: [ "$data", "NV" ] }
}
}
}
}
])
I actually recommend you split this into two queries: first try to find documents with a stateCode other than "NV" or "GA", and if that returns nothing, retrieve the rest.
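For instance, a minimal sketch of that two-query approach (reusing the id value and field names from the question) could be:
// Look for high-priority documents first, then fall back to everything with that id.
var highPriority = db.getCollection('emp')
    .find({ id: "1234", stateCode: { $nin: ["NV", "GA"] } })
    .toArray();

var result = highPriority.length > 0
    ? highPriority
    : db.getCollection('emp').find({ id: "1234" }).toArray();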
With that said, here is a working pipeline that does it in one go. Because we can't know in advance whether the condition holds, we have to group every document that matches the id, which makes this VERY inefficient when the id is shared by many documents. If that is not an issue, using this pipeline is fine.
db.getCollection('emp').aggregate([
{
$match: {
'id': "1234"
}
},
{ //we have to group so we can check
$group: {
_id: null,
docs: {$push: "$$ROOT"}
}
},
{
$addFields: {
highPriorityDocs: {
$filter: {
input: "$docs",
as: "doc",
cond: {$and: [{$ne: ["$$doc.stateCode", "NV"]}, {$ne: ["$$doc.stateCode", "GA"]}]}
}
}
}
},
{
$project: {
finalDocs: {
$cond: [ // if size of high priority docs gt 0 return them.
{$gt: [{$size: "$highPriorityDocs"}, 0]},
"$highPriorityDocs",
"$docs"
]
}
}
},
{
$unwind: "$finalDocs"
},
{
$replaceRoot: {newRoot: "$finalDocs"}
}
])
The last two stages just restore the original document structure; you can drop them if you don't care about that.
I got a question that I would expect to be pretty simple, but I cannot figure it out. What I want to do is this:
Find all documents in a collection and:
sort the documents by a certain date field
apply distinct on one of its other fields, but return the whole document
Best shown in an example.
This is a mock input:
[
{
"commandName" : "migration_a",
"executionDate" : ISODate("1998-11-04T18:46:14.000Z")
},
{
"commandName" : "migration_a",
"executionDate" : ISODate("1970-05-09T20:16:37.000Z")
},
{
"commandName" : "migration_a",
"executionDate" : ISODate("2005-11-08T11:58:52.000Z")
},
{
"commandName" : "migration_b",
"executionDate" : ISODate("2016-06-02T19:48:34.000Z")
}
]
The expected output is:
[
{
"commandName" : "migration_a",
"executionDate" : ISODate("2005-11-08T11:58:52.000Z")
},
{
"commandName" : "migration_b",
"executionDate" : ISODate("2016-06-02T19:48:34.000Z")
}
]
Or, in other words:
Group the input data by the commandName field
Inside each group sort the documents
Return the newest document from each group
My attempts to write this query have failed:
The distinct() function will only return the value of the field I am distinct-ing on, not the whole document. That makes it unsuitable for my case.
I tried writing an aggregation query, but ran into the issue of how to sort and select a single document from inside each group. The $sort aggregation stage sorts the groups among one another, which is not what I want.
I am not too well-versed in Mongo and this is where I hit a wall. Any ideas on how to continue?
For reference, this is the work-in-progress aggregation query I am trying to expand on:
db.getCollection('some_collection').aggregate([
{ $group: { '_id': '$commandName', 'docs': {$addToSet: '$$ROOT'} } },
{ $sort: {'_id.docs.???': 1}}
])
Post-resolution edit
Thank you for the answers. I got what I needed. For future reference, this is the full query that will do what was requested and also return a list of the filtered documents, not groups.
db.getCollection('some_collection').aggregate([
{ $sort: {'executionDate': 1}},
{ $group: { '_id': '$commandName', 'result': { $last: '$$ROOT'} } },
{ $replaceRoot: {newRoot: '$result'} }
])
The query result without the $replaceRoot stage would be:
[
{
"_id": "migration_a",
"result": {
"commandName" : "migration_a",
"executionDate" : ISODate("2005-11-08T11:58:52.000Z")
}
},
{
"_id": "migration_b",
"result": {
"commandName" : "migration_b",
"executionDate" : ISODate("2016-06-02T19:48:34.000Z")
}
}
]
The outer _id and result fields are just "group wrappers" around the actual document I want, which is nested under the result key. Moving the nested document to the root of the result is done using the $replaceRoot stage. The query result when using that stage is:
[
{
"commandName" : "migration_a",
"executionDate" : ISODate("2005-11-08T11:58:52.000Z")
},
{
"commandName" : "migration_b",
"executionDate" : ISODate("2016-06-02T19:48:34.000Z")
}
]
Try this:
db.getCollection('some_collection').aggregate([
{ $sort: {'executionDate': -1}},
{ $group: { '_id': '$commandName', 'doc': {$first: '$$ROOT'} } }
])
I believe this will result in what you're looking for:
db.collection.aggregate([
{
$group: {
"_id": "$commandName",
"executionDate": {
"$last": "$executionDate"
}
}
}
])
Of course, if you want to match your expected output exactly, you can add a sort (this may not be necessary since your goal is to simply return the newest document from each group):
{
$sort: {
"executionDate": 1
}
}
The use case the question presents is nearly covered in the documentation for the $last aggregation operator, which summarises: the $group stage should follow a $sort stage so that the input documents are in a defined order, since $last simply picks the last document from each group.
Query:
db.collection.aggregate([
{
$sort: {
executionDate: 1
}
},
{
$group: {
_id: "$commandName",
executionDate: {
$last: "$executionDate"
}
}
}
]);
I have a collection with a few million documents in which I need to find at least one duplicate document. The duplication criterion is based on two keys, not one, so I need to find at least two documents that both have { property1: value1, property2: value2 }.
For this I am trying to use the aggregation framework, as in the following example:
db.listings.aggregate([
    {
        $group: {
            _id: { property1: "$property1", property2: "$property2" },
            count: { $sum: 1 }
        }
    },
    {
        $match: {
            count: { $gt: 1 }
        }
    },
    { $limit: 1 }
])
I think this should be working, but Mongo returns the following error:
{
    "code" : 16390,
    "ok" : 0,
    "errmsg" : "exception: sharded pipeline failed on shard shard1: { errmsg: \"exception: aggregation result exceeds maximum document size (16MB)\", code: 16389, ok: 0.0}"
}
I have also tried:
db.collection.aggregate([
    {
        $group: {
            _id: { $concat: [ "$property1", ": ", "$property2" ] },
            count: { $sum: 1 }
        }
    }
])
and got the same result.
Does anyone have a better idea of how to do this? I am not really a Mongo expert, but I have to do this one way or another.
Thanks in advance
Your idea to shrink the document as much as possible with $concat is a good one, but $concat is an expression you use in a $project stage, not a $group accumulator. So try something like this:
db.collection.aggregate(
{ $project: { _id: { $concat: ["$property1", ":", "$property2"] }}},
{ $group: { _id: '$_id', c: { $sum: 1 }}},
{ $match: { c: { $gt: 1 }}})
It still may use too much memory, but it's worth a shot.
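On MongoDB 2.6 or newer you can also return the result as a cursor and let the stages spill to disk, which sidesteps the 16MB single-result limit the error message refers to. A hedged variant of the pipeline above:
db.collection.aggregate(
    [
        { $project: { _id: { $concat: [ "$property1", ":", "$property2" ] } } },
        { $group: { _id: "$_id", c: { $sum: 1 } } },
        { $match: { c: { $gt: 1 } } }
    ],
    { allowDiskUse: true }   // lets $group spill to disk if it exceeds the memory limit
)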
Using map-reduce is an alternative. You can find examples here:
http://docs.mongodb.org/manual/tutorial/map-reduce-examples/
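For example, a rough map-reduce sketch of the same duplicate check (the output collection name is an assumption) could look like this:
db.listings.mapReduce(
    function () {
        // key on the two properties that define a duplicate
        emit({ property1: this.property1, property2: this.property2 }, 1);
    },
    function (key, values) {
        return Array.sum(values);
    },
    { out: "listing_dup_counts" }   // write counts to a collection to avoid the 16MB inline limit
);
// any key with a value greater than 1 is duplicated:
db.listing_dup_counts.find({ value: { $gt: 1 } });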
I have the following issue:
This query returns one result, which is what I want:
> db.items.aggregate([ {$group: { "_id": "$id", version: { $max: "$version" } } }])
{
"result" : [
{
"_id" : "b91e51e9-6317-4030-a9a6-e7f71d0f2161",
"version" : 1.2000000000000002
}
],
"ok" : 1
}
This query (I just added a projection so that I can later query for the entire document) returns multiple results. What am I doing wrong?
> db.items.aggregate([ {$group: { "_id": "$id", version: { $max: "$version" } }, $project: { _id : 1 } }])
{
"result" : [
{
"_id" : ObjectId("5139310a3899d457ee000003")
},
{
"_id" : ObjectId("513931053899d457ee000002")
},
{
"_id" : ObjectId("513930fd3899d457ee000001")
}
],
"ok" : 1
}
I found the answer.
1. First I need to get all the _ids:
db.items.aggregate( [
{ '$match': { 'owner.id': '9e748c81-0f71-4eda-a710-576314ef3fa' } },
{ '$group': { _id: '$item.id', dbid: { $max: "$_id" } } }
]);
2. Then I need to query the documents:
db.items.find({ _id: { '$in': "IDs returned from aggregate" } });
which will look like this:
db.items.find({ _id: { '$in': [ '1', '2', '3' ] } });
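If you want to wire the two steps together in one shell session, a minimal sketch (reusing the field names from above) might be:
// on a 2.6+ shell, aggregate() returns a cursor, so we can collect the ids directly
var ids = db.items.aggregate([
    { '$match': { 'owner.id': '9e748c81-0f71-4eda-a710-576314ef3fa' } },
    { '$group': { _id: '$item.id', dbid: { $max: "$_id" } } }
]).toArray().map(function (doc) { return doc.dbid; });

db.items.find({ _id: { '$in': ids } });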
(I know it's late, but I'm still answering so that other people don't have to go searching for the right answer somewhere else.)
See Deka's answer; it will do the job.
Not all accumulators are available in the $project stage. We need to consider what we can do in $project with respect to accumulators and what we can do in $group. Let's take a look at this:
db.companies.aggregate([{
$match: {
funding_rounds: {
$ne: []
}
}
}, {
$unwind: "$funding_rounds"
}, {
$sort: {
"funding_rounds.funded_year": 1,
"funding_rounds.funded_month": 1,
"funding_rounds.funded_day": 1
}
}, {
$group: {
_id: {
company: "$name"
},
funding: {
$push: {
amount: "$funding_rounds.raised_amount",
year: "$funding_rounds.funded_year"
}
}
}
}]).pretty()
Here we're checking that funding_rounds is not empty. The array is then unwound and passed to $sort and the later stages. We'll see one document for each element of the funding_rounds array for every company. So, the first thing we're going to do here is to $sort based on:
funding_rounds.funded_year
funding_rounds.funded_month
funding_rounds.funded_day
In the group stage by company name, the array is getting built using $push. $push is supposed to be part of a document specified as the value for a field we name in a group stage. We can push on any valid expression. In this case, we're pushing on documents to this array and for every document that we push it's being added to the end of the array that we're accumulating. In this case, we're pushing on documents that are built from the raised_amount and funded_year. So, the $group stage is a stream of documents that have an _id where we're specifying the company name.
Notice that $push is available in $group stages but not in $project stage. This is because $group stages are designed to take a sequence of documents and accumulate values based on that stream of documents.
$project, on the other hand, works with one document at a time. So we can calculate an average over an array within an individual document inside a project stage. But something like this, where documents are seen one at a time and each document passing through the group stage pushes on a new value, is something the $project stage is just not designed to do. For that type of operation we want to use $group.
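As a small illustration, on MongoDB 3.2+ the expression forms of some accumulators (such as $avg) can be used inside $project, but only over an array that already exists within the single document being reshaped; the collection and field names here are hypothetical:
db.students.aggregate([
    {
        $project: {
            _id: 0,
            name: 1,
            averageScore: { $avg: "$scores" }   // per-document average over the "scores" array
        }
    }
])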
Let's take a look at another example:
db.companies.aggregate([{
$match: {
funding_rounds: {
$exists: true,
$ne: []
}
}
}, {
$unwind: "$funding_rounds"
}, {
$sort: {
"funding_rounds.funded_year": 1,
"funding_rounds.funded_month": 1,
"funding_rounds.funded_day": 1
}
}, {
$group: {
_id: {
company: "$name"
},
first_round: {
$first: "$funding_rounds"
},
last_round: {
$last: "$funding_rounds"
},
num_rounds: {
$sum: 1
},
total_raised: {
$sum: "$funding_rounds.raised_amount"
}
}
}, {
$project: {
_id: 0,
company: "$_id.company",
first_round: {
amount: "$first_round.raised_amount",
article: "$first_round.source_url",
year: "$first_round.funded_year"
},
last_round: {
amount: "$last_round.raised_amount",
article: "$last_round.source_url",
year: "$last_round.funded_year"
},
num_rounds: 1,
total_raised: 1,
}
}, {
$sort: {
total_raised: -1
}
}]).pretty()
In the $group stage, we're using the $first and $last accumulators. Again, as with $push, we can't use $first and $last in $project stages, because project stages are not designed to accumulate values based on multiple documents; they're designed to reshape documents one at a time. The total number of rounds is calculated using the $sum operator; the value 1 simply counts each document that is grouped under a given _id value. The $project stage may look complex, but it's just making the output pretty: it reshapes first_round and last_round and passes num_rounds and total_raised through from the previous stage.