My collection name is employee and my documents are as follows:
{
"Title":"IssueFixingTeam",
"TeamLead":"Mr.Bean",
"workers":["xxx","yyy","zzz"]
},
{
"Title":"DevelopmentTeam",
"TeamLead":"Mr.John Doe",
"workers":["aa","dd","ss"]
}
How do I write a query to find how many workers there are under TeamLead "Mr.Bean"?
Thanks in advance
If you are interested in just one record belonging to "Mr.Bean" (otherwise, see the answer by #felix), then this could give you the required count:
db.employee.findOne({'TeamLead': 'Mr.Bean'}).workers.length
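Note that findOne returns null when nothing matches, so a slightly safer variant (just a sketch against the same employee collection) would be:
var doc = db.employee.findOne({'TeamLead': 'Mr.Bean'});
var numWorkers = doc ? doc.workers.length : 0;   // 0 when no matching document exists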
Use $match to filter on TeamLead: "Mr.Bean", then use the $size operator in $project to get the size of the array:
db.collection.aggregate([{
    $match: {
        TeamLead: "Mr.Bean"
    }
}, {
    $project: {
        "TeamLead": 1,
        workers: {
            $size: "$workers"
        }
    }
}])
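Run against the employee collection from the question, this would be db.employee.aggregate([...]) and, given the sample document, should return roughly (the ObjectId is a placeholder):
{ "_id" : ObjectId("..."), "TeamLead" : "Mr.Bean", "workers" : 3 }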
You can use the aggregation framework.
In case you are only interested in documents matching a specific TeamLead, with a count per document:
db.foo.aggregate([{$match: {"TeamLead": "Mr.Bean"}},
{$project: {"num_workers": {$size: "$workers"}}}])
Output:
{ "_id" : ObjectId("58c6a5ef9bc86fa5c7e4fa50"), "num_workers" : 3 }
If you want to group documents by TeamLead and get the number of unique workers under each TeamLead:
db.foo.aggregate([{$group: {"_id": "$TeamLead", "workers": {$addToSet: "$workers"}}},
{$unwind: "$workers"},
{$project: {"num_workers": {$size: "$workers"}}}])
Output:
{ "_id" : "Mr.John Doe", "num_workers" : 3 }
{ "_id" : "Mr.Bean", "num_workers" : 3 }
I have documents as below:
{
_id: 1234,
userId: "90oi",
tag: "self"
},
{
_id: 5678,
userId: "65yd",
tag: "other"
},
{
_id: 9012,
userId: "78hy",
tag: "something"
},
{
_id: 3456,
userId: "60oy",
tag: "self"
}
I need a response like below:
[{
tag : "self",
count : 2
},
{
tag : "something",
count : 1
},
{
tag : "other",
count : 1
}
]
I was using $facet to query the documents, but it is returning the entire documents, not the counts. My query is as follows:
db.data.aggregate({
$facet: {
categorizedByGrade : [
{ $match: {userId:ObjectId(userId)}},
{$sortByCount: "$tag"}
]
}
})
Let me know what I am doing wrong. Thanks in advance for the help.
You don't need $facet for this one; $facet is for when you really need to process multiple aggregation pipelines in one aggregation query (MongoDB $facet). Please try this:
db.yourCollectionName.aggregate([
    { $project: { tag: 1, _id: 0 } },
    { $group: { _id: '$tag', count: { $sum: 1 } } },
    { $project: { tag: '$_id', _id: 0, count: 1 } }
])
Explanation:
The first $project keeps only the fields we need, so there is less data to process. $group then iterates through all documents and groups them by the specified field, with $sum counting the number of items added to each group. Finally, the second $project reshapes the result to match the desired output.
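If you also want the groups ordered by count, a $sort stage can be added; this is just a sketch reusing the same pipeline (field and collection names as above):
db.yourCollectionName.aggregate([
    { $project: { tag: 1, _id: 0 } },
    { $group: { _id: '$tag', count: { $sum: 1 } } },
    { $sort: { count: -1 } },                          // highest counts first
    { $project: { tag: '$_id', _id: 0, count: 1 } }
])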
You can retrieve the correct result using $facet as well; please have a look at the query below:
db.data.aggregate({
$facet: {
categorizedByGrade : [
{
$sortByCount:"$tag"
},
{
$project:{
_id:0,
tag:"$_id",
count:1,
}
}]
}
})
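Keep in mind that $facet wraps everything in a single result document, so (assuming the four sample documents above and no extra $match) the output should look roughly like:
{
    "categorizedByGrade" : [
        { "count" : 2, "tag" : "self" },
        { "count" : 1, "tag" : "other" },
        { "count" : 1, "tag" : "something" }
    ]
}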
I have a MongoDB collection with multiple documents. Each document has an array with multiple subdocuments (or embedded documents, I guess?). Each of these subdocuments is in this format:
{
"name": string,
"count": integer
}
Now I want to aggregate these subdocuments to find:
1. The top X counts and their names.
2. Same as 1., but the names have to match a regex before sorting and limiting.
I have tried the following for 1. already. It does return the top X, but unordered, so I'd have to sort them again, which seems somewhat inefficient.
[{
    $match: {
        _id: id
    }
}, {
    $unwind: {
        path: "$array"
    }
}, {
    $sort: {
        'count': -1
    }
}, {
    $limit: x
}]
Since I'm rather new to MongoDB, this is pretty confusing for me. Happy for any help. Thanks in advance.
The sort has to include the array name in order to avoid an additional sort later on.
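Applied to the pipeline from the question, that means sorting on the field inside the unwound array; a sketch, assuming the array field is literally named array as in the question:
{ $unwind: { path: "$array" } },
{ $sort: { "array.count": -1 } },   // sort on the embedded count, not a top-level 'count'
{ $limit: x }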
Given the following document to work with:
{
students: [{
count: 4,
name: "Ann"
}, {
count: 7,
name: "Brad"
}, {
count: 6,
name: "Beth"
}, {
count: 8,
name: "Catherine"
}]
}
As an example, the following aggregation query will match any name containing the letter "h" or "e". This needs to happen after the $unwind step in order to keep only the ones you need.
db.tests.aggregate([
{$match: {
_id: ObjectId("5c1b191b251d9663f4e3ce65")
}},
{$unwind: {
path: "$students"
}},
{$match: {
"students.name": /[he]/
}},
{$sort: {
"students.count": -1
}},
{$limit: 2}
])
This is the output given the above mentioned input:
{ "_id" : ObjectId("5c1b191b251d9663f4e3ce65"), "students" : { "count" : 8, "name" : "Catherine" } }
{ "_id" : ObjectId("5c1b191b251d9663f4e3ce65"), "students" : { "count" : 6, "name" : "Beth" } }
Both names contain the letters "h" and "e", and the output is sorted from high to low.
When setting the limit to 1, the output is limited to:
{ "_id" : ObjectId("5c1b191b251d9663f4e3ce65"), "students" : { "count" : 8, "name" : "Catherine" } }
In this case only the highest count has been kept after having matched the names.
=====================
Edit for the extra question:
Yes, the first $match can be changed to filter on specific universities.
{$match: {
university: "University X"
}},
That will give one or more matching documents (in case you have a document per year or so) and the rest of the aggregation steps would still be valid.
The following match would retrieve the students for the given university for a given academic year in case that would be needed.
{$match: {
university: "University X",
academic_year: "2018-2019"
}},
That should narrow it down to get the correct documents.
I'm working on a project using MongoDB as a database and I'm encountering a problem: I can't find the right query to make a simple count of the likes of a document. The collection that I use is this:
{ "username" : "example1",
"like" : [ { "document_id" : "doc1" },
"document_id" : "doc2 },
...]
}
So what I need to compute is the number of likes for each document, so at the end I will have:
{ "document_id" : "docA" , nbLikes : 30 }, {"document_id" : "docB", nbLikes : 1}
Can anyone help me with this? So far I have failed.
You can do this by unwinding the like array of each doc and then grouping by document_id to get a count for each value:
db.test.aggregate([
// Duplicate each doc, once per 'like' array element
{$unwind: '$like'},
// Group them by document_id and assemble a count
{$group: {_id: '$like.document_id', nbLikes: {$sum: 1}}},
// Reshape the docs to match the desired output
{$project: {_id: 0, document_id: '$_id', nbLikes: 1}}
])
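If you also want the most-liked documents first, a $sort stage can be inserted before the final $project (a sketch based on the same pipeline):
db.test.aggregate([
    {$unwind: '$like'},
    {$group: {_id: '$like.document_id', nbLikes: {$sum: 1}}},
    // Order by number of likes, highest first
    {$sort: {nbLikes: -1}},
    {$project: {_id: 0, document_id: '$_id', nbLikes: 1}}
])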
Add "likeCount" field and increase count for per $push operation and read "likeCount" field
db.test.update(
{ _id: "..." },
{
$inc: { likeCount: 1 },
$push: { like: { "document_id" : "doc1" } }
}
)
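Reading the counter back is then just a find with a projection (a sketch; the _id placeholder mirrors the update above):
db.test.find(
    { _id: "..." },
    { likeCount: 1, _id: 0 }   // return only the like counter
)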
I have a collection where every document has an array named foo that contains a set of embedded documents. Is there currently a trivial way in the MongoDB shell to count how many instances are within foo? Something like:
db.mycollection.foos.count() or db.mycollection.foos.size()?
Each document in the array needs to have a unique foo_id, and I want to do a quick count to make sure that the right number of elements is inside the array for a random document in the collection.
In MongoDB 2.6, the Aggregation Framework has a new array $size operator you can use:
> db.mycollection.insert({'foo':[1,2,3,4]})
> db.mycollection.insert({'foo':[5,6,7]})
> db.mycollection.aggregate([{$project: { count: { $size:"$foo" }}}])
{ "_id" : ObjectId("5314b5c360477752b449eedf"), "count" : 4 }
{ "_id" : ObjectId("5314b5c860477752b449eee0"), "count" : 3 }
If you are on a recent version of MongoDB (2.2 and later), you can use the aggregation framework.
db.mycollection.aggregate([
{$unwind: '$foo'},
{$group: {_id: '$_id', 'sum': { $sum: 1}}},
{$group: {_id: null, total_sum: {'$sum': '$sum'}}}
])
which will give you the total number of foos in your collection.
Omitting the last group will aggregate results per record.
Using Projections and Groups
db.mycollection.aggregate(
    [
        {
            $project: {
                _id: 0,
                foo_count: { $size: "$foo" }
            }
        },
        {
            $group: {
                _id: null,   // $group requires an _id; null groups all documents together
                foo_total: { $sum: "$foo_count" }
            }
        }
    ]
)
Multiple child array counts can also be calculated this way:
db.mycollection.aggregate(
    [
        {
            $project: {
                _id: 0,
                foo1_count: { $size: "$foo1" },
                foo2_count: { $size: "$foo2" }
            }
        },
        {
            $group: {
                _id: null,
                foo1_total: { $sum: "$foo1_count" },
                foo2_total: { $sum: "$foo2_count" }
            }
        }
    ]
)
I have a collection with a few million documents in which I need to find duplicate documents. The duplication criteria are based on 2 keys, not one. So I need to find at least 2 documents which both have { property1: value1, property2: value2 }.
For this I am trying to use the aggregation framework as in the following example:
db.listings.aggregate([
    {
        $group: {
            _id: { property1: "$property1", property2: "$property2" },
            count: { $sum: 1 }
        }
    },
    {
        $match: {
            count: { $gt: 1 }
        }
    },
    {
        $limit: 1
    }
])
I think this should be working, BUT
Mongo returns the following error:
{
"code" : 16390,
"ok" : 0,
"errmsg" : "exception: sharded pipeline failed on shard shard1: { errmsg: \"exception: aggregation result exceeds maximum document size (16MB)\", code: 16389, ok: 0.0}"
I have also tried
db.collection.aggregate([
    {
        $group: {
            _id: { $concat: ["$property1", ": ", "$property2"] },
            count: { $sum: 1 }
        }
    }
])
I got the same result.
Does anyone have a better idea how to do this? I am not really a MongoDB expert, but I have to do this one way or the other.
Thanks in advance
Your idea to shrink the doc as much as possible with $concat is a good one, but $concat is a $project operator, not a $group operator. So try something like this:
db.collection.aggregate(
{ $project: { _id: { $concat: ["$property1", ":", "$property2"] }}},
{ $group: { _id: '$_id', c: { $sum: 1 }}},
{ $match: { c: { $gt: 1 }}})
It still may use too much memory, but it's worth a shot.
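If upgrading is an option: from MongoDB 2.6 on, aggregate returns a cursor rather than a single 16MB result document, and you can pass allowDiskUse for large $group stages. A sketch, reusing the pipeline above:
db.listings.aggregate(
    [
        { $group: { _id: { property1: "$property1", property2: "$property2" }, count: { $sum: 1 } } },
        { $match: { count: { $gt: 1 } } }
    ],
    { allowDiskUse: true }   // allow $group to spill to temporary files on disk
)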
Using map-reduce is an alternative. You can find examples here:
http://docs.mongodb.org/manual/tutorial/map-reduce-examples/
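A minimal map-reduce sketch along those lines (the output collection name dup_counts is just an example) could be:
db.listings.mapReduce(
    function () { emit(this.property1 + ":" + this.property2, 1); },   // key built from both properties
    function (key, values) { return Array.sum(values); },              // total occurrences per key
    { out: "dup_counts" }
)
// Keys appearing more than once are the duplicates:
db.dup_counts.find({ value: { $gt: 1 } })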