MongoDB: sorting documents by nested data

Considering the following design for posts:
{
    title: string,
    body: string,
    comments: [
        {name: string, comment: string, ...},
        {name: string, comment: string, ...},
        ...
    ]
}
...
1) I would like to select all posts in my collection and have them sorted by the number of comments, most-commented first. I'm assuming that since the .length property is always available via JavaScript it's possible to sort on it, but I don't know how, or whether it's actually more efficient to store the comment count in a field on the post document.
1.1) Or does it make more sense to store the comment count in a separate document and continuously update that?
2) When selecting posts, is it possible to limit the result so that only the last 3 comments of a post are returned rather than the whole array?

You need to use the aggregate command.
This should give you a list of post _id values with the number of comments, sorted by that count in reverse order.
You can use the $limit operator to return the top x rows, e.g. { $limit : 5 }
db.posts.aggregate([
    { $unwind : "$comments" },
    { $group : { _id : "$_id", number : { $sum : 1 } } },
    { $sort : { number : -1 } }
]);
Take a look
http://docs.mongodb.org/manual/tutorial/aggregation-examples/
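As a rough sketch building on that pipeline (the posts collection name comes from the snippet above; the limit of 3 and the $slice projection for question 2 are assumptions, not part of the original answer):

// top 3 posts by comment count
db.posts.aggregate([
    { $unwind : "$comments" },
    { $group : { _id : "$_id", number : { $sum : 1 } } },
    { $sort : { number : -1 } },
    { $limit : 3 }
]);

// question 2: return only the last 3 comments of each post with a $slice projection
db.posts.find({}, { comments: { $slice: -3 } });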

Related

Aggregate query with no $match

I have a collection in which unique documents from a different collection can appear over and over again (in the example below, item), depending on how often users share them. I want to create an aggregate query that finds the most-shared documents. No $match is necessary because I'm not filtering on any criteria; I'm just querying for the most shared. Right now I have:
db.stories.aggregate(
    {
        $group: {
            _id: 'item.id',
            'item': { $first: '$item' },
            'total': { $sum: 1 }
        }
    }
);
However, this only returns 1 result. It occurs to me that I might just need a simple find query, but I want the results aggregated, so that each result contains the item and a total of how many times it appears in the collection.
Example of a document in the stories collection:
{
    _id: ObjectId('...'),
    user: {
        id: ObjectId('...'),
        propertyA: ...,
        propertyB: ...,
        etc
    },
    item: {
        id: ObjectId('...'),
        propertyA: ...,
        propertyB: ...,
        etc
    }
}
users and items each have their own collections as well.
Change the line
_id:'item.id'
to
_id:'$item.id'
Currently you group by the constant string 'item.id', and therefore you only get one document as a result.
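For completeness, a sketch of the corrected pipeline (the trailing $sort stage is an assumption added here, since the goal is to see the most-shared items first):

db.stories.aggregate([
    {
        $group: {
            _id: '$item.id',
            'item': { $first: '$item' },
            'total': { $sum: 1 }
        }
    },
    { $sort: { total: -1 } }
]);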

Usage of MongoDB sort on an array

I would like to rank, in descending order, the documents in the names array by their number value.
Here's the relevant part of my collection structure:
_id: ObjectId("W")
var1: "X",
var2: "Y",
var3: "Z",
comments: {
names: [
{
number: 1;
},
{
number: 3;
},
{
number: 2;
}
],
field: Y;
}
But all my attempts with db.collection.find().sort( { "comments.names.number": -1 } ) don't work.
The desired sorted output is:
{ "_id" : ObjectId("W"), "var1" : "X", "var2" : "Y", "var3" : "Z", "comments" : { "names" : [ { "number" : 3 }, { "number" : 2 }, { "number" : 1 } ], "field" : "Y" } }
Can you help me?
You need to aggregate the result, as below:
$unwind the names array.
$sort the records by comments.names.number in descending order.
$group the records on the _id field.
$project the required structure.
Code:
db.collection.aggregate([
    { $unwind: "$comments.names" },
    { $sort: { "comments.names.number": -1 } },
    { $group: {
        "_id": "$_id",
        "var1": { $first: "$var1" },
        "var2": { $first: "$var2" },
        "var3": { $first: "$var3" },
        "field": { $first: "$comments.field" },
        "names": { $push: "$comments.names" }
    } },
    { $project: {
        "comments": { "names": "$names", "field": "$field" },
        "var1": 1, "var2": 1, "var3": 1
    } }
], { "allowDiskUse": true })
If your collection is large, you might want to add a $match stage at the beginning of the aggregation pipeline to filter records, or use { allowDiskUse: true } to let the sort of a large number of records spill to disk.
db.collection.aggregate([
    { $match: { "_id": someId } },
    { $unwind: "$comments.names" },
    { $sort: { "comments.names.number": -1 } },
    { $group: {
        "_id": "$_id",
        "var1": { $first: "$var1" },
        "var2": { $first: "$var2" },
        "var3": { $first: "$var3" },
        "field": { $first: "$comments.field" },
        "names": { $push: "$comments.names" }
    } },
    { $project: {
        "comments": { "names": "$names", "field": "$field" },
        "var1": 1, "var2": 1, "var3": 1
    } }
])
What the below query does:
db.collection.find().sort( { "comments.names.number": -1 } )
is find all the documents and then sort them based on the number field in descending order. In practice, for each document it takes the largest comments.names.number value and sorts the parent documents by that number. It does not manipulate the names array inside each parent document.
You need to update the document to sort the array.
db.collection.update(
    { _id: 1 },
    {
        $push: {
            "comments.names": {
                $each: [],
                $sort: { number: -1 }
            }
        }
    }
)
Check the documentation here:
http://docs.mongodb.org/manual/reference/operator/update/sort/#use-sort-with-other-push-modifiers
MongoDB queries sort the result documents based on the fields specified in the sort specification. They do not sort arrays within a document. If you want the array sorted, you need to sort it yourself after you retrieve the document, or store the array in sorted order. See this old SO answer from Stennie.
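A minimal sketch of the client-side approach in the mongo shell (the collection name and field layout are taken from the question above):

// fetch a document (add your own filter), then sort the embedded array in memory
var doc = db.collection.findOne();
doc.comments.names.sort(function (a, b) { return b.number - a.number; }); // highest number first
printjson(doc);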

Mongo find query for longest arrays inside object

I currently have objects in mongo set up like this for my application (simplified example, I removed some irrelevant fields for clarity here):
{
    "_id" : ObjectId("529159af5b508dd71500000a"),
    "c" : "somecontent",
    "l" : [
        {
            "d" : "2013-11-24T01:43:11.367Z",
            "u" : "User1"
        },
        {
            "d" : "2013-11-24T01:43:51.206Z",
            "u" : "User2"
        }
    ]
}
What I would like to do is run a find query that returns the objects with the longest arrays under "l", sorted highest to lowest, limited to 25 results. Some objects may have 1 element in the array, some may have 100; I'd like to find out which ones have the most under "l". I'm new to Mongo and got everything else working up until this point, but I just can't figure out the right parameters for this specific query. Where I'm getting confused is how to count the length of the array, sort on it, and so on. I could code this manually by parsing everything in the collection, but I'm sure there has to be a way for Mongo to do this far more efficiently. I'm not against learning; if anyone knows any resources for more advanced queries or could help me out, I'd really be thankful, as this is the last piece! :-)
As a side note, node.js and mongo together is amazing and I wish I started using them in conjunction a long time ago.
Use the aggregation framework. Here's how:
db.collection.aggregate( [
    { $unwind : "$l" },
    { $group : { _id : "$_id", len : { $sum : 1 } } },
    { $sort : { len : -1 } },
    { $limit : 25 }
] )
There is no easy way to do this with your existing schema. The reason is that there is nothing in MongoDB's find/sort that works on the length of an array. Yes, there is a $size query operator, but it only matches arrays of a specific length.
So you cannot sort your find query by the length of the array. The only reasonable way out is to add an additional field to your schema that holds the length of the array (you would have something like "l_length: 3" in addition to the existing fields in every document). The good news is that you can do this easily by looking at this relevant answer, and after that you just need to make sure to increment or decrement this value whenever you modify the array.
Once you add this field, you can easily sort by it, and moreover you can take advantage of indexes.
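As a rough sketch of that bookkeeping (the l_length field name and the pushed entry are assumptions for illustration, not part of the original schema):

// add an element to "l" and keep the counter field in sync
db.collection.update(
    { _id: ObjectId("529159af5b508dd71500000a") },
    {
        $push: { l: { d: new Date().toISOString(), u: "User3" } },
        $inc: { l_length: 1 }
    }
)

// an index on the counter lets the sort use the index
db.collection.createIndex({ l_length: -1 })
db.collection.find().sort({ l_length: -1 }).limit(25)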
There is no straightforward way to do this, but you can try adding a size field to your documents using $size:
$addFields to add a new field total holding the number of elements in the l array
$sort by total in descending order
$limit to take the top 25 documents
$project to remove the total field if you don't need it
db.collection.aggregate([
    { $addFields: { total: { $size: "$l" } } },
    { $sort: { total: -1 } },
    { $limit: 25 }
    // { $project: { total: 0 } }
])

Matching for latest documents for a unique set of fields before aggregating

Assuming I have the following document structures:
> db.logs.find()
{
    'id': ObjectId("50ad8d451d41c8fc58000003"),
    'name': 'Sample Log 1',
    'uploaded_at': ISODate("2013-03-14T01:00:00+01:00"),
    'case_id': '50ad8d451d41c8fc58000099',
    'tag_doc': {
        'group_x': ['TAG-1', 'TAG-2'],
        'group_y': ['XYZ']
    }
},
{
    'id': ObjectId("50ad8d451d41c8fc58000004"),
    'name': 'Sample Log 2',
    'uploaded_at': ISODate("2013-03-15T01:00:00+01:00"),
    'case_id': '50ad8d451d41c8fc58000099',
    'tag_doc': {
        'group_x': ['TAG-1'],
        'group_y': ['XYZ']
    }
}
> db.cases.findOne()
{
    'id': ObjectId("50ad8d451d41c8fc58000099"),
    'name': 'Sample Case 1'
}
Is there a way to perform a $match in the aggregation framework that retrieves only the latest Log for each unique combination of case_id and group_x? I am sure this can be done with multiple $group stages, but as much as possible I want to limit the number of documents passing through the pipeline as early as possible via the $match operator. I am thinking of something like the $max operator, except used in $match.
Any help is very much appreciated.
Edit:
So far, I can come up with the following:
db.logs.aggregate(
    { $match: { ... } }, // some match filters here
    { $project: { tag: '$tag_doc.group_x', case: '$case_id', latest: { uploaded_at: 1 } } },
    { $unwind: '$tag' },
    { $group: { _id: { tag: '$tag', case: '$case' }, latest: { $max: '$latest' } } },
    { $group: { _id: '$_id.tag', total: { $sum: 1 } } }
)
As I mentioned, what I want can be done with multiple $group stages, but this proves costly when handling a large number of documents. That is why I want to limit the documents as early as possible.
Edit:
I still haven't come up with a good solution, so I am wondering whether the document structure itself is simply not optimized for my use case. Do I have to update the fields to support what I want to achieve? Suggestions are very much appreciated.
Edit:
I am actually looking for an implementation in MongoDB similar to the one expected in How can I SELECT rows with MAX(Column value), DISTINCT by another column in SQL?, except that it involves two distinct field values. Also, the $match operation is crucial because it makes the resulting set dynamic, with filters ranging from matching tags to a range of dates.
Edit:
Due to the complexity of my use case I tried to use a simple analogy, but this proved to be confusing. Above is now the simplified form of the actual use case. Sorry for the confusion I created.
I have done something similar, but it's not possible with $match alone, only with a single $group stage. The trick is to use a compound key with the correct sort order. Given documents like:
{ user_id: 1, address: "xyz", date_sent: ISODate("2013-03-14T01:00:00+01:00"), message: "test" },
{ user_id: 1, address: "xyz2", date_sent: ISODate("2013-03-14T01:00:00+01:00"), message: "test" }
If I want to group on user_id and address and I want the message with the latest date, we need to create a key like this:
{ user_id: 1, address: 1, date_sent: -1 }
Then you are able to perform the aggregation without a sort, which is much faster and will work on shards with replicas. If you don't have a key with the correct sort order you can add a $sort stage, but then you can't use it with shards, because everything is transferred to mongos and the grouping is done there (you will also run into memory limit problems).
db.user_messages.aggregate(
    { $match: { user_id: 1 } },
    { $group: {
        _id: "$address",
        count: { $sum: 1 },
        date_sent: { $max: "$date_sent" },
        message: { $first: "$message" }
    } }
);
It's not documented that it should work like this, but it does. We use it on a production system.
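A minimal sketch of creating that compound key as an index in the mongo shell (collection name taken from the pipeline above):

// compound index matching the grouping fields, with date_sent descending
db.user_messages.createIndex({ user_id: 1, address: 1, date_sent: -1 })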
I'd use another collection to 'create' the search results on the fly, by upserting a document in this new collection every time a new blog post is posted.
Every new combination of author/tags is added as a new document in this collection, whereas a new post with an existing combination just updates an existing document with the content (or object ID reference) of the new blog post.
Example:
db.searchResult.update(
    { 'author_id': '50ad8d451d41c8fc58000099', 'tag_doc.tags': ["TAG-1", "TAG-2"] },
    { $set: { 'Referenceid': ObjectId("5152bc79e8bf3bc79a5a1dd8") } }, // or embed your blog post here
    { upsert: true }
)
Hmmm, there is no good way of doing this optimally such that you only pick out the latest post per author; instead you will need to pick out all documents, sort them, and then group on author:
db.posts.aggregate([
    { $sort: { created_at: -1 } },
    { $group: { _id: '$author_id', tags: { $first: '$tag_doc.tags' } } },
    { $unwind: '$tags' },
    { $group: { _id: { author: '$_id', tag: '$tags' } } }
]);
As you said, this is not optimal; however, it is all I have come up with.
If I am honest, if you need to perform this query often it might actually be better to pre-aggregate another collection that already contains the information you need in the form of:
{
    _id: {},
    author: {},
    tag: 'something',
    created_at: ISODate(),
    post_id: {}
}
And each time you create a new post, you seek out all documents in this unique collection which fulfil an $in query for what you need, and then update/upsert created_at and post_id in that collection. This would be more optimal.
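A rough sketch of that upsert, assuming the pre-aggregated collection is called author_tags (the collection name and concrete values here are illustrative, not from the original answer):

// upsert one row per (author, tag) with the newest post's details
db.author_tags.update(
    { author: ObjectId("50ad8d451d41c8fc58000099"), tag: "TAG-1" },
    { $set: { created_at: ISODate("2013-03-15T01:00:00+01:00"), post_id: ObjectId("50ad8d451d41c8fc58000004") } },
    { upsert: true }
)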
Here you go:
db.logs.aggregate(
    { "$sort" : { "uploaded_at" : -1 } },
    { "$match" : { ... } },
    { "$unwind" : "$tag_doc.group_x" },
    { "$group" : {
        "_id" : { "case" : "$case_id", "tag" : "$tag_doc.group_x" },
        "latest" : { "$first" : "$uploaded_at" },
        "name" : { "$first" : "$name" },
        "tag_doc" : { "$first" : "$tag_doc" }
    } }
);
You want to avoid $max when you can $sort and take $first instead, especially if you have an index on uploaded_at, which allows you to avoid any in-memory sorts and reduces the pipeline processing cost significantly. Obviously, if you have other "data" fields you would add them along with (or instead of) "name" and "tag_doc".
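A small sketch of the supporting index mentioned above (an assumption about how you might create it; adjust to your actual query shape):

// descending index so the $sort on uploaded_at can use it
db.logs.createIndex({ uploaded_at: -1 })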

MongoDB: Iterate over collection by key?

How can I iterate over all documents matching each value of a specified key in a MongoDB collection?
E.g. for a collection containing:
{ _id: ObjectId, keyA: 1 },
{ _id: ObjectId, keyA: 2 },
{ _id: ObjectId, keyA: 2 },
...with an index of { keyA: 1 }, how can I run an operation on all documents where keyA:1, then keyA:2, and so on?
Specifically, I want to run a count() of the documents for each keyA value. So for this collection, the equivalent of find({keyA:1}).count(), find({keyA:2}).count(), etc.
UPDATE: whether or not the keys are indexed is irrelevant to how they're iterated, so I've edited the title and description to make this Q/A easier to reference in the future.
A simpler approach to get the grouped count of unique values for keyA would be to use the new Aggregation Framework in MongoDB 2.2:
eg:
db.coll.aggregate(
    { $group : {
        _id: "$keyA",
        count: { $sum : 1 }
    } }
)
... returns a result set where each _id is a unique value for keyA, with the count of how many times that value appears:
{
    "result" : [
        {
            "_id" : 2,
            "count" : 2
        },
        {
            "_id" : 1,
            "count" : 1
        }
    ],
    "ok" : 1
}
I am not sure I get you here, but is this what you are looking for?
db.mycollection.find({ keyA: 1 }).count()
This will count all documents where keyA is 1.
If that does not answer the question, do you think you can be a little more specific?
Do you mean to run an aggregation over all unique key values of keyA?
It may be implemented with multiple queries:
var i = 0;
var f = [];
while (i != db.col.count()) {
    var k = db.col.findOne({ keyA: { $nin: f } }).keyA;
    i += db.col.find({ keyA: k }).count();
    f.push(k);
}
The idea of this code is to collect the unique values of the keyA field of the documents in the col collection into the array f, which is the result of the operation. Unfortunately, while this operation is running you should block any other operations that would change the col collection.
UPDATE:
All of this can be done much more easily using distinct:
db.col.distinct("keyA")
Thanks to @Aleksey for pointing me to db.collection.distinct.
Looks like this does it:
db.ships.distinct("keyA").forEach(function (v) {
    db.ships.find({ keyA: v }).count();
});
Of course calling count() within a loop doesn't do much; in my case I was looking for key-values with more than one document, so I did this:
db.ships.distinct("keyA").forEach(function (v) {
    print(db.ships.find({ keyA: v }).count() > 1);
});
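As an alternative sketch under the same assumptions (the ships collection and keyA field from the examples above), the aggregation framework mentioned earlier can return only the key values that appear more than once, together with their counts:

// group by keyA, count occurrences, and keep only values seen more than once
db.ships.aggregate([
    { $group: { _id: "$keyA", count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
])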