Group Mongo documents by id and get the latest document by timestamp - mongodb

Imagine we have the following set of documents stored in mongodb:
{ "fooId" : "1", "status" : "A", "timestamp" : ISODate("2016-01-01T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "1", "status" : "B", "timestamp" : ISODate("2016-01-02T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "1", "status" : "C", "timestamp" : ISODate("2016-01-03T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "2", "status" : "A", "timestamp" : ISODate("2016-01-01T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "2", "status" : "B", "timestamp" : ISODate("2016-01-02T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "3", "status" : "A", "timestamp" : ISODate("2016-01-01T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "3", "status" : "B", "timestamp" : ISODate("2016-01-02T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "3", "status" : "C", "timestamp" : ISODate("2016-01-03T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "3", "status" : "D", "timestamp" : ISODate("2016-01-04T00:00:00.000Z") "otherInfo" : "BAR", ... }
I'd like to get the latest status for each fooId based on timestamp. Therefore, my return would look like:
{ "fooId" : "1", "status" : "C", "timestamp" : ISODate("2016-01-03T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "2", "status" : "B", "timestamp" : ISODate("2016-01-02T00:00:00.000Z") "otherInfo" : "BAR", ... }
{ "fooId" : "3", "status" : "D", "timestamp" : ISODate("2016-01-04T00:00:00.000Z") "otherInfo" : "BAR", ... }
I've been trying to do this with an aggregation using the $group operator, but is there an easy way to get the whole document back from an aggregation, so the result looks the same as if I had used a find query? It seems you have to specify every field when you group, which isn't extensible when documents can have optional fields that are unknown to me. My current query looks like this:
db.collectionName.aggregate([
  { $sort: { timestamp: 1 } },
  { $group: {
      _id: "$fooId",
      timestamp: { $last: "$timestamp" },
      status: { $last: "$status" },
      otherInfo: { $last: "$otherInfo" }
  }}
])

When you do an aggregation you need to work much as in SQL, which means specifying an aggregation operation per column; the only way around that is the $$ROOT system variable:
db.test.aggregate([
  { $sort: { timestamp: 1 } },
  { $group: {
      _id: "$fooId",
      doc: { $last: "$$ROOT" }
  }}
]);
But that will change the output shape a little:
{ "_id" : "1", "doc" : { "_id" : ObjectId("570e6be3e81c8b195818e7fa"),
"fooId" : "1", "status" : "C", "timestamp" : ISODate("2016-01-03T00:00:00Z"),
"otherInfo" : "BAR" } }
If you want to return the original document format, you probably need a $project stage after that
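For example, on MongoDB 3.4+ a $replaceRoot stage can promote the grouped document back to the top level, which avoids listing every field by hand. A minimal sketch (same collection as above; assumes a 3.4+ server):
db.test.aggregate([
  { $sort: { timestamp: 1 } },
  { $group: { _id: "$fooId", doc: { $last: "$$ROOT" } } },
  { $replaceRoot: { newRoot: "$doc" } }   // promote the stored document back to the root
]);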

You can use the $$ROOT system variable with the $last operator to return the last document.
db.collectionName.aggregate([
  { "$sort": { "timestamp": 1 } },
  { "$group": {
      "_id": "$fooId",
      "last_doc": { "$last": "$$ROOT" }
  }}
])
Of course this returns the last document for each group as the value of a field:
{
"_id" : "2",
"doc" : {
"_id" : ObjectId("570e6df92f5bb4fcc8bb177e"),
"fooId" : "2",
"status" : "B",
"timestamp" : ISODate("2016-01-02T00:00:00Z")
}
}
If you are not happy with that output then your best bet will be to add another $group stage to the pipeline, where you simply push those documents into an array using the $push accumulator operator.
db.collectionName.aggregate([
  { "$sort": { "timestamp": 1 } },
  { "$group": {
      "_id": "$fooId",
      "last_doc": { "$last": "$$ROOT" }
  }},
  { "$group": {
      "_id": null,
      "result": { "$push": "$last_doc" }
  }}
])
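As a side note, on MongoDB 5.2+ the $top accumulator can express "latest per group" without a separate $sort stage. A hedged sketch, assuming a 5.2+ server:
db.collectionName.aggregate([
  { "$group": {
      "_id": "$fooId",
      // $top picks the first document according to sortBy, so sorting by
      // timestamp descending yields the most recent one per group
      "last_doc": { "$top": { "sortBy": { "timestamp": -1 }, "output": "$$ROOT" } }
  }}
])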

There is no direct way to bring back the original documents (and I don't see much value in it), but try the following aggregation query:
db.collection.aggregate([
  { $sort: { fooId: 1, timestamp: -1 } },
  { $group: { _id: "$fooId", doc: { $first: "$$ROOT" } } },
  { $project: { _id: 0, doc: 1 } }
]).forEach(function(item){
  printjson(item.doc);
});
This query will emit:
{
"_id" : ObjectId("570e76d5e94e6584078f02c4"),
"fooId" : "2",
"status" : "B",
"timestamp" : ISODate("2016-01-02T00:00:00.000+0000"),
"otherInfo" : "BAR"
}
{
"_id" : ObjectId("570e76d5e94e6584078f02c8"),
"fooId" : "3",
"status" : "D",
"timestamp" : ISODate("2016-01-04T00:00:00.000+0000"),
"otherInfo" : "BAR"
}
{
"_id" : ObjectId("570e76d5e94e6584078f02c2"),
"fooId" : "1",
"status" : "C",
"timestamp" : ISODate("2016-01-03T00:00:00.000+0000"),
"otherInfo" : "BAR"
}
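As a side note, the { fooId: 1, timestamp: -1 } sort in this last query can be served by a matching compound index, which avoids an in-memory sort on larger collections. A hedged sketch (collection name as in the query above):
db.collection.createIndex({ fooId: 1, timestamp: -1 })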

Related

match element in the array with aggregation

I have a MongoDB collection with the following structure:
{
    "_id" : ObjectId("63e37afe7a3453d5014c011b"),
    "schemaVersion" : NumberInt(1),
    "Id" : ObjectId("63e37afe7a3453d5014c0112"),
    "Id1" : ObjectId("63e37afe7a3453d5014c0113"),
    "Id2" : ObjectId("63e37afe7a3453d5014c0114"),
    "collectionName" : "Country",
    "List" : [
        {
            "countryId" : NumberInt(1),
            "name" : "Afghanistan"
        },
        {
            "countryId" : NumberInt(2),
            "name" : "India"
        },
        {
            "countryId" : NumberInt(3),
            "name" : "USA"
        }
    ]
}
I need to match on Id, Id1, Id2, collectionName, and name in the list to get the countryId. For example, if I match the values below:
"Id" : "ObjectId("63e37afe7a3453d5014c0112")",
"Id1" : "ObjectId("63e37afe7a3453d5014c0113")",
"Id2" : "ObjectId("63e37afe7a3453d5014c0114")",
"collectionName" : "Country",
"name" : "Afghanistan",
I need this result:
{
    "countryId" : 1,
    "name" : "Afghanistan"
}
I tried the following:
db.country_admin.aggregate([
  { $match: { collectionName: "Country" } },
  { $unwind: "$List" },
  { $project: { _id: 0, "List.name": 1, "List.countryId": 1 } }
]).pretty()
and I get the following output:
[
    {
        "List" : {
            "countryId" : 1,
            "name" : "Afghanistan"
        }
    },
    {
        "List" : {
            "countryId" : 2,
            "name" : "India"
        }
    },
    {
        "List" : {
            "countryId" : 3,
            "name" : "USA"
        }
    }
]
You can try using $filter to avoid $unwind, like in this example:
First, $match by your desired condition(s).
Then $filter and take the first element (since "List.name": "Afghanistan" is used in the $match stage, there will be at least one match).
Finally, output only the values you want using $project.
db.collection.aggregate([
{
"$match": {
"Id": ObjectId("63e37afe7a3453d5014c0112"),
"Id1": ObjectId("63e37afe7a3453d5014c0113"),
"Id2": ObjectId("63e37afe7a3453d5014c0114"),
"collectionName": "Country",
"List.name": "Afghanistan",
}
},
{
"$project": {
"country": {
"$arrayElemAt": [
{
"$filter": {
"input": "$List",
"cond": {
"$eq": [
"$$this.name",
"Afghanistan"
]
}
}
},
0
]
}
}
},
{
"$project": {
"_id": 0,
"countryId": "$country.countryId",
"name": "$country.name"
}
}
])
By the way, using $unwind is also possible; a sketch follows.
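A minimal sketch of that $unwind variant, under the same assumptions (collection and field names as in the question):
db.collection.aggregate([
  { "$match": { "collectionName": "Country", "List.name": "Afghanistan" } },
  { "$unwind": "$List" },                        // one document per array element
  { "$match": { "List.name": "Afghanistan" } },  // keep only the matching element
  { "$project": { "_id": 0, "countryId": "$List.countryId", "name": "$List.name" } }
])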

MongoDB - Limit the number of elements in array based on type (max N elements for each type)

I would like to create a query that will limit the number of elements to a max N=2 for each entityType.
Besides entityType & entityId, the original document also has some other properties (eg: timestamp) which I simply removed for simplicity.
Here is the initial/reference document.
{
"_id" : ObjectId("100000"),
"agency" : "agency_1",
"username" : "user_one",
"recentEntities" : {
"entities" : [
{
"entityType" : "type_one",
"entityId" : "11",
"other" : "aa",
},
{
"entityType" : "type_one",
"entityId" : "12",
"other" : "ab",
},
{
"entityType" : "type_two",
"entityId" : "21",
"other" : "ba",
}
]
}
}
Here are 3 specifications/cases for this problem:
A new entity is always added as the first element of the entities array, i.e. the most recently visited entity comes first.
Let's say that I want to update the initial document with the following entity:
{
"entityType" : "type_two",
"entityId" : "22",
"other" : "bb",
}
Since we have not reached the limit for "entityType" = "type_two", we simply add the object to the array, and the updated document will look like:
{
"_id" : ObjectId("100000"),
"agency" : "agency_1",
"username" : "user_one",
"recentEntities" : {
"entities" : [
{
"entityType" : "type_two",
"entityId" : "22",
"other" : "bb",
},
{
"entityType" : "type_one",
"entityId" : "11",
"other" : "aa",
},
{
"entityType" : "type_one",
"entityId" : "12",
"other" : "ab",
},
{
"entityType" : "type_two",
"entityId" : "21",
"other" : "ba",
}
]
}
}
If the document with a particular entityId already exists, but the other fields inside the object have changed, then I would like to replace that entity object with the recent one.
Updating the reference document with this entity:
{
"entityType" : "type_one",
"entityId" : "12",
"other" : "xy",
}
Will result in :
{
"_id" : ObjectId("100000"),
"agency" : "agency_1",
"username" : "user_one",
"recentEntities" : {
"entities" : [
{
"entityType" : "type_one",
"entityId" : "12",
"other" : "xy",
},
{
"entityType" : "type_one",
"entityId" : "11",
"other" : "aa",
},
{
"entityType" : "type_two",
"entityId" : "21",
"other" : "ba",
}
]
}
}
On the other hand, if the limit has been reached, then the oldest entity of a particular type will be deleted.
For example by adding the following entity:
{
"entityType" : "type_one",
"entityId" : "13",
"other" : "ac",
}
we need to remove the "entityId" = "12" and put the new one on top.
After the update, the reference document will look like:
{
"_id" : ObjectId("100000"),
"agency" : "agency_1",
"username" : "user_one",
"recentEntities" : {
"entities" : [
{
"entityType" : "type_one",
"entityId" : "13",
"other" : "ac",
},
{
"entityType" : "type_one",
"entityId" : "11",
"other" : "aa",
},
{
"entityType" : "type_two",
"entityId" : "21",
"other" : "ba",
}
]
}
}
I managed to do the first 2 points, but the last one is a bit tricky to implement so any help will be much appreciated.
You can do the following in an aggregation pipeline:
append the latest entry (i.e. entityId 13) to the array using $concatArrays
$unwind the array with the includeArrayIndex option; I name the index idx
$sort by idx: -1
$limit by the count you want (i.e. 3 in your example)
$sort by idx: 1 again to restore the original order
$group the entities back together with $push
db.collection.aggregate([
{
"$match": {
"_id": "100000"
}
},
{
"$addFields": {
"recentEntities.entities": {
"$concatArrays": [
"$recentEntities.entities",
[
{
"entityType": "type_one",
"entityId": "13",
"other": "ac",
}
]
]
}
}
},
{
"$unwind": {
path: "$recentEntities.entities",
includeArrayIndex: "idx"
}
},
{
"$sort": {
idx: -1
}
},
{
$limit: 3
},
{
"$sort": {
idx: 1
}
},
{
$group: {
_id: "$_id",
"agency": {
$first: "$agency"
},
"username": {
$first: "$username"
},
"re": {
$push: "$recentEntities.entities"
}
}
},
{
"$project": {
"agency": 1,
"username": 1,
"recentEntities": {
entities: "$re"
}
}
}
])
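Note that an aggregation by itself only returns the reshaped document; it does not persist anything. One option to persist, assuming MongoDB 4.2+ (update with an aggregation pipeline), is a sketch like the following; like the pipeline above it caps the total array length rather than the per-type count, and the _id filter is illustrative:
db.collection.updateOne(
  { "_id": "100000" },
  [
    { "$set": {
        "recentEntities.entities": {
          // prepend the new entity, then keep only the first 3 elements
          "$slice": [
            { "$concatArrays": [
                [ { "entityType": "type_one", "entityId": "13", "other": "ac" } ],
                "$recentEntities.entities"
            ] },
            3
          ]
        }
    }}
  ]
)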

mongodb aggregate sum item as nested data

Here is some sample data from my collection sale:
[
    { group: 2, item: "a", qty: 3 },
    { group: 2, item: "b", qty: 3 },
    { group: 2, item: "b", qty: 2 },
    { group: 1, item: "a", qty: 3 },
    { group: 1, item: "a", qty: 5 },
    { group: 1, item: "b", qty: 5 }
]
and I want to query the data as below, sorting the most popular group to the top:
[
    { group: 1, items: [ { name: "a", total_qty: 8 }, { name: "b", total_qty: 5 } ], total_qty: 13 },
    { group: 2, items: [ { name: "a", total_qty: 3 }, { name: "b", total_qty: 5 } ], total_qty: 8 }
]
We could loop in a server-side script (PHP, Node.js, ...), but the problem is pagination: I cannot use skip to get the right result.
The following query can get us the expected output:
db.collection.aggregate([
{
$group:{
"_id":{
"group":"$group",
"item":"$item"
},
"group":{
$first:"$group"
},
"item":{
$first:"$item"
},
"total_qty":{
$sum:"$qty"
}
}
},
{
$group:{
"_id":"$group",
"group":{
$first:"$group"
},
"items":{
$push:{
"name":"$item",
"total_qty":"$total_qty"
}
},
"total_qty":{
$sum:"$total_qty"
}
}
},
{
$project:{
"_id":0
}
}
]).pretty()
Data set:
{
"_id" : ObjectId("5d84a37febcbd560107c54a7"),
"group" : 2,
"item" : "a",
"qty" : 3
}
{
"_id" : ObjectId("5d84a37febcbd560107c54a8"),
"group" : 2,
"item" : "b",
"qty" : 3
}
{
"_id" : ObjectId("5d84a37febcbd560107c54a9"),
"group" : 2,
"item" : "b",
"qty" : 2
}
{
"_id" : ObjectId("5d84a37febcbd560107c54aa"),
"group" : 1,
"item" : "a",
"qty" : 3
}
{
"_id" : ObjectId("5d84a37febcbd560107c54ab"),
"group" : 1,
"item" : "a",
"qty" : 5
}
{
"_id" : ObjectId("5d84a37febcbd560107c54ac"),
"group" : 1,
"item" : "b",
"qty" : 5
}
Output:
{
"group" : 2,
"items" : [
{
"name" : "b",
"total_qty" : 5
},
{
"name" : "a",
"total_qty" : 3
}
],
"total_qty" : 8
}
{
"group" : 1,
"items" : [
{
"name" : "b",
"total_qty" : 5
},
{
"name" : "a",
"total_qty" : 8
}
],
"total_qty" : 13
}
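Since the question also asks to sort the most popular group to the top and to paginate, a hedged sketch appends three stages to the pipeline above (the page size of 10 is illustrative):
db.collection.aggregate([
    // ...the two $group stages and the $project from above...
    { $sort: { total_qty: -1 } },  // most popular group first
    { $skip: 0 },                  // page offset
    { $limit: 10 }                 // page size
])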
You can use a $group stage with the $sum and $push accumulators:
db.collection.aggregate([
  { "$group": {
      "_id": "$group",
      "items": { "$push": "$$ROOT" },
      "total_qty": { "$sum": "$qty" }
  }},
  { "$sort": { "total_qty": -1 } }
])
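Note that this pushes whole documents into items, so the output will not have the requested { name, total_qty } shape; per-item totals still need the two-stage $group from the answer above. If you only want to trim what gets pushed, a sketch of the accumulator:
"items": { "$push": { "name": "$item", "qty": "$qty" } }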

Map aggregation results in Mongo

Here is the data set:
{ "_id" : "1", "key" : "111", "payload" : 100, "type" : "foo", "createdAt" : ISODate("2016-07-08T11:59:18.000Z") }
{ "_id" : "2", "key" : "111", "payload" : 100, "type" : "bar", "createdAt" : ISODate("2016-07-09T11:59:19.000Z") }
{ "_id" : "3", "key" : "222", "payload" : 100, "type" : "foo", "createdAt" : ISODate("2016-07-10T11:59:20.000Z") }
{ "_id" : "4", "key" : "222", "payload" : 100, "type" : "foo", "createdAt" : ISODate("2016-07-11T11:59:21.000Z") }
{ "_id" : "5", "key" : "222", "payload" : 100, "type" : "bar", "createdAt" : ISODate("2016-07-12T11:59:22.000Z") }
I have to group them by key:
db.items.aggregate([{$group: {_id: {key: '$key'}}}])
which produces the following set:
{ "_id" : { "key" : "111" } }
{ "_id" : { "key" : "222" } }
And after that I have to retrieve the most recent values of foo and bar for each group record.
My question is: what is the optimal way to do this? I could iterate the items in JavaScript and perform an additional round trip to the DB for each group result, but I'm not sure that's time-efficient.
I am not sure about the most optimal way, but an easy one is to expand your aggregation pipeline like this:
db.items.aggregate([
{
$group:
{
_id: { key: "$key", type: "$type" },
last: { $max: "$createdAt" }
}
},
{
$group:
{
_id: { key: "$_id.key" },
mostRecent: { $push: { type: "$_id.type", createdAt: "$last" } }
}
}
]);
which for your collection of documents will result in:
{ "_id" : { "key" : "222" }, "mostRecent" : [ { "type" : "bar", "createdAt" : ISODate("2016-07-12T11:59:22Z") }, { "type" : "foo", "createdAt" : ISODate("2016-07-11T11:59:21Z") } ] }
{ "_id" : { "key" : "111" }, "mostRecent" : [ { "type" : "bar", "createdAt" : ISODate("2016-07-09T11:59:19Z") }, { "type" : "foo", "createdAt" : ISODate("2016-07-08T11:59:18Z") } ] }

mongodb multiple aggregations in single operation

I have an item collection with following documents.
{ "item" : "i1", "category" : "c1", "brand" : "b1" }
{ "item" : "i2", "category" : "c2", "brand" : "b1" }
{ "item" : "i3", "category" : "c1", "brand" : "b2" }
{ "item" : "i4", "category" : "c2", "brand" : "b1" }
{ "item" : "i5", "category" : "c1", "brand" : "b2" }
I want separate aggregation results: count by category and count by brand. Please note, it is not count by (category, brand).
I am able to do this with map-reduce using the following code:
map = function() {
  emit({ type: "category", category: this.category }, 1);
  emit({ type: "brand", brand: this.brand }, 1);
};
reduce = function(key, values) {
  return Array.sum(values);
};
db.item.mapReduce(map, reduce, { out: { inline: 1 } })
And the result is
{
"results" : [
{
"_id" : {
"type" : "brand",
"brand" : "b1"
},
"value" : 3
},
{
"_id" : {
"type" : "brand",
"brand" : "b2"
},
"value" : 2
},
{
"_id" : {
"type" : "category",
"category" : "c1"
},
"value" : 3
},
{
"_id" : {
"type" : "category",
"category" : "c2"
},
"value" : 2
}
],
"timeMillis" : 21,
"counts" : {
"input" : 5,
"emit" : 10,
"reduce" : 4,
"output" : 4
},
"ok" : 1,
}
I can get the same results by firing two separate aggregation commands, as below:
db.item.aggregate({$group:{_id:"$category",count:{$sum:1}}})
db.item.aggregate({$group:{_id:"$brand",count:{$sum:1}}})
Is there any way I can do the same with the aggregation framework in a single aggregation command?
I have simplified my case here; in reality I need this grouping on fields in an array of subdocuments. Assume the above is the structure after I do an unwind.
It is a real-time query (someone is waiting for the response), though on a smaller dataset, so execution time is important.
I am using MongoDB 2.4.
Starting in Mongo 3.4, the $facet aggregation stage greatly simplifies this type of use case by processing multiple aggregation pipelines within a single stage on the same set of input documents:
// { "item" : "i1", "category" : "c1", "brand" : "b1" }
// { "item" : "i2", "category" : "c2", "brand" : "b1" }
// { "item" : "i3", "category" : "c1", "brand" : "b2" }
// { "item" : "i4", "category" : "c2", "brand" : "b1" }
// { "item" : "i5", "category" : "c1", "brand" : "b2" }
db.collection.aggregate(
{ $facet: {
categories: [{ $group: { _id: "$category", count: { "$sum": 1 } } }],
brands: [{ $group: { _id: "$brand", count: { "$sum": 1 } } }]
}}
)
// {
// "categories" : [
// { "_id" : "c1", "count" : 3 },
// { "_id" : "c2", "count" : 2 }
// ],
// "brands" : [
// { "_id" : "b1", "count" : 3 },
// { "_id" : "b2", "count" : 2 }
// ]
// }
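As a side note, on 3.4+ each facet's $group/$sum pair can also be written with the $sortByCount shorthand, which additionally sorts each facet by descending count. A sketch:
db.collection.aggregate(
  { $facet: {
      categories: [{ $sortByCount: "$category" }],  // groups and counts by category
      brands: [{ $sortByCount: "$brand" }]          // groups and counts by brand
  }}
)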
Over a large data set I would say that your current mapReduce approach would be the best one, because the aggregation technique for this would not work well with large data. But possibly over a reasonably small size it might just be what you need:
db.items.aggregate([
{ "$group": {
"_id": null,
"categories": { "$push": "$category" },
"brands": { "$push": "$brand" }
}},
{ "$project": {
"_id": {
"categories": "$categories",
"brands": "$brands"
},
"categories": 1
}},
{ "$unwind": "$categories" },
{ "$group": {
"_id": {
"brands": "$_id.brands",
"category": "$categories"
},
"count": { "$sum": 1 }
}},
{ "$group": {
"_id": "$_id.brands",
"categories": { "$push": {
"category": "$_id.category",
"count": "$count"
}},
}},
{ "$project": {
"_id": "$categories",
"brands": "$_id"
}},
{ "$unwind": "$brands" },
{ "$group": {
"_id": {
"categories": "$_id",
"brand": "$brands"
},
"count": { "$sum": 1 }
}},
{ "$group": {
"_id": null,
"categories": { "$first": "$_id.categories" },
"brands": { "$push": {
"brand": "$_id.brand",
"count": "$count"
}}
}}
])
This is not exactly the same as the mapReduce output; you could throw in some more stages to change the output format, but it should be usable:
{
"_id" : null,
"categories" : [
{
"category" : "c2",
"count" : 2
},
{
"category" : "c1",
"count" : 3
}
],
"brands" : [
{
"brand" : "b2",
"count" : 2
},
{
"brand" : "b1",
"count" : 3
}
]
}
As you can see, this involves a fair bit of shuffling between arrays in order to group each set of either "category" or "brand" within the same pipeline process. Again I will say, this will not do well for large data, but for something like "items in an order" it would probably do nicely.
Of course, as you say, you have simplified somewhat, so the first grouping key on null will either be something else, or be narrowed down to that null case by an earlier $match stage, which is probably what you want to do.
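For instance, a hedged sketch of that earlier $match; the order_id field is hypothetical, standing in for whatever narrows your real documents down to one order's items:
db.items.aggregate([
  { "$match": { "order_id": "some-order" } },   // hypothetical filter field
  { "$group": {
      "_id": null,
      "categories": { "$push": "$category" },
      "brands": { "$push": "$brand" }
  }}
  // ...remaining stages as in the pipeline above
])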