Count the subdocument fields and total amount in MongoDB

I have a collection with the documents below:
{
    "_id" : ObjectId("54acfb67a81bf9509246ed81"),
    "Billno" : 1234,
    "details" : [
        {
            "itemcode" : 12,
            "itemname" : "Paste100g",
            "qty" : 2,
            "price" : 50
        },
        {
            "itemcode" : 14,
            "itemname" : "Paste30g",
            "qty" : 4,
            "price" : 70
        },
        {
            "itemcode" : 12,
            "itemname" : "Paste100g",
            "qty" : 4,
            "price" : 100
        }
    ]
}
{
    "_id" : ObjectId("54acff86a81bf9509246ed82"),
    "Billno" : 1237,
    "details" : [
        {
            "itemcode" : 12,
            "itemname" : "Paste100g",
            "qty" : 3,
            "price" : 75
        },
        {
            "itemcode" : 19,
            "itemname" : "dates100g",
            "qty" : 4,
            "price" : 170
        },
        {
            "itemcode" : 22,
            "itemname" : "dates200g",
            "qty" : 2,
            "price" : 160
        }
    ]
}
I need to display the output below. Please help.
Required Output:
---------------------------------------------------------------------------------
itemcode itemname totalprice totalqty
---------------------------------------------------------------------------------
12 Paste100g 225 9
14 Paste30g 70 4
19 dates100g 170 4
22 dates200g 160 2

The MongoDB aggregation pipeline is available to solve your problem. You get the details out of the array by processing with $unwind, and then use $group to "sum" the totals:
db.collection.aggregate([
    // Unwind the array to de-normalize as documents
    { "$unwind": "$details" },
    // Group on the key you want and accumulate the other values
    { "$group": {
        "_id": "$details.itemcode",
        "itemname": { "$first": "$details.itemname" },
        "totalprice": { "$sum": "$details.price" },
        "totalqty": { "$sum": "$details.qty" }
    }}
])
Ideally you want a $match stage in there to filter out any irrelevant data first. This is basically a standard MongoDB query and takes all the same arguments and operators.
Most of this is really simple. The $unwind is sort of like a "JOIN" in SQL, except that in an embedded structure the "join" is already made, so you are just "de-normalizing", much as a join would do between "one to many" table relationships, but within the document itself. It basically "repeats" the "parent" parts of the document for each member of the array, emitting each combination as a new document.
Then the $group works off a key, as in "GROUP BY", where the "key" is the _id value. Everything there is "distinct", and all other values are gathered by "grouping operators".
This is where operators like $first come in. As described on the manual page, this takes the "first" value from the "grouping boundary" set by the "key" mentioned earlier. You want this because all values of this field are likely to be the same, so it is a logical choice to just pick the first match.
Finally there is the $sum grouping operator, which does what you would expect: all supplied values under the "key" are added or "summed" together to produce a total, just like SQL SUM().
Also note that all the $-prefixed names are how the aggregation framework refers to "field/property" values within the document currently being processed. "Dot notation" is used to reference the embedded "fields/properties" nested within a parent property name.
It is useful to learn aggregation in MongoDB. It is to general queries what anything beyond a basic "SELECT" statement is to SQL. Not just for "grouping" but for other manipulation as well.
Read through the documentation of all the aggregation operators, and also take a look at the SQL to Aggregation Mapping in the documentation as a general guide if you have some familiarity with SQL to begin with. It helps explain concepts and shows some things that can be done.
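If you want to sanity-check what the pipeline computes, the same unwind-and-group logic can be sketched in plain JavaScript over the sample bills. This is a client-side illustration only; the real pipeline does this work on the server.

```javascript
// Plain-JavaScript sketch of the $unwind + $group stages over the sample bills.
const bills = [
  { Billno: 1234, details: [
    { itemcode: 12, itemname: "Paste100g", qty: 2, price: 50 },
    { itemcode: 14, itemname: "Paste30g",  qty: 4, price: 70 },
    { itemcode: 12, itemname: "Paste100g", qty: 4, price: 100 }
  ]},
  { Billno: 1237, details: [
    { itemcode: 12, itemname: "Paste100g", qty: 3, price: 75 },
    { itemcode: 19, itemname: "dates100g", qty: 4, price: 170 },
    { itemcode: 22, itemname: "dates200g", qty: 2, price: 160 }
  ]}
];

const groups = {};
for (const bill of bills) {
  for (const d of bill.details) {      // "$unwind": one pass per array member
    let g = groups[d.itemcode];
    if (!g) {
      g = groups[d.itemcode] = {       // "$group" key: "$details.itemcode"
        _id: d.itemcode,
        itemname: d.itemname,          // "$first": keep the first name seen
        totalprice: 0,
        totalqty: 0
      };
    }
    g.totalprice += d.price;           // "$sum": "$details.price"
    g.totalqty += d.qty;               // "$sum": "$details.qty"
  }
}
console.log(Object.values(groups));
// itemcode 12 accumulates totalprice 225 and totalqty 9, matching the required output
```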

Related

Complex Sort on multiple very large MongoDB Collections

I have a mongodb database with currently about 30 collections ranging from 1.5gb to 2.5gb and I need to reformat and sort the data into nested groups and dump them to a new collection. This database will eventually have about 2000 collections of the same type and formatting of data.
Data is currently available like this:
{
    "_id" : ObjectId("598392d6bab47ec75fd6aea6"),
    "orderid" : NumberLong("4379116282"),
    "regionid" : 10000068,
    "systemid" : 30045305,
    "stationid" : 60015036,
    "typeid" : 7489,
    "bid" : 0,
    "price" : 119999.91,
    "minvolume" : 1,
    "volremain" : 6,
    "volenter" : 8,
    "issued" : "2015-12-31 09:12:29",
    "duration" : "14 days, 0:00:00",
    "range" : 65535,
    "reportedby" : 0,
    "reportedtime" : "2016-01-01 00:22:42.997926"
}
{...}
{...}
I need to group these by regionid > typeid > bid like this:
{
    "regionid": 10000176,
    "orders": [
        {
            "typeid": 34,
            "buy": [document, document, document, ...],
            "sell": [document, document, document, ...]
        },
        {
            "typeid": 714,
            "buy": [document, document, document, ...],
            "sell": [document, document, document, ...]
        }
    ]
}
Here's a more verbose sample of my ideal output format: https://gist.github.com/BatBrain/cd3426c29ce8ca8152efd1fa06ca1392
I have been trying to use the db.collection.aggregate() to do this, running this command as an initial test step:
db.day_2016_01_01.aggregate([
    { $group : { _id : "$regionid", entries : { $push: "$$ROOT" } } },
    { $out : "test_group" }
], { allowDiskUse: true, cursor: {} })
But I have been getting this message, "errmsg" : "BufBuilder attempted to grow() to 134217728 bytes, past the 64MB limit."
I tried looking into how to use the cursor object, but I'm pretty confused about how to apply it in this situation, or even if that is a viable option. Any advice or solutions would be great.

Extract two sub array values in mongodb by $elemMatch

Aggregation with $unwind and $group is not my solution, as they make the query very slow; therefore I am looking to get my records with the db.collection.find() method.
The problem is that I need more than one value from the sub-array. For example, from the following document I want to get the "type" : "exam" and "type" : "quiz" elements.
{
    "_id" : 22,
    "scores" : [
        {
            "type" : "exam",
            "score" : 75.04996547553947
        },
        {
            "type" : "quiz",
            "score" : 10.23046475899236
        },
        {
            "type" : "homework",
            "score" : 96.72520512117761
        },
        {
            "type" : "homework",
            "score" : 6.488940333376703
        }
    ]
}
I am looking for something like
db.students.find(
    // Search criteria
    { '_id': 22 },
    // Projection
    { _id: 1, scores: { $elemMatch: { type: 'exam', type: 'quiz' } } }
)
The result should be like
{ "_id": 22, "scores" : [ { "type" : "exam", "type" : "quiz" } ] }
But this overrides type: 'exam' and returns only type: 'quiz'. Does anybody have any idea how to do this with db.find()?
This is not possible directly using find and $elemMatch, because of the following limitation of $elemMatch and of Mongo array field projection.
The $elemMatch operator limits the contents of a field from the query results to contain only the first element matching the $elemMatch condition. (ref: $elemMatch)
And the Mongo array field limitations are as below:
Only one positional $ operator may appear in the projection document.
The query document should only contain a single condition on the array field being projected. Multiple conditions may override each other internally and lead to undefined behavior. (ref: Mongo array field limitations)
So you can either use the following to find only exam (or only quiz):
db.collectionName.find(
    { "_id": 22, "scores": { "$elemMatch": { "type": "exam" } } },
    { "scores.$.type": 1 }
).pretty()
which shows only the exam element of the scores array.
Otherwise you should go through aggregation.
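As a sketch of what such an aggregation (an $unwind/$match pipeline, or a $filter projection on newer servers) would return, here is the equivalent array filtering in plain JavaScript over the sample document. This is an illustration of the intended result, not a server-side query:

```javascript
// Plain-JavaScript sketch: keep only the "exam" and "quiz" elements
// of the scores array, preserving both whole elements (type and score).
const student = {
  _id: 22,
  scores: [
    { type: "exam",     score: 75.04996547553947 },
    { type: "quiz",     score: 10.23046475899236 },
    { type: "homework", score: 96.72520512117761 },
    { type: "homework", score: 6.488940333376703 }
  ]
};

const wanted = ["exam", "quiz"];
const result = {
  _id: student._id,
  scores: student.scores.filter(s => wanted.includes(s.type))
};
console.log(result);
// result.scores holds the exam element and the quiz element, in order
```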

Mongo aggregation on array elements

I have a mongo document like
{ "_id" : 12, "location" : [ "Kannur","Hyderabad","Chennai","Bengaluru"] }
{ "_id" : 13, "location" : [ "Hyderabad","Chennai","Mysore","Ballary"] }
From this how can I get the location aggregation (distinct area count).
some thing like
Hyderabad 2,
Kannur 1,
Chennai 2,
Bengaluru 1,
Mysore 1,
Ballary 1
Using aggregation you cannot get exactly the output that you want. One of the limitations of the aggregation pipeline is its inability to transform values into keys in the output document.
For example, Kannur is one of the values of the location field in the input documents. In your desired output structure it needs to be a key ("Kannur": 1). That is not possible using aggregation, though it can be achieved using map-reduce. You can, however, get a very closely related and useful structure using aggregation:
Unwind the location array.
Group by the location field, getting the count of individual locations using the $sum operator.
Group all the documents once again to get a consolidated array of results.
Code:
db.collection.aggregate([
    { $unwind: "$location" },
    { $group: { "_id": "$location", "count": { $sum: 1 } } },
    { $group: {
        "_id": null,
        "location_details": { $push: { "location": "$_id", "count": "$count" } }
    }},
    { $project: { "_id": 0, "location_details": 1 } }
])
Sample output:
{
    "location_details" : [
        { "location" : "Ballary", "count" : 1 },
        { "location" : "Mysore", "count" : 1 },
        { "location" : "Bengaluru", "count" : 1 },
        { "location" : "Chennai", "count" : 2 },
        { "location" : "Hyderabad", "count" : 2 },
        { "location" : "Kannur", "count" : 1 }
    ]
}
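The counting steps above can be mirrored in plain JavaScript to verify the expected per-location counts (a client-side sketch; the pipeline itself runs on the server):

```javascript
// Plain-JavaScript equivalent of the $unwind/$group counting
// over the two sample documents.
const docs = [
  { _id: 12, location: ["Kannur", "Hyderabad", "Chennai", "Bengaluru"] },
  { _id: 13, location: ["Hyderabad", "Chennai", "Mysore", "Ballary"] }
];

const counts = {};
for (const doc of docs) {
  for (const loc of doc.location) {        // $unwind: one pass per array member
    counts[loc] = (counts[loc] || 0) + 1;  // $group with { $sum: 1 }
  }
}
console.log(counts);
// Hyderabad and Chennai each appear twice; the rest once
```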

Update an array element with inc mongo update

Hi all, I have this data in Mongo:
{
    "articleId" : [
        {
            "articleId" : "9514666",
            "articleCount" : 1
        }
    ],
    "count" : NumberLong(1),
    "timeStamp" : NumberLong("1416634200000"),
    "interval" : 1,
    "tags" : "famous"
}
I want to update it using this new data:
{
    "articleId" : [
        {
            "articleId" : "9514666",
            "articleCount" : 3
        },
        {
            "articleId" : "9514667",
            "articleCount" : 3
        }
    ],
    "count" : NumberLong(6),
    "timeStamp" : NumberLong("1416634200000"),
    "interval" : 1,
    "tags" : "famous"
}
What I need in the output is:
{
    "articleId" : [
        {
            "articleId" : "9514666",
            "articleCount" : 4
        },
        {
            "articleId" : "9514667",
            "articleCount" : 3
        }
    ],
    "count" : NumberLong(7),
    "timeStamp" : NumberLong("1416634200000"),
    "interval" : 1,
    "tags" : "famous"
}
Could you please suggest how I can achieve this using an update operation?
My update query will have the tags field as the query parameter.
You'll never get this in a single query operation, as presently there is no way for MongoDB updates to refer to the existing values of fields. The exception of course is operators such as $inc, but there is a bit more going on here than that alone can handle.
You need multiple updates, but there is a consistent model to follow, and the Bulk Operations API can at least help with sending all of those updates in a single request:
var updoc = {
    "articleId" : [
        {
            "articleId" : "9514666",
            "articleCount" : 3
        },
        {
            "articleId" : "9514667",
            "articleCount" : 3
        }
    ],
    "count" : NumberLong(6),
    "timeStamp" : NumberLong("1416634200000"),
    "interval" : 1,
    "tags" : "famous"
};

var bulk = db.collection.initializeOrderedBulkOp();

// Inspect the document variable for update.
// For each array entry:
updoc.articleId.forEach(function(doc) {
    // First try to match the document and array entry to update
    bulk.find({
        "tags": updoc.tags,
        "articleId.articleId": doc.articleId
    }).update({
        "$inc": { "articleId.$.articleCount": doc.articleCount }
    });
    // Then try to "push" the array entry where it does not exist
    bulk.find({
        "tags": updoc.tags,
        "articleId.articleId": { "$ne": doc.articleId }
    }).update({
        "$push": { "articleId": doc }
    });
});

// Finally increment the overall count
bulk.find({ "tags": updoc.tags }).update({
    "$inc": { "count": updoc.count }
});

bulk.execute();
Now that is not "truly" atomic, and there is a very small chance that the modified document could be read without all of the modifications in place. But since the Bulk API sends these over to the server to process all at once, this is a lot better than individual operations between the client and server, where the chance of the document being read in an inconsistent state would be much higher.
So for each array member in the document to "merge", you both try to $inc where the member is matched in the query and $push a new member where it was not. Finally you just $inc again for the total count on the merged document.
For this sample that is a total of 5 update operations, but all sent in one package. Note that the response will confirm that only 3 operations were applied here, as 2 of the operations would not actually match a document due to the conditions specified:
BulkWriteResult({
    "writeErrors" : [ ],
    "writeConcernErrors" : [ ],
    "nInserted" : 0,
    "nUpserted" : 0,
    "nMatched" : 3,
    "nModified" : 3,
    "nRemoved" : 0,
    "upserted" : [ ]
})
So that is one way to handle it. Another may be to just submit each document individually and then periodically "merge" the data into grouped documents using the aggregation framework. It depends on how "real time" you want to do this. The above is as close to "real time" updates as you can generally get.
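To make the merge logic concrete, here is a plain-JavaScript sketch of what those bulk operations do to the stored document. This runs client-side purely as an illustration; the actual $inc and $push updates run on the server:

```javascript
// Plain-JavaScript sketch of the bulk merge: $inc matching array members,
// $push new ones, then $inc the overall count.
const stored = {
  articleId: [ { articleId: "9514666", articleCount: 1 } ],
  count: 1, timeStamp: 1416634200000, interval: 1, tags: "famous"
};
const updoc = {
  articleId: [
    { articleId: "9514666", articleCount: 3 },
    { articleId: "9514667", articleCount: 3 }
  ],
  count: 6, timeStamp: 1416634200000, interval: 1, tags: "famous"
};

for (const doc of updoc.articleId) {
  const existing = stored.articleId.find(a => a.articleId === doc.articleId);
  if (existing) {
    existing.articleCount += doc.articleCount;   // the "$inc" branch
  } else {
    stored.articleId.push({ ...doc });           // the "$push" branch
  }
}
stored.count += updoc.count;                     // the final "$inc" on count
console.log(stored);
// "9514666" ends at articleCount 4, "9514667" is added with 3, count becomes 7
```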
Delayed Processing
As mentioned, there is another approach to this where you can consider a "delayed" processing of this "merging" where you do not need the data to be updated in real time. The approach considers the use of the aggregation framework to perform the "merge", and you could even use the aggregation as the general query for the data, but you probably want to accumulate in a collection instead.
The basic premise of the aggregation is that you store each "change" document as a separate document in the collection, rather than merge in real time. So two documents in the collection would be represented like this:
{
    "_id" : ObjectId("548fe1c78ad2c25d4c952eee"),
    "articleId" : [
        {
            "articleId" : "9514666",
            "articleCount" : 1
        }
    ],
    "count" : NumberLong(1),
    "timeStamp" : NumberLong("1416634200000"),
    "interval" : 1,
    "tags" : "famous"
},
{
    "_id" : ObjectId("548fe2286032bac607405eb3"),
    "articleId" : [
        {
            "articleId" : "9514666",
            "articleCount" : 3
        },
        {
            "articleId" : "9514667",
            "articleCount" : 3
        }
    ],
    "count" : NumberLong(6),
    "timeStamp" : NumberLong("1416634200000"),
    "interval" : 1,
    "tags" : "famous"
}
In order to "merge" these results for a given "tags" value, you want an aggregation pipeline like this:
db.collection.aggregate([
    // Unwind the array members to de-normalize
    { "$unwind": "$articleId" },
    // Group the elements by "tags" value and "articleId"
    { "$group": {
        "_id": {
            "tags": "$tags",
            "articleId": "$articleId.articleId"
        },
        "articleCount": { "$sum": "$articleId.articleCount" },
        "timeStamp": { "$max": "$timeStamp" },
        "interval": { "$max": "$interval" }
    }},
    // Now group again, creating the array of "merged" items
    { "$group": {
        "_id": "$tags",
        "articleId": {
            "$push": {
                "articleId": "$_id.articleId",
                "articleCount": "$articleCount"
            }
        },
        "count": { "$sum": "$articleCount" },
        "timeStamp": { "$max": "$timeStamp" },
        "interval": { "$max": "$interval" }
    }}
])
So using "tags" and "articleId" ( the inner value ) you group the results together, taking the $sum of the "articleCount" fields where both of those fields are the same and the $max value for the rest of the fields, which makes sense.
In a second $group pass you then just break the result documents down to "tags", pushing each matching "articleId" value under that into an array. To avoid any duplication the document "count" is summed at this stage and the other values are just taken from the same groupings.
The result is the same "merged" document, which you could either use the above aggregation query to simply return your results from such a collection, or use those results to either just create a new collection for the merged documents ( see the $out operator for one option ) or use a similar process to the first example to "merge" these "merged" results with an existing "merged" collection.
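The two $group stages can be mirrored in plain JavaScript as a fold over the change documents, which shows how the merged totals come out (again only a client-side sketch of what the pipeline computes on the server):

```javascript
// Plain-JavaScript fold equivalent to the two $group stages:
// sum articleCount per (tags, articleId), then rebuild the merged document.
const changes = [
  { articleId: [ { articleId: "9514666", articleCount: 1 } ],
    count: 1, timeStamp: 1416634200000, interval: 1, tags: "famous" },
  { articleId: [ { articleId: "9514666", articleCount: 3 },
                 { articleId: "9514667", articleCount: 3 } ],
    count: 6, timeStamp: 1416634200000, interval: 1, tags: "famous" }
];

const perItem = {};   // first $group: key is (tags, articleId)
for (const c of changes) {
  for (const a of c.articleId) {            // $unwind
    const key = c.tags + "|" + a.articleId;
    const g = perItem[key] || (perItem[key] = {
      tags: c.tags, articleId: a.articleId,
      articleCount: 0, timeStamp: 0, interval: 0
    });
    g.articleCount += a.articleCount;                  // $sum
    g.timeStamp = Math.max(g.timeStamp, c.timeStamp);  // $max
    g.interval = Math.max(g.interval, c.interval);     // $max
  }
}

const merged = {};    // second $group: key is tags alone
for (const g of Object.values(perItem)) {
  const m = merged[g.tags] || (merged[g.tags] = {
    _id: g.tags, articleId: [], count: 0, timeStamp: 0, interval: 0
  });
  m.articleId.push({ articleId: g.articleId, articleCount: g.articleCount });
  m.count += g.articleCount;                           // $sum over member counts
  m.timeStamp = Math.max(m.timeStamp, g.timeStamp);
  m.interval = Math.max(m.interval, g.interval);
}
console.log(merged.famous);
// "9514666" sums to 4, "9514667" to 3, and the document count to 7
```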
Accumulating data like this is generally a wide topic, even though it is a common use case for many. There is a reference project maintained by MongoDB solutions architecture called HVDF, or High Volume Data Feed. It is aimed at providing a framework, or at least a reference example, of handling volume feeds (for which change-document accumulation is a case) and aggregating these in a series manner for analysis.
The actual approaches depend on the overall needs of your application. Concepts such as these are employed internally by a framework like HVDF, it's just a matter of how much complexity you need and the approach that suits your application best for how you need to access the data.

Ensure Unique indexes in embedded doc in mongodb

Is there a way to make a subdocument within a list have a unique field in mongodb?
document structure:
{
    "_id" : "2013-08-13",
    "hours" : [
        {
            "hour" : "23",
            "file" : [
                {
                    "date_added" : ISODate("2014-04-03T18:54:36.400Z"),
                    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
                },
                {
                    "date_added" : ISODate("2014-04-03T18:54:36.410Z"),
                    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
                },
                {
                    "date_added" : ISODate("2014-04-03T18:54:36.402Z"),
                    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
                },
                {
                    "date_added" : ISODate("2014-04-03T18:54:36.671Z"),
                    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
                }
            ]
        }
    ]
}
I want to make sure that the document's hours.hour value has a unique item when inserted. The issue is hours is a list. Can you ensureIndex in this way?
Indexes are not the tool for ensuring uniqueness in an embedded array, rather they are used across documents to ensure that certain fields do not repeat there.
As long as you can be certain that the content you are adding does not differ from any other value in any way then you can use the $addToSet operator with update:
db.collection.update(
    { "_id": "2013-08-13", "hours.hour": "23" },
    { "$addToSet": {
        "hours.$.file": {
            "date_added" : ISODate("2014-04-03T18:54:36.671Z"),
            "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
        }
    }}
)
So that document would not be added, as there is already an element matching those exact values within the target array. If the content was different (and that means any part of the content), then a new item would be added.
For anything else you would need to maintain that manually, by loading up the document and inspecting the elements of the array: say, for a different "name" with exactly the same timestamp.
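As a sketch of that manual maintenance, assuming uniqueness should be keyed on the file name alone (a stricter rule than $addToSet's whole-element match; the field names follow the sample document), the check might look like this in plain JavaScript:

```javascript
// Plain-JavaScript sketch of the manual uniqueness check: only add the
// new file entry if no element with the same name already exists in
// that hour. (Assumes name-only uniqueness, unlike $addToSet, which
// compares the whole element including date_added.)
const hourDoc = {
  hour: "23",
  file: [
    { date_added: new Date("2014-04-03T18:54:36.400Z"),
      name: "1376434800_file_output_2014-03-10-09-27_44.csv" }
  ]
};

function addUniqueByName(hourDoc, entry) {
  const exists = hourDoc.file.some(f => f.name === entry.name);
  if (!exists) hourDoc.file.push(entry);
  return !exists;   // true when the entry was actually added
}

// Same name but a later timestamp: rejected under name-only uniqueness,
// whereas $addToSet would have added it as a distinct element.
const added = addUniqueByName(hourDoc, {
  date_added: new Date("2014-04-03T18:54:36.671Z"),
  name: "1376434800_file_output_2014-03-10-09-27_44.csv"
});
console.log(added, hourDoc.file.length);
```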
Problems with your Schema
Now that the question is answered, I want to point out the problems with your schema design.
Dates as strings are "horrible". You may think you need them but you do not. See the aggregation framework date operators for more on this.
You have nested arrays, which generally should be avoided. The general problems are shown in the documentation for the positional $ operator. That says you only get one match on position, and that is always the "top" level array. So updating beyond adding things as shown above is going to be difficult.
A better schema pattern for you is to simply do this:
{
    "date_added" : ISODate("2014-04-03T18:54:36.400Z"),
    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
    "date_added" : ISODate("2014-04-03T18:54:36.410Z"),
    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
    "date_added" : ISODate("2014-04-03T18:54:36.402Z"),
    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
    "date_added" : ISODate("2014-04-03T18:54:36.671Z"),
    "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
}
If that is in its own collection, then you can always actually use indexes to ensure uniqueness. The aggregation framework can break down the date parts and hours where needed.
Where you must have that as part of another document then try at least to avoid the nested arrays. This would be acceptable but not as flexible as separating the entries:
{
    "_id" : "2013-08-13",
    "hours" : {
        "23": [
            {
                "date_added" : ISODate("2014-04-03T18:54:36.400Z"),
                "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
            },
            {
                "date_added" : ISODate("2014-04-03T18:54:36.410Z"),
                "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
            },
            {
                "date_added" : ISODate("2014-04-03T18:54:36.402Z"),
                "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
            },
            {
                "date_added" : ISODate("2014-04-03T18:54:36.671Z"),
                "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
            }
        ]
    }
}
It depends on your intended usage, the last would not allow you to do any type of aggregation comparison across hours within a day. Not in any simple way. The former does this easily and you can still break down selections by day and hour with ease.
Then again, if you are only ever appending information, then your existing schema should be fine. But be aware of the possible issues and alternatives.