"Map Reduce" reduce function finds value undefined - mongodb

I have the following collection:
{
    "_id" : ObjectId("51f1fcc08188d3117c6da351"),
    "cust_id" : "abc123",
    "ord_date" : ISODate("2012-10-03T18:30:00Z"),
    "status" : "A",
    "price" : 25,
    "items" : [{
        "sku" : "ggg",
        "qty" : 7,
        "price" : 2.5
    }, {
        "sku" : "ppp",
        "qty" : 5,
        "price" : 2.5
    }]
}
My map function is:
var map=function(){emit(this._id,this);}
For debugging purposes I override the emit method as follows:
var emit = function (key, value) {
    print("emit");
    print("key: " + key + "value: " + tojson(value));
    reduceFunc2(key, toJson(value));
}
and the reduce function as follows:
var reduceFunc2 = function reduce(key, values) {
    var val = values;
    print("val", val);
    var items = [];
    val.items.some(function (entry) {
        print("entry is:::" + entry);
        if (entry.qty > 5 && entry.sku == 'ggg') {
            items.push(entry)
        }
    });
    val.items = items;
    return val;
}
But when I apply map as:
var myDoc = db.orders.findOne({
    _id: ObjectId("51f1fcc08188d3117c6da351")
});
map.apply(myDoc);
I get the following error:
emit key: 51f1fcc08188d3117c6da351 value:
{
    "_id":" ObjectId(\"51f1fcc08188d3117c6da351\")",
    "cust_id":"abc123",
    "ord_date":" ISODate(\"2012-10-03T18:30:00Z\")",
    "status":"A",
    "price":25,
    "items":[
        {
            "sku":"ggg",
            "qty":7,
            "price":2.5
        },
        {
            "sku":"ppp",
            "qty":5,
            "price":2.5
        }
    ]
}
value:: undefined
Tue Jul 30 12:49:22.920 JavaScript execution failed: TypeError: Cannot call method 'some' of undefined
You can see that there is an items field in the printed value, and it is an array. Even so, it throws the error "cannot call method 'some' of undefined". Can someone tell me where I am going wrong?

You have an error in your reduceFunc2 function:
var reduceFunc2 = function reduce(key, values) {
    var val = values[0]; // values is an array!!!
    // ...
}
A reduce function is meant to reduce an array of elements, all emitted with the same key, to a single document. So it accepts an array. Since you're emitting each key only once, it's an array with a single element in it.
Now you'll be able to call your MapReduce normally:
db.orders.mapReduce(map, reduceFunc2, {out: {inline: 1}});
The way you've overridden the emit function is broken, so you shouldn't use it: it passes reduceFunc2 a JSON string (via toJson) instead of an array of documents, which is why val.items is undefined.
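If you still want a debugging stub, a minimal corrected sketch (with the values[0] fix above, so reduceFunc2 receives an array of values as a real reduce would) could look like:
var emit = function (key, value) {
    print("emit");
    print("key: " + key + " value: " + tojson(value));
    reduceFunc2(key, [value]); // pass the raw document wrapped in an array; tojson is for printing only
}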
Update. Mongo may skip the reduce operation if there is only one document associated with the given key, because there is no point in reducing a single document.
The idea of MapReduce is that you map each document into key-value pairs to be reduced in the next step. If there is more than one value associated with a given key, Mongo runs a reduce operation to reduce them to a single document. Mongo expects the reduce function to return a reduced document in the same format as the elements that were emitted. That is why Mongo may run the reduce operation any number of times for each key (up to the number of emits), and there is no guarantee that the reduce operation will be called at all if there is nothing to reduce (e.g. if there is only one element).
So, it's best to move map logic to the proper place.
Update 2. Anyway, why are you using MapReduce here? You can just query for the documents you need:
db.orders.find({}, {
    items: {
        $elemMatch: {
            qty: {$gt: 5},
            sku: 'ggg'
        }
    }
})
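With the sample document above, the $elemMatch projection should return only the first matching array element, something like:
{
    "_id" : ObjectId("51f1fcc08188d3117c6da351"),
    "items" : [ { "sku" : "ggg", "qty" : 7, "price" : 2.5 } ]
}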
Update 3. If you really want to do it with MapReduce, try this:
db.runCommand({
    mapreduce: 'orders',
    query: {
        items: {
            $elemMatch: {
                qty: {$gt: 5},
                sku: 'ggg'
            }
        }
    },
    map: function map() {
        this.items = this.items.filter(function (entry) {
            return (entry.qty > 5 && entry.sku == 'ggg')
        });
        emit(this._id, this);
    },
    reduce: function reduce(key, values) {
        return values[0];
    },
    verbose: true,
    out: {
        merge: 'map_reduce'
    }
})
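Since each _id is emitted exactly once here, the reduce may never even run; the map_reduce output collection should end up holding the order with only the matching item kept, along the lines of:
{
    "_id" : ObjectId("51f1fcc08188d3117c6da351"),
    "value" : {
        "_id" : ObjectId("51f1fcc08188d3117c6da351"),
        "cust_id" : "abc123",
        "ord_date" : ISODate("2012-10-03T18:30:00Z"),
        "status" : "A",
        "price" : 25,
        "items" : [ { "sku" : "ggg", "qty" : 7, "price" : 2.5 } ]
    }
}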


How do you find aggregate in mongo array with size greater than two?

Using mongo 2.6. See a few documents below:
nms:PRIMARY> db.checkpointstest4.find()
{ "_id" : 1, "cpu" : [ 100, 20, 60 ], "hostname" : "host1" }
{ "_id" : 2, "cpu" : [ 40, 30, 80 ], "hostname" : "host1" }
I need to find the average cpu (per cpu array index) per host, i.e. based on the two documents above, the average for host1 will be [70, 25, 70] because the average of cpu[0] is (100+40)/2 = 70, etc.
I am lost when I have 3 array elements instead of two; see mongodb aggregate average of array elements.
Finally below worked for me:
var map = function () {
    for (var idx = 0; idx < this.cpu.length; idx++) {
        var mapped = {
            idx: idx,
            val: this.cpu[idx]
        };
        emit(this.hostname, {"cpu": mapped});
    }
};
var reduce = function (key, values) {
    var cpu = [], sum = [0, 0, 0], cnt = [0, 0, 0]; // declare all three locally to avoid a global leak
    values.forEach(function (value) {
        sum[value.cpu.idx] += value.cpu.val;
        cnt[value.cpu.idx] += 1;
        cpu[value.cpu.idx] = sum[value.cpu.idx] / cnt[value.cpu.idx];
    });
    return {"cpu": cpu};
};
db.checkpointstest4.mapReduce(map, reduce, {out: "checkpointstest4_result"});
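For the two sample documents above, the checkpointstest4_result collection should end up with something like:
{ "_id" : "host1", "value" : { "cpu" : [ 70, 25, 70 ] } }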
In MongoDB 3.2, where includeArrayIndex showed up, you can do this:
db.test.aggregate(
    {$unwind: {path: "$cpu", includeArrayIndex: "index"}},
    {$group: {_id: {h: "$hostname", i: "$index"}, cpu: {$avg: "$cpu"}}},
    {$sort: {"_id.i": 1}},
    {$group: {_id: "$_id.h", cpu: {$push: "$cpu"}}}
)
The same pipeline, annotated:
// Make a row for each array element with an index field added.
{$unwind: {path:"$cpu", includeArrayIndex:"index"}},
// Group by hostname+index, calculate average for each group.
{$group: {_id:{h:"$hostname",i:"$index"}, cpu:{$avg:"$cpu"}}},
// Sort by index (to get the array in the next step sorted correctly)
{$sort:{"_id.i":1}},
// Group by host, pushing the averages into an array in order.
{$group:{_id:"$_id.h", cpu:{$push:"$cpu"}}}
Upgrading would be your best option, as mentioned, with includeArrayIndex available to $unwind from MongoDB 3.2 onwards.
If you cannot do that, then you can always process with mapReduce instead:
db.checkpointstest4.mapReduce(
    function() {
        var mapped = this.cpu.map(function(val) {
            return { "val": val, "cnt": 1 };
        });
        emit(this.hostname, { "cpu": mapped });
    },
    function(key, values) {
        var cpu = [];
        values.forEach(function(value) {
            value.cpu.forEach(function(item, idx) {
                if (cpu[idx] == undefined)
                    cpu[idx] = { "val": 0, "cnt": 0 };
                cpu[idx].val += item.val;
                cpu[idx].cnt += item.cnt;
            });
        });
        return { "cpu": cpu };
    },
    {
        "out": { "inline": 1 },
        "finalize": function(key, value) {
            return {
                "cpu": value.cpu.map(function(cpu) {
                    return cpu.val / cpu.cnt;
                })
            };
        }
    }
)
So the steps there are: in the "mapper" function, transform the array content into an array of objects containing the "val" from the element and a "cnt" of 1 for later reference as input to the "reduce" function. This needs to be consistent with how the reducer will work with the data, and it is necessary in order to get the overall counts needed for the average.
In the "reducer" itself you are basically summing the array contents for each position, for both the "val" and the "cnt". This is important because the "reduce" function can be called multiple times in the overall reduction process, feeding its output back in as "input" in a subsequent call. That is why both mapper and reducer work with the same format.
With the final reduced results, the finalize function is called to simply look at each summed "value" and "count" and divide by the count to return an average.
Mileage may vary on whether modern aggregation pipeline processing or this mapReduce process performs best, mostly depending on the data. Using $unwind in the prescribed way will certainly increase the number of documents to be analyzed and thus produce overhead. On the other hand, JavaScript processing, as opposed to the aggregation framework's native operators, will generally be slower, but the document processing overhead here is smaller since the arrays are kept intact.
The advice I would give is to use this if upgrading to 3.2 is not an option; if upgrading is an option, then at least benchmark the two on your data and expected growth to see which works best for you.
Returns
{
    "results" : [
        {
            "_id" : "host1",
            "value" : {
                "cpu" : [ 70, 25, 70 ]
            }
        }
    ],
    "timeMillis" : 38,
    "counts" : {
        "input" : 2,
        "emit" : 2,
        "reduce" : 1,
        "output" : 1
    },
    "ok" : 1
}

mongodb delete nested object without knowledge of object nodes

For the below document, I am trying to delete the node which contains id = 123
{
    '_id': "1234567890",
    "image" : {
        "unknown-node-1" : {
            "id" : 123
        },
        "unknown-node-2" : {
            "id" : 124
        }
    }
}
Result should be as below.
{
    '_id': "1234567890",
    "image" : {
        "unknown-node-2" : {
            "id" : 124
        }
    }
}
The query below achieves the result, but I have to know "unknown-node-1" in advance. How can I achieve the result without pre-knowledge of the node, when the only info I have is image.*.id = 123 (* means an unknown node)? Is it possible in mongo, or should I do this find in my app code?
db.test.update({'_id': "1234567890"}, {$unset: {'image.unknown-node-1': ""}})
Faiz,
There is no operator to help match and project a single key-value pair without knowing the key. You'll have to write post-processing code to scan each of the documents to find the node with the id and then perform your removal.
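A minimal shell sketch of that post-processing, assuming the collection is named test as in your update statement:
db.test.find({ "_id": "1234567890" }).forEach(function (doc) {
    for (var key in doc.image) {
        // scan the subdocument for the node holding the target id
        if (doc.image[key].id === 123) {
            var update = { "$unset": {} };
            update["$unset"]["image." + key] = "";
            db.test.update({ "_id": doc._id }, update);
        }
    }
});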
If you have the liberty of changing your schema, you'll have more flexibility. With a document design like this:
{
    '_id': "1234567890",
    "image" : [
        {"id" : 123, "name" : "unknown-node-1"},
        {"id" : 124, "name" : "unknown-node-2"},
        {"id" : 125, "name" : "unknown-node-3"}
    ]
}
You could remove documents from the array like this:
db.collectionName.update(
{'_id': "1234567890"},
{ $pull: { image: { id: 123} } }
)
This would result in:
{
    '_id': "1234567890",
    "image" : [
        {"id" : 124, "name" : "unknown-node-2"},
        {"id" : 125, "name" : "unknown-node-3"}
    ]
}
With your current schema, you will need a mechanism to get a list of the dynamic keys that you need to assemble the query before doing the update and one way of doing this would be with MapReduce. Take for instance the following map-reduce operation which will populate a separate collection with all the keys as the _id values:
mr = db.runCommand({
    "mapreduce": "test",
    "map" : function() {
        for (var key in this.image) { emit(key, null); }
    },
    "reduce" : function(key, stuff) { return null; },
    "out": "test_keys"
})
To get a list of all the dynamic keys, run distinct on the resulting collection:
> db[mr.result].distinct("_id")
[ "unknown-node-1", "unknown-node-2" ]
Now given the list above, you can assemble your query by creating an object that will have its properties set within a loop. Normally if you knew the keys beforehand, your query will have this structure:
var query = {
        "image.unknown-node-1.id": 123
    },
    update = {
        "$unset": {
            "image.unknown-node-1": ""
        }
    };
db.test.update(query, update);
But since the nodes are dynamic, you will have to iterate the list returned from the mapReduce operation and, for each element, create the query and update parameters as above to update the collection. The list could be huge, so for maximum efficiency, and if your MongoDB server is 2.6 or newer, it would be better to take advantage of the Bulk API for write commands, which allows the execution of bulk update operations. These are simply abstractions on top of the server that make it easy to build bulk operations and thus get performance gains with your update over large collections. These bulk operations come mainly in two flavours:
Ordered bulk operations. These operations execute all the operations in order and error out on the first write error.
Unordered bulk operations. These operations execute all the operations in parallel and aggregate all the errors. Unordered bulk operations do not guarantee order of execution.
Note, for servers older than 2.6 the API will downconvert the operations. However, it's not possible to downconvert 100%, so there might be some edge cases where it cannot correctly report the right numbers.
In your case, you could implement the Bulk API update operation like this:
mr = db.runCommand({
    "mapreduce": "test",
    "map" : function() {
        for (var key in this.image) { emit(key, null); }
    },
    "reduce" : function(key, stuff) { return null; },
    "out": "test_keys"
})
// Get the dynamic keys
var dynamic_keys = db[mr.result].distinct("_id");
// Get the collection and bulk api artefacts
var bulk = db.test.initializeUnorderedBulkOp(), // Initialize the Unordered Batch
    counter = 0;
// Queue an update operation for each dynamic key
dynamic_keys.forEach(function(key) {
    // Create the query and update documents
    var query = {},
        update = {
            "$unset": {}
        };
    query["image." + key + ".id"] = 123;
    update["$unset"]["image." + key] = "";
    bulk.find(query).update(update);
    counter++;
    if (counter % 100 == 0) {
        // Execute and re-initialise the batch operation every 100 statements
        bulk.execute();
        bulk = db.test.initializeUnorderedBulkOp();
    }
});
if (counter % 100 != 0) { bulk.execute(); }

How can I remove empty strings from a mongodb collection?

I have a "mongodb colllenctions" and I'd like to remove the "empty strings"with keys from it.
From this:
{
    "_id" : ObjectId("56323d975134a77adac312c5"),
    "year" : "15",
    "year_comment" : ""
}
{
    "_id" : ObjectId("56323d975134a77adac312c5"),
    "year" : "",
    "year_comment" : "asd"
}
I'd like to gain this result:
{
    "_id" : ObjectId("56323d975134a77adac312c5"),
    "year" : "15"
}
{
    "_id" : ObjectId("56323d975134a77adac312c5"),
    "year_comment" : "asd"
}
How could I solve it?
Please try executing the following code snippet in the Mongo shell, which strips fields with empty or null values:
var result = new Array();
db.getCollection('test').find({}).forEach(function (data) {
    for (var i in data) {
        // drop any key whose value is null or an empty string
        if (data[i] == null || data[i] == '') {
            delete data[i]
        }
    }
    result.push(data)
})
print(tojson(result))
Note that this snippet only prints the cleaned copies of the documents; it does not update the collection.
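If you want to persist the changes rather than just print them, a minimal sketch under the same assumptions (a collection named test) would $unset the empty fields per document:
db.getCollection('test').find({}).forEach(function (data) {
    var unset = {};
    for (var i in data) {
        // never unset _id; collect the other empty/null keys
        if (i !== '_id' && (data[i] == null || data[i] == '')) {
            unset[i] = "";
        }
    }
    if (Object.keys(unset).length > 0) {
        db.getCollection('test').update({ _id: data._id }, { $unset: unset });
    }
});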
I would start with getting a distinct list of all the keys in the collection, use those keys as the query basis, and do an ordered bulk update using the Bulk API operations. The update statement uses the $unset operator to remove the fields.
The mechanism to get distinct keys list that you need to assemble the query is possible through Map-Reduce. The following mapreduce operation will populate a separate collection with all the keys as the _id values:
mr = db.runCommand({
    "mapreduce": "my_collection",
    "map" : function() {
        for (var key in this) { emit(key, null); }
    },
    "reduce" : function(key, stuff) { return null; },
    "out": "my_collection" + "_keys"
})
To get a list of all the dynamic keys, run distinct on the resulting collection:
db[mr.result].distinct("_id")
// prints ["_id", "year", "year_comment", ...]
Now given the list above, you can assemble your query by creating an object that will have its properties set within a loop. Normally your query will have this structure:
var keysList = ["_id", "year", "year_comment"];
var query = keysList.reduce(function(obj, k) {
    var q = {};
    q[k] = "";
    obj["$or"].push(q);
    return obj;
}, { "$or": [] });
printjson(query); // prints {"$or":[{"_id":""},{"year":""},{"year_comment":""}]}
You can then use the Bulk API (available with MongoDB 2.6 and above) as a way of streamlining your updates for better performance with the query above. Overall, you should be able to have something working as:
var bulk = db.collection.initializeOrderedBulkOp(),
    counter = 0,
    query = {"$or": [{"_id": ""}, {"year": ""}, {"year_comment": ""}]},
    keysList = ["_id", "year", "year_comment"];
db.collection.find(query).forEach(function(doc) {
    var emptyKeys = keysList.filter(function(k) { // use filter to return an array of keys which have empty strings
            return doc[k] === "";
        }),
        update = emptyKeys.reduce(function(obj, k) { // set the update object
            obj[k] = "";
            return obj;
        }, {});
    bulk.find({ "_id": doc._id }).updateOne({
        "$unset": update // use the $unset operator to remove the fields
    });
    counter++;
    if (counter % 1000 == 0) {
        // Execute per 1000 operations and re-initialize every 1000 update statements
        bulk.execute();
        bulk = db.collection.initializeOrderedBulkOp();
    }
});
if (counter % 1000 != 0) { bulk.execute(); } // flush any remaining queued updates
If you need to remove a single blank parameter, or you prefer to go parameter by parameter, you can use the mongo updateMany functionality:
db.comments.updateMany({year: ""}, { $unset : { year : 1 }})
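This removes the year field wherever it is an empty string; the same shape can be repeated per key, e.g.:
db.comments.updateMany({year_comment: ""}, { $unset : { year_comment : 1 }})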

How to get average value from a hashmap in MongoDB?

I have time data in my Mongo database. Each document equals a minute and contains 60 seconds as objects with a value for each. How can I get the average value of all seconds in one minute?
A document looking like that:
{
    "_id" : ObjectId("55575e4062771c26ec5f2287"),
    "timestamp" : "2015-05-16T18:12:00.000Z",
    "values" : {
        "0" : "26.17",
        "1" : "26.17",
        "2" : "26.17",
        ...
        "58" : "24.71",
        "59" : "25.20"
    }
}
You could take two approaches here:
Change the schema and use the aggregation framework to get the average with the $avg operator, OR
Apply Map-Reduce.
Let's look at the first option. As it currently stands, the schema will not make it possible to use the aggregation framework because of the dynamic keys in the values subdocument. The ideal schema that would favour the aggregation framework would have the values field be an array of embedded key/value documents like this:
/* 0 */
{
    "_id" : ObjectId("5559d66c9bbec0dd0344e4b0"),
    "timestamp" : "2015-05-16T18:12:00.000Z",
    "values" : [
        { "k" : "0", "v" : 26.17 },
        { "k" : "1", "v" : 26.17 },
        { "k" : "2", "v" : 26.17 },
        ...
        { "k" : "58", "v" : 24.71 },
        { "k" : "59", "v" : 25.20 }
    ]
}
With MongoDB 3.6 and newer, use the aggregation framework to transform the hashmaps to an array by using the $objectToArray operator, then use $avg to calculate the average.
Consider running the following aggregate pipeline:
db.test.aggregate([
    {
        "$addFields": {
            "values": { "$objectToArray": "$values" }
        }
    }
])
Armed with this new schema, you would then need to update your collection to change the string values to numbers, by iterating the cursor returned from the aggregate method and using bulkWrite as follows:
var bulkUpdateOps = [],
    cursor = db.test.aggregate([
        {
            "$addFields": {
                "values": { "$objectToArray": "$values" }
            }
        }
    ]);
cursor.forEach(doc => {
    const { _id, values } = doc;
    let temp = values.map(item => {
        item.key = item.k;
        item.value = parseFloat(item.v) || 0;
        delete item.k;
        delete item.v;
        return item;
    });
    bulkUpdateOps.push({
        "updateOne": {
            "filter": { _id },
            "update": { "$set": { values: temp } },
            "upsert": true
        }
    });
    if (bulkUpdateOps.length === 1000) {
        db.test.bulkWrite(bulkUpdateOps);
        bulkUpdateOps = [];
    }
});
if (bulkUpdateOps.length > 0) {
    db.test.bulkWrite(bulkUpdateOps);
}
If your MongoDB version does not support the $objectToArray operator in the aggregation framework, then converting the current schema into the one above takes a bit of native JavaScript, using the MongoDB find() cursor's forEach() function, as follows (assuming you have a test collection):
var bulkUpdateOps = [],
    cursor = db.test.find();
cursor.forEach(doc => {
    const { _id, values } = doc;
    let temp = Object.keys(values).map(k => {
        let obj = {};
        obj.key = k;
        obj.value = parseFloat(doc.values[k]) || 0;
        return obj;
    });
    bulkUpdateOps.push({
        "updateOne": {
            "filter": { _id },
            "update": { "$set": { values: temp } },
            "upsert": true
        }
    });
    if (bulkUpdateOps.length === 1000) {
        db.test.bulkWrite(bulkUpdateOps);
        bulkUpdateOps = [];
    }
});
if (bulkUpdateOps.length > 0) {
    db.test.bulkWrite(bulkUpdateOps);
}
or
db.test.find().forEach(function (doc) {
    var keys = Object.keys(doc.values),
        values = keys.map(function(k) {
            var obj = {};
            obj.key = k;
            obj.value = parseFloat(doc.values[k]) || 0;
            return obj;
        });
    doc.values = values;
    db.test.save(doc);
});
The collection will now have the above schema, and you can then run the following aggregation pipeline to get the average value per minute:
db.test.aggregate([
    {
        "$addFields": {
            "average": { "$avg": "$values.value" }
        }
    }
])
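For the sample document (averaging the five values shown), each document should come back with the computed field added, e.g.:
{
    "_id" : ObjectId("55575e4062771c26ec5f2287"),
    "timestamp" : "2015-05-16T18:12:00.000Z",
    "values" : [ ... ],
    "average" : 25.684
}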
Or for MongoDB 3.0 and lower
db.test.aggregate([
    { "$unwind": "$values" },
    {
        "$group": {
            "_id": "$timestamp",
            "average": { "$avg": "$values.value" }
        }
    }
])
For the above document, the output would be:
/* 0 */
{
    "result" : [
        {
            "_id" : "2015-05-16T18:12:00.000Z",
            "average" : 25.684
        }
    ],
    "ok" : 1
}
As for the other Map-Reduce option, the intuition behind the operation is that you would use JavaScript to make the necessary transformations and calculate the final average. You would need to define three functions:
Map
When you tell Mongo to MapReduce, the function you provide as the map function will receive each document as the this parameter. The purpose of the map is to exercise whatever logic you need in JavaScript and then call emit 0 or more times to produce a reducible value.
var map = function(){
    var obj = this.values;
    var keys = Object.keys(obj);
    var timestamp = this.timestamp; // capture the timestamp; `this` is not the document inside the forEach callback
    keys.forEach(function(key){
        var val = parseFloat(obj[key]);
        var value = { count: 1, qty: val };
        emit(timestamp, value);
    });
};
For each document you need to emit a key and a value. The key is the first parameter to the emit function and represents how you want to group the values (in this case you will be grouping by the timestamp). The second parameter to emit is the value, which in this case is a little object containing a count of documents (always 1) and the value of an individual values key, i.e. one second within the minute.
Reduce
Next you need to define the reduce function, where Mongo will group the items you emit and pass them as an array to this reduce function. It's inside the reduce function where you want to do the aggregation calculations and reduce all the objects to a single object.
var reduce = function(key, values) {
    var result = { count: 0, total: 0 };
    values.forEach(function(value){
        result.count += value.count;
        result.total += value.qty;
    });
    return result;
};
This reduce function returns a single result. It's important for the return value to have the same shape as the emitted values. It's also possible for MongoDB to call the reduce function multiple times for a given key and ask you to process a partial set of values, so if you need to perform some final calculation, you can also give MapReduce a finalize function.
Finalize
The finalize function is optional, but if you need to calculate something based on a fully reduced set of data, you'll want to use a finalize function. Mongo will call the finalize function after all the reduce calls for a set are complete. This would be the place to calculate the average of all the second values in a document/timestamp:
var finalize = function (key, value) {
    value.average = value.total / value.count;
    return value;
};
Putting It Together
With the JavaScript in place, all that is left is to tell MongoDB to execute a MapReduce:
var map = function(){
    var obj = this.values;
    var keys = Object.keys(obj);
    var timestamp = this.timestamp; // capture the timestamp; `this` is not the document inside the forEach callback
    keys.forEach(function(key){
        var val = parseFloat(obj[key]);
        var value = { count: 1, qty: val };
        emit(timestamp, value);
    });
};
var reduce = function(key, values) {
    var result = { count: 0, total: 0 };
    values.forEach(function(value){
        result.count += value.count;
        result.total += value.qty;
    });
    return result;
};
var finalize = function (key, value) {
    value.average = value.total / value.count;
    return value;
};
db.collection.mapReduce(
    map,
    reduce,
    {
        out: { merge: "map_reduce_example" },
        finalize: finalize
    }
)
And when you query the output collection map_reduce_example, db.map_reduce_example.find(), you get the result:
/* 0 */
{
    "_id" : "2015-05-16T18:12:00.000Z",
    "value" : {
        "count" : 5,
        "total" : 128.42,
        "average" : 25.684
    }
}
References:
A Simple MapReduce with MongoDB and C#
MongoDB documentation on mapReduce
This kind of data structure creates lots of conflicts and is difficult to handle with Mongo operations. In this case you should change your schema design, but if you are not able to change the schema, then follow this:
Your schema has two major problems: 1) the keys are dynamic and 2) the values of the given keys are strings, so you should use some scripting to calculate the average. Check the scripts below.
From this reference, first calculate the size of the values object:
Object.size = function(obj) {
    var size = 0,
        key;
    for (key in obj) {
        if (obj.hasOwnProperty(key)) size++;
    }
    return size;
};
db.collectionName.find().forEach(function(myDoc) {
    var objects = myDoc.values;
    var value = 0;
    // Get the size of the object
    var size = Object.size(objects);
    for (var key in objects) {
        value = value + parseFloat(objects[key]); // parse string values to float
    }
    var avg = value / size;
    print(value);
    print(size);
    print(avg);
});
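With the five values visible in the sample document, this would print something like:
128.42
5
25.684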

MongoDB mapreduce missing data with 'null' in return

So this is strange. I'm trying to use mapreduce to group datetime/metrics under a unique port:
Document layout:
{
    "_id" : ObjectId("5069d68700a2934015000000"),
    "port_name" : "CL1-A",
    "metric" : "340.0",
    "port_number" : "0",
    "datetime" : ISODate("2012-09-30T13:44:00Z"),
    "array_serial" : "12345"
}
and mapreduce functions:
var query = {
    'array_serial' : array,
    'port_name' : { $in : ports },
    'datetime' : { $gte : from, $lte : to }
}
var map = function() {
    emit({ portname : this.port_name }, { datetime : this.datetime, metric : this.metric });
}
var reduce = function(key, values) {
    var res = { dates : [], metrics : [], count : 0 }
    values.forEach(function(value){
        res.dates.push(value.datetime);
        res.metrics.push(value.metric);
        res.count++;
    })
    return res;
}
var command = {
    mapreduce : collection,
    map : map.toString(),
    reduce : reduce.toString(),
    query : query,
    out : { inline : 1 }
}
mongoose.connection.db.executeDbCommand(command, function(err, dbres){
    if (err) throw err;
    console.log(dbres.documents);
    res.json(dbres.documents[0].results);
})
If a small number of records is requested, say 5 or 10, or even 60 I get all the data back I'm expecting. Larger queries return truncated values....
I just did some more testing and it seems like it's limiting the record output to 100?
This is minutely data, and when I run a query for a 24 hour period I would expect 1440 records back... I just ran it and received 80. :\
Is this expected? I'm not specifying a limit anywhere I can tell...
More data:
Query for records from 2012-10-01T23:00 - 2012-10-02T00:39 (100 minutes) returns correctly:
[
    {
        "_id": {
            "portname": "CL1-A"
        },
        "value": {
            "dates": [
                "2012-10-01T23:00:00.000Z",
                "2012-10-01T23:01:00.000Z",
                "2012-10-01T23:02:00.000Z",
                ...cut...
                "2012-10-02T00:37:00.000Z",
                "2012-10-02T00:38:00.000Z",
                "2012-10-02T00:39:00.000Z"
            ],
            "metrics": [
                "1596.0",
                "1562.0",
                "1445.0",
                ...cut...
                "774.0",
                "493.0",
                "342.0"
            ],
            "count": 100
        }
    }
]
...add one more minute to the query, 2012-10-01T23:00 - 2012-10-02T00:40 (101 minutes):
[
    {
        "_id": {
            "portname": "CL1-A"
        },
        "value": {
            "dates": [
                null,
                "2012-10-02T00:40:00.000Z"
            ],
            "metrics": [
                null,
                "487.0"
            ],
            "count": 2
        }
    }
]
the dbres.documents object shows the correct expected emitted records:
[ { results: [ [Object] ],
timeMillis: 8,
counts: { input: 101, emit: 101, reduce: 2, output: 1 },
ok: 1 } ]
...so is the data getting lost somewhere?
Rule number one of MapReduce:
Thou shalt return from Reduce the exact same format that you emit with your key in Map.
Rule number two of MapReduce:
Thou shalt reduce the array of values passed to reduce as many times as necessary. The reduce function may be called many times.
You've broken both of those rules in your implementation of reduce. That is also why things break at exactly 101 records: the server feeds reduce the emitted values in batches (evidently 100 at a time here, matching your counts), then re-reduces the partial results together. Since your reduce returns { dates, metrics, count } while your map emits { datetime, metric }, the re-reduce pass reads fields that don't exist and produces the nulls you see.
Your Map function is emitting key, value pairs.
key: port name (you should simply emit the name as the key, not a document)
value: a document representing three things you need to accumulate (date, metric, count)
Try this instead:
map = function() { // if you want to reduce to an array you have to emit arrays
    emit(this.port_name, { dates : [this.datetime], metrics : [this.metric], count: 1 });
}
reduce = function(key, values) { // for each key you get an array of values
    var res = { dates: [], metrics: [], count: 0 }; // you must reduce them to one
    values.forEach(function(value) {
        res.dates = value.dates.concat(res.dates);
        res.metrics = value.metrics.concat(res.metrics);
        res.count += value.count; // VERY IMPORTANT: a reduce result may be re-reduced
    })
    return res;
}
Try outputting the map reduce data to a temp collection instead of in memory. Maybe that is the reason. From the Mongo docs:
{ inline : 1 } - With this option, no collection will be created, and the whole map-reduce operation will happen in RAM. Also, the results of the map-reduce will be returned within the result object. Note that this option is possible only when the result set fits within the 16MB limit of a single document. In v2.0, this is your only available option on a replica set secondary.
Also, it may not be the reason, but MongoDB has data size limitations (2GB) on a 32-bit machine.