MongoDB Aggregation - Bucket Boundaries from a Referenced Array

To whom it may concern:
I would like to know if there is some workaround in MongoDB to set the "boundaries" field of a "$bucket" aggregation pipeline stage to an array that's already present from the previous aggregation stage (or some other aggregation pipeline that will get me the same result). I am using this data to create a histogram of a bunch of values. Rather than retrieving a million or so values, I can receive 20 buckets with their respective counts.
The previous stages of the pipeline yield the following result:
{
    "_id" : ObjectId("5cfa6fad883d3a9b8c6ad50a"),
    "boundaries" : [ 73.0, 87.25, 101.5, 115.75, 130.0 ],
    "value" : 83.58970621935025
},
{
    "_id" : ObjectId("5cfa6fe0883d3a9b8c6ad5a8"),
    "boundaries" : [ 73.0, 87.25, 101.5, 115.75, 130.0 ],
    "value" : 97.3261380262403
},
...
The "boundaries" field for every document is a result a facet/unwind/addfield with some statistical mathematics involving "value" fields in the pipeline. Therefore, every "boundaries" field value is an array of evenly spaced values in ascending order, all with the same length and values.
The following stage of the aggregation I am trying to perform is:
$bucket: {
    groupBy: "$value",
    boundaries: "$boundaries",
    default: "no_group",
    output: { count: { $sum: 1 } }
}
I get the following error from explain() when I try to run this aggregation:
{
    "ok" : 0.0,
    "errmsg" : "The $bucket 'boundaries' field must be an array, but found type: string.",
    "code" : NumberInt(40200),
    "codeName" : "Location40200"
}
The result I would like to get is something like this, which is the result of a basic "$bucket" pipeline operator:
{
    "_id" : 73.0,   // range of [73.0, 87.25)
    "count" : 2     // number of documents with "value" in this range
},
{
    "_id" : 87.25,  // range of [87.25, 101.5)
    "count" : 7     // number of documents with "value" in this range
},
{
    "_id" : 101.5,
    "count" : 3
},
...
What I know:
The MongoDB JIRA documentation says:
'boundaries' must be constant values (can't use "$x", but can use {$add: [4, 5]}), and must be sorted.
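In other words, expressions are allowed as long as they resolve to constants before the pipeline runs; a field path like "$boundaries" does not. A quick sketch of the distinction (using a hypothetical collection named data):
// Works: every boundary resolves to a constant at parse time.
db.data.aggregate([
    { $bucket: {
        groupBy: "$value",
        boundaries: [ 73.0, { $add: [ 73.0, 14.25 ] }, 101.5, 115.75, 130.0 ],
        default: "no_group",
        output: { count: { $sum: 1 } }
    } }
])
// Fails with Location40200: boundaries: "$boundaries" is a field path, not a constant.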
What I've tried:
$bucketAuto does not have a linear "granularity" setting. By default, it tries to distribute the values evenly amongst the buckets, so the bucket ranges end up spaced differently.
Building the constant array by retrieving the pipeline results first, and then feeding that constant array back into the pipeline (sketched below). This is effective but inefficient and not atomic, since the data is read twice. I can live with this solution if need be.
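For reference, a minimal sketch of that two-pass workaround, assuming a hypothetical collection named data and with statsStages standing in for the earlier facet/unwind/addfield stages:
// Pass 1: run the stats stages once and read off the computed boundaries
// (every document carries the same array, so one document suffices).
var statsStages = [ /* ...facet/unwind/addfield stages... */ ];
var boundaries = db.data.aggregate(statsStages).next().boundaries;
// Pass 2: re-run the pipeline with the boundaries as a constant array.
db.data.aggregate(statsStages.concat([
    { $bucket: {
        groupBy: "$value",
        boundaries: boundaries,
        default: "no_group",
        output: { count: { $sum: 1 } }
    } }
]))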
There HAS to be a solution to this. Any workaround or alternative solutions are greatly appreciated.
Thank you for your time!

Related

find() return the latest value only on MongoDB

I have this collection in MongoDB that contains the following entries. I'm using Robo3T to run the query.
{
    "_id" : ObjectId("xxx1"),
    "Evaluation Date" : "2021-09-09",
    "Results" : [
        {
            "Name" : "ABCD",
            "Version" : "3.2.x"
        }
    ]
},
{
    "_id" : ObjectId("xxx2"),
    "Evaluation Date" : "2022-09-09",
    "Results" : [
        {
            "Name" : "ABxD",
            "Version" : "5.2.x"
        }
    ]
}
The collection contains multiple documents of a similar format. Now, I need to extract the latest value for "Version".
Expected output:
5.2.x
Measures I've taken so far:
(1) I've tried findOne(), and while I was able to extract the value of "Version" with db.getCollection('TestCollectionName').findOne().Results[0].Version, only the oldest entry was returned:
3.2.x
(2) Using find().sort().limit() like below returns the entire document for the latest entry, not just the data value that I wanted: db.getCollection('TestCollectionName').find({}).sort({"Results.Version":-1}).limit(1)
Results below:
"_id" : ObjectId("xxx2"),
"Evaluation Date" : "2022-09-09",
"Results" : [
{
"Name" : "ABxD",
"Version" : "5.2.x"
}
]
(3) I've tried to use sort() and limit() alongside findOne(), but I've read that findOne() may be deprecated and is not compatible with sort(), thus resulting in an error.
(4) Finally, if I try to use sort and limit on find like this: db.getCollection('LD_exit_Evaluation_Result_MFC525').find({"Results.New"}).sort({_id:-1}).limit(1) I would get an unexpected token error.
What would be a good measure for this?
Did I simply misplace or remove a bracket, or do I need to reorder the syntax?
Thanks in advance.
I'm not sure if I understood well, but maybe this is what you are looking for:
db.collection.aggregate([
    {
        "$project": {
            "lastResult": { "$last": "$Results" }
        }
    },
    {
        "$project": {
            "version": "$lastResult.Version",
            "_id": 0
        }
    }
])
It uses aggregate with two $project stages: the first computes a new field called lastResult holding the last element of each Results array via the $last operator (available as an array operator since MongoDB 4.4); the second just cleans the output. If you need the _id reference, just remove _id: 0 or change its value to 1.
You can check how it works here: https://mongoplayground.net/p/jwqulFtCh6b
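Note that $last picks the last array element per document. If "latest" should instead be decided across documents by "Evaluation Date", a variation along these lines might work (a sketch, assuming the date strings sort chronologically, as ISO-style "YYYY-MM-DD" strings do):
db.collection.aggregate([
    { "$sort": { "Evaluation Date": -1 } },    // newest evaluation first
    { "$limit": 1 },                           // keep only the latest document
    { "$project": {
        "_id": 0,
        "version": { "$arrayElemAt": [ "$Results.Version", 0 ] }   // Version of the first entry
    } }
])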
Hope I helped

Mongodb query using db.collection.find() method 100 times faster than using db.collection.aggregate()?

My collection testData has some 4 million documents with an identical structure:
{"_id" : ObjectId("5932c56571f5a268cea12226"),
"x" : 1.0,
"text" : "w592cQzC5aAfZboMujL3knCUlIWgHqZNuUcH0yJNS9U4",
"country" : "Albania",
"location" : {
"longitude" : 118.8775183,
"latitude" : 75.4316019
}}
The collection is indexed on the (country, location.longitude) pair.
The following two queries, which I would consider identical and which produce identical output, differ in execution time by a factor of 100:
db.testData.aggregate([
    { $match : { country : "Brazil" } },
    { $sort : { "location.longitude" : 1 } },
    { $project : { "_id" : 0, "country" : 1, "location.longitude" : 1 } }
]);
(this one produces output within about 6 seconds for the repeated query and about 120 seconds for the first-time query)
db.testData.find(
    { country : "Brazil" },
    { "_id" : 0, "country" : 1, "location.longitude" : 1 }
).sort(
    { "location.longitude" : 1 }
);
(this one produces output within 15 milliseconds for the repeated query and about 1 second for the first-time query).
What am I missing here? Thanks for any feedback.
MongoDB's find operation fetches documents from a collection according to the given filters.
MongoDB aggregation groups values from a collection and performs computations on those groups by executing a pipeline of stages, returning a computed result.
A find operation generally performs faster than an aggregation, because an aggregation encapsulates multiple stages in a pipeline that perform computations on the stored data, with each stage's output serving as input to the next, before returning the processed result.
A find operation simply returns a cursor over the fetched documents that match the filters, and the cursor is iterated to access each document.
Per the description above, we only need to fetch the documents whose country key is "Brazil" and sort them by the longitude key in ascending order, which can be accomplished easily with a find operation.
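To confirm this, comparing the query plans is the usual first step; a sketch using the standard explain facilities (field and collection names as in the question):
// find(): look for an IXSCAN on { country: 1, "location.longitude": 1 }
// rather than a COLLSCAN followed by an in-memory SORT.
db.testData.find(
    { country : "Brazil" },
    { "_id" : 0, "country" : 1, "location.longitude" : 1 }
).sort({ "location.longitude" : 1 }).explain("executionStats")

// aggregate(): the same pipeline can be explained via the explain option.
db.testData.aggregate(
    [
        { $match : { country : "Brazil" } },
        { $sort : { "location.longitude" : 1 } },
        { $project : { "_id" : 0, "country" : 1, "location.longitude" : 1 } }
    ],
    { explain: true }
)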

$Avg aggregation in Mongodb [duplicate]

For a given record id, how do I get the average of a sub document field if I have the following in MongoDB:
/* 0 */
{
    "item" : "1",
    "samples" : [
        { "key" : "test-key", "value" : "1" },
        { "key" : "test-key2", "value" : "2" }
    ]
}
/* 1 */
{
    "item" : "1",
    "samples" : [
        { "key" : "test-key", "value" : "3" },
        { "key" : "test-key2", "value" : "4" }
    ]
}
I want to get the average of the values where key = "test-key" for a given item id (in this case "1"). So the average should be (1 + 3) / 2 = 2.
Thanks
You'll need to use the aggregation framework. The aggregation will end up looking something like this:
db.stack.aggregate([
    { $match : { "samples.key" : "test-key" } },
    { $unwind : "$samples" },
    { $match : { "samples.key" : "test-key" } },
    { $project : { "new_key" : "$samples.key", "new_value" : "$samples.value" } },
    { $group : { "_id" : "$new_key", "answer" : { $avg : "$new_value" } } }
])
The best way to think of the aggregation framework is like an assembly line. The query itself is an array of JSON documents, where each sub-document represents a different step in the assembly.
Step 1: $match
The first step is a basic filter, like a WHERE clause in SQL. We place this step first to filter out all documents that do not contain an array element containing test-key. Placing this at the beginning of the pipeline allows the aggregation to use indexes.
Step 2: $unwind
The second step, $unwind, is used for separating each of the elements in the "samples" array so we can perform operations across all of them. If you run the query with just that step, you'll see what I mean.
Long story short:
{ "name" : "bob",
  "children" : [ { "name" : "mary" }, { "name" : "sue" } ]
}
becomes two documents:
{ "name" : "bob", "children" : [ { "name" : "mary" } ] }
{ "name" : "bob", "children" : [ { "name" : "sue" } ] }
Step 3: $match
The third step, $match, is an exact duplicate of the first $match stage, but serves a different purpose. Since it follows $unwind, this stage filters out the former array elements, now documents, that don't match the filter criteria. In this case, we keep only documents where samples.key = "test-key".
Step 4: $project (Optional)
The fourth step, $project, restructures the document. In this case, I pulled the items out of the array so I could reference them directly. Using the example above:
{ "name" : "bob", "children" : [ { "name" : "mary" } ] }
becomes
{ "new_name" : "bob", "new_child_name" : "mary" }
Note that this step is entirely optional; later stages could be completed even without this $project after a few minor changes. In most cases $project is entirely cosmetic; aggregations have numerous optimizations under the hood such that manually including or excluding fields in a $project should not be necessary.
Step 5: $group
Finally, $group is where the magic happens. The _id value is what you would "group by" in the SQL world. The second field says to average over the value defined in the $project step. You can easily substitute $sum to perform a sum, but a count operation is typically done the following way: my_count : { $sum : 1 }.
The most important thing to note here is that the majority of the work being done is to format the data to a point where performing the operation is simple.
Final Note
Lastly, I wanted to note that this would not work on the example data provided, since samples.value is defined as text, which can't be used in arithmetic operations. If you're interested, changing the type of a field is described here: MongoDB How to change the type of a field
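For completeness, on MongoDB 4.0+ the string values can be converted on the fly instead of rewriting the documents; a sketch (assuming the $toDouble operator is available and the collection is named stack, as above):
db.stack.aggregate([
    { $match : { "item" : "1", "samples.key" : "test-key" } },   // filter by item id up front
    { $unwind : "$samples" },
    { $match : { "samples.key" : "test-key" } },
    { $group : {
        "_id" : "$samples.key",
        "answer" : { $avg : { $toDouble : "$samples.value" } }   // convert text to number, then average
    } }
])
// For the sample data this should yield: { "_id" : "test-key", "answer" : 2 }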

MongoDB Calculate Values from Two Arrays, Sort and Limit

I have a MongoDB database storing float arrays. Assume a collection of documents in the following format:
{
    "id" : 0,
    "vals" : [ 0.8, 0.2, 0.5 ]
}
Having a query array, e.g., with values [ 0.1, 0.3, 0.4 ], I would like to compute for all elements in the collection a distance (e.g., sum of differences; for the given document and query it would be computed by abs(0.8 - 0.1) + abs(0.2 - 0.3) + abs(0.5 - 0.4) = 0.9).
I tried to use the aggregation function of MongoDB to achieve this, but I can't work out how to iterate over the array. (I am not using the built-in geo operations of MongoDB, as the arrays can be rather long)
I also need to sort the results and limit to the top 100, so calculation after reading the data is not desired.
Current Processing is mapReduce
If you need to execute this on the server and sort the top results and just keep the top 100, then you could use mapReduce for this like so:
db.test.mapReduce(
    function() {
        // distance = sum of absolute element-wise differences from the query array
        var input = [0.1,0.3,0.4];
        var value = Array.sum(this.vals.map(function(el,idx) {
            return Math.abs( el - input[idx] );
        }));
        emit(null, { "output": [{ "_id": this._id, "value": value }] });
    },
    function(key,values) {
        var output = [];
        values.forEach(function(value) {
            value.output.forEach(function(item) {
                output.push(item);
            });
        });
        output.sort(function(a,b) {
            return b.value - a.value;   // numeric comparator, descending
        });
        return { "output": output.slice(0,100) };
    },
    { "out": { "inline": 1 } }
)
So the mapper function does the calculation and outputs everything under the same key, so all results are sent to the reducer. The end output is going to be contained in an array in a single output document, so it is important both that all results are emitted with the same key value and that the output of each emit is itself an array, so mapReduce can work properly.
The sorting and reduction is done in the reducer itself: as each emitted document is inspected, its elements are put into a single temporary array which is sorted, and the top results are returned.
That is important, and is just the reason why the emitter produces this as an array, even if a single element at first. MapReduce works by processing results in "chunks", so even if all emitted documents have the same key, they are not all processed at once. Rather, the reducer puts its results back into the queue of emitted results to be reduced until there is only a single document left for that particular key.
I'm restricting the listed output here to 10 for brevity, and including the stats to make a point, as the 100 reduce cycles called on this 10000-document sample can be seen:
{
"results" : [
{
"_id" : null,
"value" : {
"output" : [
{
"_id" : ObjectId("56558d93138303848b496cd4"),
"value" : 2.2
},
{
"_id" : ObjectId("56558d96138303848b49906e"),
"value" : 2.2
},
{
"_id" : ObjectId("56558d93138303848b496d9a"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d93138303848b496ef2"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d94138303848b497861"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d94138303848b497b58"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d94138303848b497ba5"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d94138303848b497c43"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d95138303848b49842b"),
"value" : 2.1
},
{
"_id" : ObjectId("56558d96138303848b498db4"),
"value" : 2.1
}
]
}
}
],
"timeMillis" : 1758,
"counts" : {
"input" : 10000,
"emit" : 10000,
"reduce" : 100,
"output" : 1
},
"ok" : 1
}
So this is a single document output, in the specific mapReduce format, where the "value" contains an element which is an array of the sorted and limited results.
Future Processing is Aggregate
As of writing, the current latest stable release of MongoDB is 3.0, which lacks the functionality to make your operation possible. But the upcoming 3.2 release introduces new operators that make this possible:
db.test.aggregate([
    { "$unwind": { "path": "$vals", "includeArrayIndex": "index" } },
    { "$group": {
        "_id": "$_id",
        "result": {
            "$sum": {
                "$abs": {
                    "$subtract": [
                        "$vals",
                        { "$arrayElemAt": [ { "$literal": [0.1,0.3,0.4] }, "$index" ] }
                    ]
                }
            }
        }
    }},
    { "$sort": { "result": -1 } },
    { "$limit": 100 }
])
Also limiting to the same 10 results for brevity, you get output like this:
{ "_id" : ObjectId("56558d96138303848b49906e"), "result" : 2.2 }
{ "_id" : ObjectId("56558d93138303848b496cd4"), "result" : 2.2 }
{ "_id" : ObjectId("56558d96138303848b498e31"), "result" : 2.1 }
{ "_id" : ObjectId("56558d94138303848b497c43"), "result" : 2.1 }
{ "_id" : ObjectId("56558d94138303848b497861"), "result" : 2.1 }
{ "_id" : ObjectId("56558d96138303848b499037"), "result" : 2.1 }
{ "_id" : ObjectId("56558d96138303848b498db4"), "result" : 2.1 }
{ "_id" : ObjectId("56558d93138303848b496ef2"), "result" : 2.1 }
{ "_id" : ObjectId("56558d93138303848b496d9a"), "result" : 2.1 }
{ "_id" : ObjectId("56558d96138303848b499182"), "result" : 2.1 }
This is made possible largely due to $unwind being modified to project a field in results that contains the array index, and also due to $arrayElemAt which is a new operator that can extract an array element as a singular value from a provided index.
This allows the "look-up" of values by index position from your input array in order to apply the math to each element. The input array is wrapped in the existing $literal operator so that $arrayElemAt does not complain and recognizes it as an array (this seems to be a small bug at present, as other array functions don't have the problem with direct input), and the appropriate matching index value is obtained using the "index" field produced by $unwind.
The math is done by $subtract and of course another new operator in $abs to meet your functionality. Also since it was necessary to unwind the array in the first place, all of this is done inside a $group stage accumulating all array members per document and applying the addition of entries via the $sum accumulator.
Finally all result documents are processed with $sort and then the $limit is applied to just return the top results.
Summary
Even with the new functionality about to be available in the aggregation framework for MongoDB, it is debatable which approach is actually more efficient for results. This is largely due to there still being a need to $unwind the array content, which effectively produces a copy of each document per array member in the pipeline to be processed, and that generally causes an overhead.
So whilst mapReduce is the only present way to do this until a new release, it may actually outperform the aggregation statement depending on the amount of data to be processed, despite the fact that the aggregation framework works on natively coded operators rather than translated JavaScript operations.
As with all things, testing is always recommended to see which case suits your purposes better and which gives the best performance for your expected processing.
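Looking past the 3.2 operators used above, releases from 3.4 onward can also express the distance without $unwind at all, keeping the computation inside each document; a sketch, assuming the $zip and $reduce operators are available:
db.test.aggregate([
    { "$project": {
        "result": {
            "$reduce": {
                // pair each element of "vals" with the query value at the same index
                "input": { "$zip": { "inputs": [ "$vals", { "$literal": [0.1,0.3,0.4] } ] } },
                "initialValue": 0,
                // accumulate the absolute differences of each pair
                "in": {
                    "$add": [
                        "$$value",
                        { "$abs": { "$subtract": [
                            { "$arrayElemAt": [ "$$this", 0 ] },
                            { "$arrayElemAt": [ "$$this", 1 ] }
                        ] } }
                    ]
                }
            }
        }
    }},
    { "$sort": { "result": -1 } },
    { "$limit": 100 }
])
This avoids the per-array-member document copies that the $unwind approach incurs.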
Sample
Of course the expected result for the sample document provided in the question is 0.9 by the math applied. But just for my testing purposes, here is a short listing I used to generate some sample data, so I could at least verify the mapReduce code was working as it should:
var bulk = db.test.initializeUnorderedBulkOp();
var x = 10000;
while ( x-- ) {
    var vals = [0,0,0];
    vals = vals.map(function(val) {
        // random value with a single decimal place, e.g. 0.7
        return Math.round(Math.random()*10)/10;
    });
    bulk.insert({ "vals": vals });
    if ( x % 1000 == 0 ) {
        bulk.execute();
        bulk = db.test.initializeUnorderedBulkOp();
    }
}
The arrays are totally random single-decimal values, so there is not a lot of spread in the results I listed as sample output.
