How to calculate difference between values of different documents using mongo aggregation? - mongodb

Hi, my MongoDB document structure is as below:
{
    "timemilliSec": 1414590255,
    "data": [
        { "x": 23, "y": 34, "name": "X" },
        { "x": 32, "y": 50, "name": "Y" }
    ]
},
{
    "timemilliSec": 1414590245,
    "data": [
        { "x": 20, "y": 13, "name": "X" },
        { "x": 20, "y": 30, "name": "Y" }
    ]
}
Now I want to calculate the difference between the first and second documents (and likewise between the second and third), element by element, in this way:
diffX = (data.x - data.x) / (data.y - data.y) across the two "X" elements, in our case ((23-20)/(34-13))
diffY = (data.x - data.x) / (data.y - data.y) across the two "Y" elements, in our case ((32-20)/(50-30))

A tough question in principle, but I'm going to stay with the simplified case you present of two documents and base a solution around that. The concepts should carry over, but are more difficult for expanded cases. It is possible with the aggregation framework in general:
db.collection.aggregate([
    // Match the documents in a pair
    { "$match": {
        "timemilliSec": { "$in": [ 1414590255, 1414590245 ] }
    }},
    // Trivial, just keeping an order
    { "$sort": { "timemilliSec": -1 } },
    // Unwind the arrays
    { "$unwind": "$data" },
    // Group first and last
    { "$group": {
        "_id": "$data.name",
        "firstX": { "$first": "$data.x" },
        "lastX": { "$last": "$data.x" },
        "firstY": { "$first": "$data.y" },
        "lastY": { "$last": "$data.y" }
    }},
    // Difference on the keys
    { "$project": {
        "diff": {
            "$divide": [
                { "$subtract": [ "$firstX", "$lastX" ] },
                { "$subtract": [ "$firstY", "$lastY" ] }
            ]
        }
    }},
    // Not sure you want to take it this far
    { "$group": {
        "_id": null,
        "diffX": {
            "$min": {
                "$cond": [
                    { "$eq": [ "$_id", "X" ] },
                    "$diff",
                    false
                ]
            }
        },
        "diffY": {
            "$min": {
                "$cond": [
                    { "$eq": [ "$_id", "Y" ] },
                    "$diff",
                    false
                ]
            }
        }
    }}
])
Possibly overblown, not sure of the intent, but the output of this based on the sample would be:
{
    "_id" : null,
    "diffX" : 0.14285714285714285,
    "diffY" : 0.6
}
Which matches the calculations.
You can adapt to your case, but the general principle is as shown.
The last "pipeline" stage there is a little "extreme", as all it does is combine the results into a single document. Otherwise, the "X" and "Y" results are already obtained as two documents in the pipeline, mostly by the $group operation with the $first and $last operators to find the respective elements on the grouping boundary.
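For reference, if you drop that final $group, the pipeline should end with two documents of this shape, which may well be all you need:
{ "_id" : "X", "diff" : 0.14285714285714285 }
{ "_id" : "Y", "diff" : 0.6 }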
The subsequent $project pipeline stage performs the required math to determine the distinct results. See the aggregation operators for more details, particularly $divide and $subtract.
Whatever you do, follow this course: get a "start" and "end" pair on your two keys, then perform the calculations.
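If you want to try it out, here is a minimal shell sketch; the collection name "points" is just a placeholder for illustration:
// insert the two sample documents from the question
db.points.insert([
    { "timemilliSec": 1414590255, "data": [
        { "x": 23, "y": 34, "name": "X" },
        { "x": 32, "y": 50, "name": "Y" }
    ]},
    { "timemilliSec": 1414590245, "data": [
        { "x": 20, "y": 13, "name": "X" },
        { "x": 20, "y": 30, "name": "Y" }
    ]}
])
// then run the pipeline above against db.points instead of db.collection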

Related

Mongodb: Sort by custom expression

How to sort results by a custom expression which I use in find?
The collection contains documents with the following attributes for example:
{
    "_id" : ObjectId("5ef1cd704b35c6d6698f2050"),
    "Name" : "TD",
    "Date" : ISODate("2021-06-23T09:37:51.976Z"),
    "A" : "19.36",
    "B" : 2.04
}
I'm using the following find query to get the records whose Date is since "2022-01-01" and where the ratio between A and B is lower than 0.1:
db.getCollection('my_collection').find(
{
    "Date":
    {
        $gte: new ISODate("2022-01-01T00:00:00.000Z")
    },
    "$expr":
    {
        "$lte": [
            { "$divide": [ "$A", "$B" ] },
            0.1
        ]
    }
})
Now, I can't find the right way to sort the results by this ratio.
You can use aggregate in this way:
Filter the documents you want using $match, add a field named ratio, and use it to sort. Finally, hide the field using $project:
db.collection.aggregate([
    { "$match": {
        "Date": { "$gte": ISODate("2020-01-01") },
        "$expr": { "$lte": [ { "$divide": [ "$B", { "$toDouble": "$A" } ] }, 0.1 ] }
    }},
    { "$set": {
        "ratio": { "$divide": [ "$B", { "$toDouble": "$A" } ] }
    }},
    { "$sort": { "ratio": 1 } },
    { "$project": { "ratio": 0 } }
])
Example here
By the way, I've used other values to get results; the ratio between 2.04 and 19.36 is greater than 0.1. You have divided A/B, but I think you mean B/A.
Anyway, this is not important; you can change the values and the query will still work.
Also, maybe this could work better. It is the same query, but it could be more efficient (maybe, I don't know) because it avoids dividing each value in the collection twice:
First filter by date, then add the ratio field to each document found (this way it is not necessary to do the division for every document). Then another filter using the ratio, the sort, and finally hide the field:
db.collection.aggregate([
    { "$match": { "Date": { "$gte": ISODate("2020-01-01") } } },
    { "$set": { "ratio": { "$divide": [ "$B", { "$toDouble": "$A" } ] } } },
    { "$match": { "ratio": { "$lte": 0.1 } } },
    { "$sort": { "ratio": 1 } },
    { "$project": { "ratio": 0 } }
])
Example

How to join documents after a pipeline stage in mongo aggregation framework

So let's say after the first stage of aggregation I have grouped all the documents by center and gender, so I have something like this:
{
    center: "A",
    gender: "Male",
    count: 50
}
{
    center: "A",
    gender: "Female",
    count: 20
}
I want to join these two documents such that the final document looks something like
{
    center: "A",
    Male: 50,
    Female: 20
}
Introduce another $group pipeline step in which you group the incoming document stream by the center field, and introduce new fields within the group that use the $sum operator. Each $sum is driven by a condition using the $cond operator, which evaluates the gender field and, depending on the result, returns either the previous count or 0.
Consider the following pipeline continuation:
db.collection.aggregate([
    /* previous pipeline(s) here ... */
    { "$group": {
        "_id": "$center",
        "Male": {
            "$sum": {
                "$cond": [ { "$eq": [ "$gender", "Male" ] }, "$count", 0 ]
            }
        },
        "Female": {
            "$sum": {
                "$cond": [ { "$eq": [ "$gender", "Female" ] }, "$count", 0 ]
            }
        }
    }},
    { "$project": {
        "_id": 0, "center": "$_id", "Male": 1, "Female": 1
    }}
])
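Against the two sample documents above, this should produce one document per center, along the lines of:
{ "Male" : 50, "Female" : 20, "center" : "A" }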

Get Distinct list of two properties using MongoDB 2.4

I have an article collection:
{
    _id: 9999,
    authorId: 12345,
    coAuthors: [ 23456, 34567 ],
    title: 'My Article'
},
{
    _id: 10000,
    authorId: 78910,
    title: 'My Second Article'
}
I'm trying to figure out how to get a list of distinct author and co-author ids out of the database. I have tried push, concat, and addToSet, but can't seem to find the right combination. I'm on 2.4.6 so I don't have access to setUnion.
Whilst $setUnion would be the "ideal" way to do this, there is another way that basically involves "switching" between a "type" to alternate which field is picked:
db.collection.aggregate([
    { "$project": {
        "authorId": 1,
        "coAuthors": { "$ifNull": [ "$coAuthors", [null] ] },
        "type": { "$const": [ true, false ] }
    }},
    { "$unwind": "$coAuthors" },
    { "$unwind": "$type" },
    { "$group": {
        "_id": {
            "$cond": [
                "$type",
                "$authorId",
                "$coAuthors"
            ]
        }
    }},
    { "$match": { "_id": { "$ne": null } } }
])
And that is it. You may know the $const operation as the $literal operator from MongoDB 2.6. It has always been there, but was only documented and given an "alias" at the 2.6 release.
Of course the $unwind operations in both cases produce more "copies" of the data, but this is grouping for "distinct" values so it does not matter. Depending on the true/false alternating value of the projected "type" field (once unwound), you just pick each field alternately.
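To illustrate, after the two $unwind stages the first sample document should produce a stream roughly like this, with the $cond then selecting authorId on true and coAuthors on false:
{ "_id" : 9999, "authorId" : 12345, "coAuthors" : 23456, "type" : true }
{ "_id" : 9999, "authorId" : 12345, "coAuthors" : 23456, "type" : false }
{ "_id" : 9999, "authorId" : 12345, "coAuthors" : 34567, "type" : true }
{ "_id" : 9999, "authorId" : 12345, "coAuthors" : 34567, "type" : false }
The duplicate authorId picks do not matter, since the $group reduces everything to distinct values anyway.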
Also this little mapReduce does much the same thing:
db.collection.mapReduce(
    function() {
        emit( this.authorId, null );
        if ( this.hasOwnProperty("coAuthors") )
            this.coAuthors.forEach(function(id) {
                emit( id, null );
            });
    },
    function(key,values) {
        return null;
    },
    { "out": { "inline": 1 } }
)
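With inline output, mapReduce wraps everything in a results array, so expect a response roughly of this shape:
{ "results" : [ { "_id" : 12345, "value" : null }, { "_id" : 23456, "value" : null }, ... ], "ok" : 1 }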
For the record, $setUnion is of course a lot cleaner and more performant:
db.collection.aggregate([
    { "$project": {
        "combined": {
            "$setUnion": [
                { "$map": {
                    "input": ["A"],
                    "as": "el",
                    "in": "$authorId"
                }},
                { "$ifNull": [ "$coAuthors", [] ] }
            ]
        }
    }},
    { "$unwind": "$combined" },
    { "$group": {
        "_id": "$combined"
    }}
])
So there the only real concerns are converting the singular "authorId" to an array via $map and feeding an empty array where the "coAuthors" field is not present in the document.
Both output the same distinct values from the sample documents:
{ "_id" : 78910 }
{ "_id" : 23456 }
{ "_id" : 34567 }
{ "_id" : 12345 }

MongoDb : Find common element from two arrays within a query

Let's say we have records of following structure in database.
{
    "_id": 1234,
    "tags" : [ "t1", "t2", "t3" ]
}
Now, I want to check whether the database contains a record with any of the tags specified in the array tagsArray, which is [ "t3", "t4", "t5" ].
I know about the $in operator, but I not only want to know whether any of the records in the database has any of the tags specified in tagsArray; I also want to know which tag of the record in the database matches any of the tags specified in tagsArray (i.e. t3 for the case of the record mentioned above).
That is, I want to compare two arrays (one from the record and the other given by me) and find out the common element.
I need to have this expression along with many other expressions in the query, so projection operators like $, $elemMatch etc. won't be of much use. (Or is there a way they can be used without having to iterate over all records?)
I think I can use the $where operator, but I don't think that is the best way to do this.
How can this problem be solved?
There are a few approaches to do what you want; it just depends on your version of MongoDB. I'm just submitting the shell responses; the content is basically a JSON representation, which is not hard to translate into DBObject entities for Java, or into JavaScript to be executed on the server, so that really does not change.
The first and fastest approach is with MongoDB 2.6 and greater, where you get the new set operations:
var test = [ "t3", "t4", "t5" ];
db.collection.aggregate([
    { "$match": { "tags": { "$in": test } } },
    { "$project": {
        "tagMatch": {
            "$setIntersection": [
                "$tags",
                test
            ]
        },
        "sizeMatch": {
            "$size": {
                "$setIntersection": [
                    "$tags",
                    test
                ]
            }
        }
    }},
    { "$match": { "sizeMatch": { "$gte": 1 } } },
    { "$project": { "tagMatch": 1 } }
])
The new operators there are $setIntersection, which does the main work, and $size, which measures the array size and helps with the later filtering. This ends up as a basic comparison of "sets" in order to find the items that intersect.
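Against the sample document and the test array above, the result should be:
{ "_id" : 1234, "tagMatch" : [ "t3" ] }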
If you have an earlier version of MongoDB then this is still possible, but you need a few more stages, and this might affect performance somewhat depending on whether you have large arrays:
var test = [ "t3", "t4", "t5" ];
db.collection.aggregate([
    { "$match": { "tags": { "$in": test } } },
    { "$project": {
        "tags": 1,
        "match": { "$const": test }
    }},
    { "$unwind": "$tags" },
    { "$unwind": "$match" },
    { "$project": {
        "tags": 1,
        "matched": { "$eq": [ "$tags", "$match" ] }
    }},
    { "$match": { "matched": true } },
    { "$group": {
        "_id": "$_id",
        "tagMatch": { "$push": "$tags" },
        "count": { "$sum": 1 }
    }},
    { "$match": { "count": { "$gte": 1 } } },
    { "$project": { "tagMatch": 1 } }
])
Or if all of that seems too involved, or your arrays are large enough to make a performance difference, then there is always mapReduce:
var test = [ "t3", "t4", "t5" ];
db.collection.mapReduce(
    function () {
        var intersection = this.tags.filter(function(x) {
            return ( test.indexOf( x ) != -1 );
        });
        if ( intersection.length > 0 )
            emit( this._id, intersection );
    },
    function() {},
    {
        "query": { "tags": { "$in": test } },
        "scope": { "test": test },
        "out": { "inline": 1 }
    }
)
Note that in all cases the $in operator still helps you to reduce the results even though it is not the full match. The other common element is checking the "size" of the intersection result to reduce the response.
All pretty easy to code up; convince the boss to switch to MongoDB 2.6 or greater, if you are not already there, for the best results.

How to optimize mongoDB query?

I have the following sample document in MongoDB:
{
    "location" : {
        "language" : null,
        "country" : "null",
        "city" : "null",
        "state" : null,
        "continent" : "null",
        "latitude" : "null",
        "longitude" : "null"
    },
    "request" : [
        {
            "referrer" : "direct",
            "url" : "http://www.google.com/",
            "title" : "index page",
            "currentVisit" : "1401282897",
            "visitedTime" : "1401282905"
        },
        {
            "referrer" : "direct",
            "url" : "http://www.stackoverflow.com/",
            "title" : "index page",
            "currentVisit" : "1401282900",
            "visitedTime" : "1401282905"
        },
        ......
    ],
    "uuid" : "109eeee0-e66a-11e3"
}
Note:
The database contains more than 10845 documents.
Each document contains nearly 100 requests (100 objects in the request array).
Technology/Language - node.js
I enabled profiling to check the execution times:
First Query - 13899 ms
Second Query - 9024 ms
Third Query - 8310 ms
Fourth Query - 6858 ms
There is not much difference when using indexing.
Queries:
I have the following aggregation queries that are executed to fetch the data.
var match = { "request.currentVisit": { $gte: core.getTime()[1].toString(), $lte: core.getTime()[0].toString() } };
For example: var match = { "request.currentVisit": { $gte: "1401282905", $lte: "1401282935" } };
For the third and fourth queries, the match uses request.visitedTime instead of request.currentVisit.
First
[
    { "$project": {
        "request.currentVisit": 1,
        "request.url": 1
    }},
    { "$match": {
        "request.1": { "$exists": true }
    }},
    { "$unwind": "$request" },
    { "$match": match },
    { "$group": {
        "_id": {
            "url": "$request.url"
        },
        "count": { "$sum": 1 }
    }},
    { "$sort": { "count": -1 } }
]
Second
[
    { "$project": {
        "request.currentVisit": 1,
        "request.url": 1
    }},
    { "$match": {
        "request": { "$size": 1 }
    }},
    { "$unwind": "$request" },
    { "$match": match },
    { "$group": {
        "_id": {
            "url": "$request.url"
        },
        "count": { "$sum": 1 }
    }},
    { "$sort": { "count": -1 } }
]
Third
[
    { "$project": {
        "request.visitedTime": 1,
        "uuid": 1
    }},
    { "$match": {
        "request.1": { "$exists": true }
    }},
    { "$match": match },
    { "$group": {
        "_id": "$uuid",
        "count": { "$sum": 1 }
    }},
    { "$group": {
        "_id": null,
        "total": { "$sum": "$count" }
    }}
]
Fourth
[
    { "$project": {
        "request.visitedTime": 1,
        "uuid": 1
    }},
    { "$match": {
        "request": { "$size": 1 }
    }},
    { "$match": match },
    { "$group": {
        "_id": "$uuid",
        "count": { "$sum": 1 }
    }},
    { "$group": {
        "_id": null,
        "total": { "$sum": "$count" }
    }}
]
Problem:
It is taking more than 38091 ms in total to fetch the data.
Is there any way to optimize the queries?
Any suggestion would be appreciated.
Well, there are a few problems, and you definitely need indexes, but you cannot have compound ones here. It is the "timestamp" values that you are querying within the array that you want to index. It would also be advisable to either convert these to numeric values rather than the current strings, or indeed to BSON Date types. The latter form is actually stored internally as a numeric timestamp value, so there is a general reduction in storage size, which also reduces the index size, as well as being more efficient to match on than string values.
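As a rough sketch of both suggestions (the collection name "visits" and the seconds-to-milliseconds conversion are assumptions based on the sample data):
// index the array timestamps that the queries filter on
db.visits.ensureIndex({ "request.currentVisit": 1 })
db.visits.ensureIndex({ "request.visitedTime": 1 })

// one-off rewrite of the string timestamps as BSON Dates,
// assuming the strings are epoch seconds as in the sample
db.visits.find().forEach(function(doc) {
    doc.request.forEach(function(req) {
        req.currentVisit = new Date( parseInt(req.currentVisit, 10) * 1000 );
        req.visitedTime = new Date( parseInt(req.visitedTime, 10) * 1000 );
    });
    db.visits.save(doc);
});
Remember that after such a conversion the match conditions need to compare against Date values rather than strings.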
The big problem with each query is that you are always diving into the "array" contents later, after processing an $unwind, and then "filtering" that with $match. While this is what you want to do for your result, since you have not applied the same filter at an earlier stage, you have many documents in the pipeline that do not match these conditions when you $unwind. The result is "lots" of documents you do not need being processed in this stage. And here you cannot use an index.
Where you need this match is at the start of the pipeline stages. This narrows down the documents to the "possible" matches before the actual array is filtered.
So using the first as an example:
[
    { "$match": {
        "request.currentVisit": {
            "$gte": "1401282905", "$lte": "1401282935"
        }
    }},
    { "$unwind": "$request" },
    { "$match": {
        "request.currentVisit": {
            "$gte": "1401282905", "$lte": "1401282935"
        }
    }},
    { "$group": {
        "_id": {
            "url": "$request.url"
        },
        "count": { "$sum": 1 }
    }},
    { "$sort": { "count": -1 } }
]
So a few changes. There is a $match at the head of the pipeline. This narrows down documents and is able to use an index. That is the most important performance consideration. Golden rule: always "match" first.
The $project you had in there was redundant, as you cannot project "just" the fields of an array that is not yet unwound. There is also a misconception where people believe they should $project first to reduce the pipeline. The effect is very minimal; if in fact there is a later $project or $group statement that actually limits the fields, then this will be "forward optimized" so things do get taken out of the pipeline processing for you. Still, the $match statement above does more to optimize.
You can also drop the $match stage that checked whether the array element was actually there, as you are now "implicitly" doing that at the start of the pipeline. If more conditions make you more comfortable, then add them to that initial pipeline stage.
The rest remains unchanged, as you then $unwind the array and $match to filter the items that you actually want before moving on to your remaining processing. By now, the input documents have been significantly reduced, or reduced as much as they are going to be.
The other alternative, available with MongoDB 2.6 and greater, is to "filter" the array content before you even $unwind it. This would produce a listing like this:
[
    { "$match": {
        "request.currentVisit": {
            "$gte": "1401282905", "$lte": "1401282935"
        }
    }},
    { "$project": {
        "request": {
            "$setDifference": [
                { "$map": {
                    "input": "$request",
                    "as": "el",
                    "in": {
                        "$cond": [
                            { "$and": [
                                { "$gte": [ "$$el.currentVisit", "1401282905" ] },
                                { "$lte": [ "$$el.currentVisit", "1401282935" ] }
                            ]},
                            "$$el",
                            false
                        ]
                    }
                }},
                [false]
            ]
        }
    }},
    { "$unwind": "$request" },
    { "$group": {
        "_id": {
            "url": "$request.url"
        },
        "count": { "$sum": 1 }
    }},
    { "$sort": { "count": -1 } }
]
That may save you something by being able to "filter" the array before the $unwind, which is possibly better than doing the $match afterwards.
But this is the general rule for all of your statements. You need usable indexes and you need to $match first.
It is possible that the actual results you really want could be obtained in a single query, but as it stands your question is not presented that way. Try changing your processing as outlined, and you should see a notable improvement.
If you are still trying to come to terms with how this could possibly be a single query, then you can always ask another question.