How to aggregate two collections and match a field with an array - mongodb

I need to group the results of two collections candidatos and ofertas, and then "merge" those groups to return an array with matched values.
I've created this example with the aggregation and similar data to make it easier to test:
https://mongoplayground.net/p/m0PUfdjEye4
This is the explanation of the problem that I'm facing.
I can get both groups with the desired results independently:
ofertas collection:
db.getCollection('ofertas').aggregate([
{"$group" : {_id:"$ubicacion_puesto.provincia", countProvinciaOferta:{$sum:1}}}
]);
This is the result...
candidatos collection:
db.getCollection('candidatos').aggregate([
{"$group" : {_id:"$que_busco.ubicacion_puesto_trabajo.provincia", countProvinciaCandidato:{$sum:1}}}
]);
This is the result...
What I need to do is aggregate those groups and merge their results based on their _id coincidence. I think I'm going in the right direction with the next aggregation, but the field countOfertas always returns 0.0. I think there is something wrong in my $project $cond, but I don't know what it is. This is the aggregate:
db.getCollection('candidatos').aggregate([
  { "$group": { _id: "$que_busco.ubicacion_puesto_trabajo.provincia", countProvinciaCandidato: { $sum: 1 } } },
  {
    $lookup: {
      from: 'ofertas',
      let: {},
      pipeline: [
        { "$group": { _id: "$ubicacion_puesto.provincia", countProvinciaOferta: { $sum: 1 } } }
      ],
      as: 'ofertas'
    }
  },
  {
    $project: {
      _id: 1,
      countProvinciaCandidato: 1,
      countOfertas: {
        $cond: {
          if: { $eq: ['$ofertas._id', "$_id"] },
          then: '$ofertas.countProvinciaOferta',
          else: 0
        }
      }
    }
  },
  { $sort: { "countProvinciaCandidato": -1 } },
  { $limit: 20 }
]);
And this is the result, but as you can see, the field countOfertas is always 0.
Any kind of help will be welcome

What you have tried is much appreciated, but in $project you need to use $reduce, which lets you loop through the array and apply the condition.
Here is the code:
db.candidatos.aggregate([
  {
    "$group": {
      _id: "$que_busco.ubicacion_puesto_trabajo.provincia",
      countProvinciaCandidato: { $sum: 1 }
    }
  },
  {
    $lookup: {
      from: "ofertas",
      let: {},
      pipeline: [
        {
          "$group": {
            _id: "$ubicacion_puesto.provincia",
            countProvinciaOferta: { $sum: 1 }
          }
        }
      ],
      as: "ofertas"
    }
  },
  {
    $project: {
      _id: 1,
      countProvinciaCandidato: 1,
      countOfertas: {
        "$reduce": {
          "input": "$ofertas",
          initialValue: 0,
          "in": {
            $cond: [
              { $eq: [ "$$this._id", "$_id" ] },
              // carry over the matched province's oferta count
              { $add: [ "$$value", "$$this.countProvinciaOferta" ] },
              "$$value"
            ]
          }
        }
      }
    }
  },
  { $sort: { "countProvinciaCandidato": -1 } },
  { $limit: 20 }
])
Working Mongo playground
Note: If you need to do this with aggregations only, this is fine, but I personally feel this approach is not ideal. My suggestion is to run the two group aggregations concurrently from your service and merge them programmatically, because $lookup is expensive and performance will degrade once you have massive data.
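For illustration, a minimal sketch of that suggestion with the official Node.js driver (the helper name provinciaCounts and the db handle are assumptions; collection and field names are taken from the question):

// Sketch only: run both $group aggregations in parallel and merge them on the province _id
async function provinciaCounts(db) {
  const [candidatos, ofertas] = await Promise.all([
    db.collection("candidatos").aggregate([
      { $group: { _id: "$que_busco.ubicacion_puesto_trabajo.provincia", countProvinciaCandidato: { $sum: 1 } } }
    ]).toArray(),
    db.collection("ofertas").aggregate([
      { $group: { _id: "$ubicacion_puesto.provincia", countProvinciaOferta: { $sum: 1 } } }
    ]).toArray()
  ]);

  // merge the two grouped results on the province _id
  const ofertasPorProvincia = new Map(ofertas.map(o => [o._id, o.countProvinciaOferta]));
  return candidatos.map(c => ({
    _id: c._id,
    countProvinciaCandidato: c.countProvinciaCandidato,
    countOfertas: ofertasPorProvincia.get(c._id) || 0
  }));
}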

The $eq in the $cond is comparing an array to a single value, so it never matches.
The $lookup stage results will be in the ofertas field as an array of documents, so '$ofertas._id' will be an array of all the _id values.
You will probably need to use $unwind or $reduce after the $lookup.
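For illustration, a minimal sketch of the $unwind variant (building on the pipeline from the question; the final $group uses $max simply to pick out the matching province's count, since every non-matching pair contributes 0):

db.getCollection('candidatos').aggregate([
  { $group: { _id: "$que_busco.ubicacion_puesto_trabajo.provincia", countProvinciaCandidato: { $sum: 1 } } },
  {
    $lookup: {
      from: 'ofertas',
      pipeline: [
        { $group: { _id: "$ubicacion_puesto.provincia", countProvinciaOferta: { $sum: 1 } } }
      ],
      as: 'ofertas'
    }
  },
  // one document per (candidato group, oferta group) pair; keep groups with no ofertas at all
  { $unwind: { path: "$ofertas", preserveNullAndEmptyArrays: true } },
  {
    $addFields: {
      countOfertas: {
        $cond: [{ $eq: ["$ofertas._id", "$_id"] }, "$ofertas.countProvinciaOferta", 0]
      }
    }
  },
  // collapse the pairs again; only the matching province contributed a non-zero count
  {
    $group: {
      _id: "$_id",
      countProvinciaCandidato: { $first: "$countProvinciaCandidato" },
      countOfertas: { $max: "$countOfertas" }
    }
  },
  { $sort: { countProvinciaCandidato: -1 } },
  { $limit: 20 }
])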


Array is reordered when using $lookup

I have this aggregation:
db.getCollection("users").aggregate([
{
"$match": {
"_id": "5a708a38e6a4078bd49f01d5"
}
},
{
"$lookup": {
"from": "user-locations",
"localField": "locations",
"as": "locations",
"foreignField": "_id"
}
}
])
It works well, but there is one small thing that I don't understand and I can't fix.
In the query output, the locations array is reordered by ObjectId, and I really need to keep the original order of the data.
Here is what the locations array in the users collection looks like:
'locations' : [
ObjectId("5b55e9820b720a1a7cd19633"),
ObjectId("5a708a38e6a4078bd49ef13f")
],
And here is the result after the aggregation:
'locations' : [
{
'_id' : ObjectId("5a708a38e6a4078bd49ef13f"),
'name': 'Location 2'
},
{
'_id' : ObjectId("5b55e9820b720a1a7cd19633"),
'name': 'Location 1'
}
],
What am I missing here? I really have no idea how to proceed with this issue.
Could you give me a push?
$lookup does not guarantee the order of the result documents. You can try the following approach to preserve the natural order of the documents:
$unwind to deconstruct the locations array, adding an automatic index number starting from 0
$lookup with locations
$set to select the first element from locations
$sort by the index field in ascending order
$group by _id and reconstruct the locations array
db.users.aggregate([
{ $match: { _id: "5a708a38e6a4078bd49f01d5" } },
{
$unwind: {
path: "$locations",
includeArrayIndex: "index"
}
},
{
$lookup: {
from: "user-locations",
localField: "locations",
foreignField: "_id",
as: "locations"
}
},
{ $set: { locations: { $arrayElemAt: ["$locations", 0] } } },
{ $sort: { index: 1 } },
{
$group: {
_id: "$_id",
locations: { $push: "$locations" }
}
}
])
Playground
From this closed bug report:
When using $lookup, the order of the documents returned is not guaranteed. The documents are returned in "natural order" - as they are encountered in the database. The only way to get a guaranteed consistent order is to add a $sort stage to the query.
Basically the way any Mongo query/pipeline works is that it returns documents in the order they were matched, meaning the "right" order is not guaranteed, especially if there's index usage involved.
What you should do is add a $sort stage as suggested, like so:
db.collection.aggregate([
{
"$match": {
"_id": "5a708a38e6a4078bd49f01d5"
}
},
{
"$lookup": {
"from": "user-locations",
"let": {
"locations": "$locations"
},
"pipeline": [
{
"$match": {
"$expr": {
"$setIsSubset": [
[
"$_id"
],
"$$locations"
]
}
}
},
{
$sort: {
_id: 1 // any other sort field you want.
}
}
],
"as": "locations",
}
}
])
You can also keep the original $lookup syntax you're using and just $unwind, $sort and then $group to restore the structure.
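A minimal sketch of that alternative (note that, like the pipeline above, this gives a consistent sort order, here by the looked-up _id, rather than the original array order):

db.users.aggregate([
  { $match: { _id: "5a708a38e6a4078bd49f01d5" } },
  {
    $lookup: {
      from: "user-locations",
      localField: "locations",
      foreignField: "_id",
      as: "locations"
    }
  },
  // flatten, order, then rebuild the array
  { $unwind: "$locations" },
  { $sort: { "locations._id": 1 } },
  {
    $group: {
      _id: "$_id",
      locations: { $push: "$locations" }
    }
  }
])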

"iterate" through all document fields in mongodb

I have a collection with documents in this form:
{
"fields_names": ["field1", "field2", "field3"]
"field1": 1,
"field2": [1, 2, 3]
"field3": "12345"
}
where field1, field2, field3 are "dynamic" for each document (each document lists its field names in the "fields_names" array).
I would like to test whether 2 documents are equal using the aggregation framework.
I used a $lookup stage to fetch the other documents.
My issue is: how can I "iterate" over all the fields of my collection?
db.collection.aggregate([
  { $match: { "my_id": "test_id" } },
  {
    $lookup: {
      from: "collection",
      let: { my_id: "$my_id", prev_id: "$_id" },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $eq: ["$my_id", "$$my_id"] },
                { $ne: ["$_id", "$$prev_id"] }
              ]
            }
          }
        }
      ],
      as: "lookup_test"
    }
  }
])
In the pipeline of the lookup, I would like to iterate over the "fields_names" array to get the names of the fields, then access their values and compare the "original" document (not the $lookup one) against the other documents (the $lookup documents).
OR: just iterate all fields (excluding the "fields_names" array).
I would like to fill the "lookup_test" array with all documents that have the same field values.
You will have to compare the two "partial" parts of the documents, meaning you'll have to do this (for each document) inside the $lookup; needless to say, this is going to be a very expensive pipeline. With that said, here's how I would do it:
db.collection.aggregate([
  {
    $match: { "my_id": "test_id" }
  },
  {
    "$lookup": {
      "from": "collection",
      "let": {
        id: "$_id",
        partialRoot: {
          $filter: {
            input: { "$objectToArray": "$$ROOT" },
            as: "fieldObj",
            cond: { "$setIsSubset": [["$$fieldObj.k"], "$fields_names"] }
          }
        }
      },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [
                { $ne: ["$$id", "$_id"] },
                {
                  $eq: [
                    { $size: "$$partialRoot" },
                    {
                      $size: {
                        "$setIntersection": [
                          "$$partialRoot",
                          {
                            $filter: {
                              input: { "$objectToArray": "$$ROOT" },
                              as: "fieldObj",
                              cond: { "$setIsSubset": [["$$fieldObj.k"], "$fields_names"] }
                            }
                          }
                        ]
                      }
                    }
                  ]
                }
              ]
            }
          }
        }
      ],
      "as": "x"
    }
  }
])
Mongo Playground
If you could dynamically build the query through code you could make this much more efficient by using the same match query in the $lookup stage like so:
const query = { my_id: "test_id" };
db.collection.aggregate([
{
$match: query
},
{
$lookup: {
...
pipeline: [
{ $match: query },
... rest of pipeline ...
]
}
}
])
This way you're only matching documents that at least match the initial query, which should drastically improve query performance (obviously dependent on the entropy of the field values).
One more caveat to note is that if x documents match you will get the same result x times, so you probably want to add a $limit: 1 stage to your pipeline.
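As I read that caveat, the $limit goes in the outer pipeline right after the initial $match, so that only one source document feeds the $lookup (a sketch reusing the query variable from above):

db.collection.aggregate([
  { $match: query },
  { $limit: 1 }, // only one source document feeds the $lookup
  {
    $lookup: {
      ... same $lookup as above ...
    }
  }
])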

Results different documents for the same query

db.getCollection('rien').aggregate([
{
$match: {
$and: [
{
"id": "10356"
},
{
$or: [
{
"sys_date": {
"$gte": newDate(ISODate().getTime()-90*24*60*60*1000)
}
},
{
"war_date": {
"$gte": newDate(ISODate().getTime()-90*24*60*60*1000)
}
}
]
}
]
}
},
{
$group: {
"_id": "$b_id",
count: {
$sum: 1
},
ads: {
$addToSet: {
"s": "$s",
"ca": "$ca"
}
},
files: {
$addToSet: {
"system": "$system",
"hostname": "$hostname"
}
}
}
},
{
$sort: {
"ads.s": -1
}
},
{
$group: {
"_id": "$b_id",
total_count: {
$sum: 1
},
"data": {
"$push": "$$ROOT"
}
}
},
{
$project: {
"_id": 0,
"total_count": 1,
results: {
$slice: [
"$data",
0,
50
]
}
}
}
])
When I execute this pipeline 5 times, it returns a different set of documents each time. It is a 3-node cluster with no sharding enabled. The collection has 10 million documents and the data is static.
Any ideas about the inconsistent results? I feel I am missing some fundamentals here.
I can see 2 problems:
"ads.s": -1 will not work because ads is an array field; $sort does not apply to a field inside an array.
$addToSet will not maintain sort order even if its input is ordered by the previous stage,
as mentioned in the $addToSet documentation => Order of the elements in the output array is unspecified,
and in accumulators-group-addToSet => Order of the array elements is undefined,
and also in the JIRA tickets SERVER-8512 and DOCS-1114.
You can use the $setUnion operator to get ascending order, and $reduce to turn the $setUnion result into descending order.
As a workaround I am adding a solution below; I am not sure whether it is a good option, but you can use it if it does not hurt the performance of your query.
I am only adding the updated stages here.
These stages remain the same:
{ $match: {} }, // skipped
{ $group: {} }, // skipped
$sort is optional; it is up to your requirements whether you want to order by the main document:
{ $sort: { _id: -1 } },
$setUnion treats arrays as sets. If an array contains duplicate entries, $setUnion ignores them, and it returns the array in ascending order based on the first field, which in our $group stage is s, so make sure all elements in the array have s as their first field.
$reduce iterates over the array and concatenates the current element $$this in front of the accumulated value $$value, which reverses the array into descending order:
{
$addFields: {
ads: {
$reduce: {
input: { $setUnion: "$ads" },
initialValue: [],
in: { $concatArrays: [["$$this"], "$$value"] }
}
},
files: {
$reduce: {
input: { $setUnion: "$files" },
initialValue: [],
in: { $concatArrays: [["$$this"], "$$value"] }
}
}
}
},
These stages remain the same:
{ $group: {} }, // skipped
{ $project: {} } // skipped
Playground
The $setUnion documentation says: "The order of the elements in the output array is unspecified." However, in every way I have tested it, it returns the array in perfect ascending order; why, I don't know.
I asked about this in the MongoDB Developer Forum (does-setunion-expression-operator-order-array-elements-in-ascending-order?), and they replied => it will not guarantee the order!
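As an aside (not part of the original answer): if you are on MongoDB 5.2 or newer, the $sortArray operator gives a documented, explicit ordering, so you do not need to rely on the unspecified behaviour of $setUnion. The $addFields stage above could be rewritten roughly like this (the sortBy fields are chosen to mirror the descending-by-s intent):

// requires MongoDB 5.2+; sorts each array explicitly instead of relying on $setUnion
{
  $addFields: {
    ads: { $sortArray: { input: "$ads", sortBy: { s: -1 } } },
    files: { $sortArray: { input: "$files", sortBy: { system: -1 } } }
  }
}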

Optimization and Indexing on Mongo Query

Help me figure out what kind of indexes need to be created and which fields should be indexed.
I have tested multiple indexes, but the query is still taking a long time to execute.
db.collection_1.aggregate([
  { $match: { $and: [
      { date: { $gte: new Date(1593561600000), $lt: new Date(1604966400000) } },
      { type: 0 },
      { $or: [ { partysr: 0 }, {} ] },
      { $or: [ { code: "******" }, { _id: { $type: -1 } } ] }
  ] } },
  { $sort: { date: -1 } },
  { $skip: 0 },
  { $limit: 100 },
  { $lookup: { from: "collection_2", localField: "code", foreignField: "code", as: "j" } },
  { $group: { _id: "$codeTsr" } }
]).explain("executionStats")
You gave us very little information here.
Have you tried testing this query with and without the $lookup stage?
How does the query behave speed-wise without the lookup?
My first guess here is that your collection_2 collection does not have a proper index, and that is what slows the query down. If your query is much faster without the lookup stage, I would create an index on collection_2 for the "code" field.
Also, one more performance optimization might be to do the $group stage first, and only after that do the $lookup stage.
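For illustration, a sketch of both suggestions (an index on collection_2's code field, and grouping before the $lookup). Carrying code forward with $first is an assumption about the intent, since $group would otherwise drop the field the join needs:

// index to support the foreignField side of the $lookup
db.collection_2.createIndex({ code: 1 })

db.collection_1.aggregate([
  { $match: { /* same $match as in the question */ } },
  { $sort: { date: -1 } },
  { $skip: 0 },
  { $limit: 100 },
  // group first so the $lookup runs once per distinct codeTsr
  { $group: { _id: "$codeTsr", code: { $first: "$code" } } },
  { $lookup: { from: "collection_2", localField: "code", foreignField: "code", as: "j" } }
])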

Complex aggregation query with in clause from document array

Below is the sample MongoDB Data Model for a user collection:
{
"_id": ObjectId('58842568c706f50f5c1de662'),
"userId": "123455",
"user_name":"Bob"
"interestedTags": [
"music",
"cricket",
"hiking",
"F1",
"Mobile",
"racing"
],
"listFriends": [
"123456",
"123457",
"123458"
]
}
listFriends is an array of the userId values of other users.
For a particular userId I need to extract the listFriends (userIds), and for those userIds I need to aggregate the interestedTags and their counts.
I would be able to achieve this by splitting the query into two parts:
1.) Extract the listFriends for a particular userId,
2.) Use this list in an aggregate() function, something like this (both steps are sketched together after this pipeline):
db.user.aggregate([
{ $match: { userId: { $in: [ "123456","123457","123458" ] } } },
{ $unwind: '$interestedTags' },
{ $group: { _id: '$interestedTags', countTags: { $sum : 1 } } }
])
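For completeness, the two-step approach might look like this in the mongo shell (a sketch; step 1 fetches the friend list and step 2 is the aggregation above fed with that list):

// Step 1: fetch the friend list for the user
var user = db.user.findOne({ userId: "123455" }, { listFriends: 1 });

// Step 2: aggregate the friends' interestedTags using that list
db.user.aggregate([
  { $match: { userId: { $in: user.listFriends } } },
  { $unwind: "$interestedTags" },
  { $group: { _id: "$interestedTags", countTags: { $sum: 1 } } }
])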
I am trying to solve the question: Is there a way to achieve the above functionality (both steps 1 and 2) in a single aggregate function?
You could use $lookup to look up the friend documents. This stage is usually used to join two different collections, but it can also join a collection with itself; in your case I think it should be fine:
db.user.aggregate([{
$match: {
_id: 'user1',
}
}, {
$unwind: '$listFriends',
}, {
$lookup: {
from: 'user',
localField: 'listFriends',
foreignField: '_id',
as: 'friend',
}
}, {
$project: {
friend: {
$arrayElemAt: ['$friend', 0]
}
}
}, {
$unwind: '$friend.interestedTags'
}, {
$group: {
_id: '$friend.interestedTags',
count: {
$sum: 1
}
}
}]);
Note: I use $lookup and $arrayElemAt, which are only available in Mongo 3.2 or newer versions, so check your Mongo version before using this pipeline.