Count _id occurrences in other collection - mongodb

We have a DB structure similar to the following:
Pet owners:
/* 1 */
{
"_id" : ObjectId("5baa8b8ce70dcbe59d7f1a32"),
"name" : "bob"
}
/* 2 */
{
"_id" : ObjectId("5baa8b8ee70dcbe59d7f1a33"),
"name" : "mary"
}
Pets:
/* 1 */
{
"_id" : ObjectId("5baa8b4fe70dcbe59d7f1a2a"),
"name" : "max",
"owner" : ObjectId("5baa8b8ce70dcbe59d7f1a32")
}
/* 2 */
{
"_id" : ObjectId("5baa8b52e70dcbe59d7f1a2b"),
"name" : "charlie",
"owner" : ObjectId("5baa8b8ce70dcbe59d7f1a32")
}
/* 3 */
{
"_id" : ObjectId("5baa8b53e70dcbe59d7f1a2c"),
"name" : "buddy",
"owner" : ObjectId("5baa8b8ee70dcbe59d7f1a33")
}
I need a list of all pet owners and additionally the number of pets they own. Our current query looks similar to the following:
db.getCollection('owners').aggregate([
{ $lookup: { from: 'pets', localField: '_id', foreignField: 'owner', as: 'pets' } },
{ $project: { '_id': 1, name: 1, numPets: { $size: '$pets' } } }
]);
This works; however, it's quite slow, and I'm wondering whether there's a more efficient way to perform the query.
[update and feedback] Thanks for the answers. The solutions work; however, I unfortunately see no performance improvement compared to the query given above. Obviously, MongoDB still needs to scan the entire pets collection. My hope was that the owner index (which is present) on the pets collection could somehow be exploited to get just the counts (without needing to touch the pet documents), but this does not seem to be the case.
Are there any other ideas or solutions for very fast retrieval of the 'pet count', besides explicitly storing the count within the owner documents?

In MongoDB 3.6 you can define a custom $lookup pipeline and return just the count instead of the entire pet documents; try:
db.owners.aggregate([
{
$lookup: {
from: "pets",
let: { ownerId: "$_id" },
pipeline: [
{ $match: { $expr: { $eq: [ "$$ownerId", "$owner" ] } } },
{ $count: "count" }
],
as: "numPets"
}
},
{
$unwind: "$numPets"
}
])
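Note that the bare $unwind drops owners that have no pets, because $count produces no document for them and numPets ends up as an empty array. A minimal tweak (my suggestion, not part of the original answer) is to preserve those owners:
{ $unwind: { path: "$numPets", preserveNullAndEmptyArrays: true } }
Owners without pets will then simply lack the numPets field; the next answer shows how to default the count to 0 instead.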

You can try the below aggregation:
db.owners.aggregate([
{ "$lookup": {
"from": "pets",
"let": { "ownerId": "$_id" },
"pipeline": [
{ "$match": { "$expr": { "$eq": [ "$$ownerId", "$owner" ] }}},
{ "$count": "count" }
],
"as": "numPets"
}},
{ "$project": {
"_id": 1,
"name": 1,
"numPets": { "$ifNull": [{ "$arrayElemAt": ["$numPets.count", 0] }, 0]}
}}
])
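With the sample data above, this should return:
{ "_id" : ObjectId("5baa8b8ce70dcbe59d7f1a32"), "name" : "bob", "numPets" : 2 }
{ "_id" : ObjectId("5baa8b8ee70dcbe59d7f1a33"), "name" : "mary", "numPets" : 1 }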

Related

MongoDB join 2 tables and get ids on condition

We are really new to MongoDB query writing. We have two MongoDB collections, Supplier1 and Supplier2. Both have the same _id, but the version numbers of these objects can sometimes differ.
We need to find the _ids where the versions in the two collections are different (i.e. Supplier1.version != Supplier2.version).
Supplier1
{
"_id" : ObjectId("60cd86b914dfed073d77300f"),
"companyName" : "Main Supplier",
"version" : NumberLong(246),
}
Supplier2
{
"_id" : ObjectId("60cd86b914dfed073d77300f"),
"companyName" : "Main Supplier",
"version" : NumberLong(247),
}
This is what we have written up to now, but we have no idea how to move forward from here. Any help is highly appreciated.
db.getCollection("Supplier1").aggregate([
{
$lookup: {
from: "Supplier2",
localField: "_id",
foreignField: "_id",
as: "selected-supplier"
}
}
])
You can simply use a sub-pipeline in $lookup, then $unwind the result array to filter out the unwanted results.
db.Supplier1.aggregate([
{
"$lookup": {
"from": "Supplier2",
"let": {
id1: "$_id",
version1: "$version"
},
"pipeline": [
{
"$match": {
$expr: {
$and: [
{
$eq: [
"$$id1",
"$_id"
]
},
{
$ne: [
"$$version1",
"$version"
]
}
]
}
}
}
],
"as": "selected-supplier"
}
},
{
"$unwind": "$selected-supplier"
}
])
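With the sample documents above (version 246 vs. 247), this should return something like:
{
"_id" : ObjectId("60cd86b914dfed073d77300f"),
"companyName" : "Main Supplier",
"version" : NumberLong(246),
"selected-supplier" : {
"_id" : ObjectId("60cd86b914dfed073d77300f"),
"companyName" : "Main Supplier",
"version" : NumberLong(247)
}
}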
Here is the Mongo playground for your reference.

Mongodb combine aggregate queries

I have following collections in MongoDB
Profile Collection
> db.Profile.find()
{ "_id" : ObjectId("5ec62ccb8897af3841a46d46"), "u" : "Test User", "is_del": false }
Store Collection
> db.Store.find()
{ "_id" : ObjectId("5eaa939aa709c30ff4703ffd"), "id" : "5ec62ccb8897af3841a46d46", "a" : { "ci": "Test City", "st": "Test State" }, "ip" : false }, "op" : [ ], "b" : [ "normal" ], "is_del": false}
Item Collection
> db.Item.find()
{ "_id" : ObjectId("5ea98a25f1246b53a46b9e10"), "sid" : "5eaa939aa709c30ff4703ffd", "n" : "sample", "is_del": false}
Relation among these collections are defined as follows:
Profile -> Store: It is 1:n relation. id field in Store relates with _id field in Profile.
Store -> Item: It is also 1:n relation. sid field in Item relates with _id field in Store.
Now, I need to write a query to find all the stores of each profile, along with the count of items for each store. Documents with is_del set to true must be excluded.
I am trying it the following way:
Query 1 to find the count of item for each store.
Query 2 to find the store for each profile.
Then in the application logic use both the result to produce the combined output.
I have query 1 as follows:
db.Item.aggregate({$group: {_id: "$sid", count:{$sum:1}}})
Query 2 is as follows:
db.Profile.aggregate([{ "$addFields": { "pid": { "$toString": "$_id" }}}, { "$lookup": {"from": "Store","localField": "pid","foreignField": "id", "as": "stores"}}])
These queries are also missing the is_del filter. Is there any simpler way to perform all of this in a single query? If so, what will the scalability impact be?
You can use a sub-query pipeline in $lookup, available from MongoDB v3.6:
db.Profile.aggregate([
{
$match: { is_del: false }
},
{
$lookup: {
from: "Store",
as: "stores",
let: {
pid: { $toString: "$_id" }
},
pipeline: [
{
$match: {
is_del: false,
$expr: { $eq: ["$$pid", "$id"] }
}
},
{
$lookup: {
from: "Item",
as: "items",
let: {
sid: { $toString: "$_id" }
},
pipeline: [
{
$match: {
is_del: false,
$expr: { $eq: ["$$sid", "$sid"] }
}
},
{
$count: "count"
}
]
}
},
{
$unwind: "$items"
}
]
}
}
])
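With the sample documents above, this should return something like (store fields abridged):
{
"_id" : ObjectId("5ec62ccb8897af3841a46d46"),
"u" : "Test User",
"is_del" : false,
"stores" : [
{ "_id" : ObjectId("5eaa939aa709c30ff4703ffd"), "id" : "5ec62ccb8897af3841a46d46", "items" : { "count" : 1 } }
]
}
Note that the $unwind on "$items" drops stores that have no matching items; use preserveNullAndEmptyArrays if empty stores should still appear.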
Mongo Playground
To improve performance, I suggest you store the reference ids as ObjectId so you don't have to convert them in each step.
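As a sketch of that suggestion: if the id field in Store and the sid field in Item were stored as ObjectId, the outer join could use the plain localField/foreignField form with no $toString at all (the nested is_del filtering would still need the pipeline form):
db.Profile.aggregate([
{ $match: { is_del: false } },
{ $lookup: { from: "Store", localField: "_id", foreignField: "id", as: "stores" } }
])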

Use fields that start with $ in MongoDB aggregation lookup

I have a MongoDB database that is populated by a Spring application using Spring Data. I want to perform a manual query to join two collections and extract some statistics from this data.
The first collection is named emailCampaign and contains this information (simplified):
{
"_id" : ObjectId("5db85687307b0a0d184448db"),
"name" : "Welcome email",
"subject" : "¡Welcome {{ user.name }}!",
"status" : "Sent",
"_class" : "com.mycompany.EmailCampaign"
}
The second collection is named campaignDelivery and contains this information (simplified):
/* 1 */
{
"_id" : ObjectId("5db183fb307b0aef3113361f"),
"campaign" : {
"$ref" : "emailCampaign",
"$id" : ObjectId("5db85687307b0a0d184448db")
},
"deliveries" : 3,
"_class" : "com.mycompany.CampaignDelivery"
}
/* 2 */
{
"_id" : ObjectId("5db85f2c307b0a0d184448e1"),
"campaign" : {
"$ref" : "emailCampaign",
"$id" : ObjectId("5db85687307b0a0d184448db")
},
"deliveries" : 5,
"_class" : "com.mycompany.CampaignDelivery"
}
Ultimately I want to obtain the sum of both deliveries fields, but for now I'm stuck on the basic JOIN:
db.emailCampaign.aggregate([
{
$lookup: {
from: 'campaignDelivery',
localField: '_id',
foreignField: 'campaign.$id',
as: 'deliveries'
}
}
])
This throws the following error:
FieldPath field names may not start with '$'.
Escaping the dollar had no impact whatsoever, and I can't find any examples of fields that start with a dollar sign.
You can work around it by using a $lookup sub-pipeline with $objectToArray in the sub-query to access campaign.$id:
db.emailCampaign.aggregate([
{ $lookup: {
from: "campaignDelivery",
let: { id: "$_id" },
pipeline: [
{ $addFields: {
refId: { $arrayElemAt: [
{ $filter: {
input: { $objectToArray: "$campaign" },
cond: { $eq: [ "$$this.k", { $literal: "$id" } ] }
} }
, 0
] }
} },
{ $match: {
$expr: { $eq: [
"$refId.v",
"$$id"
] }
} },
{ $project: {
refId: 0
} }
],
as: "deliveries"
} }
])
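To then get the sum of the deliveries fields the question asks for, a stage like the following could be appended after the $lookup (my addition, a sketch rather than part of the original answer):
{ $addFields: { totalDeliveries: { $sum: "$deliveries.deliveries" } } }
With the two sample campaignDelivery documents this yields totalDeliveries: 8 (3 + 5).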

How to make lookup between two collections when an item in an array exists in the other collection?

In Lookup with a pipeline, I would like to get the linked records from an array in the parent document.
// Orders
[{
"_id" : ObjectId("5b5b91a25c68de2538620689"),
"Name" : "Test",
"Products" : [
ObjectId("5b5b919a5c68de2538620688"),
ObjectId("5b5b925a5c68de2538621a15")
]
}]
// Products
[
{
"_id": ObjectId("5b5b919a5c68de2538620688"),
"ProductName": "P1"
},
{
"_id": ObjectId("5b5b925a5c68de2538621a15"),
"ProductName": "P2"
}
,
{
"_id": ObjectId("5b5b925a5c68de2538621a55"),
"ProductName": "P3"
}
]
How can I make a lookup between Orders and Products when the Products field is an array?
I tried this query
db.getCollection("Orders").
aggregate(
[
{
$lookup:
{
from: "Products",
let: { localId: "$_id" , prods: "$Products" },
pipeline: [
{
"$match":
{
"_id" : { $in: "$$prods" }
}
},
{
$project:
{
"_id": "$_id",
"name": "$prods" ,
}
}
],
as: "linkedData"
}
},
{
"$skip": 0
},
{
"$limit": 1
},
]
)
This is not working because the query operator $in expects a literal array, and even though $$prods is an array, it is not accepted in this context.
Is my whole approach correct? How can I make this magic join work?
You were going in the right direction; the only thing you missed here is to use $expr with the $in aggregation operator, which can compare document fields against pipeline variables:
db.getCollection("Orders").aggregate([
{ "$lookup": {
"from": "Products",
"let": { "localId": "$_id" , "prods": "$Products" },
"pipeline": [
{ "$match": { "$expr": { "$in": [ "$_id", "$$prods" ] } } },
{ "$project": { "_id": 1, "name": "$ProductName" } }
],
"as": "linkedData"
}},
{ "$skip": 0 },
{ "$limit": 1 }
])
See the docs here
You just need a regular $lookup; the documentation states that:
If your localField is an array, you may want to add an $unwind stage to your pipeline. Otherwise, the equality condition between the localField and foreignField is foreignField: { $in: [ localField.elem1, localField.elem2, ... ] }.
So for below aggregation:
db.Orders.aggregate([
{
$lookup: {
from :"Products",
localField: "Products",
foreignField: "_id",
as: "Products"
}
}
])
you'll get the following result for your sample data:
{
"_id" : ObjectId("5b5b91a25c68de2538620689"),
"Name" : "Test",
"Products" : [
{
"_id" : ObjectId("5b5b919a5c68de2538620688"),
"ProductName" : "P1"
},
{
"_id" : ObjectId("5b5b925a5c68de2538621a15"),
"ProductName" : "P2"
}
]
}
Have you tried $unwind before the lookup? Use $unwind to break up the array and then do the lookup.
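For reference, a minimal sketch of that suggestion (unwind the array first, look up each element, then group the order back together) might look like:
db.Orders.aggregate([
{ $unwind: "$Products" },
{ $lookup: { from: "Products", localField: "Products", foreignField: "_id", as: "product" } },
{ $unwind: "$product" },
{ $group: { _id: "$_id", Name: { $first: "$Name" }, Products: { $push: "$product" } } }
])
Note this drops orders whose Products array is empty.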

Querying mongoDB for some chart data - my pipeline seems convoluted

This is a long question. If you bother answering, I will be extra grateful.
I have some time series data that I am trying to query to create various charts. The data format isn't the most simple, but I think my aggregation pipeline is getting a bit out of hand. I am planning to use charts.js to visualise the data on the client.
I will post a sample of my data below as well as my pipeline, with the desired output.
My question is in two parts - answering either one could solve the problem.
Does charts.js accept data formats other than an array of numbers per row? This would mean my pipeline could try to do less.
My pipeline doesn't quite get to the result I need. Can you recommend any alterations to get the correct result from my pipeline? Is there a simpler way to get my desired output format?
Sample data
Here is a real data sample - a brand with one facebook account and one twitter account. There is some data for some dates in June. Lots of null day and month fields have been omitted.
Brand
[{
"_id": "5943f427e7c11ac3ad3652b0",
"name": "Brand1",
"facebookAccounts": [
"5943f427e7c11ac3ad3652ac",
],
"twitterAccounts": [
"5943f427e7c11ac3ad3652aa",
],
}]
FacebookAccounts
[
{
"_id" : "5943f427e7c11ac3ad3652ac"
"name": "Brand 1 Name",
"years": [
{
"date": "2017-01-01T00:00:00.000Z",
"months": [
{
"date": "2017-06-01T00:00:00.000Z",
"days": [
{
"date": "2017-06-16T00:00:00.000Z",
"likes": 904025,
},
{
"date": "2017-06-17T00:00:00.000Z",
"likes": null,
},
{
"date": "2017-06-18T00:00:00.000Z",
"likes": 904345,
},
],
},
],
}
]
}
]
Twitter accounts
[
{
"_id": "5943f427e7c11ac3ad3652aa",
"name": "Brand 1 Name",
"vendorId": "twitterhandle",
"years": [
{
"date": "2017-01-01T00:00:00.000Z",
"months": [
{
"date": "2017-06-01T00:00:00.000Z",
"days": [
{
"date": "2017-06-16T00:00:00.000Z",
"followers": 69390,
},
{
"date": "2017-06-17T00:00:00.000Z",
"followers": 69397,
{
"date": "2017-06-18T00:00:00.000Z",
"followers": 69428,
},
{
"date": "2017-06-19T00:00:00.000Z",
"followers": 69457,
},
]
},
],
}
]
}
]
The query
For this example, I want, for each brand, a daily sum of facebook likes and twitter followers between June 16th and June 18th. So here, the required format is:
{
brand: Brand1,
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [973415, 69397, 973773]
}
The pipeline
The pipeline seems more convoluted due to the population, but I accept that complexity and it is necessary. Here are the steps:
db.getCollection('brands').aggregate([
{ $match: { _id: { $in: [ObjectId("5943f427e7c11ac3ad3652b0") ] } } },
// Unwind all relevant account types. Make one row per account
{ $project: {
accounts: { $setUnion: [ '$facebookAccounts', '$twitterAccounts' ] } ,
name: '$name'
}
},
{ $unwind: '$accounts' },
// populate the accounts.
// These transform the arrays of facebookAccount ObjectIds into the objects described above.
{ $lookup: { from: 'facebookaccounts', localField: 'accounts', foreignField: '_id', as: 'facebookAccounts' } },
{ $lookup: { from: 'twitteraccounts', localField: 'accounts', foreignField: '_id', as: 'twitterAccounts' } },
// unwind the populated accounts. Back to one record per account.
{ $unwind: { path: '$facebookAccounts', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts', preserveNullAndEmptyArrays: true } },
// unwind to the granularity we want. Here it is one record per day per account per brand.
{ $unwind: { path: '$facebookAccounts.years', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
// Filter each one between dates
{ $match: { $or: [
{ $and: [
{ 'facebookAccounts.years.months.days.date': { $gte: new Date('2017-06-16') } } ,
{ 'facebookAccounts.years.months.days.date': { $lte: new Date('2017-06-18') } }
]},
{ $and: [
{ 'twitterAccounts.years.months.days.date': { $gte: new Date('2017-06-16') } } ,
{ 'twitterAccounts.years.months.days.date': { $lte: new Date('2017-06-18') } }
]}
] }},
// Build stats and date arrays for each account
{ $group: {
_id: '$accounts',
brandId: { $first: '$_id' },
brandName: { $first: '$name' },
stat: {
$push: {
$sum: {
$add: [
{ $ifNull: ['$facebookAccounts.years.months.days.likes', 0] },
{ $ifNull: ['$twitterAccounts.years.months.days.followers', 0] }
]
}
}
},
date: { $push: { $ifNull: ['$facebookAccounts.years.months.days.date', '$twitterAccounts.years.months.days.date'] } } ,
}}
])
This gives me the output format
[{
_id: accountId, // facebook
brandName: 'Brand1'
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [904025, null, 904345]
},
{
_id: accountId // twitter
brandName: 'Brand1',
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [69457, 69390, 69397]
}]
So I now need to perform column-wise addition on my stat properties, and that is where I am stuck - I feel like there should be a more pipeline-friendly way to sum these than column-wise addition.
Note I accept the extra work that the population required and am happy with that. Most of the repetition is done programmatically.
Thank you if you've gotten this far.
I can trim a lot of fat out of this and keep it compatible with MongoDB 3.2 (which you must be running at least, given your use of preserveNullAndEmptyArrays) with a few simple actions, mostly by joining the arrays immediately after $lookup, which is the best place to do it:
Short Optimize
db.brands.aggregate([
{ "$lookup": {
"from": "facebookaccounts",
"localField": "facebookAccounts",
"foreignField": "_id",
"as": "facebookAccounts"
}},
{ "$lookup": {
"from": "twitteraccounts",
"localField": "twitterAccounts",
"foreignField": "_id",
"as": "twitterAccounts"
}},
{ "$project": {
"name": 1,
"all": {
"$concatArrays": [ "$facebookAccounts", "$twitterAccounts" ]
}
}},
{ "$match": {
"all.years.months.days.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$unwind": "$all" },
{ "$unwind": "$all.years" },
{ "$unwind": "$all.years.months" },
{ "$unwind": "$all.years.months.days" },
{ "$match": {
"all.years.months.days.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$group": {
"_id": {
"brand": "$name",
"date": "$all.years.months.days.date"
},
"total": {
"$sum": {
"$sum": [
{ "$ifNull": [ "$all.years.months.days.likes", 0 ] },
{ "$ifNull": [ "$all.years.months.days.followers", 0 ] }
]
}
}
}},
{ "$sort": { "_id": 1 } },
{ "$group": {
"_id": "$_id.brand",
"date": { "$push": "$_id.date" },
"stat": { "$push": "$total" }
}}
])
This gives the result:
{
"_id" : "Brand1",
"date" : [
ISODate("2017-06-16T00:00:00Z"),
ISODate("2017-06-17T00:00:00Z"),
ISODate("2017-06-18T00:00:00Z")
],
"stat" : [
973415,
69397,
973773
]
}
With MongoDB 3.4 we could probably speed it up a "little" more by filtering the arrays and breaking them down before we eventually $unwind to make this work across documents, or maybe even not worry about going across documents at all if the "name" from "brands" is unique. The pipeline operations to compact down the arrays "in place" though are quite cumbersome to code, if a "little" better on performance.
You seem to be doing this "per brand" or for a small sample, so it's likely of little consequence.
As for the chartjs data format, I have not been able to find a supported data format other than the array format used here, but again this should have little bearing.
The main point I see addressed is we can easily move away from your previous output that separated the "facebook" and "twitter" data, and simply aggregate by date moving all the data together "before" the arrays are constructed.
That last point then obviates the need for further "convoluted" operations to attempt to "merge" those two documents and the arrays produced.
Alternate Optimize
As an alternate approach, one that does not in fact aggregate across documents, you can essentially do the "filter" on the array in place and then simply sum and reshape the received result in client code.
db.brands.aggregate([
{ "$lookup": {
"from": "facebookaccounts",
"localField": "facebookAccounts",
"foreignField": "_id",
"as": "facebookAccounts"
}},
{ "$lookup": {
"from": "twitteraccounts",
"localField": "twitterAccounts",
"foreignField": "_id",
"as": "twitterAccounts"
}},
{ "$project": {
"name": 1,
"all": {
"$map": {
"input": { "$concatArrays": [ "$facebookAccounts", "$twitterAccounts" ] },
"as": "all",
"in": {
"years": {
"$map": {
"input": "$$all.years",
"as": "year",
"in": {
"months": {
"$map": {
"input": "$$year.months",
"as": "month",
"in": {
"days": {
"$filter": {
"input": "$$month.days",
"as": "day",
"cond": {
"$and": [
{ "$gte": [ "$$day.date", new Date("2017-06-16") ] },
{ "$lte": [ "$$day.date", new Date("2017-06-18") ] }
]
}
}
}
}
}
}
}
}
}
}
}
}
}}
]).map(doc => {
// Flatten the nested years -> months -> days structure into one array of day entries
doc.all = [].concat.apply([],[].concat.apply([],[].concat.apply([],doc.all.map(d => d.years)).map(d => d.months)).map(d => d.days));
// Merge entries that share the same date, summing likes/followers into a single stat
doc.all = doc.all.reduce((a,b) => {
if ( a.findIndex( d => d.date.valueOf() == b.date.valueOf() ) != -1 ) {
a[a.findIndex( d => d.date.valueOf() == b.date.valueOf() )].stat += (b.hasOwnProperty('likes')) ? (b.likes || 0) : (b.followers || 0);
} else {
a = a.concat([{ date: b.date, stat: (b.hasOwnProperty('likes')) ? (b.likes || 0) : (b.followers || 0) }]);
}
return a;
},[]);
// Reshape into the parallel date/stat arrays wanted for the chart
doc.date = doc.all.map(d => d.date);
doc.stat = doc.all.map(d => d.stat);
delete doc.all;
return doc;
})
This really leaves all the things that "need" to happen on the server, on the server. And it's then a fairly trivial task to "flatten" the array and process to "sum up" and reshape it. This would mean less load on the server, and the data returned is not really that much greater per document.
Gives the same result of course:
[
{
"_id" : ObjectId("5943f427e7c11ac3ad3652b0"),
"name" : "Brand1",
"date" : [
ISODate("2017-06-16T00:00:00Z"),
ISODate("2017-06-17T00:00:00Z"),
ISODate("2017-06-18T00:00:00Z")
],
"stat" : [
973415,
69397,
973773
]
}
]
Committing to the Diet
The biggest problem you really have is with the multiple collections and the heavily nested documents. Neither of these is doing you any favors here, and with larger results they will cause real performance problems.
The nesting in particular is completely unnecessary as well as not very maintainable, since there are limitations on "update" operations where you have nested arrays. See the positional $ operator documentation, as well as many posts about this.
Instead you really want a single collection with all those "days" entries in it. You can always work with that source easily for query as well as aggregation purposes and it should look something like this:
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38097"),
"date" : ISODate("2017-06-16T00:00:00Z"),
"likes" : 904025,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38098"),
"date" : ISODate("2017-06-17T00:00:00Z"),
"likes" : null,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38099"),
"date" : ISODate("2017-06-18T00:00:00Z"),
"likes" : 904345,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809a"),
"date" : ISODate("2017-06-16T00:00:00Z"),
"followers" : 69390,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809b"),
"date" : ISODate("2017-06-17T00:00:00Z"),
"followers" : 69397,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809c"),
"date" : ISODate("2017-06-18T00:00:00Z"),
"followers" : 69428,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809d"),
"date" : ISODate("2017-06-19T00:00:00Z"),
"followers" : 69457,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
Combining those referenced in the brands collection as well:
{
"_id" : ObjectId("5943f427e7c11ac3ad3652b0"),
"name" : "Brand1",
"accounts" : [
ObjectId("5943f427e7c11ac3ad3652ac"),
ObjectId("5943f427e7c11ac3ad3652aa")
]
}
Then you simply aggregate like this:
db.brands.aggregate([
{ "$lookup": {
"from": "social",
"localField": "accounts",
"foreignField": "account",
"as": "accounts"
}},
{ "$unwind": "$accounts" },
{ "$match": {
"accounts.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$group": {
"_id": {
"brand": "$name",
"date": "$accounts.date"
},
"stat": {
"$sum": {
"$sum": [
{ "$ifNull": [ "$accounts.likes", 0 ] },
{ "$ifNull": [ "$accounts.followers", 0 ] }
]
}
}
}},
{ "$sort": { "_id": 1 } },
{ "$group": {
"_id": "$_id.brand",
"date": { "$push": "$_id.date" },
"stat": { "$push": "$stat" }
}}
])
This is actually the most efficient thing you can do, and it's mostly because of what actually happens on the server. We need to look at the "explain" output to see what happens to the pipeline here:
{
"$lookup" : {
"from" : "social",
"as" : "accounts",
"localField" : "accounts",
"foreignField" : "account",
"unwinding" : {
"preserveNullAndEmptyArrays" : false
},
"matching" : {
"$and" : [
{
"date" : {
"$gte" : ISODate("2017-06-16T00:00:00Z")
}
},
{
"date" : {
"$lte" : ISODate("2017-06-18T00:00:00Z")
}
}
]
}
}
}
This is what happens when you send $lookup -> $unwind -> $match to the server as the latter two stages are "hoisted" into the $lookup itself. This reduces the results in the actual "query" run on the collection to be joined.
Without that sequence, $lookup potentially pulls in "a lot of data" with no constraint, and would break the 16MB BSON limit under most normal loads.
So not only is the process a lot more simple in the altered form, it actually "scales" where the present structure will not. This is something that you seriously should consider.
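To support that hoisted query, a compound index on the join key and the date field (my suggestion, assuming the flattened "social" collection shown above) would be the natural complement:
db.social.createIndex({ account: 1, date: 1 })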