I have the following collections in MongoDB:
Profile Collection
> db.Profile.find()
{ "_id" : ObjectId("5ec62ccb8897af3841a46d46"), "u" : "Test User", "is_del": false }
Store Collection
> db.Store.find()
{ "_id" : ObjectId("5eaa939aa709c30ff4703ffd"), "id" : "5ec62ccb8897af3841a46d46", "a" : { "ci": "Test City", "st": "Test State" }, "ip" : false }, "op" : [ ], "b" : [ "normal" ], "is_del": false}
Item Collection
> db.Item.find()
{ "_id" : ObjectId("5ea98a25f1246b53a46b9e10"), "sid" : "5eaa939aa709c30ff4703ffd", "n" : "sample", "is_del": false}
Relations among these collections are defined as follows:
Profile -> Store: a 1:n relation. The id field in Store refers to the _id field in Profile.
Store -> Item: also a 1:n relation. The sid field in Item refers to the _id field in Store.
Now, I need to write a query to find all the stores of each profile along with the count of Items for each store. Documents with is_del set to true must be excluded.
I am trying it the following way:
Query 1 to find the count of item for each store.
Query 2 to find the store for each profile.
Then in the application logic use both the result to produce the combined output.
I have query 1 as follows:
db.Item.aggregate({$group: {_id: "$sid", count:{$sum:1}}})
Query 2 is as follows:
db.Profile.aggregate([{ "$addFields": { "pid": { "$toString": "$_id" }}}, { "$lookup": {"from": "Store","localField": "pid","foreignField": "id", "as": "stores"}}])
These queries are also missing the is_del filter. Is there any simpler way to perform all of this in a single query? If so, what would the scalability impact be?
You can use $lookup with a sub-query pipeline, available from MongoDB v3.6:
db.Profile.aggregate([
{
$match: { is_del: false }
},
{
$lookup: {
from: "Store",
as: "stores",
let: {
pid: { $toString: "$_id" }
},
pipeline: [
{
$match: {
is_del: false,
$expr: { $eq: ["$$pid", "$id"] }
}
},
{
$lookup: {
from: "Item",
as: "items",
let: {
sid: { $toString: "$_id" }
},
pipeline: [
{
$match: {
is_del: false,
$expr: { $eq: ["$$sid", "$sid"] }
}
},
{
$count: "count"
}
]
}
},
{
$unwind: "$items"
}
]
}
}
])
To improve performance, I suggest you store the reference ids as ObjectId so you don't have to convert them in each step.
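For illustration, here is a minimal sketch of how the same query could look if those references were stored as ObjectId (hypothetical data; neither Store.id nor Item.sid is stored that way in the question). The $toString conversions simply disappear:
// Sketch only: assumes Store.id and Item.sid hold ObjectId values
// (e.g. ObjectId("5ec62ccb8897af3841a46d46")) instead of strings.
db.Profile.aggregate([
  { $match: { is_del: false } },
  {
    $lookup: {
      from: "Store",
      as: "stores",
      let: { pid: "$_id" },                 // no $toString needed
      pipeline: [
        { $match: { is_del: false, $expr: { $eq: ["$id", "$$pid"] } } },
        {
          $lookup: {
            from: "Item",
            as: "items",
            let: { sid: "$_id" },           // no $toString needed
            pipeline: [
              { $match: { is_del: false, $expr: { $eq: ["$sid", "$$sid"] } } },
              { $count: "count" }
            ]
          }
        },
        { $unwind: "$items" }
      ]
    }
  }
])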
I am trying to perform a $lookup with conditions. The problem I am facing is that I would like to match the text field of all the objects inside an array (the accounts array) in the other (plates) collection.
I have tried using $map as well as $in and $setIntersection, but nothing seems to work, and I am unable to find a way to match the text field of each of the objects in the array.
My document structures are as follows:
plates collection:
{
"_id": "Batch 1",
"rego" : "1QX-WA-123",
"date" : 1516374000000.0
"accounts": [{
"text": "Acc1",
"date": 1516374000000
},{
"text": "Acc2",
"date": 1516474000000
}]
}
accounts collection:
{
"_id": "Acc1",
"date": 1516374000000
"createdAt" : 1513810712802.0
}
I am trying to achieve something like this:
{
$lookup: {
from: 'plates',
let: { 'accountId': '$_id' },
pipeline: [{
'$match': {
'$expr': { '$and': [
{ '$eq': [ '$account.text', '$$accountId' ] },
{ '$gte': [ '$date', ISODate ("2016-01-01T00:00:00.000Z").getTime() ] },
{ '$lte': [ '$date', ISODate ("2019-01-01T00:00:00.000Z").getTime() ] }
]}
}
}],
as: 'cusips'
}
},
The output I am trying to get is:
{
"_id": "Acc1",
"date": 1516374000000
"createdAt" : 1513810712802.0,
"plates": [{
"_id": "Batch 1",
"rego": "1QX-WA-123"
}]
}
Personally I would be initiating the aggregation from the "plates" collection instead where the initial $match conditions can filter the date range more cleanly. Getting your desired output is then a simple matter of "unwinding" the resulting "accounts" matches and "inverting" the content.
This is easy enough with MongoDB 3.6 features, which you must have in order to use $lookup with $expr. We don't even need that form of $lookup here:
db.plates.aggregate([
{ "$match": {
"date": {
"$gte": new Date("2016-01-01").getTime(),
"$lte": new Date("2019-01-01").getTime()
}
}},
{ "$lookup": {
"from": "accounts",
"localField": "accounts.text",
"foreignField": "_id",
"as": "accounts"
}},
{ "$unwind": "$accounts" },
{ "$group": {
"_id": "$accounts",
"plates": { "$push": { "_id": "$_id", "rego": "$rego" } }
}},
{ "$replaceRoot": {
"newRoot": {
"$mergeObjects": ["$_id", { "plates": "$plates" }]
}
}}
])
This of course is an "INNER JOIN", which would only return "accounts" entries where there are matching "plates" for the conditions.
Doing the "join" from the "accounts" collection means you need additional handling to remove the non-matching entries from the "accounts" array within the "plates" collection:
db.accounts.aggregate([
{ "$lookup": {
"from": "plates",
"let": { "account": "$_id" },
"pipeline": [
{ "$match": {
"date": {
"$gte": new Date("2016-01-01").getTime(),
"$lte": new Date("2019-01-01").getTime()
},
"$expr": { "$in": [ "$$account", "$accounts.text" ] }
}},
{ "$project": { "_id": 1, "rego": 1 } }
],
"as": "plates"
}}
])
Note that the $match on the "date" properties should be expressed as a regular query condition instead of within the $expr block for optimal performance of the query.
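For example (assuming you manage the indexes on the plates collection yourself), a plain index on date is what lets that regular query condition be served efficiently:
// Hypothetical index: supports the regular "date" range condition in the
// $lookup pipeline's $match stage; the $expr comparison does not benefit from it.
db.plates.createIndex({ "date": 1 })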
The $in is used to compare the "array" of "$accounts.text" values to the local variable defined for the "_id" value of the "accounts" document being joined to. So the first argument to $in is the "single" value and the second is the "array" of just the "text" values which should be matching.
This is also notably a "LEFT JOIN" which returns all "accounts" regardless of whether there are any matching "plates" to the conditions, and therefore you can possibly end up with an empty "plates" array in the results returned. You can filter those out if you didn't want them, but where that was the case the former query form is really far more efficient than this one since the relation is defined and we only ever deal with "plates" which would meet the criteria.
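If you did want to drop those empty results, a small trailing stage appended after the $lookup above would do it, for example:
// Keep only "accounts" whose "plates" array received at least one match,
// effectively turning the LEFT JOIN result back into an INNER JOIN result.
{ "$match": { "plates.0": { "$exists": true } } }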
Either method returns the same response from the data provided in the question:
{
"_id" : "Acc1",
"date" : 1516374000000,
"createdAt" : 1513810712802,
"plates" : [
{
"_id" : "Batch 1",
"rego" : "1QX-WA-123"
}
]
}
Which direction you actually take that from really depends on whether the "LEFT" or "INNER" join form is what you really want and also where the most efficient query conditions can be made for the items you actually want to select.
Hmm, not sure how you tried $in, but it works for me:
{
$lookup: {
from: 'plates',
let: { 'accountId': '$_id' },
pipeline: [{
'$match': {
'$expr': { '$and': [
{ '$in': [ '$$accountId', '$accounts.text'] },
{ '$gte': [ '$date', ISODate ("2016-01-01T00:00:00.000Z").getTime() ] },
{ '$lte': [ '$date', ISODate ("2019-01-01T00:00:00.000Z").getTime() ] }
]}
},
}],
as: 'cusips'
}
}
This is a long question. If you bother answering, I will be extra grateful.
I have some time series data that I am trying to query to create various charts. The data format isn't the most simple, but I think my aggregation pipeline is getting a bit out of hand. I am planning to use charts.js to visualise the data on the client.
I will post a sample of my data below as well as my pipeline, with the desired output.
My question is in two parts - answering either one could solve the problem.
Does charts.js accept data formats other than an array of numbers per row? This would mean my pipeline could try to do less.
My pipeline doesn't quite get to the result I need. Can you recommend any alterations to get the correct result from my pipeline? Is there a simpler way to get my desired output format?
Sample data
Here is a real data sample - a brand with one facebook account and one twitter account. There is some data for some dates in June. Lots of null day and month fields have been omitted.
Brand
[{
"_id": "5943f427e7c11ac3ad3652b0",
"name": "Brand1",
"facebookAccounts": [
"5943f427e7c11ac3ad3652ac",
],
"twitterAccounts": [
"5943f427e7c11ac3ad3652aa",
],
}]
FacebookAccounts
[
{
"_id" : "5943f427e7c11ac3ad3652ac"
"name": "Brand 1 Name",
"years": [
{
"date": "2017-01-01T00:00:00.000Z",
"months": [
{
"date": "2017-06-01T00:00:00.000Z",
"days": [
{
"date": "2017-06-16T00:00:00.000Z",
"likes": 904025,
},
{
"date": "2017-06-17T00:00:00.000Z",
"likes": null,
},
{
"date": "2017-06-18T00:00:00.000Z",
"likes": 904345,
},
],
},
],
}
]
}
]
Twitter accounts
[
{
"_id": "5943f427e7c11ac3ad3652aa",
"name": "Brand 1 Name",
"vendorId": "twitterhandle",
"years": [
{
"date": "2017-01-01T00:00:00.000Z",
"months": [
{
"date": "2017-06-01T00:00:00.000Z",
"days": [
{
"date": "2017-06-16T00:00:00.000Z",
"followers": 69390,
},
{
"date": "2017-06-17T00:00:00.000Z",
"followers": 69397,
{
"date": "2017-06-18T00:00:00.000Z",
"followers": 69428,
},
{
"date": "2017-06-19T00:00:00.000Z",
"followers": 69457,
},
]
},
],
}
]
}
]
The query
For this example, I want, for each brand, a daily sum of facebook likes and twitter followers between June 16th and June 18th. So here, the required format is:
{
brand: Brand1,
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [973415, 69397, 973773]
}
The pipeline
The pipeline seems more convoluted due to the population, but I accept that complexity and it is necessary. Here are the steps:
db.getCollection('brands').aggregate([
{ $match: { _id: { $in: [ObjectId("5943f427e7c11ac3ad3652b0") ] } } },
// Unwind all relevant account types. Make one row per account
{ $project: {
accounts: { $setUnion: [ '$facebookAccounts', '$twitterAccounts' ] } ,
name: '$name'
}
},
{ $unwind: '$accounts' },
// populate the accounts.
// These transform the arrays of facebookAccount ObjectIds into the objects described above.
{ $lookup: { from: 'facebookaccounts', localField: 'accounts', foreignField: '_id', as: 'facebookAccounts' } },
{ $lookup: { from: 'twitteraccounts', localField: 'accounts', foreignField: '_id', as: 'twitterAccounts' } },
// unwind the populated accounts. Back to one record per account.
{ $unwind: { path: '$facebookAccounts', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts', preserveNullAndEmptyArrays: true } },
// unwind to the granularity we want. Here it is one record per day per account per brand.
{ $unwind: { path: '$facebookAccounts.years', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
// Filter each one between dates
{ $match: { $or: [
{ $and: [
{ 'facebookAccounts.years.months.days.date': { $gte: new Date('2017-06-16') } } ,
{ 'facebookAccounts.years.months.days.date': { $lte: new Date('2017-06-18') } }
]},
{ $and: [
{ 'twitterAccounts.years.months.days.date': { $gte: new Date('2017-06-16') } } ,
{ 'twitterAccounts.years.months.days.date': { $lte: new Date('2017-06-18') } }
]}
] }},
// Build stats and date arrays for each account
{ $group: {
_id: '$accounts',
brandId: { $first: '$_id' },
brandName: { $first: '$name' },
stat: {
$push: {
$sum: {
$add: [
{ $ifNull: ['$facebookAccounts.years.months.days.likes', 0] },
{ $ifNull: ['$twitterAccounts.years.months.days.followers', 0] }
]
}
}
},
date: { $push: { $ifNull: ['$facebookAccounts.years.months.days.date', '$twitterAccounts.years.months.days.date'] } } ,
}}
])
This gives me the output format
[{
_id: accountId, // facebook
brandName: 'Brand1',
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [904025, null, 904345]
},
{
_id: accountId, // twitter
brandName: 'Brand1',
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [69457, 69390, 69397]
}]
So I now need to perform column-wise addition on my stat properties. And then I am stuck - I feel like there should be a more pipeline-friendly way to sum these rather than column-wise addition.
Note I accept the extra work that the population required and am happy with that. Most of the repetition is done programmatically.
Thank you if you've gotten this far.
I can trim a lot of fat out of this and keep it compatible with MongoDB 3.2 (which you must be using at least, due to preserveNullAndEmptyArrays) and its available operators, with a few simple actions. Mostly by simply joining the arrays immediately after $lookup, which is the best place to do it:
Short Optimize
db.brands.aggregate([
{ "$lookup": {
"from": "facebookaccounts",
"localField": "facebookAccounts",
"foreignField": "_id",
"as": "facebookAccounts"
}},
{ "$lookup": {
"from": "twitteraccounts",
"localField": "twitterAccounts",
"foreignField": "_id",
"as": "twitterAccounts"
}},
{ "$project": {
"name": 1,
"all": {
"$concatArrays": [ "$facebookAccounts", "$twitterAccounts" ]
}
}},
{ "$match": {
"all.years.months.days.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$unwind": "$all" },
{ "$unwind": "$all.years" },
{ "$unwind": "$all.years.months" },
{ "$unwind": "$all.years.months.days" },
{ "$match": {
"all.years.months.days.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$group": {
"_id": {
"brand": "$name",
"date": "$all.years.months.days.date"
},
"total": {
"$sum": {
"$sum": [
{ "$ifNull": [ "$all.years.months.days.likes", 0 ] },
{ "$ifNull": [ "$all.years.months.days.followers", 0 ] }
]
}
}
}},
{ "$sort": { "_id": 1 } },
{ "$group": {
"_id": "$_id.brand",
"date": { "$push": "$_id.date" },
"stat": { "$push": "$total" }
}}
])
This gives the result:
{
"_id" : "Brand1",
"date" : [
ISODate("2017-06-16T00:00:00Z"),
ISODate("2017-06-17T00:00:00Z"),
ISODate("2017-06-18T00:00:00Z")
],
"stat" : [
973415,
69397,
973773
]
}
With MongoDB 3.4 we could probably speed it up a "little" more by filtering the arrays and breaking them down before we eventually $unwind to make this work across documents, or maybe even not worry about going across documents at all if the "name" from "brands" is unique. The pipeline operations to compact down the arrays "in place" though are quite cumbersome to code, if a "little" better on performance.
You seem to be doing this "per brand" or for a small sample, so it's likely of little consequence.
As for the chart.js data format, I haven't been able to confirm whether it accepts a data format other than the array format used here, but again this should have little bearing.
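For what it's worth, here is a minimal sketch of feeding one result document into a chart.js line chart (assuming chart.js 2.x and a canvas element with id "myChart", neither of which is given in the question):
// "result" is assumed to be one document from the aggregation above,
// e.g. { "_id": "Brand1", "date": [...], "stat": [...] }
const ctx = document.getElementById("myChart").getContext("2d");
new Chart(ctx, {
  type: "line",
  data: {
    labels: result.date.map(d => d.toISOString().slice(0, 10)),  // one label per day
    datasets: [{
      label: result._id,   // the brand name
      data: result.stat    // summed likes + followers per day
    }]
  }
});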
The main point I see addressed is we can easily move away from your previous output that separated the "facebook" and "twitter" data, and simply aggregate by date moving all the data together "before" the arrays are constructed.
That last point then obviates the need for further "convoluted" operations to attempt to "merge" those two documents and the arrays produced.
Alternate Optimize
As an alternate approach where this does in fact not aggregate across documents, then you can essentially do the "filter" on the array in place and then simply sum and reshape the received result in client code.
db.brands.aggregate([
{ "$lookup": {
"from": "facebookaccounts",
"localField": "facebookAccounts",
"foreignField": "_id",
"as": "facebookAccounts"
}},
{ "$lookup": {
"from": "twitteraccounts",
"localField": "twitterAccounts",
"foreignField": "_id",
"as": "twitterAccounts"
}},
{ "$project": {
"name": 1,
"all": {
"$map": {
"input": { "$concatArrays": [ "$facebookAccounts", "$twitterAccounts" ] },
"as": "all",
"in": {
"years": {
"$map": {
"input": "$$all.years",
"as": "year",
"in": {
"months": {
"$map": {
"input": "$$year.months",
"as": "month",
"in": {
"days": {
"$filter": {
"input": "$$month.days",
"as": "day",
"cond": {
"$and": [
{ "$gte": [ "$$day.date", new Date("2017-06-16") ] },
{ "$lte": [ "$$day.date", new Date("2017-06-18") ] }
]
}
}
}
}
}
}
}
}
}
}
}
}
}}
]).map(doc => {
doc.all = [].concat.apply([],[].concat.apply([],[].concat.apply([],doc.all.map(d => d.years)).map(d => d.months)).map(d => d.days));
doc.all = doc.all.reduce((a,b) => {
if ( a.findIndex( d => d.date.valueOf() == b.date.valueOf() ) != -1 ) {
a[a.findIndex( d => d.date.valueOf() == b.date.valueOf() )].stat += (b.hasOwnProperty('likes')) ? (b.likes || 0) : (b.followers || 0);
} else {
a = a.concat([{ date: b.date, stat: (b.hasOwnProperty('likes')) ? (b.likes || 0) : (b.followers || 0) }]);
}
return a;
},[]);
doc.date = doc.all.map(d => d.date);
doc.stat = doc.all.map(d => d.stat);
delete doc.all;
return doc;
})
This really leaves all the things that "need" to happen on the server, on the server. And it's then a fairly trivial task to "flatten" the array and process to "sum up" and reshape it. This would mean less load on the server, and the data returned is not really that much greater per document.
Gives the same result of course:
[
{
"_id" : ObjectId("5943f427e7c11ac3ad3652b0"),
"name" : "Brand1",
"date" : [
ISODate("2017-06-16T00:00:00Z"),
ISODate("2017-06-17T00:00:00Z"),
ISODate("2017-06-18T00:00:00Z")
],
"stat" : [
973415,
69397,
973773
]
}
]
Committing to the Diet
The biggest problem you really have is with the multiple collections and the heavily nested documents. Neither of these is doing you any favors here and will with larger results cause real performance problems.
The nesting in particular is completely unnecessary as well as not being very maintainable since there are limitations to "update" where you have nested arrays. See the positional $ operator documentation, as well as many posts about this.
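To illustrate (a hypothetical update, not taken from the question): the positional $ operator can only stand in for one matched array level, so a single day inside years.months.days cannot be targeted directly:
// Rejected by MongoDB: a path may contain at most one positional "$",
// so deeply nested array elements cannot be addressed this way.
// (arrayFilters in MongoDB 3.6 help, but only add more complexity.)
db.facebookaccounts.updateOne(
  { "_id": "5943f427e7c11ac3ad3652ac", "years.months.days.date": new Date("2017-06-18") },
  { "$set": { "years.$.months.$.days.$.likes": 905000 } }
)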
Instead you really want a single collection with all those "days" entries in it. You can always work with that source easily for query as well as aggregation purposes and it should look something like this:
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38097"),
"date" : ISODate("2017-06-16T00:00:00Z"),
"likes" : 904025,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38098"),
"date" : ISODate("2017-06-17T00:00:00Z"),
"likes" : null,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38099"),
"date" : ISODate("2017-06-18T00:00:00Z"),
"likes" : 904345,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809a"),
"date" : ISODate("2017-06-16T00:00:00Z"),
"followers" : 69390,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809b"),
"date" : ISODate("2017-06-17T00:00:00Z"),
"followers" : 69397,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809c"),
"date" : ISODate("2017-06-18T00:00:00Z"),
"followers" : 69428,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809d"),
"date" : ISODate("2017-06-19T00:00:00Z"),
"followers" : 69457,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
Combining those referenced in the brands collection as well:
{
"_id" : ObjectId("5943f427e7c11ac3ad3652b0"),
"name" : "Brand1",
"accounts" : [
ObjectId("5943f427e7c11ac3ad3652ac"),
ObjectId("5943f427e7c11ac3ad3652aa")
]
}
Then you simply aggregate like this:
db.brands.aggregate([
{ "$lookup": {
"from": "social",
"localField": "accounts",
"foreignField": "account",
"as": "accounts"
}},
{ "$unwind": "$accounts" },
{ "$match": {
"accounts.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$group": {
"_id": {
"brand": "$name",
"date": "$accounts.date"
},
"stat": {
"$sum": {
"$sum": [
{ "$ifNull": [ "$accounts.likes", 0 ] },
{ "$ifNull": [ "$accounts.followers", 0 ] }
]
}
}
}},
{ "$sort": { "_id": 1 } },
{ "$group": {
"_id": "$_id.brand",
"date": { "$push": "$_id.date" },
"stat": { "$push": "$stat" }
}}
])
This is actually the most efficient thing you can do, and it's mostly because of what actually happens on the server. We need to look at the "explain" output to see what happens to the pipeline here:
{
"$lookup" : {
"from" : "social",
"as" : "accounts",
"localField" : "accounts",
"foreignField" : "account",
"unwinding" : {
"preserveNullAndEmptyArrays" : false
},
"matching" : {
"$and" : [
{
"date" : {
"$gte" : ISODate("2017-06-16T00:00:00Z")
}
},
{
"date" : {
"$lte" : ISODate("2017-06-18T00:00:00Z")
}
}
]
}
}
}
This is what happens when you send $lookup -> $unwind -> $match to the server as the latter two stages are "hoisted" into the $lookup itself. This reduces the results in the actual "query" run on the collection to be joined.
Without that sequence, then $lookup potentially pulls in "a lot of data" with no constraint, and would break the 16MB BSON limit under most normal loads.
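If you want to verify that coalescence on your own data, the shell's aggregate helper accepts an explain option (shown here against the remodeled collections above):
// Returns the parsed pipeline rather than results, showing the $unwind and
// $match stages absorbed into the $lookup as "unwinding" and "matching".
db.brands.aggregate([
  { "$lookup": {
    "from": "social",
    "localField": "accounts",
    "foreignField": "account",
    "as": "accounts"
  }},
  { "$unwind": "$accounts" },
  { "$match": {
    "accounts.date": {
      "$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
    }
  }}
], { "explain": true })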
So not only is the process a lot more simple in the altered form, it actually "scales" where the present structure will not. This is something that you seriously should consider.
I have 2 collections (with example documents):
reports
{
id: "R1",
type: "xyz",
}
reportfiles
{
id: "F1",
reportid: "R1",
time: ISODate("2016-06-13T14:20:25.812Z")
},
{
id: "F14",
reportid: "R1",
time: ISODate("2016-06-15T09:20:29.809Z")
}
As you can see one report may have multiple reportfiles.
I'd like to perform a query, matching a report id, returning the report document as is, plus an additional key storing as subdocument the reportfile with the most recent time (even better without reportid, as it would be redundant), e.g.
{
id: "R1",
type: "xyz",
reportfile: {
id: "F14",
reportid: "R1",
time: ISODate("2016-06-15T09:20:29.809Z")
}
}
My problem here is that every report type has its own set of properties, so using $project in an aggregation pipeline is not the best way.
So far I got
db.reports.aggregate([{
$match : { id: 'R1' }
}, {
$lookup : {
from : 'reportfiles',
localField : 'id',
foreignField : 'reportid',
as : 'reportfile'
}
}
])
returning, of course, as reportfile the list of all files with the given reportid. How can I efficiently filter that list to get the only element I need?
Regarding "efficiently": I tried using $unwind as the next pipeline step, but the resulting document was frighteningly and pointlessly long.
Thanks in advance for any suggestion!
You need to add another $project stage to your aggregation pipeline after the $lookup stage.
{ "$project": {
"id": "R1",
"type": "xyz",
"reportfile": {
"$let": {
"vars": {
"obj": {
"$arrayElemAt": [
{ "$filter": {
"input": "$reportfile",
"as": "report",
"cond": { "$eq": [ "$$report.time", { "$max": "$reportfile.time" } ] }
}},
0
]
}
},
"in": { "id": "$$obj.id", "time": "$$obj.time" }
}
}
}}
The $filter operator filters the $lookup result and returns an array with the documents that satisfy your condition. The condition here uses $eq, which returns true when the document has the $max time value.
The $arrayElemAt operator slices the $filter result and returns the element from the array, which you then assign to a variable using the $let operator. From there, you can easily access the fields you want in your result with dot notation.
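Put together with the $lookup from the question, the whole pipeline would look something like this (same collection and field names as in the question):
db.reports.aggregate([
  { "$match": { "id": "R1" } },
  { "$lookup": {
      "from": "reportfiles",
      "localField": "id",
      "foreignField": "reportid",
      "as": "reportfile"
  }},
  { "$project": {
      "id": 1,
      "type": 1,
      "reportfile": {
        "$let": {
          "vars": {
            "obj": {
              "$arrayElemAt": [
                { "$filter": {
                    "input": "$reportfile",
                    "as": "report",
                    "cond": { "$eq": [ "$$report.time", { "$max": "$reportfile.time" } ] }
                }},
                0
              ]
            }
          },
          "in": { "id": "$$obj.id", "time": "$$obj.time" }
        }
      }
  }}
])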
What you would require is to run the aggregation operation on the reportfiles collection, do the "join" on the reports collection, then flatten (with $unwind) and order (with $sort) the documents from the $lookup pipeline. The result can then be grouped by reportid, outputting the desired fields with the $first accumulator operator.
The following demonstrates this approach:
db.reportfiles.aggregate([
{ "$match": { "reportid": "R1" } },
{
"$lookup": {
"from": 'reports',
"localField" : 'reportid',
"foreignField" : 'id',
"as": 'report'
}
},
{ "$unwind": "$report" },
{ "$sort": { "time": -1 } },
{
"$group": {
"_id": "$reportid",
"type": { "$first": "$report.type" },
"reportfile": {
"$first": {
"id": "$id",
"reportid": "$reportid",
"time": "$time"
}
}
}
}
])
Sample Output:
{
"_id" : "R1",
"type" : "xyz",
"reportfile" : {
"id" : "F14",
"reportid" : "R1",
"time" : ISODate("2016-06-15T09:20:29.809Z")
}
}
I have a collection with documents such as:
{
_id: "1234",
_class: "com.acme.classA",
a_collection: [
{
otherdata: 'somedata',
type: 'a'
},
{
otherdata: 'bar',
type: 'a'
},
{
otherdata: 'foo',
type: 'b'
}
],
lastChange: ISODate("2014-08-17T22:25:48.918Z")
}
I want to find all documents by id and a subset of the sub-array. For example, I want to find all documents with _id "1234" where a_collection.type is 'a', giving this result:
{
_id: "1234",
_class: "com.acme.classA",
a_collection: [
{
otherdata: 'somedata',
type: 'a'
},
{
otherdata: 'bar',
type: 'a'
}
],
lastChange: ISODate("2014-08-17T22:25:48.918Z")
}
I have tried this :
db.collection_name.aggregate({
$match: {
'a_collection.type': 'a'
}
},
{
$unwind: "$a_collection"
},
{
$match: {
"a_collection.type": 'a'
}
},
{
$group: {
_id: "$_id",
a_collection: {
$addToSet: "$a_collection"
},
}
}).pretty()
but this doesn't return other properties (such as 'lastChange').
What is the correct way to do this?
Are you using PHP?
And is this the only way you can get the "text"?
Maybe you can rewrite it so that it is a JSON element, something like this:
{
"_id": "1234",
"_class": "com.acme.classA",
"a_collection": [
{
"otherdata": "somedata",
"type": "a"
},
{
"otherdata": "bar",
"type": "a"
},
{
"otherdata": "foo",
"type": "b"
}
]
}
Then you can use the json_decode() function from PHP to make an array and then you can search and return only the needed data.
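For what it's worth, the client-side filtering idea described here looks roughly like this when sketched in JavaScript (the answer suggests PHP's json_decode(); jsonString below is a stand-in for the raw JSON text):
// Decode the document, then keep only the sub-array entries of type "a";
// all other fields on the document stay untouched.
var doc = JSON.parse(jsonString);
doc.a_collection = doc.a_collection.filter(function (el) { return el.type === "a"; });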
Edit: I misread the question. Are you looking for a query like this?
db.inventory.find( {
$or: [ { _id: "1234" }, { 'a_collection.type': 'a' }]
} )
I found the code here ;) http://docs.mongodb.org/manual/tutorial/query-documents/
This is the correct query:
db.collection_name.aggregate({
$match: {
'a_collection.type': 'a'
}
},
{
$unwind: "$a_collection"
},
{
$match: {
"a_collection.type": 'a'
}
},
{
$group: {
_id: "$_id",
a_collection: {
$addToSet: "$a_collection"
},
lastChange : { $first : "$lastChange" }
}
}).pretty()
Something is very strange about your desired query (and your pipelines). First of all, _id is a reserved field with a unique index on it. The result of finding all documents with _id = "1234" can only be 0 or 1 documents. Second, to find documents with a_collection.type = "a" for some element of the array a_collection, you don't need the aggregation framework. You just need a find query:
> db.test.find({ "a_collection.type" : "a" })
So all the work here appears to be winnowing the subarray of one document down to just those elements with a_collection.type = "a". Why do you have these objects in the same document if most of what you do is split them up and eliminate some to find a result set? How common and how truly necessary is it to harvest just the array elements with a_collection.type = "a"? Perhaps you want to model your data differently so a query like
> db.test.find({ <some condition>, "a_collection.type" : "a" })
returns you the correct documents. I can't say how you can do it best with the given information, but I can say that your current approach strongly suggests revision is needed (and I'm happy to help with suggestions if you include further information or post a new question).
I would agree with the answer you have submitted yourself, but note that in MongoDB 2.6 and greater there is a better way to do this with $map and $setDifference, which were both introduced at that version. Where available, this approach is much faster:
db.collection.aggregate([
{ "$match": { "a_collection.type": "a" } },
{ "$project": {
"$setDifference": [
{ "$map": [
"input": "$a_collection",
"as": "el",
"in": {
"$cond": [
{ "$eq": [ "$$el.type", "a" ] },
"$$el",
false
]
}
]},
[false]
]
}}
])
So that has no $group or initial $unwind, which can both be costly operations, nor the second $match stage. So MongoDB 2.6 does it better.
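For completeness, on MongoDB 3.2 and later the same "filter in place" idea can be written more directly with $filter instead of the $map/$setDifference trick (a sketch, not part of the original answer):
db.collection_name.aggregate([
  { "$match": { "a_collection.type": "a" } },
  { "$project": {
    "_class": 1,
    "lastChange": 1,
    "a_collection": {
      "$filter": {
        "input": "$a_collection",
        "as": "el",
        "cond": { "$eq": [ "$$el.type", "a" ] }
      }
    }
  }}
])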