Pymongo Advanced query (embedded objects and arrays) - mongodb

I have been working on this for about a week, have learnt a lot about pymongo but still can't crack it.
I have the following JSON in mongo
{
    "_id" : ObjectId("5845e25xxxxxxxxxx"),
    "timestamp" : ISODate("2016-08-24T14:59:04.000+0000"),
    "new_key" : "cambiar",
    "Records" : [
        {
            "RecordType" : "WFD",
            "Properties" : [
                {
                    "Property" : {
                        "IsReadOnly" : "False",
                        "ValueType" : "System",
                        "Name" : "LastWrite"
                    },
                    "value" : "42"
                },
                {
                    "Property" : {
                        "IsReadOnly" : "True",
                        "ValueType" : "String",
                        "Name" : "Time_as_String"
                    },
                    "value" : "24-08-2016 14:59:08"
                },
                {
                    "Property" : {
                        "IsReadOnly" : "False",
                        "ValueType" : "32",
                        "Name" : "YES"
                    },
                    "value" : "1472065148"
                },
                ...
There are many more properties below. I am trying to return just the "value" : "1472065148" and nothing else. I have written this query
x = dataset.find_one({"Records.Properties.Property.Name":'YES'},{"Records.Properties.Property.value":1})
but because the 'value' is on the same level as all the other values, I get all of the value results, not just the one I am hoping for.
Is there a way to print only the result object after the object that matches the query? "Name" : "YES" being the object I'm querying and "value" : "1472065148" being the object I want to print.
------------------------ ADDITIONAL PART ---------------
Above is one document which has the 'Name' : 'YES' value that I want to retrieve. However, every document has the same 'Name' : 'YES' inside it. What I would like to do is first select a document based on a different 'Name' : 'xxxx' by its value. For example, above: look up 'Name' : 'LastWrite' and check that its value is 42 (thus selecting this document and not the others) before retrieving the 'Name' : 'YES' value (as in the answer you have given me).
(if this counts as a new question please let me know and I will remove it and post a new question)

The only option you have with the existing structure is to use aggregation.
db.dataset.aggregate([{
$unwind: "$Records"
}, {
$unwind: "$Records.Properties"
}, {
$match: {
"Records.Properties.Property.Name": 'YES'
}
}, {
$project: {
"_id": 0,
"value": "$Records.Properties.value"
}
}]).pretty()
Sample Response
{ "value" : "1472065148" }
Assume you were able to update your structure as follows (some fields removed for brevity). The change here is that Records is no longer an array but just an embedded document.
db.dataset.insertMany([{
"timestamp": ISODate("2016-08-24T14:59:04.000+0000"),
"Records": {
"RecordType": "WFD",
"Properties": [{
"Property": {
"Name": "LastWrite"
},
"value": "42"
}, {
"Property": {
"Name": "YES"
},
"value": "1472065148"
}]
}
}])
Query
db.dataset.find({"Records.Properties.Property.Name":'YES'},{"Records.Properties.$":1}).pretty()
Response
{
"_id": ObjectId("5859e80591c255c059a3da50"),
"Records": {
"Properties": [{
"Property": {
"Name": "YES"
},
"value": "1472065148"
}]
}
}
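With the restructured documents, the positional projection also translates directly to PyMongo's find_one; a sketch reusing the dataset handle from the earlier snippet:
# Positional $ projection keeps only the first array element that matched
# the query condition, so Records.Properties comes back with one entry.
doc = dataset.find_one(
    {"Records.Properties.Property.Name": "YES"},
    {"Records.Properties.$": 1},
)
if doc is not None:
    print(doc["Records"]["Properties"][0]["value"])  # "1472065148"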
Update for the additional part:
There are a couple of ways you can take care of this. Things can be optimized a bit, but I'll leave that up to you.
Option 1:
$map applies an equality comparison between the criteria passed and the fields in each Properties element, generating an array of true and false values. $anyElementTrue inspects this array and returns true only if there is at least one true value in it. The $match stage then includes only documents whose matched value is true.
The data field keeps the original document using the system variable $$ROOT.
Complete Query
db.dataset.aggregate([{
$unwind: "$Records"
}, {
$project: {
"_id": 0,
"matched": {
"$anyElementTrue": {
"$map": {
"input": "$Records.Properties",
"as": "result",
"in": {
"$and": [{
$eq: ["$$result.Property.Name", "LastWrite"]
}, {
$eq: ["$$result.value", "42"]
}]
}
}
}
},
"data": "$$ROOT"
}
}, {
"$match": {
"matched": true
}
}, {
$unwind: "$data.Records"
}, {
$unwind: "$data.Records.Properties"
}, {
$match: {
"data.Records.Properties.Property.Name": 'YES'
}
}, {
$project: {
"_id": 0,
"value": "$data.Records.Properties.value"
}
}]).pretty()
Option 2:
A better option (superior in performance), so use this if your MongoDB version supports $redact.
Similar to the above version, but this one combines the project and match stages into one. The $cond inside $redact performs the match: when a match is found it keeps the complete tree, otherwise it discards it.
Complete Query
db.dataset.aggregate([{
$unwind: "$Records"
}, {
"$redact": {
"$cond": [{
"$anyElementTrue": {
"$map": {
"input": "$Records.Properties",
"as": "result",
"in": {
"$and": [{
$eq: ["$$result.Property.Name", "LastWrite"]
}, {
$eq: ["$$result.value", "42"]
}]
}
}
}
},
"$$KEEP",
"$$PRUNE"
]
}
}, {
$unwind: "$Records.Properties"
}, {
$match: {
"Records.Properties.Property.Name": 'YES'
}
}, {
$project: {
"_id": 0,
"value": "$Records.Properties.value"
}
}]).pretty()
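For completeness, a hedged PyMongo rendering of the $redact variant; the $$KEEP and $$PRUNE system variables are passed as plain strings, and the dataset handle from the first snippet is assumed:
pipeline = [
    {"$unwind": "$Records"},
    {"$redact": {
        "$cond": [
            {"$anyElementTrue": {
                "$map": {
                    "input": "$Records.Properties",
                    "as": "result",
                    "in": {"$and": [
                        {"$eq": ["$$result.Property.Name", "LastWrite"]},
                        {"$eq": ["$$result.value", "42"]},
                    ]},
                }
            }},
            "$$KEEP",   # keep the whole document when the pair matches
            "$$PRUNE",  # otherwise discard it
        ]
    }},
    {"$unwind": "$Records.Properties"},
    {"$match": {"Records.Properties.Property.Name": "YES"}},
    {"$project": {"_id": 0, "value": "$Records.Properties.value"}},
]
print(list(dataset.aggregate(pipeline)))  # [{'value': '1472065148'}]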

Related

Querying a multi-nested array in MongoDb 3.4.2

MongoDB Version - 3.4.2
I'm querying within the Sitecore Analytics database, trying to retrieve all users associated with a given List Id.
The example dataset I have follows the default Sitecore Analytics setup:
"Tags" : {
"Entries" : {
"ContactLists" : {
"Values" : {
"0" : {
"Value" : "{1E2D1AB7-72A0-4FF7-906B-DCDC020B87D2}",
"DateTime" : ISODate("2020-10-23T17:38:13.891Z")
},
"1" : {
"Value" : "{28BECCD3-476B-4B1D-9A75-02E59EF21286}",
"DateTime" : ISODate("2018-04-18T14:22:41.763Z")
},
"2" : {
"Value" : "{2C2BB0C3-483D-490E-B93A-9155BFBBE5DC}",
"DateTime" : ISODate("2018-05-10T14:26:08.494Z")
},
"3" : {
"Value" : "{DBE480F6-E305-4B35-9E6D-CBED64F4E44F}",
"DateTime" : ISODate("2018-10-27T02:41:28.776Z")
},
}
}
}
},
I want to iterate through all the entries within Values without having to specify 0/1/2/3, avoiding the following:
db.getCollection('Contacts').find({"Tags.Entries.ContactLists.Values.1.Value": "{28BECCD3-476B-4B1D-9A75-02E59EF21286}"})
I've tried the following:
db.getCollection('Contacts').find({"Tags.Entries.ContactLists.Values": {$elemMatch : {"Value":"{28BECCD3-476B-4B1D-9A75-02E59EF21286}"}}})
db.getCollection('Contacts').find({'Tags' : {$elemMatch : {$all : ['{28BECCD3-476B-4B1D-9A75-02E59EF21286}']}}})
db.getCollection('Contacts').forEach(function (doc) {
for(var i in doc.Tags.Entries.ContactLists.Values)
{
doc.Tags.Entries.ContactLists.Values[i].Value = "{28BECCD3-476B-4B1D-9A75-02E59EF21286}";
}
})
And a few other variations which I cannot recall now; none of them work.
Any ideas if this is possible, or on how to do this?
I want the outcome to filter the results, showing only the entries containing the matching GUID.
Many thanks!
Demo - https://mongoplayground.net/p/upgYxgzPwJQ
It can be done using an aggregation pipeline:
Use $objectToArray to convert the Values object to an array
Use $filter to filter that array
db.collection.aggregate([
{
$addFields: {
filteredValue: {
$filter: {
input: {
$objectToArray: "$Tags.Entries.ContactLists.Values"
},
as: "val",
cond: {
$eq: [ // filter condition
"$$val.v.Value",
"{28BECCD3-476B-4B1D-9A75-02E59EF21286}"
]
}
}
}
}
}
])
Output -
[
{
"Tags": {
"Entries": {
"ContactLists": {
"Values": {
"0": {
"DateTime": ISODate("2020-10-23T17:38:13.891Z"),
"Value": "{1E2D1AB7-72A0-4FF7-906B-DCDC020B87D2}"
},
"1": {
"DateTime": ISODate("2018-04-18T14:22:41.763Z"),
"Value": "{28BECCD3-476B-4B1D-9A75-02E59EF21286}"
},
"2": {
"DateTime": ISODate("2018-05-10T14:26:08.494Z"),
"Value": "{2C2BB0C3-483D-490E-B93A-9155BFBBE5DC}"
},
"3": {
"DateTime": ISODate("2018-10-27T02:41:28.776Z"),
"Value": "{DBE480F6-E305-4B35-9E6D-CBED64F4E44F}"
}
}
}
}
},
"_id": ObjectId("5a934e000102030405000000"),
"filteredValue": [
{
"k": "1",
"v": {
"DateTime": ISODate("2018-04-18T14:22:41.763Z"),
"Value": "{28BECCD3-476B-4B1D-9A75-02E59EF21286}"
}
}
]
}
]
You cannot use $elemMatch because Values is not an array but an object. You can solve the problem with an aggregation pipeline:
$addFields to add a new field Values_Array that is the array representation of the Values object
$objectToArray to transform the Values object into an array
$match to find all documents that have the requested value in the new Values_Array field
$project to specify which properties to return from the result
db.getCollection('Contacts').aggregate([
{
"$addFields": {
"Values_Array": {
"$objectToArray": "$Tags.Entries.ContactLists.Values"
}
}
},
{
"$match": {
"Values_Array.v.Value": "{28BECCD3-476B-4B1D-9A75-02E59EF21286}"
}
},
{
"$project": {
"Tags": 1
}
}
])
Here is the working example: https://mongoplayground.net/p/2gY-vu3Qrvz
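Either variant carries over unchanged to a driver. A minimal PyMongo sketch of the $addFields / $match / $project approach, assuming a db handle for the analytics database (the URI and database name here are placeholders):
from pymongo import MongoClient

db = MongoClient()["analytics"]  # hypothetical URI and database name
target = "{28BECCD3-476B-4B1D-9A75-02E59EF21286}"

pipeline = [
    {"$addFields": {
        "Values_Array": {"$objectToArray": "$Tags.Entries.ContactLists.Values"}
    }},
    {"$match": {"Values_Array.v.Value": target}},
    {"$project": {"Tags": 1}},
]
for contact in db["Contacts"].aggregate(pipeline):
    print(contact["_id"])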

Querying mongoDB for some chart data - my pipeline seems convoluted

This is a long question. If you bother answering, I will be extra grateful.
I have some time series data that I am trying to query to create various charts. The data format isn't the most simple, but I think my aggregation pipeline is getting a bit out of hand. I am planning to use charts.js to visualise the data on the client.
I will post a sample of my data below as well as my pipeline, with the desired output.
My question is in two parts - answering either one could solve the problem.
Does charts.js accept data formats other than an array of numbers per row? This would mean my pipeline could try to do less.
My pipeline doesn't quite get to the result I need. Can you recommend any alterations to get the correct result from my pipeline? Is there is a simpler way to get my desired output format?
Sample data
Here is a real data sample - a brand with one facebook account and one twitter account. There is some data for some dates in June. Lots of null day and month fields have been omitted.
Brand
[{
"_id": "5943f427e7c11ac3ad3652b0",
"name": "Brand1",
"facebookAccounts": [
"5943f427e7c11ac3ad3652ac",
],
"twitterAccounts": [
"5943f427e7c11ac3ad3652aa",
],
}]
FacebookAccounts
[
{
"_id" : "5943f427e7c11ac3ad3652ac"
"name": "Brand 1 Name",
"years": [
{
"date": "2017-01-01T00:00:00.000Z",
"months": [
{
"date": "2017-06-01T00:00:00.000Z",
"days": [
{
"date": "2017-06-16T00:00:00.000Z",
"likes": 904025,
},
{
"date": "2017-06-17T00:00:00.000Z",
"likes": null,
},
{
"date": "2017-06-18T00:00:00.000Z",
"likes": 904345,
},
],
},
],
}
]
}
]
Twitter accounts
[
{
"_id": "5943f427e7c11ac3ad3652aa",
"name": "Brand 1 Name",
"vendorId": "twitterhandle",
"years": [
{
"date": "2017-01-01T00:00:00.000Z",
"months": [
{
"date": "2017-06-01T00:00:00.000Z",
"days": [
{
"date": "2017-06-16T00:00:00.000Z",
"followers": 69390,
},
{
"date": "2017-06-17T00:00:00.000Z",
"followers": 69397,
{
"date": "2017-06-18T00:00:00.000Z",
"followers": 69428,
},
{
"date": "2017-06-19T00:00:00.000Z",
"followers": 69457,
},
]
},
],
}
]
}
]
The query
For this example, I want, for each brand, a daily sum of facebook likes and twitter followers between June 16th and June 18th. So here, the required format is:
{
brand: Brand1,
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [973415, 69397, 973773]
}
The pipeline
The pipeline seems more convoluted due to the population, but I accept that complexity and it is necessary. Here are the steps:
db.getCollection('brands').aggregate([
{ $match: { _id: { $in: [ObjectId("5943f427e7c11ac3ad3652b0") ] } } },
// Unwind all relevant account types. Make one row per account
{ $project: {
accounts: { $setUnion: [ '$facebookAccounts', '$twitterAccounts' ] } ,
name: '$name'
}
},
{ $unwind: '$accounts' },
// populate the accounts.
// These transform the arrays of facebookAccount ObjectIds into the objects described above.
{ $lookup: { from: 'facebookaccounts', localField: 'accounts', foreignField: '_id', as: 'facebookAccounts' } },
{ $lookup: { from: 'twitteraccounts', localField: 'accounts', foreignField: '_id', as: 'twitterAccounts' } },
// unwind the populated accounts. Back to one record per account.
{ $unwind: { path: '$facebookAccounts', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts', preserveNullAndEmptyArrays: true } },
// unwind to the granularity we want. Here it is one record per day per account per brand.
{ $unwind: { path: '$facebookAccounts.years', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$facebookAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months', preserveNullAndEmptyArrays: true } },
{ $unwind: { path: '$twitterAccounts.years.months.days', preserveNullAndEmptyArrays: true } },
// Filter each one between dates
{ $match: { $or: [
{ $and: [
{ 'facebookAccounts.years.months.days.date': { $gte: new Date('2017-06-16') } } ,
{ 'facebookAccounts.years.months.days.date': { $lte: new Date('2017-06-18') } }
]},
{ $and: [
{ 'twitterAccounts.years.months.days.date': { $gte: new Date('2017-06-16') } } ,
{ 'twitterAccounts.years.months.days.date': { $lte: new Date('2017-06-18') } }
]}
] }},
// Build stats and date arrays for each account
{ $group: {
_id: '$accounts',
brandId: { $first: '$_id' },
brandName: { $first: '$name' },
stat: {
$push: {
$sum: {
$add: [
{ $ifNull: ['$facebookAccounts.years.months.days.likes', 0] },
{ $ifNull: ['$twitterAccounts.years.months.days.followers', 0] }
]
}
}
},
date: { $push: { $ifNull: ['$facebookAccounts.years.months.days.date', '$twitterAccounts.years.months.days.date'] } } ,
}}
])
This gives me the output format
[{
_id: accountId, // facebook
brandName: 'Brand1'
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [904025, null, 904345]
},
{
_id: accountId // twitter
brandName: 'Brand1',
date: ["2017-06-16T00:00:00.000Z", "2017-06-17T00:00:00.000Z", "2017-06-18T00:00:00.000Z"],
stat: [69457, 69390, 69397]
}]
So I now need to perform column-wise addition on my stat properties, and there I am stuck - I feel like there should be a more pipeline-friendly way to sum these than column-wise addition.
Note I accept the extra work that the population required and am happy with that. Most of the repetition is done programmatically.
Thank you if you've gotten this far.
I can trim a lot of fat out of this and keep it compatible with the operators available in MongoDB 3.2 (which you must be using at least, given preserveNullAndEmptyArrays) with a few simple actions. Mostly by simply joining the arrays immediately after $lookup, which is the best place to do it:
Short Optimize
db.brands.aggregate([
{ "$lookup": {
"from": "facebookaccounts",
"localField": "facebookAccounts",
"foreignField": "_id",
"as": "facebookAccounts"
}},
{ "$lookup": {
"from": "twitteraccounts",
"localField": "twitterAccounts",
"foreignField": "_id",
"as": "twitterAccounts"
}},
{ "$project": {
"name": 1,
"all": {
"$concatArrays": [ "$facebookAccounts", "$twitterAccounts" ]
}
}},
{ "$match": {
"all.years.months.days.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$unwind": "$all" },
{ "$unwind": "$all.years" },
{ "$unwind": "$all.years.months" },
{ "$unwind": "$all.years.months.days" },
{ "$match": {
"all.years.months.days.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$group": {
"_id": {
"brand": "$name",
"date": "$all.years.months.days.date"
},
"total": {
"$sum": {
"$sum": [
{ "$ifNull": [ "$all.years.months.days.likes", 0 ] },
{ "$ifNull": [ "$all.years.months.days.followers", 0 ] }
]
}
}
}},
{ "$sort": { "_id": 1 } },
{ "$group": {
"_id": "$_id.brand",
"date": { "$push": "$_id.date" },
"stat": { "$push": "$total" }
}}
])
This gives the result:
{
"_id" : "Brand1",
"date" : [
ISODate("2017-06-16T00:00:00Z"),
ISODate("2017-06-17T00:00:00Z"),
ISODate("2017-06-18T00:00:00Z")
],
"stat" : [
973415,
69397,
973773
]
}
With MongoDB 3.4 we could probably speed it up a "little" more by filtering the arrays and breaking them down before we eventually $unwind to make this work across documents, or maybe even not worry about going across documents at all if the "name" from "brands" is unique. The pipeline operations to compact down the arrays "in place" though are quite cumbersome to code, if a "little" better on performance.
You seem to be doing this "per brand" or for a small sample, so it's likely of little consequence.
As for the chartjs data format, I have not been able to find support for a data format other than the array format used here, but again this should have little bearing.
The main point I see addressed is we can easily move away from your previous output that separated the "facebook" and "twitter" data, and simply aggregate by date moving all the data together "before" the arrays are constructed.
That last point then obviates the need for further "convoluted" operations to attempt to "merge" those two documents and the arrays produced.
Alternate Optimize
As an alternate approach, where this does not in fact aggregate across documents, you can essentially do the "filter" on the array in place, then simply sum and reshape the received result in client code.
db.brands.aggregate([
{ "$lookup": {
"from": "facebookaccounts",
"localField": "facebookAccounts",
"foreignField": "_id",
"as": "facebookAccounts"
}},
{ "$lookup": {
"from": "twitteraccounts",
"localField": "twitterAccounts",
"foreignField": "_id",
"as": "twitterAccounts"
}},
{ "$project": {
"name": 1,
"all": {
"$map": {
"input": { "$concatArrays": [ "$facebookAccounts", "$twitterAccounts" ] },
"as": "all",
"in": {
"years": {
"$map": {
"input": "$$all.years",
"as": "year",
"in": {
"months": {
"$map": {
"input": "$$year.months",
"as": "month",
"in": {
"days": {
"$filter": {
"input": "$$month.days",
"as": "day",
"cond": {
"$and": [
{ "$gte": [ "$$day.date", new Date("2017-06-16") ] },
{ "$lte": [ "$$day.date", new Date("2017-06-18") ] }
]
}
}
}
}
}
}
}
}
}
}
}
}
}}
]).map(doc => {
doc.all = [].concat.apply([],[].concat.apply([],[].concat.apply([],doc.all.map(d => d.years)).map(d => d.months)).map(d => d.days));
doc.all = doc.all.reduce((a,b) => {
if ( a.findIndex( d => d.date.valueOf() == b.date.valueOf() ) != -1 ) {
a[a.findIndex( d => d.date.valueOf() == b.date.valueOf() )].stat += (b.hasOwnProperty('likes')) ? (b.likes || 0) : (b.followers || 0);
} else {
a = a.concat([{ date: b.date, stat: (b.hasOwnProperty('likes')) ? (b.likes || 0) : (b.followers || 0) }]);
}
return a;
},[]);
doc.date = doc.all.map(d => d.date);
doc.stat = doc.all.map(d => d.stat);
delete doc.all;
return doc;
})
This really leaves all the things that "need" to happen on the server, on the server. And it's then a fairly trivial task to "flatten" the array and process to "sum up" and reshape it. This would mean less load on the server, and the data returned is not really that much greater per document.
Gives the same result of course:
[
{
"_id" : ObjectId("5943f427e7c11ac3ad3652b0"),
"name" : "Brand1",
"date" : [
ISODate("2017-06-16T00:00:00Z"),
ISODate("2017-06-17T00:00:00Z"),
ISODate("2017-06-18T00:00:00Z")
],
"stat" : [
973415,
69397,
973773
]
}
]
Committing to the Diet
The biggest problem you really have is with the multiple collections and the heavily nested documents. Neither of these is doing you any favors here, and with larger results they will cause real performance problems.
The nesting in particular is completely unnecessary as well as not being very maintainable since there are limitations to "update" where you have nested arrays. See the positional $ operator documentation, as well as many posts about this.
Instead you really want a single collection with all those "days" entries in it. You can always work with that source easily for query as well as aggregation purposes and it should look something like this:
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38097"),
"date" : ISODate("2017-06-16T00:00:00Z"),
"likes" : 904025,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38098"),
"date" : ISODate("2017-06-17T00:00:00Z"),
"likes" : null,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac38099"),
"date" : ISODate("2017-06-18T00:00:00Z"),
"likes" : 904345,
"__t" : "Facebook",
"account" : ObjectId("5943f427e7c11ac3ad3652ac")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809a"),
"date" : ISODate("2017-06-16T00:00:00Z"),
"followers" : 69390,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809b"),
"date" : ISODate("2017-06-17T00:00:00Z"),
"followers" : 69397,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809c"),
"date" : ISODate("2017-06-18T00:00:00Z"),
"followers" : 69428,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
{
"_id" : ObjectId("5948cd5cd6eb0b7d6ac3809d"),
"date" : ISODate("2017-06-19T00:00:00Z"),
"followers" : 69457,
"__t" : "Twitter",
"account" : ObjectId("5943f427e7c11ac3ad3652aa")
}
Combining those referenced in the brands collection as well:
{
"_id" : ObjectId("5943f427e7c11ac3ad3652b0"),
"name" : "Brand1",
"accounts" : [
ObjectId("5943f427e7c11ac3ad3652ac"),
ObjectId("5943f427e7c11ac3ad3652aa")
]
}
Then you simply aggregate like this:
db.brands.aggregate([
{ "$lookup": {
"from": "social",
"localField": "accounts",
"foreignField": "account",
"as": "accounts"
}},
{ "$unwind": "$accounts" },
{ "$match": {
"accounts.date": {
"$gte": new Date("2017-06-16"), "$lte": new Date("2017-06-18")
}
}},
{ "$group": {
"_id": {
"brand": "$name",
"date": "$accounts.date"
},
"stat": {
"$sum": {
"$sum": [
{ "$ifNull": [ "$accounts.likes", 0 ] },
{ "$ifNull": [ "$accounts.followers", 0 ] }
]
}
}
}},
{ "$sort": { "_id": 1 } },
{ "$group": {
"_id": "$_id.brand",
"date": { "$push": "$_id.date" },
"stat": { "$push": "$stat" }
}}
])
This is actually the most efficient thing you can do, and it's mostly because of what actually happens on the server. We need to look at the "explain" output to see what happens to the pipeline here:
{
"$lookup" : {
"from" : "social",
"as" : "accounts",
"localField" : "accounts",
"foreignField" : "account",
"unwinding" : {
"preserveNullAndEmptyArrays" : false
},
"matching" : {
"$and" : [
{
"date" : {
"$gte" : ISODate("2017-06-16T00:00:00Z")
}
},
{
"date" : {
"$lte" : ISODate("2017-06-18T00:00:00Z")
}
}
]
}
}
}
This is what happens when you send $lookup -> $unwind -> $match to the server as the latter two stages are "hoisted" into the $lookup itself. This reduces the results in the actual "query" run on the collection to be joined.
Without that sequence, then $lookup potentially pulls in "a lot of data" with no constraint, and would break the 16MB BSON limit under most normal loads.
So not only is the process a lot more simple in the altered form, it actually "scales" where the present structure will not. This is something that you seriously should consider.
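If you adopt this flattened structure, the final pipeline also translates directly to PyMongo, with Python datetime objects standing in for the shell's ISODate values. A sketch under the assumption of a local server and a placeholder database name:
from datetime import datetime
from pymongo import MongoClient

db = MongoClient()["test"]  # hypothetical connection and database name

pipeline = [
    {"$lookup": {
        "from": "social",
        "localField": "accounts",
        "foreignField": "account",
        "as": "accounts",
    }},
    {"$unwind": "$accounts"},
    {"$match": {"accounts.date": {
        "$gte": datetime(2017, 6, 16),
        "$lte": datetime(2017, 6, 18),
    }}},
    {"$group": {
        "_id": {"brand": "$name", "date": "$accounts.date"},
        "stat": {"$sum": {"$sum": [
            {"$ifNull": ["$accounts.likes", 0]},
            {"$ifNull": ["$accounts.followers", 0]},
        ]}},
    }},
    {"$sort": {"_id": 1}},
    {"$group": {
        "_id": "$_id.brand",
        "date": {"$push": "$_id.date"},
        "stat": {"$push": "$stat"},
    }},
]
for row in db["brands"].aggregate(pipeline):
    print(row)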

$elemMatch against two Array elements if one fails

A bit odd but this is what I am looking for.
I have an array as follow:
Document 1:
Items: [
    {
        "ZipCode": "11111",
        "ZipCode4": "1234"
    }
]
Document 2:
Items: [
    {
        "ZipCode": "11111",
        "ZipCode4": "0000"
    }
]
I would like to use a single query, sending a filter of ZipCode = "11111" && ZipCode4 = "4321"; if this fails, the query should look for ZipCode = "11111" && ZipCode4 = "0000".
Is there a way to do this in a single query, or do I need to make 2 calls to my database?
For matching both data sets (11111/4321) and (11111/0000), you can use $or and $and with $elemMatch, like the following:
db.test.find({
$or: [{
$and: [{
"Items": {
$elemMatch: { "ZipCode": "11111" }
}
}, {
"Items": {
$elemMatch: { "ZipCode4": "4321" }
}
}]
}, {
$and: [{
"Items": {
$elemMatch: { "ZipCode": "11111" }
}
}, {
"Items": {
$elemMatch: { "ZipCode4": "0000" }
}
}]
}]
})
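For reference, the same $or filter can be issued from a driver; a hedged PyMongo sketch with placeholder database and collection names:
from pymongo import MongoClient

items = MongoClient()["test"]["test"]  # hypothetical database/collection names

zip_filter = {"$or": [
    {"$and": [
        {"Items": {"$elemMatch": {"ZipCode": "11111"}}},
        {"Items": {"$elemMatch": {"ZipCode4": "4321"}}},
    ]},
    {"$and": [
        {"Items": {"$elemMatch": {"ZipCode": "11111"}}},
        {"Items": {"$elemMatch": {"ZipCode4": "0000"}}},
    ]},
]}
docs = list(items.find(zip_filter))  # documents matching either pair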
As you want conditional staging, that is not possible as such, but we can get close to it like this:
db.test.aggregate([{
$match: {
$or: [{
$and: [{ "Items.ZipCode": "11111" }, { "Items.ZipCode4": "4321" }]
}, {
$and: [{ "Items.ZipCode": "11111" }, { "Items.ZipCode4": "0000" }]
}]
}
}, {
$project: {
Items: 1,
match: {
"$map": {
"input": "$Items",
"as": "val",
"in": {
"$cond": [
{ $and: [{ "$eq": ["$$val.ZipCode", "11111"] }, { "$eq": ["$$val.ZipCode4", "4321"] }] },
true,
false
]
}
}
}
}
}, {
$unwind: "$match"
}, {
$group: {
_id: "$match",
data: {
$push: {
_id: "$_id",
Items: "$Items"
}
}
}
}])
The first $match selects only the items we need.
The $project builds a new field that checks whether each item is from the 1st set of data (11111/4321) or the 2nd set of data (11111/0000).
The $unwind removes the array generated by $map.
The $group groups by set of data.
So in the end you will have output like the following:
{ "_id" : true, "data" : [ { "_id" : ObjectId("58af69ac594b51730a394972"), "Items" : [ { "ZipCode" : "11111", "ZipCode4" : "4321" } ] }, { "_id" : ObjectId("58af69ac594b51730a394974"), "Items" : [ { "ZipCode" : "11111", "ZipCode4" : "4321" } ] } ] }
{ "_id" : false, "data" : [ { "_id" : ObjectId("58af69ac594b51730a394971"), "Items" : [ { "ZipCode" : "11111", "ZipCode4" : "0000" } ] } ] }
Your application logic can check if there is _id:true in this output array, just take the corresponding data field for _id:true. If there is _id:false in this object take the corresponding data field for _id:false.
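In application code, that preference check could look like the following sketch, assuming the shell pipeline above has been transcribed into a Python list named pipeline and items is the PyMongo collection:
# Run the aggregation and prefer the exact (11111/4321) group (_id: true);
# fall back to the (11111/0000) group (_id: false) only when no exact match.
groups = list(items.aggregate(pipeline))
exact = next((g["data"] for g in groups if g["_id"] is True), None)
fallback = next((g["data"] for g in groups if g["_id"] is False), [])
result = exact if exact is not None else fallback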
In the last $group, you can also use $addToSet to build 2 fields, data1 & data2, for both types of data set, but this will be painful to use as it will add a null object to the array for each document of the opposite type:
"$addToSet": {
"$cond": [
{ "$eq": ["$_id", true] },
"$data",
null
]
}
Here is a gist

MongoDB query with conditional group by statement

I need to export customer records from a MongoDB database. Exported customer records should not have duplicate values. "firstName+lastName+code" is the key used to de-dupe the records, and if there are two records present in the database with the same key, then I need to give preference to the source field with a value other than email.
The customer collection is (id, firstName, lastName, code, source).
If there are 3 records with the same unique key and 3 different sources, then I need to choose only one record between the 2 non-email sources (TV, internet) {or if there are n sources, I need only the one record}, and not the one with 'email' (email is only chosen when a single record is present with the unique key and its source is email).
Query I am using:
db.customer.aggregate([
{
"$match": {
"active": true,
"dealerCode": { "$in": ["111391"] },
"source": { "$in": ["email", "TV", "internet"] }
}
},
{
$group: {
"_id": {
"firstName": "$personalInfo.firstName",
"lastName": "$personalInfo.lastName",
"code": "$vehicle.code"
},
"source": {
$addToSet: { "source": "$source" }
}
}
},
{
$redact:
{
$cond: [
{ $eq: [{ $ifNull: ["$source", "other"] }, "email"] },
"$$PRUNE",
"$$DESCEND"
]
}
},
{
$project:
{
"source":
{
$map:
{
"input": {
$cond: [
{ $eq: [{ $size: "$source" }, 0] },
[{ "source": "email" }],
"$source"
]
},
"as": "inp",
"in": "$$inp.source"
}
},
"record": { "_id": 1 }
}
}
])
sample output:
{ "_id" : { "firstName" : "sGI6YaJ36WRfI4xuJQzI7A==", "lastName" : "99eQ7i+uTOqO8X+IPW+NOA==", "code" : "1GTHK23688F113955" }, "source" : ["internet"] }
{ "_id" : { "firstName" : "WYDROTF/9vs9O7XhdIKd5Q==", "lastName" : "BM18Uq/ltcbdx0UJOXh7Sw==", "code" : "1G4GE5GV5AF180133" }, "source" : ["internet"] }
{ "_id" : { "firstName" : "id+U2gYNHQaNQRWXpe34MA==", "lastName" : "AIs1G33QnH9RB0nupJEvjw==", "code" : "1G4GE5EV0AF177966" }, "source" : ["internet"] }
{ "_id" : { "firstName" : "qhreJVuUA5l8lnBPVhMAdw==", "lastName" : "petb0Qx3YPfebSioY0wL9w==", "code" : "1G1AL55F277253143" }, "source" : ["TV"] }
{ "_id" : { "firstName" : "qhreJVuUA5l8lnBPVhMAdw==", "lastName" : "6LB/NmhbfqTagbOnHFGoog==", "code" : "1GCVKREC0EZ168134" }, "source" : ["TV", "internet"] }
There is a problem with this query, please suggest :(
Your code doesn't work because $cond is not an accumulator operator; only accumulator operators can be used in a $group stage.
Assuming your records contain no more than two possible values of source, as you mention in your question, you could add a conditional $project stage and modify the $group stage as follows:
Code:
db.customer.aggregate([
{
$group: {
"_id": {
"id": "$id",
"firstName": "$firstName",
"lastName": "$lastName",
"code": "$code"
},
"sourceA": { $first: "$source" },
"sourceB": { $last: "$source" }
}
},
{
$project: {
"source": {
$cond: [
{ $eq: ["$sourceA", "email"] },
"$sourceB",
"$sourceA"
]
}
}
}
])
In case there can be more that two possible values for source, then you could do the following:
Group by the id, firstName, lastName and code. Accumulate
the unique values of source, using the $addToSet operator.
Use $redact to keep only the values other than email.
Project the required fields, if the source array is empty(all the elements have been removed), add a
value email to it.
Unwind the source field to list it as a field and not an array.
(optional)
Code:
db.customer.aggregate([
{
$group: {
"_id": {
"id": "$id",
"firstName": "$firstName",
"lastName": "$lastName",
"code": "$code"
},
"sourceArr": { $addToSet: { "source": "$source" } }
}
},
{
$redact: {
$cond: [
{ $eq: [{ $ifNull: ["$source", "other"] }, "email"] },
"$$PRUNE",
"$$DESCEND"
]
}
},
{
$project: {
"source": {
$map: {
"input":
{
$cond: [
{ $eq: [{ $size: "$sourceArr" }, 0] },
[{ "source": "item" }],
"$sourceArr"]
},
"as": "inp",
"in": "$$inp.source"
}
}
}
}
])

MongoDb aggregate and group by two fields depending on values

I want to aggregate over a collection where a type is given. If the type is foo, I want to group by the field author; if the type is bar, I want to group by user.
All this should happen in one query.
Example Data:
{
"_id": 1,
"author": {
"someField": "abc",
},
"type": "foo"
}
{
"_id": 2,
"author": {
"someField": "abc",
},
"type": "foo"
}
{
"_id": 3,
"user": {
"someField": "abc",
},
"type": "bar"
}
The user field only exists if the type is bar.
So basically something like this... I tried to express it with an $or.
function () {
var results = db.vote.aggregate( [
{ $or: [ {
{ $match : { type : "foo" } },
{ $group : { _id : "$author", sumAuthor : {$sum : 1} } } },
{ { $match : { type : "bar" } },
{ $group : { _id : "$user", sumUser : {$sum : 1} } }
} ] }
] );
return results;
}
Does someone have a good solution for this?
I think it can be done by
db.c.aggregate([{
    $group : {
        _id : {
            $cond : [{
                $eq : [ "$type", "foo"]
            }, "$author", "$user"]
        },
        sum : {
            $sum : 1
        }
    }
}]);
The solution below can be cleaned up a bit...
For "bar" (note: for "foo", you have to change a bit)
db.vote.aggregate(
{
$project:{
user:{ $ifNull: ["$user", "notbar"]},
type:1
}
},
{
$group:{
_id:{_id:"$user.someField"},
sumUser:{$sum:1}
}
}
)
Also note: in your final result, anything that is not of type "bar" will have an _id of null.
What you want here is the $cond operator, which is a ternary operator returning a specific value where the condition is true or false.
db.vote.aggregate([
  { "$group": {
    "_id": null,
    "sumUser": {
      "$sum": {
        "$cond": [ { "$eq": [ "$type", "bar" ] }, 1, 0 ]
      }
    },
    "sumAuthor": {
      "$sum": {
        "$cond": [ { "$eq": [ "$type", "foo" ] }, 1, 0 ]
      }
    }
  }}
])
This basically tests the "type" of the current document and decides whether to pass either 1 or 0 to the $sum operation.
This also avoids errant grouping should the "user" and "author" fields contain the same values as they do in your example. The end result is a single document with the count of both types.
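Run against the three sample documents (two of type foo, one of type bar), the expected counts follow directly from the data. A hedged PyMongo sketch with placeholder connection details:
from pymongo import MongoClient

vote = MongoClient()["test"]["vote"]  # hypothetical database/collection names

result = list(vote.aggregate([
    {"$group": {
        "_id": None,
        "sumUser": {"$sum": {"$cond": [{"$eq": ["$type", "bar"]}, 1, 0]}},
        "sumAuthor": {"$sum": {"$cond": [{"$eq": ["$type", "foo"]}, 1, 0]}},
    }}
]))
# With the three sample documents: [{'_id': None, 'sumUser': 1, 'sumAuthor': 2}]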