MongoDB: adding fields based on partial match query - expression vs query

So I have one collection that I'd like to query/aggregate. The query is made up of several parts that are OR'ed together. For every part of the query, I have a specific set of fields that need to be shown.
So my hope was to do this with an aggregation that $matches the OR'ed query parts all at once, and then uses $project with $cond to decide which fields are needed. The problem here is that $cond takes expressions, while $match takes queries, and some query features are not available as an expression, so a simple conversion is not an option (see the sketch below the list).
So I need another solution:
- I could just run a separate aggregation per subquery, because there I know which fields to project, and then merge the results together. But this will not work if I use pagination in the queries (limit/skip etc.).
- Find some other way to tag every document so I can remove any unneeded fields afterwards. It might not be super efficient, but it would work. No clue yet how to do that.
- Figure out a way to build queries that are made up of expressions only. For my purpose that might be good enough, but it would mean a rewrite of the query parser. It could work, but is not ideal.
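To illustrate the mismatch, here is a minimal sketch (the field names status, tags and extra are hypothetical): the $size test below is easy to write as a query in $match, but inside $cond it has to be rewritten in expression syntax, and some query operators, $regex for instance, have no expression equivalent at all:
db.getCollection('somecollection').aggregate([
  // query syntax: operators apply to a named field
  { $match: { $or: [ { status: "active" }, { tags: { $size: 2 } } ] } },
  { $project: {
      status: 1,
      // expression syntax: the same $size test rewritten for $cond;
      // $$REMOVE (MongoDB 3.6+) drops the field when the condition fails
      extra: {
        $cond: [
          { $eq: [ { $size: { $ifNull: [ "$tags", [] ] } }, 2 ] },
          "$extra",
          "$$REMOVE"
        ]
      }
  } }
])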

So this is the next incarnation. It deduplicates and merges records, and finally transforms the result back into something resembling a normal query result:
db.getCollection('somecollection').aggregate([
  {
    "$facet": {
      "f1": [
        { "$match": { <some query 1> } },
        { "$project": { <some fixed field projection> } }
      ],
      "f2": [
        { "$match": { <some query 2> } },
        { "$project": { <some fixed field projection> } }
      ]
    }
  },
  // glue the facet results back into one stream of documents
  { "$project": { "rt": { "$concatArrays": [ "$f1", "$f2" ] } } },
  { "$unwind": { "path": "$rt" } },
  { "$replaceRoot": { "newRoot": "$rt" } },
  // documents matched by more than one subquery appear once per facet;
  // group them by _id and merge their differently-projected fields
  { "$group": { "_id": "$_id", "items": { "$push": "$$ROOT" } } },
  { "$project": { "rt": { "$mergeObjects": "$items" } } },
  { "$replaceRoot": { "newRoot": "$rt" } }
]);
There might still be some optimisation to be done, so any comments are welcome.

I found an extra option using $facet. This way, I can make a facet for every group of fields/subqueries. This seems to work fine, except that the result is a single document with a bunch of arrays; I'm not yet sure how to convert that back into multiple documents.

Okay, so now I have it figured out. I'm not sure yet about all of the intricacies of this solution, but it seems to work in general. Here's an example:
db.getCollection('somecollection').aggregate([
  {
    "$facet": {
      "f1": [
        { "$match": { <some query 1> } },
        { "$project": { <some fixed field projection> } }
      ],
      "f2": [
        { "$match": { <some query 2> } },
        { "$project": { <some fixed field projection> } }
      ]
    }
  },
  // merge the per-facet arrays into one array, then back into documents
  { "$project": { "rt": { "$concatArrays": [ "$f1", "$f2" ] } } },
  { "$unwind": { "path": "$rt" } },
  { "$replaceRoot": { "newRoot": "$rt" } }
]);
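Since the facet branches are recombined into a single document stream, the limit/skip concern from above can now be handled after the final $replaceRoot. A sketch (the sort key and page size are just examples):
  { $sort: { _id: 1 } },
  { $skip: 20 },
  { $limit: 10 }
appended to the end of the pipeline pages through the merged result as a whole.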

Related

Mongoose - Find object with any key and specific subkey

Let's say I have a Mongo database that contains objects such as :
[
  {
    "test": {
      "123123": { "someField": null }
    }
  },
  {
    "test": {
      "323143": { "someField": "lalala" },
      "121434": { "someField": null }
    }
  },
  {
    "test": {
      "4238023": { "someField": "afafa" }
    }
  }
]
As you can see, the keys right under "test" can vary.
I want to find all documents that have at least one someField that is not null.
Something like: "test.*.someField": { $ne: null } (where * represents any key).
How can I do this in Mongoose? I'm thinking an aggregation pipeline will be needed here, but I'm not exactly sure how.
Constraints:
- I don't have much control over the db schema in this scenario.
- Ideally I don't want to have to do this logic in Node.js; I would like to query directly via the db.
The trickiest part here is that you cannot search for keys that match a pattern. Luckily there is a workaround. Yes, you do need an aggregation pipeline.
Let's look at an individual document:
{
  "test": {
    "4238023": {
      "someField": "afafa"
    }
  }
}
We need to query someField, but to get to it, we need to somehow circumvent 4238023 because it varies with each document. What if we could break that test object down and look at it presented like so:
{
  "k": "4238023",
  "v": {
    "someField": "afafa"
  }
}
Suddenly, it gets a heck of a lot easier to query. Well, MongoDB aggregation offers an operator called $objectToArray which does exactly that.
So what we are going to do is:
- Convert the test object into an array for each document.
- Match only documents where AT LEAST ONE v.someField is not null.
- Put it back together to look like your original documents, minus the ones that do not meet the not-null criterion.
So, here is the pipeline you need:
db.collection.aggregate([
  {
    "$project": {
      "arr": {
        "$objectToArray": "$$ROOT.test"
      }
    }
  },
  {
    "$match": {
      "arr": {
        "$elemMatch": {
          "v.someField": { "$ne": null }
        }
      }
    }
  },
  {
    "$project": {
      "_id": 1,
      "test": { "$arrayToObject": "$arr" }
    }
  }
])
Playground: https://mongoplayground.net/p/b_VNuOLgUb2
Note that in Mongoose you will run this aggregation the same way you would in the shell... well, plus the .then.
YourCollection.aggregate([
...
...
])
.then(result => console.log(result))
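Equivalently, assuming you are inside an async function (YourCollection standing in for your compiled model, as above):
const result = await YourCollection.aggregate([
  // ...same stages as above
]);
console.log(result);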

How to update and return all array elements which matches a condition?

I tried the query below and understood that $ returns only the first matching element.
var date = new Date();
date.setDate(date.getDate() - 30);
db.getCollection('status').find({
  'data.end_ts': { '$lte': date },
  $or: [
    { "data.risk_status": 'inactive' },
    { "data.risk_status": 'expired' }
  ]
},
{
  "data.$": 1
})
Then I planned to remove the projection and do the filtering in Java.
Here, the problem is that I need to remove the documents and insert them into another collection, hence I can't just use delete.
I came up with another way so that I can avoid the conditions in Java:
db.getCollection('status').aggregate([
  {
    "$match": {
      $or: [
        { "data.risk_status": 'inactive' },
        { "data.risk_status": 'expired' }
      ]
    }
  },
  { $unwind: "$data" },
  {
    $match: {
      'data.end_ts': { '$lte': date }
    }
  },
  {
    $group: {
      "_id": "$_id",
      "a": { $push: "$$ROOT" }
    }
  },
  {
    $project: {
      "_id": 1,
      "a.data": 1
    }
  }
])
Is there any other way which deletes and returns the docs, so that I can just save the returned docs to the other collection?
Can I use $out here to do that? I am not sure. Any help which reduces the network round-trip time is desirable.
Yes, of course you can use $out to write to a new collection. Since you need to move data from one collection to another, $out does this efficiently and saves you both round trips and application code.
Aggregation aggregation = Aggregation.newAggregation(
    match(Criteria.where("data.risk_status").in("inactive", "expired")),
    unwind("$data"),
    // all other stages
    o -> new Document("$out", "NEW_COLLECTION_NAME")
).withOptions(AggregationOptions.builder().allowDiskUse(Boolean.TRUE).build());
Note: $out must be the last stage of the aggregation pipeline. Ref: $out.
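For reference, here is a sketch of the same idea in the mongo shell; the first stages mirror the question's pipeline, and "status_archive" is a hypothetical target collection name:
var date = new Date();
date.setDate(date.getDate() - 30);
db.getCollection('status').aggregate([
  { $match: { "data.risk_status": { $in: ["inactive", "expired"] } } },
  { $unwind: "$data" },
  { $match: { "data.end_ts": { $lte: date } } },
  // $out must come last: it writes the pipeline result to the target collection
  { $out: "status_archive" }
])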

conditional 'from' in mongodb lookup

I'm trying to look up from multiple collections, based on a specific field.
For example, if the field type equals 1, look up from the Admin collection, and if type equals 2, look up from Client. I know the following query is incorrect, but I just want to show what I mean:
db.User.aggregate([
  {
    "$lookup": {
      "localField": "ID",
      "from": { "$cond": { if: { "type": 1 }, then: "Admin", else: "Client" } },
      "foreignField": "ID",
      "as": "newUser"
    }
  },
  {
    "$unwind": "$newUser"
  }
])
Any help will be appreciated.
Bad news: you can't. The only solution is to use $facet and have two separate pipelines.
As you can probably imagine, this is not a great solution, as it wastes resources on the redundant pipeline.
I'm not sure if you can involve some application code, but if you can, that is your best option.
$facet pipeline draft:
db.User.aggregate([
  {
    $facet: {
      user: [
        {
          "$lookup": {
            "localField": "ID",
            "from": "Client",
            "foreignField": "ID",
            "as": "newUser"
          }
        },
        { "$unwind": "$newUser" }
      ],
      admin: [
        {
          "$lookup": {
            "localField": "ID",
            "from": "Admin",
            "foreignField": "ID",
            "as": "newUser"
          }
        },
        { "$unwind": "$newUser" }
      ]
    }
  },
  {
    $match: {
      // use the "correct" user here..
    }
  }
])
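One way to complete that draft, assuming (as in the question) that type 1 means Admin and type 2 means Client: filter each facet branch by type before the $lookup, then stitch the two arrays back into a single document stream:
db.User.aggregate([
  {
    $facet: {
      admins: [
        { $match: { type: 1 } },
        { $lookup: { from: "Admin", localField: "ID", foreignField: "ID", as: "newUser" } },
        { $unwind: "$newUser" }
      ],
      clients: [
        { $match: { type: 2 } },
        { $lookup: { from: "Client", localField: "ID", foreignField: "ID", as: "newUser" } },
        { $unwind: "$newUser" }
      ]
    }
  },
  // recombine the branches and promote each element back to a root document
  { $project: { all: { $concatArrays: [ "$admins", "$clients" ] } } },
  { $unwind: "$all" },
  { $replaceRoot: { newRoot: "$all" } }
])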

How to use $regex inside $or as an Aggregation Expression

I have a query which allows the user to filter by some string field using a format that looks like: "Where description of the latest inspection is any of: foo or bar". This works great with the following query:
db.getCollection('permits').find({
  '$expr': {
    '$let': {
      vars: {
        latestInspection: {
          '$arrayElemAt': ['$inspections', {
            '$indexOfArray': ['$inspections.inspectionDate', {
              '$max': '$inspections.inspectionDate'
            }]
          }]
        }
      },
      in: {
        '$in': ['$$latestInspection.description', ['Fire inspection on property', 'Health inspection']]
      }
    }
  }
})
What I want is for the user to be able to use wildcards which I turn into regular expressions: "Where description of the latest inspection is any of: Health inspection or Found a * at the property".
I can generate the regex, so I don't need help with that. The problem I'm facing is that, apparently, the aggregation $in operator does not support matching by regular expressions. So I thought I'd build this using $or, since the docs don't say I can't use regex there. This was my best attempt:
db.getCollection('permits').find({
  '$expr': {
    '$let': {
      vars: {
        latestInspection: {
          '$arrayElemAt': ['$inspections', {
            '$indexOfArray': ['$inspections.inspectionDate', {
              '$max': '$inspections.inspectionDate'
            }]
          }]
        }
      },
      in: {
        '$or': [{
          '$$latestInspection.description': {
            '$regex': /^Found a .* at the property$/
          }
        }, {
          '$$latestInspection.description': 'Health inspection'
        }]
      }
    }
  }
})
Except I'm getting the error:
"Unrecognized expression '$$latestInspection.description'"
I'm thinking I can't use $$latestInspection.description as an object key, but I'm not sure (my knowledge here is limited), and I can't figure out another way to do what I want. So, you see, I wasn't even able to get far enough to check whether I can use $regex in $or. I appreciate all the help I can get.
Everything inside $expr is an aggregation expression, and while the documentation may not explicitly say you cannot use a regular expression there, the lack of any named operator for it and the JIRA issue SERVER-11947 certainly say that. So if you need a regular expression, then you really have no other option than using $where instead:
db.getCollection('permits').find({
  "$where": function() {
    var description = this.inspections
      .sort((a, b) => b.inspectionDate.valueOf() - a.inspectionDate.valueOf())
      .shift().description;
    return /^Found a .* at the property$/.test(description) ||
      description === "Health Inspection";
  }
})
You can still use $expr and aggregation expressions for an exact match, or just keep the comparison within the $where anyway. But at this time the only regular expression support MongoDB understands is $regex within a "query" expression.
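For instance, an exact match on the latest description can stay in $expr; a sketch reusing the $indexOfArray trick from the question:
db.getCollection('permits').find({
  "$expr": {
    "$eq": [
      { "$arrayElemAt": [
        "$inspections.description",
        { "$indexOfArray": [
          "$inspections.inspectionDate",
          { "$max": "$inspections.inspectionDate" }
        ]}
      ]},
      "Health inspection"
    ]
  }
})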
If you did actually "require" an aggregation pipeline expression that precludes you from using $where, then the only current valid approach is to first "project" the field separately from the array and then $match with the regular query expression:
db.getCollection('permits').aggregate([
  { "$addFields": {
      "lastDescription": {
        "$arrayElemAt": [
          "$inspections.description",
          { "$indexOfArray": [
            "$inspections.inspectionDate",
            { "$max": "$inspections.inspectionDate" }
          ]}
        ]
      }
  }},
  { "$match": {
      "lastDescription": {
        "$in": [/^Found a .* at the property$/, /Health Inspection/]
      }
  }}
])
Which leads us to the fact that you appear to be looking for the item in the array with the maximum date value. The JavaScript syntax should be making it clear that the correct approach here is instead to $sort the array on "update". In that way the "first" item in the array can be the "latest". And this is something you can do with a regular query.
To maintain the order, ensure new items are added to the array with $push and $sort like this:
db.getCollection('permits').updateOne(
  { "_id": _idOfDocument },
  {
    "$push": {
      "inspections": {
        "$each": [{ /* Detail of inspection object */ }],
        "$sort": { "inspectionDate": -1 }
      }
    }
  }
)
In fact with an empty array argument to $each an updateMany() will update all your existing documents:
db.getCollection('permits').updateMany(
  { },
  {
    "$push": {
      "inspections": {
        "$each": [],
        "$sort": { "inspectionDate": -1 }
      }
    }
  }
)
These really only should be necessary when you in fact "alter" the date stored during updates, and those updates are best issued with bulkWrite() to effectively do "both" the update and the "sort" of the array:
db.getCollection('permits').bulkWrite([
  { "updateOne": {
    "filter": { "_id": _idOfDocument, "inspections._id": identifierForArrayElement },
    "update": {
      "$set": { "inspections.$.inspectionDate": new Date() }
    }
  }},
  { "updateOne": {
    "filter": { "_id": _idOfDocument },
    "update": {
      "$push": { "inspections": { "$each": [], "$sort": { "inspectionDate": -1 } } }
    }
  }}
])
However if you did not ever actually "alter" the date, then it probably makes more sense to simply use the $position modifier and "pre-pend" to the array instead of "appending", and avoiding any overhead of a $sort:
db.getCollection('permits').updateOne(
  { "_id": _idOfDocument },
  {
    "$push": {
      "inspections": {
        "$each": [{ /* Detail of inspection object */ }],
        "$position": 0
      }
    }
  }
)
With the array permanently sorted or at least constructed so the "latest" date is actually always the "first" entry, then you can simply use a regular query expression:
db.getCollection('permits').find({
  "inspections.0.description": {
    "$in": [/^Found a .* at the property$/, /Health Inspection/]
  }
})
So the lesson here is: don't try to force calculated expressions on your logic where you really don't need to. There should be no compelling reason why you cannot order the array content as "stored" so that the "latest" date is always first, and even if you thought you needed the array in some other order, you should probably weigh up which usage case is more important.
Once reordered, you can even take advantage of an index to some extent, as long as the regular expressions are either anchored to the beginning of the string or at least something else in the query expression does an exact match.
In the event you feel you really cannot reorder the array, then the $where query is your only present option until the JIRA issue resolves. That is hopefully actually for the 4.1 release, as currently targeted, but it is more than likely six months to a year away at best estimate.

aggregating metrics data in mongodb

I'm trying to pull report data out of a realtime metrics system inspired by the NYC MUG/SimpleReach schema, and maybe my mind is still stuck in SQL mode.
The data is stored in a document like so...
{
  "_id": ObjectId("5209683b915288435894cb8b"),
  "account_id": 922,
  "project_id": 22492,
  "stats": {
    "2009": {
      "04": {
        "17": {
          "10": {
            "sum": { "impressions": 11 }
          },
          "11": {
            "sum": { "impressions": 603 }
          }
        }
      }
    }
  }
}
and I've been trying different variations of the aggregation pipeline with no success.
db.metrics.aggregate({
  $match: {
    'project_id': 22492
  }
}, {
  $group: {
    _id: "$project_id",
    'impressions': {
      // This works, but doesn't sum up the data...
      $sum: '$stats.2009.04.17.10.sum.impressions'
      /* none of these work:
      $sum: ['$stats.2009.04.17.10.sum.impressions',
             '$stats.2009.04.17.11.sum.impressions']
      $sum: {'$stats.2009.04.17.10.sum.impressions',
             '$stats.2009.04.17.11.sum.impressions'}
      $sum: '$stats.2009.04.17.10.sum.impressions',
            '$stats.2009.04.17.11.sum.impressions'
      */
    }
  }
})
Any help would be appreciated.
(PS: does anyone have any ideas on how to do date-range searches using this document schema?)
$group is designed to be applied to many documents, but here we only have one matched document.
Instead, $project could be used to sum up specific fields, like this:
db.metrics.aggregate(
  { $match: {
      'project_id': 22492
    }
  },
  { $project: {
      'impressions': {
        $add: [
          '$stats.2009.04.17.10.sum.impressions',
          '$stats.2009.04.17.11.sum.impressions'
        ]
      }
    }
  }
)
I don't think there is an elegant way to do date-range searches with this schema, because MongoDB operators/predicates are designed to be applied to values, rather than keys, in a document. If I understand correctly, the most interesting point in the slides you mentioned is to cache/pre-aggregate metrics when updating. That's a good idea, but it could be implemented with another schema. For example, storing date and time as indexed values, which MongoDB supports, might be a better choice for range searches. The aggregation framework even supports date operations, giving more flexibility.
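A sketch of such an alternative schema: one document per metric sample with a real date value (the field names here are hypothetical), which turns range searches into ordinary queries:
// one document per metric sample, with an indexable date field
db.metrics.insert({
  account_id: 922,
  project_id: 22492,
  ts: ISODate("2009-04-17T10:00:00Z"),
  impressions: 11
})
db.metrics.createIndex({ project_id: 1, ts: 1 })

// a date-range report then becomes a plain $match plus $group
db.metrics.aggregate([
  { $match: {
      project_id: 22492,
      ts: { $gte: ISODate("2009-04-17T00:00:00Z"),
            $lt: ISODate("2009-04-18T00:00:00Z") }
  } },
  { $group: { _id: "$project_id", impressions: { $sum: "$impressions" } } }
])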