Updating matched array by identifier with multiple names [duplicate] - mongodb

I have a large DB with various inconsistencies. One of the items I would like to clear up is changing the country status based on the population.
A sample of the data is:
{ "_id" : "D", "name" : "Deutschland", "pop" : 70000000, "country" : "Large Western" }
{ "_id" : "E", "name" : "Eire", "pop" : 4500000, "country" : "Small Western" }
{ "_id" : "G", "name" : "Greenland", "pop" : 30000, "country" : "Dependency" }
{ "_id" : "M", "name" : "Mauritius", "pop" : 1200000, "country" : "Small island"}
{ "_id" : "L", "name" : "Luxembourg", "pop" : 500000, "country" : "Small Principality" }
Obviously I would like to change the country field to something more uniform, based on population size.
I've tried this approach, but I'm obviously missing some way of tying it into an update of the country field.
db.country.updateMany(
    { case : { $lt : [ "$pop", 20000000 ] }, then : "Small country" },
    { case : { $gte : [ "$pop", 20000000 ] }, then : "Large country" }
)
Edit: Posted before I was finished writing.
I was thinking of using the $cond functionality to basically return "if true, do X; if false, do Y", while using updateMany.
Is this possible, or is there a workaround?

You really want bulkWrite() using two "updateMany" statements within it instead. Aggregation expressions cannot be used to do "alternate selection" in any form of update statement.
db.country.bulkWrite([
{ "updateMany": {
"filter": { "pop": { "$lt": 20000000 } },
"update": { "$set": { "country": "Small Country" } }
}},
{ "updateMany": {
"filter": { "pop": { "$gt": 20000000 } },
"update": { "$set": { "country": "Large Country" } }
}}
])
There is still an outstanding "feature request" on SERVER-6566 for "conditional syntax", but this is not yet resolved. The "bulk" API was actually introduced after this request was raised, and really can be adapted as shown to do more or less the same thing.
Also, using $out in an aggregation statement, as was otherwise suggested, is not an option for an "update" and can only write to a "new collection" at present. The change slated for MongoDB 4.2 onwards would allow $out to actually "update" an existing collection, but only where the collection to be updated is different from any other collection used in gathering data within the aggregation pipeline. So it is not possible to use an aggregation pipeline to update the same collection you are reading from.
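For completeness, the $cond expression the question reaches for does work inside an aggregation pipeline when the result is written out to a different collection via $out. A minimal sketch (the output collection name "country_classified" is illustrative):
db.country.aggregate([
    { "$addFields": {
        "country": {
            "$cond": {
                "if": { "$lt": [ "$pop", 20000000 ] },
                "then": "Small Country",
                "else": "Large Country"
            }
        }
    }},
    // Writes to a separate collection; cannot target the source collection
    { "$out": "country_classified" }
])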
In short, use bulkWrite().

For each document retrieve object with $max field from array

I have the following documents in my collection. Each document contains historical weather data about a specific location:
{
    'location':'new york',
    'history':[
        {'timestamp':1524542400, 'temp':79, 'wind_speed':1, 'wind_direction':'SW'},
        {'timestamp':1524548400, 'temp':80, 'wind_speed':2, 'wind_direction':'SW'},
        {'timestamp':1524554400, 'temp':82, 'wind_speed':3, 'wind_direction':'S'},
        {'timestamp':1524560400, 'temp':78, 'wind_speed':4, 'wind_direction':'S'}
    ]
},
{
    'location':'san francisco',
    'history':[
        {'timestamp':1524542400, 'temp':80, 'wind_speed':5, 'wind_direction':'SW'},
        {'timestamp':1524548400, 'temp':81, 'wind_speed':6, 'wind_direction':'SW'},
        {'timestamp':1524554400, 'temp':82, 'wind_speed':7, 'wind_direction':'S'},
        {'timestamp':1524560400, 'temp':73, 'wind_speed':8, 'wind_direction':'S'}
    ]
},
{
    'location':'miami',
    'history':[
        {'timestamp':1524542400, 'temp':84, 'wind_speed':9, 'wind_direction':'SW'},
        {'timestamp':1524548400, 'temp':85, 'wind_speed':10, 'wind_direction':'SW'},
        {'timestamp':1524554400, 'temp':86, 'wind_speed':11, 'wind_direction':'S'},
        {'timestamp':1524560400, 'temp':87, 'wind_speed':12, 'wind_direction':'S'}
    ]
}
I would like to get a list of the most recent weather data for each location (more or less) like so:
{
'location':'new york',
'history':{'timestamp':1524560400, 'temp':78, 'wind_speed':4, 'wind_direction':'S'}
},
{
'location':'san francisco',
'history':{'timestamp':1524560400, 'temp':73, 'wind_speed':8, 'wind_direction':'S'}
},
{
'location':'miami',
'history':{'timestamp':1524560400, 'temp':87, 'wind_speed':12, 'wind_direction':'S'}
}
I was pretty sure it needed some sort of $group aggregate but couldn't figure out how to select an entire object by $max:<field>. For example, the below query only returns the max timestamp itself, without any of the accompanying fields.
db.collection.aggregate([
    { '$unwind': '$history' },
    { '$group': {
        '_id': '$location',
        'timestamp': { '$max': '$history.timestamp' }
    }}
])
returns
{ "_id" : "new york", "timestamp" : 1524560400 }
{ "_id" : "san franciscoeo", "timestamp" : 1524560400 }
{ "_id" : "miami", "timestamp" : 1524560400 }
The actual collection and arrays are very large so client side processing won't be ideal. Any help would be much appreciated.
Well as the author of the answer you found, I think we can actually do a bit better with modern MongoDB versions.
Single match per document
In short we can actually apply $max to your particular case, used with $indexOfArray and $arrayElemAt to extract the matched value:
db.collection.aggregate([
{ "$addFields": {
"history": {
"$arrayElemAt": [
"$history",
{ "$indexOfArray": [ "$history.timestamp", { "$max": "$history.timestamp" } ] }
]
}
}}
])
Which will return you:
{
"_id" : ObjectId("5ae9175564de8a00a66b3974"),
"location" : "new york",
"history" : {
"timestamp" : 1524560400,
"temp" : 78,
"wind_speed" : 4,
"wind_direction" : "S"
}
}
{
"_id" : ObjectId("5ae9175564de8a00a66b3975"),
"location" : "san francisco",
"history" : {
"timestamp" : 1524560400,
"temp" : 73,
"wind_speed" : 8,
"wind_direction" : "S"
}
}
{
"_id" : ObjectId("5ae9175564de8a00a66b3976"),
"location" : "miami",
"history" : {
"timestamp" : 1524560400,
"temp" : 87,
"wind_speed" : 12,
"wind_direction" : "S"
}
}
That is of course without actually needing to "group" anything, simply finding the $max value from within each document, as you seem to be trying to do. This avoids "mangling" any other document output by forcing it through a $group or indeed an $unwind.
The usage essentially is that $max returns the "maximum" value from the specified array property, since "$history.timestamp" is shorthand notation for extracting "just those values" from within the objects of the array.
That maximum is then compared with the same "list of values" to determine the matching "index" via $indexOfArray, which takes an array as its first argument and the value to match as the second.
The $arrayElemAt operator also takes an array as its first argument; here we use the full "$history" array, since we want to extract the "full object", which we do by the "returned index" value from the $indexOfArray operator.
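To make that concrete, here is how the expressions evaluate step by step for the "new york" document in the sample data above:
"$history.timestamp"                             --> [ 1524542400, 1524548400, 1524554400, 1524560400 ]
{ "$max": "$history.timestamp" }                 --> 1524560400
{ "$indexOfArray": [ <that list>, 1524560400 ] } --> 3
{ "$arrayElemAt": [ "$history", 3 ] }            --> { "timestamp": 1524560400, "temp": 78, "wind_speed": 4, "wind_direction": "S" }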
"Multiple" matches per document
Of course that's fine for "single" matches, but if you wanted to expand that to "multiple" matches of the same $max value, then you would use $filter instead:
db.collection.aggregate([
{ "$addFields": {
"history": {
"$filter": {
"input": "$history",
"cond": { "$eq": [ "$$this.timestamp", { "$max": "$history.timestamp" } ] }
}
}
}}
])
Which would output:
{
"_id" : ObjectId("5ae9175564de8a00a66b3974"),
"location" : "new york",
"history" : [
{
"timestamp" : 1524560400,
"temp" : 78,
"wind_speed" : 4,
"wind_direction" : "S"
}
]
}
{
"_id" : ObjectId("5ae9175564de8a00a66b3975"),
"location" : "san francisco",
"history" : [
{
"timestamp" : 1524560400,
"temp" : 73,
"wind_speed" : 8,
"wind_direction" : "S"
}
]
}
{
"_id" : ObjectId("5ae9175564de8a00a66b3976"),
"location" : "miami",
"history" : [
{
"timestamp" : 1524560400,
"temp" : 87,
"wind_speed" : 12,
"wind_direction" : "S"
}
]
}
The main difference being of course that the "history" property is still an "array", since that is what $filter will produce. Note also that if there were in fact "multiple" entries with the same timestamp value, then this would return them all, and not just the "first index" matched.
The comparison here is instead done against "each" array element, to see whether the "current" ( "$$this" ) object has the specified property matching the $max result, ultimately returning only those array elements which match the supplied condition.
These are essentially your "modern" approaches, which avoid the overhead of $unwind, and indeed $sort and $group, where they may not be needed. They are certainly not needed for just dealing with individual documents.
If however you really do need to $group across "multiple documents" by a specific grouping key, with consideration of values "inside" the array, then the initial approach you discovered (sketched below) is actually the right fit for that scenario, since ultimately you "must" $unwind in order to deal with items "inside" an array in that way, and to consider them "across documents".
So be mindful to use stages like $group and $unwind only where you actually need to, and where "grouping" is your actual intent. If you are just looking to find something "in the document", then there are far more efficient ways to do this without all the additional overhead those stages bring to processing.
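For reference, that traditional form would look something like the following; with this sample data it merely reproduces the per-document result, at considerably more cost:
db.collection.aggregate([
    // Deconstruct the array so each history entry is its own document
    { "$unwind": "$history" },
    // Order entries so the latest timestamp comes first per location
    { "$sort": { "location": 1, "history.timestamp": -1 } },
    // Take the first (latest) entry for each location
    { "$group": {
        "_id": "$location",
        "history": { "$first": "$history" }
    }}
])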

How to combine Documents in aggregation pipeline with MongoDB Java driver 3.6?

I am using an aggregation pipeline with the MongoDB Java driver version 3.6. If I have documents that look something like:
doc1 --
{
"CAR": {
"VIN": "ASDF1234",
"YEAR": "2018",
"MAKE": "Honda",
"MODEL": "Accord"
},
"FEATURES": [
{
"AUDIO": "MP3",
"TIRES": "All Season",
"BRAKES": "ABS"
}
]
}
doc2 --
{
"CAR": {
"VIN": "ASDF1234",
"AVAILABILITY": "In Stock"
}
}
And if I submit a query like:
// Assumes the usual driver imports:
// import java.util.Arrays;
// import com.mongodb.client.model.Aggregates;
// import static com.mongodb.client.model.Filters.*;
collection.aggregate(
    Arrays.asList(
        Aggregates.match(
            and(
                in("CAR.VIN", vinList),
                or(
                    eq("CAR.MAKE", carMake),
                    eq("CAR.AVAILABILITY", carAvailability)
                )
            )
        )
    )
)
Let us assume that there are exactly two different records for which the "CAR.VIN" criteria match for every VIN, and I am going to get two results. Rather than deal with two results each time, I would like to merge the documents so that the result looks like this:
{
"CAR": {
"VIN": "ASDF1234",
"YEAR": "2018",
"MAKE": "Honda",
"MODEL": "Accord",
"AVAILABILITY": "In Stock"
},
"FEATURES": [
{
"AUDIO": "MP3",
"TIRES": "All Season",
"BRAKES": "ABS"
}
]
}
The example where I have two and only two results trivializes my need for this. Imagine that vinList is a list of 10000 values, and it might return 2 x 10000 documents. When I return an AggregateIterable to the client that is calling my code, I do not want to impose the requirement that they have to group or collate the results in any way, but that they will receive one document for each result that has all of the information that they will want to parse, cleanly and easily.
Of course, people will suggest that the data is simply combined into one document with all of the data in the MongoDB collection. For reasons that I cannot control, there are two separate documents corresponding to each VIN in the same collection, and that is something that I am unable to change. There is a value in our system that makes this more reasonable than it might seem, so please don't focus on this apparent problem with the data.
I am trying, with not much luck, to utilize the Aggregates.group() operation to merge the fields in my aggregation pipeline. Accumulators.push seems to be the closest operation to what I need, but I do not want to complicate the document structure with extra arrays, etc. Is there a straightforward approach that I am not seeing?
You can try $mergeObjects, added in Mongo v3.6:
db.cc.aggregate(
[
{
$group: {
_id : "$CAR.VIN",
CAR : {$mergeObjects : "$CAR"},
FEATURES : {$mergeObjects : {$arrayElemAt : ["$FEATURES", 0 ]}}
}
}
]
).pretty()
result
{
"_id" : "ASDF1234",
"CAR" : {
"VIN" : "ASDF1234",
"YEAR" : "2018",
"MAKE" : "Honda",
"MODEL" : "Accord",
"AVAILABILITY" : "In Stock"
},
"FEATURES" : {
"AUDIO" : "MP3",
"TIRES" : "All Season",
"BRAKES" : "ABS"
}
}
To get FEATURES as an array:
db.cc.aggregate(
[
{
$group: {
_id : "$CAR.VIN",
CAR : {$mergeObjects : "$CAR"},
FEATURES : {$push : {$arrayElemAt : ["$FEATURES", 0 ]}}
}
}
]
).pretty()
result
{
"_id" : "ASDF1234",
"CAR" : {
"VIN" : "ASDF1234",
"YEAR" : "2018",
"MAKE" : "Honda",
"MODEL" : "Accord",
"AVAILABILITY" : "In Stock"
},
"FEATURES" : [
{
"AUDIO" : "MP3",
"TIRES" : "All Season",
"BRAKES" : "ABS"
},
null
]
}
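The null entry comes from the second document, which has no FEATURES array at all. If that is unwanted, a $filter stage can strip it out; a sketch:
db.cc.aggregate(
[
    { $group: {
        _id : "$CAR.VIN",
        CAR : { $mergeObjects : "$CAR" },
        FEATURES : { $push : { $arrayElemAt : [ "$FEATURES", 0 ] } }
    }},
    // Drop the null produced by documents lacking FEATURES
    { $addFields: {
        FEATURES : {
            $filter : { input : "$FEATURES", cond : { $ne : [ "$$this", null ] } }
        }
    }}
]
).pretty()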

Perform a search on main collection field and array of objects simultaneously

I have my document structure as below:
{
"codeId" : 8.7628945723895E13, // long numeric value stored in scientific notation by Mongodb
"problemName" : "Hardware Problem",
"problemErrorCode" : "97695686856",
"status" : "active",
"problemDescription" : "ghdsojgnhsdjgh sdojghsdjoghdghd i0dhgjodshgddsgsdsdfghsdfg",
"subProblems" : [
{
"codeId" : 8.76289457238896E14,
"problemName" : "Some problem",
"problemErrorCode" : "57790389503490249640",
"problemDescription" : "This is edited",
"status" : "active",
"_id" : ObjectId("589476eeae39b20b1c15535b")
},
...
]
}
I have a search field which should search by codeId, which basically serves as the parent codeId.
Now, along with the parent codeId, I want to search on the codeId, problemErrorCode, problemName and problemDescription fields of the sub-problems as well.
How do I query the submodules with a regex search and at the same time match some parent field with an "$or" clause etc. to achieve this?
You can try something like this.
query = {
'$or': [{
"codeId":somevalue
}, {
"subProblems.codeId": {
"$regex": searchValue,
"$options": "i"
}
}, {
//rest of sub modules fields
}]
};
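A fuller version covering both the parent fields and the subProblems fields named in the question might look like the following; the collection name "problems" and the searchValue variable are illustrative:
db.problems.find({
    "$or": [
        // Parent-level fields
        { "problemName": { "$regex": searchValue, "$options": "i" } },
        { "problemDescription": { "$regex": searchValue, "$options": "i" } },
        { "problemErrorCode": { "$regex": searchValue, "$options": "i" } },
        // The same fields within the subProblems array
        { "subProblems.problemName": { "$regex": searchValue, "$options": "i" } },
        { "subProblems.problemDescription": { "$regex": searchValue, "$options": "i" } },
        { "subProblems.problemErrorCode": { "$regex": searchValue, "$options": "i" } }
    ]
})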

Mongodb Update/Upsert array exact match

I have a collection:
gStats : {
"_id" : "id1",
"criteria" : ["key1":"value1", "key2":"value2"],
"groups" : [
{"id":"XXXX", "visited":100, "liked":200},
{"id":"YYYY", "visited":30, "liked":400}
]
}
I want to be able to update a document in the groups array for a given array of criteria (exact match).
I try to do this in 2 steps:
Pull the group document from the array for a given "id":
db.gStats.update({
"criteria" : {$size : 2},
"criteria" : {$all : [{"key1" : "2096955"},{"value1" : "2015610"}]}
},
{
$pull : {groups : {"id" : "XXXX"}}
}
)
Push the new document
db.gStats.findAndModify({
query : {
"criteria" : {$size : 2},
"criteria" : {$all : [{"key1" : "2015610"}, {"key2" : "2096955"}]}
},
update : {
$push : {groups : {"id" : "XXXX", "visited" : 29, "liked" : 144}}
},
upsert : true
})
The Pull query works perfectly.
The Push query gives an error:
2014-12-13T15:12:58.571+0100 findAndModifyFailed failed: {
"value" : null,
"errmsg" : "exception: Cannot create base during insert of update. Caused by :ConflictingUpdateOperators Cannot update 'criteria' and 'criteria' at the same time",
"code" : 12,
"ok" : 0
} at src/mongo/shell/collection.js:614
Neither query is actually working in reality. You cannot use a key name like "criteria" more than once unless it is under an operator such as $and. You are also specifying different fields (i.e. groups) and querying elements that do not exist in your sample document.
So it is hard to tell what you really want to do here. But the error is essentially caused by the first issue I mentioned, with a little something extra. Really, your { "$size": 2 } condition is being ignored and only the second condition is applied.
A valid query form should look like this:
query: {
"$and": [
{ "criteria" : { "$size" : 2 } },
{ "criteria" : { "$all": [{ "key1": "2015610" }, { "key2": "2096955" }] } }
]
}
As each set of conditions is specified within the array provided to $and, the document structure of the query is valid and no hash key overwrites the other. That is the proper way to write your two conditions, but there is a trick to making this work where the "upsert" fails because those conditions do not match a document. We need to override what happens when it tries to apply the $all arguments on creation:
update: {
"$setOnInsert": {
"criteria" : [{ "key1": "2015610" }, { "key2": "2096955" }]
},
"$push": { "stats": { "id": "XXXX", "visited": 29, "liked": 144 } }
}
That uses $setOnInsert so that when the "upsert" is applied and a new document is created, the values specified here are used, rather than trying to derive the field values from the query portion of the statement.
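Putting the corrected query together with that update, the whole statement becomes:
db.gStats.findAndModify({
    query : {
        "$and": [
            { "criteria" : { "$size" : 2 } },
            { "criteria" : { "$all": [{ "key1": "2015610" }, { "key2": "2096955" }] } }
        ]
    },
    update : {
        "$setOnInsert": {
            "criteria" : [{ "key1": "2015610" }, { "key2": "2096955" }]
        },
        "$push": { "groups": { "id": "XXXX", "visited": 29, "liked": 144 } }
    },
    upsert : true
})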
Of course, if what you are really looking for is truly an exact match of the content in the array, then just use that for the query instead:
query: {
"criteria" : [{ "key1": "2015610" }, { "key2": "2096955" }]
}
Then MongoDB will be happy to apply those values when a new document is created and does not get confused on how to interpret the $all expression.

Ensure Unique indexes in embedded doc in mongodb

Is there a way to make a subdocument within a list have a unique field in mongodb?
document structure:
{
"_id" : "2013-08-13",
"hours" : [
{
"hour" : "23",
"file" : [
{
"date_added" : ISODate("2014-04-03T18:54:36.400Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.410Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.402Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.671Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
}
]
}
]
}
I want to make sure that the document's hours.hour value has a unique item when inserted. The issue is that hours is a list. Can you ensureIndex in this way?
Indexes are not the tool for ensuring uniqueness within an embedded array; rather, they are used across documents to ensure that certain fields do not repeat there.
As long as "unique" means that the entire object content does not differ from any other element in any way, you can use the $addToSet operator with update:
db.collection.update(
    { "_id": "2013-08-13", "hours.hour": "23" },
    { "$addToSet": {
        "hours.$.file": {
            "date_added" : ISODate("2014-04-03T18:54:36.671Z"),
            "name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
        }
    }}
)
So that document would not be added, as there is already an element matching those exact values within the target array. If the content was different (and that means any part of the content), then a new item would be added.
For anything else you would need to maintain that manually by loading up the document and inspecting the elements of the array. Say for a different "filename" with exactly the same timestamp.
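As a rough sketch of that manual approach in the shell, checking for an existing "name" only (the duplicate rule here is an assumption):
// Load the document and locate the hour entry
var doc = db.collection.findOne({ "_id": "2013-08-13", "hours.hour": "23" });
var hour = doc.hours.filter(function(h) { return h.hour === "23"; })[0];

// Test whether a file with the same name is already present
var exists = hour.file.some(function(f) {
    return f.name === "1376434800_file_output_2014-03-10-09-27_44.csv";
});

// Only push when no element carries that name
if ( !exists ) {
    db.collection.update(
        { "_id": "2013-08-13", "hours.hour": "23" },
        { "$push": { "hours.$.file": {
            "date_added": new Date(),
            "name": "1376434800_file_output_2014-03-10-09-27_44.csv"
        }}}
    );
}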
Problems with your Schema
Now that the question is answered, I want to point out the problems with your schema design.
Dates as strings are "horrible". You may think you need them but you do not. See the aggregation framework date operators for more on this.
You have nested arrays, which generally should be avoided. The general problems are shown in the documentation for the positional $ operator. That says you only get one match on position, and that is always the "top" level array. So updating beyond adding things as shown above is going to be difficult.
A better schema pattern for you is to simply do this:
{
"date_added" : ISODate("2014-04-03T18:54:36.400Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.410Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.402Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.671Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
}
If that is in its own collection then you can always actually use indexes to ensure uniqueness. The aggregation framework can break down the date parts and hours where needed.
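For instance, with those entries in their own collection (called "files" here purely for illustration), both points become straightforward:
// A unique compound index: the same name cannot repeat for the same timestamp
db.files.createIndex(
    { "date_added": 1, "name": 1 },
    { "unique": true }
)

// Breaking out day and hour with the aggregation date operators
db.files.aggregate([
    { "$group": {
        "_id": {
            "day": { "$dayOfMonth": "$date_added" },
            "hour": { "$hour": "$date_added" }
        },
        "files": { "$push": "$name" }
    }}
])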
Where you must have that as part of another document then try at least to avoid the nested arrays. This would be acceptable but not as flexible as separating the entries:
{
"_id" : "2013-08-13",
"hours" : {
"23": [
{
"date_added" : ISODate("2014-04-03T18:54:36.400Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.410Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.402Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
},
{
"date_added" : ISODate("2014-04-03T18:54:36.671Z"),
"name" : "1376434800_file_output_2014-03-10-09-27_44.csv"
}
]
}
}
It depends on your intended usage; the last would not allow you to do any type of aggregation comparison across hours within a day, at least not in any simple way. The former does this easily, and you can still break down selections by day and hour with ease.
Then again, if you are only ever appending information, then your existing schema should be fine. But be aware of the possible issues and alternatives.