Ordering fields from find query with projection - mongodb

I have a Mongo find query that works well to extract specific fields from a large document like...
db.profiles.find(
{ "profile.ModelID" : 'LZ241M4' },
{
_id : 0,
"profile.ModelID" : 1,
"profile.AVersion" : 2,
"profile.SVersion" : 3
}
);
...this produces the following output. Note how the SVersion comes before the AVersion in the document even though my projection asked for AVersion before SVersion.
{ "profile" : { "ModelID" : "LZ241M4", "SVersion" : "3.5", "AVersion" : "4.0.3" } }
{ "profile" : { "ModelID" : "LZ241M4", "SVersion" : "4.0", "AVersion" : "4.0.3" } }
...the problem is that I want the output to be...
{ "profile" : { "ModelID" : "LZ241M4", "AVersion" : "4.0.3", "SVersion" : "3.5" } }
{ "profile" : { "ModelID" : "LZ241M4", "AVersion" : "4.0.3", "SVersion" : "4.0" } }
What do I have to do to get the Mongo JavaScript shell to present the results of my query in the field order that I specify?

I achieved it by projecting the fields using aliases instead of including and excluding them with 0s and 1s.
Try this:
{
_id : 0,
"profile.ModelID" :"$profile.ModelID",
"profile.AVersion":"$profile.AVersion",
"profile.SVersion":"$profile.SVersion"
}
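For reference, a complete query built around that projection might look like the sketch below. It uses the aggregation pipeline's $project stage, where alias expressions like these are always supported (find() projections only accept aggregation expressions on MongoDB 4.4+):
db.profiles.aggregate([
    { $match: { "profile.ModelID": "LZ241M4" } },
    { $project: {
        _id: 0,
        "profile.ModelID": "$profile.ModelID",   // fields are emitted in this order
        "profile.AVersion": "$profile.AVersion",
        "profile.SVersion": "$profile.SVersion"
    } }
]);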

I get it now. You want the results ordered by the fields themselves rather than by the values of the fields.
The simple answer is that you can't do this with a find projection. Maybe it's possible with the new aggregation framework, but that seems like overkill just to order fields.
The second object in a find query is for including or excluding returned fields not for ordering them.
{
_id : 0, // 0 means exclude this field from results
"profile.ModelID" : 1, // 1 means include this field in the results
"profile.AVersion" :2, // 2 means nothing
"profile.SVersion" :3, // 3 means nothing
}
Last point: you shouldn't need to do this; it shouldn't matter what order the fields come back in.
Your application should be able to make use of the fields it needs regardless of the order they are in.

Another solution I applied to achieve this is the following:
db.profiles
.find({ "profile.ModelID" : 'LZ241M4' })
.toArray()
.map(doc => ({
profile: {
ModelID: doc.profile.ModelID,
AVersion: doc.profile.AVersion,
SVersion: doc.profile.SVersion
}
}))

Since version 2.6 (released in 2014), MongoDB preserves the order of document fields following write operations.

Related

multi updating a key along the documents of a collection using pymongo

I have lots of documents inside a collection.
The structure of each of the documents inside the collection is as it follows:
{
"_id" : ObjectId(....),
"valor" : {
"AB" : {
"X" : 0.0,
"Y" : 142.6,
},
"FJ" : {
"X" : 0.2,
"Y" : 3.33
....
The collection currently has about 200 documents, and I have noticed that one of the keys inside valor has the wrong name. In this case, let's say "FJ" should be "JOF" in all the docs of the collection.
I'm pretty sure it is possible to change the key in all the docs using PyMongo's update function. The problem I am facing is that the online doc at https://docs.mongodb.com/v3.0/reference/method/db.collection.update/ only explains how to change the values (which I would like to keep as they are, changing only the keys).
This is what I have tried:
def multi_update(spec_key, key_updte):
    rdo = col.update((valor.spec_key), {"$set": (valor.key_updte)}, multi=True)
    return rdo
print(multi_update('FJ','JOF'))
But it outputs: name 'valor' is not defined. I thought I should use valor.spec_key to access the corresponding JSON.
How can I update just one key across all the docs of the collection?
You have two problems. First, valor is not an identifier in your Python code, it's a field name of a MongoDB document. You need to quote it in single or double quotes in Python in order to make it a string and use it in a PyMongo update expression.
Your second problem is that MongoDB's update command doesn't let you set one field to the value of another. For a plain key rename like this one there is the $rename update operator (see the sketch below); more generally, you can reshape all the documents in your collection using the aggregate command with a $project stage, storing the results in a second collection via an $out stage.
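A minimal sketch of the $rename route, assuming valor.FJ is a plain embedded field and not an element of an array (where $rename does not work); in shell syntax:
db.collection.update(
    { "valor.FJ": { "$exists": true } },   // only touch docs that still have the old key
    { "$rename": { "valor.FJ": "valor.JOF" } },
    { "multi": true }
)
The PyMongo 3+ equivalent is collection.update_many() with the same filter and update documents.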
Here's a complete example to play with:
from pymongo import MongoClient

db = MongoClient().test
collection = db.collection
collection.delete_many({})
collection.insert_one({
"valor" : {
"AB" : {
"X" : 0.0,
"Y" : 142.6,
},
"FJ" : {
"X" : 0.2,
"Y" : 3.33}}})
collection.aggregate([{
"$project": {
"valor": {
"AB": "$valor.AB",
"FOJ": "$valor.FJ"
}
}
}, {
"$out": "collection2"
}])
This is the dangerous part. First, check that "collection2" has all the documents you want, in the desired shape. Then:
collection.drop()
db.collection2.rename("collection")
import pprint
pprint.pprint(collection.find_one())

Use MongoDB aggregation to find set intersection of two sets within the same document

I'm trying to use the Mongo aggregation framework to find where there are records that have different unique sets within the same document. An example will best explain this:
Here is a document that is not my real data, but conceptually the same:
db.house.insert(
{
houseId : 123,
rooms: [{ name : 'bedroom',
owns : [
{name : 'bed'},
{name : 'cabinet'}
]},
{ name : 'kitchen',
owns : [
{name : 'sink'},
{name : 'cabinet'}
]}],
uses : [{name : 'sink'},
{name : 'cabinet'},
{name : 'bed'},
{name : 'sofa'}]
}
)
Notice that there are two hierarchies with similar items. It is also possible to use items that are not owned. I want to find documents like this one: where there is a house that uses something that it doesn't own.
So far I've built up the structure using the aggregate framework like below. This gets me to two sets of distinct items. However, I haven't been able to find anything that could give me the result of a set intersection. Note that a simple count of set size will not work due to cases like this: ['couch', 'cabinet'] compared to ['sofa', 'cabinet'].
{'$unwind':'$uses'}
{'$unwind':'$rooms'}
{'$unwind':'$rooms.owns'}
{'$group' : {_id:'$houseId',
use:{'$addToSet':'$uses.name'},
own:{'$addToSet':'$rooms.owns.name'}}}
produces:
{ _id : 123,
use : ['sink', 'cabinet', 'bed', 'sofa'],
own : ['bed', 'cabinet', 'sink']
}
How do I then find the set intersection of use and own in the next stage of the pipeline?
You were not very far from the full solution with the aggregation framework - you needed one more thing before the $group step: something that lets you see whether each thing being used matches up with something that is owned.
Here is the full pipeline
> db.house.aggregate(
{'$unwind':'$uses'},
{'$unwind':'$rooms'},
{'$unwind':'$rooms.owns'},
{$project: { _id:0,
houseId:1,
uses:"$uses.name",
isOkay:{$cond:[{$eq:["$uses.name","$rooms.owns.name"]}, 1, 0]}
}
},
{$group: { _id:{house:"$houseId",item:"$uses"},
hasWhatHeUses:{$sum:"$isOkay"}
}
},
{$match:{hasWhatHeUses:0}})
and its output on your document
{
"result" : [
{
"_id" : {
"house" : 123,
"item" : "sofa"
},
"hasWhatHeUses" : 0
}
],
"ok" : 1
}
Explanation: once you unwind both arrays, you want to flag the elements where a used item equals an owned item and give them a non-zero "score". When you then regroup by houseId, you can check whether any used item failed to get a match. Using 1 and 0 for the score lets you sum them, and an item whose sum is 0 was used but matched nothing in "owned". Hope you enjoyed this!
So here is a solution not using the aggregation framework. This uses the $where operator and JavaScript. It feels much clunkier to me, but it seems to work, so I wanted to put it out there in case anyone else comes across this question.
db.house.find({'$where':
function() {
    // Build a set of used names and a set of owned names, then return
    // true if either set contains a name the other lacks.
    var ownSet = {};
    var useSet = {};
    for (var i = 0; i < this.uses.length; i++) {
        useSet[this.uses[i].name] = true;
    }
    for (var i = 0; i < this.rooms.length; i++) {
        var room = this.rooms[i];
        for (var j = 0; j < room.owns.length; j++) {
            ownSet[room.owns[j].name] = true;
        }
    }
    for (var prop in ownSet) {
        if (ownSet.hasOwnProperty(prop) && !useSet[prop]) {
            return true;
        }
    }
    for (var prop in useSet) {
        if (useSet.hasOwnProperty(prop) && !ownSet[prop]) {
            return true;
        }
    }
    return false;
}
})
For MongoDB 2.6+ Only
As of MongoDB 2.6, there are set operations available in the $project pipeline stage. The way to answer this problem with the new operators is:
db.house.aggregate([
{'$unwind':'$uses'},
{'$unwind':'$rooms'},
{'$unwind':'$rooms.owns'},
{'$group' : {_id:'$houseId',
use:{'$addToSet':'$uses.name'},
own:{'$addToSet':'$rooms.owns.name'}}},
{'$project': {int:{$setIntersection:["$use","$own"]}}}
]);
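If the end goal is "uses something it doesn't own" rather than the intersection itself, $setDifference (also 2.6+) expresses that directly; a sketch extending the same pipeline:
db.house.aggregate([
    {'$unwind':'$uses'},
    {'$unwind':'$rooms'},
    {'$unwind':'$rooms.owns'},
    {'$group' : {_id:'$houseId',
                 use:{'$addToSet':'$uses.name'},
                 own:{'$addToSet':'$rooms.owns.name'}}},
    {'$project': {usedButNotOwned:{$setDifference:["$use","$own"]}}},
    {'$match': {"usedButNotOwned.0": {$exists:true}}}   // keep only houses with a non-empty difference
]);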

Add new field to all documents in a nested array

I have a database of person documents. Each has a field named photos, which is an array of photo documents. I would like to add a new 'reviewed' flag to each of the photo documents and initialize it to false.
This is the query I am trying to use:
db.person.update({ "_id" : { $exists : true } }, {$set : {photos.reviewed : false} }, false, true)
However I get the following error:
SyntaxError: missing : after property id (shell):1
Is this possible, and if so, what am I doing wrong in my update?
Here is a full example of the 'person' document:
{
"_class" : "com.foo.Person",
"_id" : "2894",
"name" : "Pixel Spacebag",
"photos" : [
{
"_id" : null,
"thumbUrl" : "http://site.com/a_s.jpg",
"fullUrl" : "http://site.com/a.jpg"
},
{
"_id" : null,
"thumbUrl" : "http://site.com/b_s.jpg",
"fullUrl" : "http://site.com/b.jpg"
}]
}
Bonus karma for anyone who can tell me a cleaner way to update "all documents" without using the query { "_id" : { $exists : true } }
For those who are still looking for the answer: it is possible as of MongoDB 3.6 with the all-positional operator $[]; see the docs:
db.getCollection('person').update(
{},
{ $set: { "photos.$[].reviewed" : false } },
{ multi: true}
)
Is this possible, and if so, what am I doing wrong in my update?
No. In general MongoDB is only good at doing updates on top-level objects.
The exception here is the $ positional operator. From the docs: Use this to find an array member and then manipulate it.
However, in your case you want to modify all members in an array. So that is not what you need.
Bonus karma for anyone who can tell me a cleaner way to update "all documents"
Try db.coll.update(query, update, false, true); this issues a "multi" update. That last true is what makes it a multi.
Is this possible,
You have two options here:
Write a for loop to perform the update. It will basically be a nested for loop: one loop through the documents, another through the sub-array. If you have a lot of data, you will want to write this in your driver of choice (and possibly multi-thread it); a shell sketch appears at the end of this answer.
Write your code to handle reviewed as nullable: if it comes across a photo whose reviewed field is undefined, treat that as false. Then you can set the field appropriately and commit it back to the DB (see the query sketch below).
Method #2 is something you should get used to. As your data grows and you add fields, it becomes difficult to "back-port" all of the old data. This is similar to the problem of issuing a schema change in SQL when you have 1B items in the DB.
Instead just make your code resistant against the null and learn to treat it as a default.
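To illustrate that read-time default with a sketch (field names as in the question): since $ne: true also matches documents where the field is missing entirely, photos that were never flagged behave exactly like reviewed: false:
// Find persons with at least one photo that is not reviewed,
// counting a missing 'reviewed' field as "not reviewed".
db.person.find({ photos: { $elemMatch: { reviewed: { $ne: true } } } })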
Again though, this is still not the solution you seek.
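For completeness, here is a shell sketch of option #1, the nested loop; a driver version would look much the same:
// Backfill: give every photo that lacks a 'reviewed' field the value false.
db.person.find({ photos: { $elemMatch: { reviewed: { $exists: false } } } }).forEach(function(p) {
    p.photos.forEach(function(photo) {               // inner loop over the sub-array
        if (photo.reviewed === undefined) photo.reviewed = false;
    });
    db.person.update({ _id: p._id }, { $set: { photos: p.photos } });
});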
You can do this:
db.person.update(null, {$set : {"photos.reviewed" : false} }, false, true)
The first parameter is null: no criteria, so it matches every item in the collection.
"photos.reviewed" must be written as a quoted string to update the subfield.
You can do it like this:
db.person.update({}, {$set: {"name.surname": null}}, false, true);
Old topic now, but this just worked fine with Mongo 3.0.6:
db.users.update({ _id: ObjectId("55e8969119cee85d216211fb") },
{ $set: {"settings.pieces": "merida"} })
In my case the user entity looks like
{ _id: 32, name: "foo", ..., settings: { ..., pieces: "merida", ...} }

Multiple update of embedded documents' properties

I have the following collection:
{
"Milestones" : [
{ "ActualDate" : null,
"Index": 0,
"Name" : "milestone1",
"TargetDate" : ISODate("2011-12-13T22:00:00Z"),
"_id" : ObjectId("4ee89ae7e60fc615c42e28d1")},
{ "ActualDate" : null,
"Index" : 0,
"Name" : "milestone2",
"TargetDate" : ISODate("2011-12-13T22:00:00Z"),
"_id" : ObjectId("4ee89ae7e60fc615c42e28d2") } ]
,
"Name" : "a", "_id" : ObjectId("4ee89ae7e60fc615c42e28ce")
}
I want to update particular embedded documents: those under a specified _id whose Milestones._id is in a given list and whose ActualDate is null.
In .NET my code looks like:
var query = Query.And(new[] { Query.EQ("_id", ObjectId.Parse(projectId)),
Query.In("Milestones._id", new BsonArray(values.Select(ObjectId.Parse))),
Query.EQ("Milestones.ActualDate", BsonNull.Value) });
var update = Update.Set("Milestones.$.ActualDate", DateTime.Now.Date);
Coll.Update(query, update, UpdateFlags.Multi, SafeMode.True);
Or in native code:
db.Projects.update({ "_id" : ObjectId("4ee89ae7e60fc615c42e28ce"), "Milestones._id" : { "$in" : [ObjectId("4ee89ae7e60fc615c42e28d1"), ObjectId("4ee89ae7e60fc615c42e28d2"), ObjectId("4ee8a648e60fc615c41d481e")] }, "Milestones.ActualDate" : null },{ "$set" : { "Milestones.$.ActualDate" : ISODate("2011-12-13T22:00:00Z") } }, false, true)
But only the first item is being updated.
This is not possible at the moment. The multi flag in update means updating multiple root documents; the positional operator can match only one nested array item. There is a feature request for this in the MongoDB JIRA; you can vote it up and wait.
The current options are to load the document, update it as you wish and save it back, or to issue a separate atomic update for each nested array id (see the sketch below).
From the documentation at mongodb.org:
"Currently the $ operator only applies to the first matched item in the query."
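A sketch of that per-id fallback in shell syntax, using the ids from the question: issue one atomic update per milestone, letting $elemMatch plus the positional operator target each element in turn:
[ObjectId("4ee89ae7e60fc615c42e28d1"), ObjectId("4ee89ae7e60fc615c42e28d2")].forEach(function(mid) {
    db.Projects.update(
        { _id: ObjectId("4ee89ae7e60fc615c42e28ce"),
          Milestones: { $elemMatch: { _id: mid, ActualDate: null } } },
        { $set: { "Milestones.$.ActualDate": ISODate("2011-12-13T22:00:00Z") } }
    );
});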
As answered by Andrew Orsich, this is not possible for the moment, at least not as you wish. But loading the document, modifying the array then saving it back will work. The risk is that some other process could modify the array in the meantime, so you would overwrite its changes. To avoid this, you can use optimistic locking, especially if the array is not modified every second.
load the document, including a new attribute: milestones_version
modify the array as needed
save back to mongodb, but now add a query constraint on the milestones_version, and increment it:
db.Projects.findAndModify({
query: {
_id: your_project_id,
milestones_version: expected_milestones_version
},
update: {
$set: {
Milestones: modified_milestones
},
$inc: {
milestones_version: 1
}
},
new: 1
})
If another process modified the milestones array (and hence the milestones_version) before we did, then this command will do nothing and simply return null. We just need to reload the document and try again. If the array is not modified every second, then this will be very rare and will not have any impact on performance.
The main problem with this solution is that you have to edit every Project one by one (no multi: true). You could still write a JavaScript function and have it run on the server, though.
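Such a retry loop around the findAndModify above might look like this sketch; the function and variable names are mine:
function updateMilestonesWithRetry(projectId, mutate) {
    for (var attempt = 0; attempt < 5; attempt++) {
        var doc = db.Projects.findOne({ _id: projectId });
        var result = db.Projects.findAndModify({
            query: { _id: projectId, milestones_version: doc.milestones_version },
            update: { $set: { Milestones: mutate(doc.Milestones) },
                      $inc: { milestones_version: 1 } },
            new: true
        });
        if (result) return result;   // null means a concurrent writer won the race; retry
    }
    throw new Error("too much contention on project " + projectId);
}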
According to their JIRA page "This new feature is available starting with the MongoDB 3.5.12 development version, and included in the MongoDB 3.6 production version"
https://jira.mongodb.org/browse/SERVER-1243
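The feature in question is arrayFilters with the filtered positional operator $[<identifier>]. A sketch of the original update rewritten with it (MongoDB 3.6+; the identifier m is arbitrary):
db.Projects.update(
    { "_id": ObjectId("4ee89ae7e60fc615c42e28ce") },
    { "$set": { "Milestones.$[m].ActualDate": ISODate("2011-12-13T22:00:00Z") } },
    { "multi": true,
      "arrayFilters": [
          { "m._id": { "$in": [ ObjectId("4ee89ae7e60fc615c42e28d1"),
                                ObjectId("4ee89ae7e60fc615c42e28d2") ] },
            "m.ActualDate": null }
      ] }
)
Unlike the single positional $, $[m] updates every array element that satisfies the filter.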

In MongoDB mapreduce, how can I flatten the values object?

I'm trying to use MongoDB to analyse Apache log files. I've created a receipts collection from the Apache access logs. Here's an abridged summary of what my models look like:
db.receipts.findOne()
{
"_id" : ObjectId("4e57908c7a044a30dc03a888"),
"path" : "/videos/1/show_invisibles.m4v",
"issued_at" : ISODate("2011-04-08T00:00:00Z"),
"status" : "200"
}
I've written a MapReduce function that groups all data by the issued_at date field. It summarizes the total number of requests, and provides a breakdown of the number of requests for each unique path. Here's an example of what the output looks like:
db.daily_hits_by_path.findOne()
{
"_id" : ISODate("2011-04-08T00:00:00Z"),
"value" : {
"count" : 6,
"paths" : {
"/videos/1/show_invisibles.m4v" : {
"count" : 2
},
"/videos/1/show_invisibles.ogv" : {
"count" : 3
},
"/videos/6/buffers_listed_and_hidden.ogv" : {
"count" : 1
}
}
}
}
How can I make the output look like this instead:
{
"_id" : ISODate("2011-04-08T00:00:00Z"),
"count" : 6,
"paths" : {
"/videos/1/show_invisibles.m4v" : {
"count" : 2
},
"/videos/1/show_invisibles.ogv" : {
"count" : 3
},
"/videos/6/buffers_listed_and_hidden.ogv" : {
"count" : 1
}
}
}
It's not currently possible, but I would suggest voting for this case: https://jira.mongodb.org/browse/SERVER-2517.
Taking the best from previous answers and comments:
db.items.find().hint({_id: 1}).forEach(function(item) {
db.items.update({_id: item._id}, item.value);
});
From http://docs.mongodb.org/manual/core/update/#replace-existing-document-with-new-document
"If the update argument contains only field and value pairs, the update() method replaces the existing document with the document in the update argument, except for the _id field."
So you need neither to $unset value, nor to list each field.
From https://docs.mongodb.com/manual/core/read-isolation-consistency-recency/#cursor-snapshot
"MongoDB cursors can return the same document more than once in some situations. ... use a unique index on this field or these fields so that the query will return each document no more than once. Query with hint() to explicitly force the query to use that index."
AFAIK, by design Mongo's map-reduce emits its results in "value tuples", and I haven't seen anything that configures that output format. Perhaps the finalize() method can help, though whatever it returns is still stored under the value key.
You could try running a post-process that will reshape the data using
results.find({}).forEach( function(result) {
results.update({_id: result._id}, {count: result.value.count, paths: result.value.paths})
});
Yep, that looks ugly. I know.
You can do Dan's code with a collection reference:
function clean(collection) {
    collection.find().forEach(function(result) {
        var value = result.value;
        delete value._id;
        collection.update({_id: result._id}, value);
        collection.update({_id: result._id}, {$unset: {value: 1}});
    });
}
A similar approach to that of @ljonas, but with no need to hardcode the document fields:
db.results.find().forEach(function(result) {
    var value = result.value;
    delete value._id;
    db.results.update({_id: result._id}, value);
    db.results.update({_id: result._id}, {$unset: {value: 1}});
});
All the proposed solutions are far from optimal. The fastest you can do so far is something like:
var flattenMRCollection=function(dbName,collectionName) {
var collection=db.getSiblingDB(dbName)[collectionName];
var i=0;
var bulk=collection.initializeUnorderedBulkOp();
collection.find({ value: { $exists: true } }).addOption(16).forEach(function(result) { // option 16 = DBQuery.Option.noTimeout
print((++i));
//collection.update({_id: result._id},result.value);
bulk.find({_id: result._id}).replaceOne(result.value);
if(i%1000==0)
{
print("Executing bulk...");
bulk.execute();
bulk=collection.initializeUnorderedBulkOp();
}
});
if (i % 1000 != 0) bulk.execute(); // skip if the last batch was flushed exactly on the 1000 boundary; executing an empty bulk throws
};
Then call it:
flattenMRCollection("MyDB","MyMRCollection")
This is WAY faster than doing sequential updates.
While experimenting with Vincent's answer, I found a couple of problems. Basically, if you perform updates within a forEach loop, the update can move the document to the end of the collection, so the cursor reaches that document again. This can be circumvented by using $snapshot. Hence, I am providing a Java example below.
import java.util.ArrayList;
import java.util.List;
import org.bson.Document;
import com.mongodb.Block;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.WriteModel;

final List<WriteModel<Document>> bulkUpdate = new ArrayList<>();
// You should enable $snapshot if performing updates within forEach
collection.find(new Document().append("$query", new Document()).append("$snapshot", true)).forEach(new Block<Document>() {
    @Override
public void apply(final Document document) {
// Note that I used incrementing long values for '_id'. Change to String if
// you used string '_id's
long docId = document.getLong("_id");
Document subDoc = (Document)document.get("value");
WriteModel<Document> m = new ReplaceOneModel<>(new Document().append("_id", docId), subDoc);
bulkUpdate.add(m);
// If you used non-incrementing '_id's, then you need to use a final object with a counter.
if(docId % 1000 == 0 && !bulkUpdate.isEmpty()) {
collection.bulkWrite(bulkUpdate);
bulkUpdate.removeAll(bulkUpdate);
}
}
});
// Fixing bug related to Vincent's answer.
if(!bulkUpdate.isEmpty()) {
collection.bulkWrite(bulkUpdate);
bulkUpdate.removeAll(bulkUpdate);
}
Note: This snippet takes an average of 7.4 seconds to execute on my machine with 100k records and 14 attributes (IMDB dataset). Without batching, it takes an average of 25.2 seconds.