Add field if it does not exist to a document in MongoDB

Source Doc
{
"_id" : "12345",
"LastName" : "Smith",
"FirstName" : "Fred",
"ProfileCreated" : NumberLong(1447118831860),
"DropOut" : false,
}
New Doc
{
"_id" : "12345",
"LastName" : "Smith",
"FirstName" : "Fred",
"ProfileCreated" : NumberLong(1447118831860),
"DropOut" : true,
"LatestConsultation" : false,
}
I have two collections which share a lot of the same document IDs and fields, but over time the new documents will have fields added to them and/or completely new documents with new IDs will get created.
I think I know how to handle new documents using $setOnInsert and upsert = true, but I'm not sure how best to handle the addition of new fields. For documents that exist in both collections, matched on _id, the behavior I require is to add the new fields to the document without modifying the values of any of the other fields, even if they have changed, as in the example where the DropOut value has changed. The resulting document I require is:
Result document
{
"_id" : "12345",
"LastName" : "Smith",
"FirstName" : "Fred",
"ProfileCreated" : NumberLong(1447118831860),
"DropOut" : false,
"LatestConsultation" : false,
}
What is the best and most performant way to achieve this? Also, if this can somehow be combined into a single statement that also handles documents that exist in the new collection but not in the source collection, that would be amazing :-)
PS: I am using PyMongo, so a PyMongo example would be even better, but I can translate a mongo shell example.

I'm not sure this is possible with a single atomic update. However, you could string together some mixed operations: iterate the new collection and, for each document in it:
Use the _id field to query the old collection. Use the findOne() method to return a document from the old collection that matches on the _id from the new collection.
Merge the old doc with the new doc: keep the old document's values and add only those fields from the new document that do not exist in the old one.
Update the new collection with this merged document.
The following basic mongo shell example demonstrates the algorithm above:
// Merge helper: "from" is the old document and "to" is the new one. The old
// document's values always win; only fields missing from the old document
// are copied across from the new one.
function merge(from, to) {
    var obj = {};
    if (!from) {
        from = {};
    } else {
        obj = from;
    }
    for (var key in to) {
        if (!from.hasOwnProperty(key)) {
            obj[key] = to[key];
        }
    }
    return obj;
}
db.new_collection.find({}).snapshot().forEach(function(doc){
    var old_doc = db.old_collection.findOne({ "_id": doc._id }),
        merged_doc = merge(old_doc, doc);

    db.new_collection.update(
        { "_id": doc._id },
        { "$set": merged_doc }
    );
});
For dealing with large collections, you can better leverage your updates by using the bulk API, which offers better performance by sending the update requests in bulk rather than issuing a round trip for every single update (which is slow). The method to use is the bulkWrite() function, which can be applied in the above example (reusing the same merge() helper) as:
var ops = [];
db.new_collection.find({}).snapshot().forEach(function(doc){
    var old_doc = db.old_collection.findOne({ "_id": doc._id }),
        merged_doc = merge(old_doc, doc);

    ops.push({
        "updateOne": {
            "filter": { "_id": doc._id },
            "update": { "$set": merged_doc }
        }
    });

    if (ops.length === 1000) {
        db.new_collection.bulkWrite(ops);
        ops = [];
    }
});
if (ops.length > 0) db.new_collection.bulkWrite(ops);
Or for MongoDB 2.6.x and 3.0.x releases use this version of Bulk operations:
var bulk = db.new_collection.initializeUnorderedBulkOp(),
    counter = 0;

db.new_collection.find({}).snapshot().forEach(function(doc){
    var old_doc = db.old_collection.findOne({ "_id": doc._id }),
        merged_doc = merge(old_doc, doc);

    bulk.find({ "_id": doc._id }).updateOne({ "$set": merged_doc });
    counter++; // count queued operations so the batch is flushed every 1000 documents

    if (counter % 1000 === 0) {
        bulk.execute();
        bulk = db.new_collection.initializeUnorderedBulkOp();
    }
});
if (counter % 1000 !== 0) bulk.execute();
In both cases the Bulk operations API helps reduce the I/O load on the server by sending a request only once for every 1000 documents to process.
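If your deployment happens to be on MongoDB 4.2 or newer (an assumption beyond anything stated in the question), the whole merge can also be expressed as a single aggregation statement with $merge, which addresses the "single statement" wish and also covers documents that exist in only one of the two collections. A minimal sketch:
// Sketch, assuming MongoDB 4.2+: read old_collection and merge it into new_collection.
// Inside the whenMatched pipeline, $$ROOT is the existing new_collection document and
// $$new is the incoming old_collection document, so old values win for shared fields
// while fields that only exist in the new collection are preserved.
db.old_collection.aggregate([
    { "$merge": {
        "into": "new_collection",
        "on": "_id",
        "whenMatched": [
            { "$replaceWith": { "$mergeObjects": [ "$$ROOT", "$$new" ] } }
        ],
        "whenNotMatched": "insert"
    } }
])
In PyMongo the same pipeline can be passed to the collection's aggregate() method, so no shell-specific helper is needed.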

Related

mongodb delete nested object without knowledge of object nodes

For the below document, I am trying to delete the node which contains id = 123
{
'_id': "1234567890",
"image" : {
"unknown-node-1" : {
"id" : 123
},
"unknown-node-2" : {
"id" : 124
}
}
}
Result should be as below.
{
'_id': "1234567890",
"image" : {
"unknown-node-2" : {
"id" : 124
}
}
}
The below query achieves the result, but I have to know unknown-node-1 in advance. How can I achieve the result without prior knowledge of the node, when the only info I have is image.*.id = 123 (* means unknown node)?
Is it possible in Mongo, or should I do this lookup in my app code?
db.test.update({'_id': "1234567890"}, {$unset: {'image.unknown-node-1': ""}})
Faiz,
There is no operator to help match and project a single key/value pair without knowing the key. You'll have to write post-processing code to scan each of the documents to find the node with the id and then perform your removal.
If you have the liberty of changing your schema, you'll have more flexibility. With a document design like this:
{
'_id': "1234567890",
"image" : [
{"id" : 123, "name":"unknown-node-1"},
{"id" : 124, "name":"unknown-node-2"},
{"id" : 125, "name":"unknown-node-3"}
]
}
You could remove documents from the array like this:
db.collectionName.update(
{'_id': "1234567890"},
{ $pull: { image: { id: 123} } }
)
This would result in:
{
'_id': "1234567890",
"image" : [
{"id" : 124, "name":"unknown-node-2"},
{"id" : 125, "name":"unknown-node-3"}
]
}
With your current schema, you will need a mechanism to get a list of the dynamic keys that you need to assemble the query before doing the update and one way of doing this would be with MapReduce. Take for instance the following map-reduce operation which will populate a separate collection with all the keys as the _id values:
mr = db.runCommand({
"mapreduce": "test",
"map" : function() {
for (var key in this.image) { emit(key, null); }
},
"reduce" : function(key, stuff) { return null; },
"out": "test_keys"
})
To get a list of all the dynamic keys, run distinct on the resulting collection:
> db[mr.result].distinct("_id")
[ "unknown-node-1", "unknown-node-2" ]
Now given the list above, you can assemble your query by creating an object that will have its properties set within a loop. Normally if you knew the keys beforehand, your query will have this structure:
var query = {
"image.unknown-node-1.id": 123
},
update = {
"$unset": {
"image.unknown-node-1": ""
}
};
db.test.update(query, update);
But since the nodes are dynamic, you will have to iterate the list returned from the mapReduce operation and, for each element, create the query and update parameters as above to update the collection. The list could be huge, so for maximum efficiency, and if your MongoDB server is 2.6 or newer, it is better to take advantage of the write commands Bulk API, which allows the execution of bulk update operations. These are simply abstractions on top of the server that make it easy to build bulk operations and thus get performance gains with your updates over large collections. Bulk operations come mainly in two flavours:
Ordered bulk operations. These operations execute all the operations in order and error out on the first write error.
Unordered bulk operations. These operations execute all the operations in parallel and aggregate all the errors. Unordered bulk operations do not guarantee order of execution.
Note: for servers older than 2.6 the API will down-convert the operations. However, it is not possible to down-convert 100%, so there might be some edge cases where it cannot report the right numbers.
In your case, you could implement the Bulk API update operation like this:
mr = db.runCommand({
"mapreduce": "test",
"map" : function() {
for (var key in this.image) { emit(key, null); }
},
"reduce" : function(key, stuff) { return null; },
"out": "test_keys"
})
// Get the dynamic keys
var dynamic_keys = db[mr.result].distinct("_id");
// Get the collection and bulk api artefacts
var bulk = db.test.initializeUnorderedBulkOp(), // Initialize the Unordered Batch
counter = 0;
// Loop over each dynamic key and queue an update for it
dynamic_keys.forEach(function(key) {
    // Create the query and update documents
    var query = {},
        update = { "$unset": {} };

    query["image." + key + ".id"] = 123;
    update["$unset"]["image." + key] = "";

    bulk.find(query).update(update);
    counter++;

    if (counter % 100 === 0) {
        // Execute and re-initialise the batch operation
        bulk.execute();
        bulk = db.test.initializeUnorderedBulkOp();
    }
});
if (counter % 100 !== 0) { bulk.execute(); }
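On newer servers the dynamic key can also be removed without MapReduce at all. The following is only a sketch and assumes MongoDB 4.2+ (update pipelines), which is newer than the original question targets: convert the image subdocument to key/value pairs, filter out the entry whose id is 123, and convert it back:
// Sketch, assuming MongoDB 4.2+: strip the node whose id is 123 without knowing its key
db.test.updateMany(
    { "image": { "$exists": true } },
    [
        { "$set": {
            "image": {
                "$arrayToObject": {
                    "$filter": {
                        "input": { "$objectToArray": "$image" },
                        "cond": { "$ne": [ "$$this.v.id", 123 ] }
                    }
                }
            }
        } }
    ]
)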

How can I remove empty strings from a MongoDB collection?

I have a "mongodb colllenctions" and I'd like to remove the "empty strings"with keys from it.
From this:
{
"_id" : ObjectId("56323d975134a77adac312c5"),
"year" : "15",
"year_comment" : "",
}
{
"_id" : ObjectId("56323d975134a77adac312c5"),
"year" : "",
"year_comment" : "asd",
}
I'd like to gain this result:
{
"_id" : ObjectId("56323d975134a77adac312c5"),
"year" : "15",
}
{
"_id" : ObjectId("56323d975134a77adac312c5"),
"year_comment" : "asd",
}
How could I solve it?
Please try executing the following code snippet in the Mongo shell, which strips fields with empty or null values and prints the cleaned documents:
var result = new Array();
db.getCollection('test').find({}).forEach(function(data) {
    for (var i in data) {
        if (data[i] == null || data[i] == '') {
            delete data[i];
        }
    }
    result.push(data);
})
print(tojson(result))
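Note that the snippet above only builds and prints an in-memory copy; it does not change the stored documents. If you also want to persist the result, a small follow-up (just a sketch reusing the same loop) can write each cleaned document back:
// Sketch: strip empty/null fields and write the cleaned document back by _id
db.getCollection('test').find({}).forEach(function(data) {
    for (var i in data) {
        if (data[i] == null || data[i] == '') {
            delete data[i];
        }
    }
    db.getCollection('test').save(data); // overwrites the stored document
})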
I would start by getting a distinct list of all the keys in the collection, use those keys as your query basis, and do an ordered bulk update using the Bulk API. The update statement uses the $unset operator to remove the fields.
The mechanism to get the distinct list of keys that you need to assemble the query is Map-Reduce. The following mapreduce operation will populate a separate collection with all the keys as the _id values:
mr = db.runCommand({
"mapreduce": "my_collection",
"map" : function() {
for (var key in this) { emit(key, null); }
},
"reduce" : function(key, stuff) { return null; },
"out": "my_collection" + "_keys"
})
To get a list of all the dynamic keys, run distinct on the resulting collection:
db[mr.result].distinct("_id")
// prints ["_id", "year", "year_comment", ...]
Now given the list above, you can assemble your query by creating an object that will have its properties set within a loop. Normally your query will have this structure:
var keysList = ["_id", "year", "year_comment"];
var query = keysList.reduce(function(obj, k) {
    var q = {};
    q[k] = "";
    obj["$or"].push(q);
    return obj;
}, { "$or": [] });
printjson(query); // prints {"$or":[{"_id":""},{"year":""},{"year_comment":""}]}
You can then use the Bulk API (available with MongoDB 2.6 and above) as a way of streamlining your updates for better performance with the query above. Overall, you should be able to have something working as:
var bulk = db.collection.initializeOrderedBulkOp(),
    counter = 0,
    query = { "$or": [ { "_id": "" }, { "year": "" }, { "year_comment": "" } ] },
    keysList = ["_id", "year", "year_comment"];

db.collection.find(query).forEach(function(doc){
    var emptyKeys = keysList.filter(function(k) { // use filter to return an array of keys which have empty strings
            return doc[k] === "";
        }),
        update = emptyKeys.reduce(function(obj, k) { // build the update object
            obj[k] = "";
            return obj;
        }, {});

    bulk.find({ "_id": doc._id }).updateOne({
        "$unset": update // use the $unset operator to remove the fields
    });
    counter++;

    if (counter % 1000 == 0) {
        // Execute per 1000 operations and re-initialize every 1000 update statements
        bulk.execute();
        bulk = db.collection.initializeOrderedBulkOp();
    }
})

// Flush any remaining queued operations
if (counter % 1000 != 0) bulk.execute();
If you need to update a single blank parameter, or you prefer to do it parameter by parameter, you can use the updateMany functionality:
db.comments.updateMany({year: ""}, { $unset : { year : 1 }})
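If you want that per-parameter style for every affected key, a short loop (just a sketch, assuming the same field names as above) does the same thing one field at a time:
// Sketch: unset each listed field wherever it holds an empty string
["year", "year_comment"].forEach(function(k) {
    var query = {}, unset = {};
    query[k] = "";
    unset[k] = 1;
    db.comments.updateMany(query, { "$unset": unset });
});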

Move mongodb data without losing destination data

I have two mongodb databases.
Development DB
{
_id:"someid",
"parent1":{
"key1":"val1",
"key2":"val2",
"key3":"val3",
"key4":"val4",
"key5":"val5",
"key6":"val6"
}
}
Production DB
{
_id:"someid",
"parent1":{
"key1":"val1",
"key2":"val2",
"key3":"val3",
"key10":"val10",
"key11":"val11",
"key12":"val12"
}
}
I want to move my Development data to production data without losing newly added keys in production.
The output should become:
{
_id:"someid",
"parent1":{
"key1":"val1",
"key2":"val2",
"key3":"val3",
"key4":"val4",
"key5":"val5",
"key6":"val6"
"key10":"val10",
"key11":"val11",
"key12":"val12"
}
}
I can't update by using db.collection.update({ _id: ... }, { $set: { "some_key.param2": new_info } }), as I can't add the parent prefix to each and every key.
Depending on your eventual needs there are a couple of approaches you can take to this:
Cycle object keys and apply updates: Here you essentially "read" the current object and take note of its current state while applying individual updates for each key. Bulk operations help somewhat here:
var bulk = db.target.initializeOrderedBulkOp(),
    count = 0;

db.source.find().forEach(function(doc) {
    Object.keys(doc.parent1).forEach(function(key) {
        // Only set the key if the target's current value differs
        var query = { "_id": doc._id };
        query["parent1." + key] = { "$ne": doc.parent1[key] };

        var update = { "$set": {} };
        update.$set["parent1." + key] = doc.parent1[key];

        bulk.find(query).updateOne(update);

        // Also upsert the key in case the target document does not exist yet
        query = { "_id": doc._id };
        update = { "$setOnInsert": {} };
        update.$setOnInsert["parent1." + key] = doc.parent1[key];

        bulk.find(query).upsert().updateOne(update);
        count++;

        if ( count % 500 == 0 ) {
            bulk.execute();
            bulk = db.target.initializeOrderedBulkOp();
        }
    });
});

if ( count % 500 != 0 )
    bulk.execute();
Use a utility to "merge" the results per key: such as with the "lodash" library, as in:
db.source.find().forEach(function(doc) {
    var id = doc._id;
    delete doc._id;

    var result = db.target.findAndModify({
        "query": { "_id": id },
        "update": { "$setOnInsert": doc },
        "upsert": true,
        "new": true
    });

    var merged = _.merge(result, doc);
    db.target.update({ "_id": merged._id }, merged);
});
The "latter" is generally heavier in "update" and communication load though a bit lighter in overall code. You can also "tweak" this in API code where you can in fact return if the "upsert" in fact resulted in such a thing or whether the document was actually just "found", in which case a decision can be made whether to do the "merge" or not.
Of course I am "abstracting" here, as in reality you would source from different databases and connections rather than just the collections given in the example. But these are the basic model patterns to follow.

Mongo : How to convert all entries using a long timeStamp to an ISODate?

I have a current Mongo database with the accumulated entries/fields
{
name: "Fred Flintstone",
age : 34,
timeStamp : NumberLong(14283454353543)
}
{
name: "Wilma Flintstone",
age : 33,
timeStamp : NumberLong(14283454359453)
}
And so on...
Question : I want to convert all entries in the database to their corresponding ISODate instead - How does one do this?
Desired Result :
{
name: "Fred Flintstone",
age : 34,
timeStamp : ISODate("2015-07-20T14:50:32.389Z")
}
{
name: "Wilma Flintstone",
age : 33,
timeStamp : ISODate("2015-07-20T14:50:32.389Z")
}
Things I've tried
>db.myCollection.find().forEach(function (document) {
document["timestamp"] = new Date(document["timestamp"])
//Not sure how to update this document from here
db.myCollection.update(document) //?
})
Using the aggregation pipeline for update operations (available from MongoDB 4.2), simply run the following update operation:
db.myCollection.updateMany(
    { },
    [
        { $set: {
            timeStamp: { $toDate: "$timeStamp" }
        } }
    ]
)
With your initial attempt, you were almost there; you just need to call the save() method on the modified document to update it, since the method uses either the insert or the update command. In the above instance, the document contains an _id field and thus the save() method is equivalent to an update() operation with the upsert option set to true and the query predicate on the _id field:
db.myCollection.find().snapshot().forEach(function (document) {
    document["timeStamp"] = new Date(document["timeStamp"]);
    db.myCollection.save(document);
})
The above is similar to explicitly calling the update() method as you had previously attempted:
db.myCollection.find().snapshot().forEach(function (document) {
    var date = new Date(document["timeStamp"]);
    var query = { "_id": document["_id"] }, /* query predicate */
        update = { /* update document */
            "$set": { "timeStamp": date }
        },
        options = { "upsert": true };

    db.myCollection.update(query, update, options);
})
For relatively large collection sizes, your db performance will be slow and it's recommended to use mongo bulk updates for this:
MongoDB versions >= 2.6 and < 3.2:
var bulk = db.myCollection.initializeUnorderedBulkOp(),
    counter = 0;

// Only pick documents whose timeStamp is not already a Date (BSON type 9)
db.myCollection.find({ "timeStamp": { "$not": { "$type": 9 } } }).forEach(function (doc) {
    bulk.find({ "_id": doc._id }).updateOne({
        "$set": { "timeStamp": new Date(doc.timeStamp) }
    });
    counter++;

    if (counter % 1000 === 0) {
        // Execute per 1000 operations
        bulk.execute();
        // re-initialize every 1000 update statements
        bulk = db.myCollection.initializeUnorderedBulkOp();
    }
})

// Clean up remaining operations in queue
if (counter % 1000 !== 0) bulk.execute();
MongoDB version 3.2 and newer:
var ops = [],
    cursor = db.myCollection.find({ "timeStamp": { "$not": { "$type": 9 } } });

cursor.forEach(function (doc) {
    ops.push({
        "updateOne": {
            "filter": { "_id": doc._id },
            "update": { "$set": { "timeStamp": new Date(doc.timeStamp) } }
        }
    });

    if (ops.length === 1000) {
        db.myCollection.bulkWrite(ops);
        ops = [];
    }
});
if (ops.length > 0) db.myCollection.bulkWrite(ops);
It seems that there are some cumbersome things happening in Mongo when trying to instantiate Date objects from NumberLong values, mainly because the NumberLong values are converted to the wrong representation and the fallback to the current date is used.
I was fighting with Mongo for 2 days and finally I found the solution. The key is to convert the NumberLong to a Double ... and pass the double value to the Date constructor.
Here is the solution, using an aggregation with $out, that worked for me ...
(lastIndexedTimestamp is the collection field that is migrated to ISODate and stored in the lastIndexed field. A temporary collection is created, and it is renamed to the original name at the end.)
db.annotation.aggregate([
    { $project: {
        _id: 1,
        lastIndexedTimestamp: 1,
        // adding the (coerced-to-double) long value to Date(0) yields an ISODate
        lastIndexed: { $add: [ new Date(0), { $add: [ "$lastIndexedTimestamp", 0 ] } ] }
    } },
    { $out: "annotation_new" }
])
//drop annotation collection
db.annotation.drop();
//rename annotation_new to annotation
db.annotation_new.renameCollection("annotation");

MongoDB conditionally $addToSet sub-document in array by specific field

Is there a way to conditionally $addToSet based on a specific key field in a subdocument on an array?
Here's an example of what I mean - given the collection produced by the following sample bootstrap;
cls
db.so.remove();
db.so.insert({
"Name": "fruitBowl",
"pfms" : [
{
"n" : "apples"
}
]
});
n defines a unique document key. I only want one entry with the same n value in the array at any one time. So I want to be able to update the pfms array using n so that I end up with just this;
{
"Name": "fruitBowl",
"pfms" : [
{
"n" : "apples",
"mState": 1111234
}
]
}
Here's where I am at the moment;
db.so.update({
"Name": "fruitBowl",
},{
// not allowed to do this of course
// "$pull": {
// "pfms": { n: "apples" },
// },
"$addToSet": {
"pfms": {
"$each": [
{
"n": "apples",
"mState": 1111234
}
]
}
}
}
)
Unfortunately, this adds another array element;
db.so.find().toArray();
[
{
"Name" : "fruitBowl",
"_id" : ObjectId("53ecfef5baca2b1079b0f97c"),
"pfms" : [
{
"n" : "apples"
},
{
"n" : "apples",
"mState" : 1111234
}
]
}
]
I need to effectively upsert the apples document matching on n as the unique identifier and just set mState whether or not an entry already exists. It's a shame I can't do a $pull and $addToSet in the same document (I tried).
What I really need here is dictionary semantics, but that's not an option right now, nor is breaking out the document - can anyone come up with another way?
FWIW - the existing format is a result of language/driver serialization, I didn't choose it exactly.
further
I've gotten a little further: in the case where I know the array element already exists, I can do this;
db.so.update({
"Name": "fruitBowl",
"pfms.n": "apples",
},{
$set: {
"pfms.$.mState": 1111234,
},
}
)
But of course that only works:
for a single array element
as long as I know it exists
The first limitation isn't a disaster, but if I can't effectively upsert or combine $addToSet with the previous $set (which of course I can't), then the only workarounds I can think of for now mean two DB round-trips.
The $addToSet operator of course requires that the "whole" document being "added to the set" is in fact unique, so you cannot change "part" of the document or otherwise consider it to be a "partial match".
You stumbled on to your best approach using $pull to remove any element with the "key" field that would result in "duplicates", but of course you cannot modify the same path in different update operators like that.
So the closest thing you will get is issuing separate operations but also doing that with the "Bulk Operations API" which is introduced with MongoDB 2.6. This allows both to be sent to the server at the same time for the closest thing to a "contiguous" operations list you will get:
var bulk = db.so.initializeOrderedBulkOp();

bulk.find({ "Name": "fruitBowl", "pfms.n": "apples" }).updateOne({
    "$pull": { "pfms": { "n": "apples" } }
});
bulk.find({ "Name": "fruitBowl" }).updateOne({
    "$push": { "pfms": { "n": "apples", "mState": 1111234 } }
});
bulk.execute();
That pretty much is your best approach if it is not possible or practical to move the elements to another collection, where you could rely on "upserts" and $set to get the same functionality on a collection rather than an array.
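For illustration of that alternative, here is a minimal sketch (the separate collection name pfms_items and its shape are assumptions, not something from the question): each array element becomes its own document keyed by the parent Name and the element key n, so a single upsert replaces the $pull/$push pair:
// Sketch only: one document per (Name, n) pair in a hypothetical "pfms_items" collection
db.pfms_items.update(
    { "Name": "fruitBowl", "n": "apples" },
    { "$set": { "mState": 1111234 } },
    { "upsert": true }
);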
I have faced the exact same scenario: I was inserting and removing likes on a post.
What I did is use the Mongoose findOneAndUpdate function (which is similar to the update or findAndModify functions in MongoDB).
The key concept is
Insert when the field is not present
Delete when the field is present
The insert is
findOneAndUpdate(
    { _id: theId, 'likes.userId': { $ne: theUserId } },
    { $push: { likes: { userId: theUserId, createdAt: new Date() } } },
    { 'new': true },
    function(err, post) { /* do the needful */ }
);
The delete is
findOneAndUpdate(
    { _id: theId, 'likes.userId': theUserId },
    { $pull: { likes: { userId: theUserId } } },
    { 'new': true },
    function(err, post) { /* do the needful */ }
);
This makes the whole operation atomic and there are no duplicates with respect to the userId field.
I hope this helps. If you have any questions, feel free to ask.
As far as I know, MongoDB (from v4.2) now allows aggregation pipelines to be used in updates. A more or less elegant way to make this work (per the question) uses the update command with arrayFilters, which looks like the following:
db.runCommand({
    update: "your-collection-name",
    updates: [
        {
            q: {},
            u: {
                $set: {
                    "pfms.$[elem]": {
                        "n": "apples",
                        "mState": NumberInt(1111234)
                    }
                }
            },
            arrayFilters: [
                { "elem.n": { $eq: "apples" } }
            ],
            multi: true
        }
    ]
})
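For completeness, the same change can be written as an actual aggregation-pipeline update (a sketch, assuming MongoDB 4.2+ and the sample collection from the question), rewriting only the matching array element with $map and $mergeObjects:
// Sketch, MongoDB 4.2+: patch the element whose n is "apples" via a pipeline update
db.so.updateMany(
    { "pfms.n": "apples" },
    [
        { "$set": {
            "pfms": {
                "$map": {
                    "input": "$pfms",
                    "in": {
                        "$cond": [
                            { "$eq": [ "$$this.n", "apples" ] },
                            { "$mergeObjects": [ "$$this", { "mState": 1111234 } ] },
                            "$$this"
                        ]
                    }
                }
            }
        } }
    ]
)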
In my scenario, the data needs to be initialized if it does not exist, the field updated if it does exist, and the data is never deleted. If your data has these states, you might want to try the following method.
// Mongoose, but mostly same as mongodb
// Update the tag to user, If there existed one.
const user = await UserModel.findOneAndUpdate(
{
user: userId,
'tags.name': tag_name,
},
{
$set: {
'tags.$.description': tag_description,
},
}
)
.lean()
.exec();
// Add a default tag to user
if (user == null) {
await UserModel.findOneAndUpdate(
{
user: userId,
},
{
$push: {
tags: new Tag({
name: tag_name,
description: tag_description,
}),
},
}
);
}
This is the cleanest and fastest method in this scenario.
As a business analyst, I had the same problem, and hopefully I have a solution to this after hours of investigation.
// The customer document:
{
"id" : "1212",
"customerCodes" : [
{
"code" : "I"
},
{
"code" : "YK"
}
]
}
// The problem : I want to insert dateField "01.01.2016" to customer documents where customerCodes subdocument has a document with code "YK" but does not have dateField. The final document must be as follows :
{
"id" : "1212",
"customerCodes" : [
{
"code" : "I"
},
{
"code" : "YK" ,
"dateField" : "01.01.2016"
}
]
}
// The solution : the solution code is in three steps :
// PART 1 - Find the customers with customerCodes "YK" but without dateField
// PART 2 - Find the index of the subdocument with "YK" in customerCodes list.
// PART 3 - Insert the value into the document
// Here is the code
// PART 1
var myCursor = db.customers.find({ customerCodes:{$elemMatch:{code:"YK", dateField:{ $exists:false} }}});
// PART 2
myCursor.forEach(function(customer){
    if (customer.customerCodes != null) {
        var size = customer.customerCodes.length;
        if (size > 0) {
            var iFoundTheIndexOfSubDocument = -1;
            var index = 0;
            customer.customerCodes.forEach(function(clazz) {
                if (clazz.code == "YK" && clazz.dateField == null) {
                    iFoundTheIndexOfSubDocument = index;
                }
                index++;
            })
            // PART 3
            // What happens here is: if I found the index of the
            // "YK" subdocument, I create the "updates" document which
            // corresponds to the new data to be inserted
            if (iFoundTheIndexOfSubDocument != -1) {
                var toSet = "customerCodes." + iFoundTheIndexOfSubDocument + ".dateField";
                var updates = {};
                updates[toSet] = "01.01.2016";
                db.customers.update({ "id": customer.id }, { $set: updates });
                // This statement is actually interpreted like this:
                // db.customers.update({ "id" : "1212" }, { $set: { "customerCodes.0.dateField" : "01.01.2016" } });
            }
        }
    }
});
Have a nice day !