I know this has been asked before, but I have yet to find a solution that works efficiently. I am working with the MongoDB C# driver, though this is more of a general question about MongoDB operations.
I have a document structure that looks something like this:
{
    field1: value1,
    field2: value2,
    ...
    users: [ {...user 1 subdocument...}, {...user 2 subdocument...}, ... ]
}
Some facts:
Each user subdocument includes further sub-arrays & subdocuments (so they're fairly complex).
The users array typically contains about 5 elements, but in the worst case it can exceed 100.
Several thousand update operations on multiple users may be performed per day in this system, each touching a single document at a time. Larger arrays receive more frequent updates because they hold more data.
I am trying to figure out how to do this efficiently. From what I've heard, you cannot directly set several array elements to new values all at once, so I had to try something else.
I tried using $pullAll followed by $addToSet with $each to remove the old array and replace it with a modified one. I am aware that $pullAll can also remove only the elements I need, but I would like to preserve the order of the elements.
The C# code:
try
{
    WriteConcernResult wcr = collection.Update(query,
        Update.Combine(Update.PullAll("users"),
                       Update.AddToSetEach("users", newUsers.ToArray())));
}
catch (WriteConcernException wce)
{
    return wce.Message;
}
In this case newUsers is a List<BsonValue> converted to an array. However, I am getting the following exception message:
Cannot update 'users' and 'users' at the same time
By the looks of it, I can't use two update operators on the same field in the same write operation.
I also tried Update.Set("users", newUsers.ToArray()), but apparently the Set statement doesn't work with arrays, just basic values:
Argument 2: cannot convert from 'MongoDB.Bson.BsonValue[]' to 'MongoDB.Bson.BsonValue'
So then I tried converting that array to a BsonDocument:
Update.Set("users", newUsers.ToArray().ToBsonDocument());
And got this:
An Array value cannot be written to the root level of a BSON document.
I could try replacing the whole document, but that seems like overkill and definitely not very efficient.
So the only thing I can think of now is to run two separate write operations: one to remove the unwanted old users and another to replace them with their newer versions:
WriteConcernResult wcr1 = collection.Update(query, Update.PullAll("users"));
WriteConcernResult wcr2 = collection.Update(query, Update.AddToSetEach("users", newUsers.ToArray()));
Is this my best option? Or is there another, better way of doing this?
Your code should work with a minor change:
Update.Set("users", new BsonArray(newUsers));
BsonArray is a BsonValue, whereas an array of documents is not, and we don't implicitly convert arrays the way we do other primitive values.
This extension method solved my problem:
public static class MongoExtension
{
    public static BsonArray ToBsonArray(this IEnumerable list)
    {
        var array = new BsonArray();
        foreach (var item in list)
            array.Add((BsonValue) item);
        return array;
    }
}
My records are something like this,
{
    objectID: "123123",
    product_id: "456456",
    categories: ['pie', 'desert']
}
I want to just replace desert with sweet in categories.
Is this possible by using partial_update_objects method?
You can't update a single value in place, but you can Remove & Add with the built-in operations, which effectively "updates" the value (this only works if the values in the array are unique). An alternative is to get the object, compute the new value for the array, and then replace it.
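A rough sketch of both options with the Algolia Python client; the index handle, the method names, and the _operation payload format are from memory, so verify them against the client version you use:

# Option 1: built-in operations, two partial updates on the same attribute.
index.partial_update_object(
    {"objectID": "123123",
     "categories": {"_operation": "Remove", "value": "desert"}})
index.partial_update_object(
    {"objectID": "123123",
     "categories": {"_operation": "AddUnique", "value": "sweet"}})

# Option 2: fetch the record, rewrite the array locally, and push it back.
obj = index.get_object("123123")
obj["categories"] = ["sweet" if c == "desert" else c for c in obj["categories"]]
index.partial_update_object({"objectID": "123123", "categories": obj["categories"]})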
I have an object structure like this:
{
    name: "...",
    pockets: [
        {
            cdate: "....",
            items: [...]
        }
        ...
    ]
}
In an update operation, I want to add some records to the items field of the last element of pockets. Dot notation is the only way I know to access a subdocument, but I can't get what I want with it. So I'm looking for something like these:
pockets.-1.items
pockets.$last.items
Is it possible to modify the last element? If yes, how?
I don't know of a way to do this with a single query, but you could select the record, update it, and then save it back.
var query = <insert query here>;
var mydocs = db.mycollection.find(query).toArray();
for (var i = 0; i < mydocs.length; i++) {
    mydocs[i].pockets[mydocs[i].pockets.length - 1].items.push('new item');
    db.mycollection.save(mydocs[i]);
}
I don't believe it is possible to do it atomically. There is a request for this functionality to be added to MongoDB.
If you can ensure thread safety in your application code, you could probably use a sequence of operations: $pop the last element from the pockets array into a variable p, append the new record to p.items, and then $push p back into pockets. But if your application has no way to ensure that only one process does this at a time, another process could modify the array between those steps and you may end up losing that update.
You might also look into "Update if current" semantics here to see another way to work around the race between multiple threads.
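A rough PyMongo sketch of that pop/modify/push sequence, just to make the steps concrete. The collection and filter names are hypothetical, and as noted above the three steps are not atomic, so a concurrent writer could interleave with them:

from pymongo import MongoClient

coll = MongoClient().mydb.mycollection      # hypothetical database/collection
doc_filter = {"name": "..."}                # hypothetical selector for one document

# 1. Atomically pop the last pocket; find_one_and_update returns the pre-update
#    document by default, so its last pocket is the one that was just removed.
doc = coll.find_one_and_update(doc_filter, {"$pop": {"pockets": 1}})
last_pocket = doc["pockets"][-1]

# 2. Modify it on the client.
last_pocket["items"].append("new item")

# 3. Push it back onto the end of the array.
coll.update_one(doc_filter, {"$push": {"pockets": last_pocket}})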
I want to call a custom Python function on an existing attribute of every document in the entire collection and store the result as a new key-value pair in that (same) document. Is there any way to do that (each call is independent of the others)?
I noticed cursor.forEach, but can't this be done efficiently using just Python?
A simple example would be to split the string in the text field and store the number of words as a new attribute.
def split_count(text):
    # some complex preprocessing...
    return len(text.split())

# Need something like this...
db.collection.update_many({}, {'$set': {"split": split_count('$text')}}, upsert=True)
But it seems that setting a new attribute in a document based on the value of another attribute in the same document is not possible this way yet. This post is old, but the issues still seem to be open.
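For what it's worth, the simple word-count example (though not an arbitrary Python function) can be computed server side on MongoDB 4.2 and newer, where an update can take an aggregation pipeline; a minimal PyMongo sketch of that, assuming the text field holds a space-separated string:

# MongoDB 4.2+ only: pipeline-style update, computed entirely on the server.
db.collection.update_many({}, [
    {'$set': {'split': {'$size': {'$split': ['$text', ' ']}}}}
])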
I found a way to call any custom python function on a collection using parallel_scan in PyMongo.
import threading

def process_text(cursor):
    for row in cursor.batch_size(200):
        # Any complex preprocessing here...
        split_text = row['text'].split()
        db.collection.update_one({'_id': row['_id']},
                                 {'$set': {'split_text': split_text,
                                           'num_words': len(split_text)}},
                                 upsert=True)

def preprocess(num_threads=4):
    # Get up to 'num_threads' cursors.
    cursors = db.collection.parallel_scan(num_threads)
    threads = [threading.Thread(target=process_text, args=(cursor,))
               for cursor in cursors]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()
This is not really faster than cursor.forEach (but not that much slower either), and it lets me execute arbitrarily complex Python code and save the results from within Python itself.
Also, if I have an array of ints in one of the attributes, cursor.forEach converts them to floats, which I don't want. So I preferred this approach.
But I would be glad to know if there're any better ways than this :)
It is quite unlikely that it will ever be efficient to do this kind of thing in Python, because every document would have to make a round trip and pass through the Python function on the client machine.
In your example code, you are passing the result of a function to a MongoDB update query, which won't work. You can't run any Python code inside MongoDB queries on the database server.
As the answer to the question you linked suggests, this type of action has to be performed in the mongo shell, e.g.:
db.collection.find().snapshot().forEach(
    function (elem) {
        var splitLength = elem.text.split(" ").length;
        db.collection.update(
            {
                _id: elem._id
            },
            {
                $set: {
                    split: splitLength
                }
            }
        );
    }
);
So I need some advice on what I'm doing incorrectly.
My database is set up exactly like a file system, consisting of folders and files.
It begins with a folder, but can have an effectively unlimited number of subfolders and/or files.
{
    "name": "folder1",
    "uniqueID": "zzz0",
    "subcontents": [ {"name": "subfolder1", "uniqueID": "zzz1"},
                     {"name": "subfile1", "uniqueID": "zzz2"},
                     {"name": "subfile2", "uniqueID": "zzz3"},
                     {"name": "subfolder2", "subcontents": [...etc...], "uniqueID": "zzz4"}
    ]
}
Each folder/file document has a uniqueID so that I can reference it (seen above as zzz#). My question is: can I make a MongoDB query to pull out only a single document?
For example, would db.fileSystemCollection.find({"uniqueID":"zzz4"}) give me the result below? Do I have to use indexes to do this? I've been trying, but the query returns empty every time.
intended result ---> {"name":"subfolder2", "subcontents": [...etc...], "uniqueID":"zzz4"}
[EDIT]
Based on the responses below, I will consider an XML database instead of MongoDB. The JSON structure can't be rearranged to work with MongoDB (too much data).
The short answer is no, as stated by Chris.
Your embedded representation of a tree is really good for intuitive understanding (and implementation as well). But if you want to allow efficient searches on your tree using indexes in MongoDB, you might consider other ways of storing it. Several approaches are listed at http://docs.mongodb.org/manual/tutorial/model-tree-structures/
Please keep in mind that every representation has its own pros and cons depending on your access patterns.
Since a filesystem-like structure typically needs to find all the subcontents of a given folder, you could use the child references pattern for this (a query sketch follows the example documents below):
{
    "name": "folder1",
    "uniqueID": "zzz0",
    "subcontents": [ "zzz1",
                     "zzz2",
                     "zzz3",
                     "zzz4"
    ]
}
{
    "name": "subfolder1",
    "uniqueID": "zzz1"
}
...
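A rough PyMongo sketch of looking up a folder and then its children with this layout (the collection name and uniqueID field come from the question; the database name is a placeholder), assuming an index on uniqueID:

from pymongo import MongoClient

db = MongoClient().mydb   # hypothetical database name

folder = db.fileSystemCollection.find_one({"uniqueID": "zzz0"})
children = list(db.fileSystemCollection.find(
    {"uniqueID": {"$in": folder["subcontents"]}}))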
No; searching for {uniqueID: "zzz4"} will only get you documents whose top-level uniqueID matches.
What you probably want is to maintain an array on the document which lists all the unique IDs in that tree. So your document would be:
{
    "name": "folder1",
    "uniqueID": "zzz0",
    "idList": ["zzz0", "zzz1", "zzz2", "zzz3", "zzz4"],
    "subcontents": [ {"name": "subfolder1", "uniqueID": "zzz1"},
                     {"name": "subfile1", "uniqueID": "zzz2"},
                     {"name": "subfile2", "uniqueID": "zzz3"},
                     {"name": "subfolder2", "subcontents": [...etc...], "uniqueID": "zzz4"}
    ]
}
Then you can index that:
db.fileSystemCollection.ensureIndex({"idList": 1})
Then you can find on it:
db.fileSystemCollection.find({"idList": "zzz4})
That'll return you those documents.
As an aside, if you're trying to store files in Mongo, have you looked at GridFS?
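If the end goal is storing actual file contents rather than just modelling the tree, a minimal GridFS sketch in PyMongo might help as a starting point (the database and file names are placeholders):

import gridfs
from pymongo import MongoClient

db = MongoClient().filestore        # hypothetical database
fs = gridfs.GridFS(db)

file_id = fs.put(b"file contents", filename="subfile1")   # store a file
data = fs.get(file_id).read()                              # read it back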
I'm trying to 'compare' all documents between 2 collections, returning true if and only if all documents in the 2 collections are exactly equal.
I've been searching for a method on the collection, but couldn't find one that can do this.
I experimented with something like this in the mongo shell, but it's not working as I expected:
db.test1 == db.test2
or
db.test1.to_json() == db.test2.to_json()
Please share your thoughts! Thank you.
You can try using MongoDB's eval combined with your custom equals function, something like this.
Your methods don't work because in the first case you are comparing object references, which are not the same. In the second case, there is no guarantee that to_json will generate the same string even for objects that are the same.
Instead, try something like this:
var compareCollections = function(){
    db.test1.find().forEach(function(obj1){
        db.test2.find({/* if you know some properties, you can put them here... if not, leave this empty */}).forEach(function(obj2){
            var equals = function(o1, o2){
                // here goes some compare code... modified from the SO link you have in the answer.
            };
            if(equals(obj1, obj2)){
                // Do what you want to do
            }
        });
    });
};
db.eval(compareCollections);
With db.eval you ensure that code will be executed on the database server side, without fetching collections to the client.
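If the collections are small enough to pull to the client, a rough client-side alternative in PyMongo (matching documents by _id, which I'm assuming both collections share; the database name is a placeholder) looks like this:

from pymongo import MongoClient

db = MongoClient().mydb   # hypothetical database

def collections_equal(coll1, coll2):
    # Load both collections keyed by _id and compare the resulting dicts.
    docs1 = {doc["_id"]: doc for doc in coll1.find()}
    docs2 = {doc["_id"]: doc for doc in coll2.find()}
    return docs1 == docs2

print(collections_equal(db.test1, db.test2))

Note that this compares field values but ignores key order within documents, which may or may not matter for your definition of "exactly equal".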