FindAndUpdate first 5 documents - mongodb

I am looking for a way to FindAndModify no more than 5 documents in MongoDB.
The collection acts as a queue that is processed by multiple workers, so I want to do this in a single query.
Since I cannot control the number of updates through the UpdateOptions parameter, is it possible to limit the number of documents matched by the filterDefinition?

Problem 1: findAndModify() can only update a single document at a time, as per the documentation. This is an inherent limit in MongoDB's implementation.
Problem 2: There is no way to update a specific number of arbitrary documents with a simple update() query of any kind. You can update one or all depending on the boolean value of your multi option, but that's it.
If you want to update up to 5 documents at a time, you're going to have to retrieve these documents first and then update them, or update them individually in a forEach() call. Either way, you'll be using something like:
db.collection.update(
    { _id: { $in: [ doc1._id, doc2._id, ... ] } },
    { ... },
    { multi: true }
);
Or you'll be using something like:
db.collection.find({ ... }).limit(5).forEach(function(doc) {
    // do something to doc
    db.collection.update({ _id: doc._id }, doc);
});
Whichever approach you choose to take, it's going to be a workaround. Again, this is an inherent limitation.
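For concreteness, here is a rough shell sketch of that first approach applied to a worker queue (the status field and its values are placeholders for whatever marks a document as unprocessed in your collection):
var ids = db.collection.find({ status: "pending" }, { _id: 1 })
                       .limit(5)
                       .map(function (doc) { return doc._id; });

var res = db.collection.update(
    { _id: { $in: ids }, status: "pending" },   // repeat the filter so docs claimed in the meantime are skipped
    { $set: { status: "processing" } },
    { multi: true }
);
// res.nModified may be less than 5 if another worker claimed some of the documents first.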

Related

Mongo findAndUpdateMany atomically

Let's say there are 10000 documents in a collection, and I have 3 app nodes doing something with those documents. I want each document to be processed only once. The way I've currently done it is that the app has a loop which queries the collection with findOneAndUpdate, finding a document where claimed=false and at the same time updating it to claimed=true. It works, but querying documents one by one is slow.
What I'd like to do is "find up to 100 documents where claimed=false and at the same time update them to claimed=true". This needs to be atomic to avoid race conditions where multiple app nodes claim the same document, but I can't find anything like findManyAndUpdate() in Mongo's documentation. In the SQL world this is basically SELECT ... FOR UPDATE SKIP LOCKED. Is there something like this? Maybe I can utilise Mongo's transactions somehow?
Assuming "find up to" means a soft limit, you can run two queries:
db.collection.find({claimed:false}, {_id:1}).limit(100)
to get all _ids into an array ids, then
db.collection.updateMany({claimed: false, _id: {$in: ids}}, {$set: {claimed: true}})
It will update 0 to 100 documents depending on concurrent updates.
UPDATE
I guess I missed the point that you actually need to retrieve the documents too, not just update them.
There is no option but to update them individually. Select 100:
db.collection.find({claimed:false}).limit(100)
Then iterate for each _id:
db.collection.updateOne({_id: id, claimed:false}, {$set: {claimed:true}})
The result of each update contains modifiedCount with a value of 1 or 0. Discard the documents that were not modified; they were claimed by a concurrent update.
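A minimal shell sketch of that loop, using the claimed flag from the question (updateOne and its modifiedCount field assume a reasonably recent shell/driver):
var candidates = db.collection.find({ claimed: false }).limit(100).toArray();
var mine = [];
candidates.forEach(function (doc) {
    var res = db.collection.updateOne(
        { _id: doc._id, claimed: false },
        { $set: { claimed: true } }
    );
    if (res.modifiedCount === 1) {
        mine.push(doc);   // we won the race for this document
    }
});
// mine now holds only the documents this node successfully claimed.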

MongoDB $in operator array max length

In Meteor I use MongoDB to store a collection of Objects. There are around 500k docs inserted.
I use Objets.find({ "_id": { "$in": objIds } }); where objIds is an array of _ids. This works fine when the array has a length of 1000, but when I try with 13145 _ids the app stops responding.
Obviously there's already an index on the _id field and also this search probably won't ever happen but I'm not sure if this is normal behavior. Is there a max length for the $in operator? Couldn't find one in the documentation.
Here's my publish in Meteor :
Meteor.publish('objetsByIds', function objetsByIdsPublication(objIds) {
    return Objets.find({ "_id": { "$in": objIds } });
})
I don't know much about Meteor, BUT MongoDB uses cursors when retrieving large amounts of data, and how Meteor handles this depends on the driver implementation.
You could take a look at cursors here, but another idea that comes to mind is to divide the query: since you know an array of 1000 works well, loop over the _ids in chunks of 1000, as in the sketch below.
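A rough sketch of that chunking (assuming a Meteor server context where .fetch() is available; chunkSize of 1000 is simply the size you already know works):
var chunkSize = 1000;
var results = [];
for (var i = 0; i < objIds.length; i += chunkSize) {
    var chunk = objIds.slice(i, i + chunkSize);
    results = results.concat(Objets.find({ _id: { $in: chunk } }).fetch());
}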

MongoDB: Update field with size of embedded array

I have a collection of documents with an array (set in this case): my_array
I'm adding things to this set periodically
collection.find({ "_id": target_id })
.upsert().update({ '$addToSet':{ 'my_array': new_entry }})
Many of the logical operations I perform on this DB are based on this sub-array's size. So I've created a field (indexed) called len_of_array. The index is quite critical to my use case.
If this were a true array and not a set, $inc would work beautifully in the same update.
However, since the sub-collection is a set, the length of the collection, my_array, may or may not have changed.
My current solution:
I call this periodically for each target_id, but it requires performing a find first in order to compute the correct new_length:
collection.find({ '_id': target_id})
.upsert().update({ '$set':{ 'len_of_array': new_length }})
My Question
Is there a way to set a field of a document to the indexed size of a sub-array in the same document in a single update?
You don't necessarily need the field len_of_array in order to query by the array's size, which would save you the periodic update too. Note, though, that the query operator $size only matches exact lengths; it does not accept comparison operators, so { "$size": { "$gt": 2 } } will not work.
Let's say you want to find all documents for which the length of my_array is greater than 2. On MongoDB 3.6+ you can use $expr with the aggregation $size operator:
db.coll.find({ "$expr": { "$gt": [ { "$size": "$my_array" }, 2 ] } })
On older versions, the usual trick is to test whether the element at index 2 exists:
db.coll.find({ "my_array.2": { "$exists": true } })
Keep in mind that neither query can use an index the way a query on the indexed len_of_array field would, so if that index is critical to your use case, maintaining the counter may still be worthwhile.
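If the indexed len_of_array counter is still needed, here is a hedged sketch (shell syntax) of doing both in one atomic operation, assuming MongoDB 4.2+ where updates can use an aggregation pipeline; target_id and new_entry are the names from the question:
db.collection.updateOne(
    { _id: target_id },
    [
        // $setUnion behaves like $addToSet here: new_entry is only added if it is not already present
        { $set: { my_array: { $setUnion: [ { $ifNull: [ "$my_array", [] ] }, [ new_entry ] ] } } },
        // the second stage sees the updated array, so $size yields the new length
        { $set: { len_of_array: { $size: "$my_array" } } }
    ],
    { upsert: true }
);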

Iterating over distinct items in one field in MongoDB

I have a very large collection (~7M items) in MongoDB, primarily consisting of documents with three fields.
I'd like to be able to iterate over all the unique values for one of the fields, in an expedient manner.
Currently, I'm querying for just that field, and then processing the returned results by iterating on the cursor for uniqueness. This works, but it's rather slow, and I suspect there must be a better way.
I know mongo has the db.collection.distinct() function, but this is limited by the maximum BSON size (16 MB), which my dataset exceeds.
Is there any way to iterate over something similar to the db.collection.distinct(), but using a cursor or some other method, so the record-size limit isn't as much of an issue?
I think maybe something like the map/reduce functionality would possibly be suited for this kind of thing, but I don't really understand the map-reduce paradigm in the first place, so I have no idea what I'm doing. The project I'm working on is partially to learn about working with different database tools, so I'm rather inexperienced.
I'm using PyMongo if it's relevant (I don't think it is). This should be mostly dependent on MongoDB alone.
Example:
For this dataset:
{"basePath" : "foo", "internalPath" : "Neque", "itemhash": "49f4c6804be2523e2a5e74b1ffbf7e05"}
{"basePath" : "foo", "internalPath" : "porro", "itemhash": "ffc8fd5ef8a4515a0b743d5f52b444bf"}
{"basePath" : "bar", "internalPath" : "quisquam", "itemhash": "cf34a8047defea9a51b4a75e9c28f9e7"}
{"basePath" : "baz", "internalPath" : "est", "itemhash": "c07bc6f51234205efcdeedb7153fdb04"}
{"basePath" : "foo", "internalPath" : "qui", "itemhash": "5aa8cfe2f0fe08ee8b796e70662bfb42"}
What I'd like to do is iterate over just the basePath field. For the above dataset, this means I'd iterate over foo, bar, and baz just once each.
I'm not sure if it's relevant, but the DB I have is structured so that while each field is not unique, the aggregate of all three is unique (this is enforced with an index).
The query and filter operation I'm currently using (note: I'm restricting the query to a subset of the items to reduce processing time):
self.log.info("Running path query")
itemCursor = self.dbInt.coll.find({"basePath": pathRE}, fields={'_id': False, 'internalPath': False, 'itemhash': False}, exhaust=True)
self.log.info("Query complete. Processing")
self.log.info("Query returned %d items", itemCursor.count())
self.log.info("Filtering returned items to require uniqueness.")
items = set()
for item in itemCursor:
    # print item
    items.add(item["basePath"])
self.log.info("total unique items = %s", len(items))
Running the same query with self.dbInt.coll.distinct("basePath") results in OperationFailure: command SON([('distinct', u'deduper_collection'), ('key', 'basePath')]) failed: exception: distinct too big, 16mb cap
Ok, here is the solution I wound up using. I'd add it as an answer, but I don't want to detract from the actual answers that got me here.
reStr = "^%s" % fqPathBase
pathRE = re.compile(reStr)
self.log.info("Running path query")
pipeline = [
    { "$match": { "basePath": pathRE } },
    # Group the keys
    { "$group": { "_id": "$basePath" } },
    # Output to a collection "tmp_unique_coll"
    { "$out": "tmp_unique_coll" }
]
itemCursor = self.dbInt.coll.aggregate(pipeline, allowDiskUse=True)
itemCursor = self.dbInt.db.tmp_unique_coll.find(exhaust=True)
self.log.info("Query complete. Processing")
self.log.info("Query returned %d items", itemCursor.count())
self.log.info("Filtering returned items to require uniqueness.")
items = set()
retItems = 0
for item in itemCursor:
    retItems += 1
    items.add(item["_id"])
self.log.info("Received items = %d", retItems)
self.log.info("total unique items = %s", len(items))
General performance compared to my previous solution is about 2X in terms of wall-clock time. On a query that returns 834273 items, with 11467 uniques:
Original method (retrieve, stuff into a Python set to enforce uniqueness):
real 0m22.538s
user 0m17.136s
sys 0m0.324s
Aggregation pipeline method:
real 0m9.881s
user 0m0.548s
sys 0m0.096s
So while the overall execution time is only ~2X better, the aggregation pipeline is massively more performant in terms of actual CPU time.
Update:
I revisited this project recently, and rewrote the DB layer to use a SQL database, and everything was much easier. A complex processing pipeline is now a simple SELECT DISTINCT(colName) WHERE xxx operation.
Realistically, MongoDB and NoSQL databases in general are very much the wrong database type for what I'm trying to do here.
From the discussion points so far I'm going to take a stab at this. And I'm also noting that, as of writing, the 2.6 release of MongoDB should be just around the corner, good weather permitting, so I am going to make some references there.
Oh, and the FYI that didn't come up in chat: .distinct() is an entirely different animal that pre-dates the methods used in the responses here, and as such is subject to many limitations.
And this, finally, is a solution for 2.6 and up, or any current dev release over 2.5.3.
The alternative for now is to use mapReduce, because the only restriction there is the output size.
Without going into the inner workings of distinct, I'm going to go on the presumption that aggregate does this more efficiently [and even more so in the upcoming release]:
db.collection.aggregate([
// Group the key and increment the count per match
{$group: { _id: "$basePath", count: {$sum: 1} }},
// Hey you can even sort it without breaking things
{$sort: { count: 1 }},
// Output to a collection "output"
{$out: "output"}
])
So we are using the $out pipeline stage to put the final result, which is over 16MB, into a collection of its own. There you can do what you want with it.
As 2.6 is "just around the corner" there is one more tweak that can be added.
Use allowDiskUse from the runCommand form, where each stage can use disk and not be subject to memory restrictions.
The main point here is that this is nearly live for production, and the performance will be better than the same operation in mapReduce. So go ahead and play; install 2.5.5 for your own use now.
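For reference, a sketch of that runCommand form with allowDiskUse, reusing the pipeline from above (on modern servers the command also requires a cursor document, e.g. cursor: {}):
db.runCommand({
    aggregate: "collection",
    pipeline: [
        { $group: { _id: "$basePath", count: { $sum: 1 } } },
        { $out: "output" }
    ],
    allowDiskUse: true
})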
A mapReduce in the current version of Mongo would avoid the problem of the results exceeding 16MB.
map = function() {
    if (this['basePath']) {
        emit(this['basePath'], 1);
    }
    // if basePath always exists you can just call the emit unconditionally:
    // emit(this.basePath, 1);
};

reduce = function(key, values) {
    return Array.sum(values);
};
For each document the basePath is emitted with a single value representing the count of that value. The reduce simply creates the sum of all the values. The resulting collection would have all unique values for basePath along with the total number of occurrences.
And, as you'll need to store the results to prevent an error, use the out option, which specifies a destination collection:
db.yourCollectionName.mapReduce(
map,
reduce,
{ out: "distinctMR" }
)
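Each document in distinctMR then has the distinct value as its _id and the count as its value, so iterating the unique values is just a find on that collection. For the example dataset above, that would look roughly like:
db.distinctMR.find()
// { "_id" : "bar", "value" : 1 }
// { "_id" : "baz", "value" : 1 }
// { "_id" : "foo", "value" : 3 }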
@Neil Lunn's answer could be simplified:
field = 'basePath'  # Field I want
db.collection.aggregate([{'$project': {field: 1, '_id': 0}}])
$project filters fields for you. In particular, '_id': 0 filters out the _id field. Note that $project on its own does not deduplicate, so keep the $group stage if each value should appear only once.
Result still too large? Batch it with $skip and $limit:
field = 'basePath'  # Field I want
db.collection.aggregate([{'$project': {field: 1, '_id': 0}}, {'$skip': Y}, {'$limit': X}])
I think the most scalable solution is to perform a query for each unique value. The queries must be executed one after the other, and each query gives you the "next" unique value based on the result of the previous one. The idea is that each query returns a single document containing the unique value that you are looking for. If you use the proper projection, mongo will just use the index loaded into memory without having to read from disk.
You can define this strategy using the $gt operator in mongo, but you must take into account values like null or empty strings, and potentially discard them using the $ne or $nin operators. You can also extend this strategy using multiple keys, with operators like $gte for one key and $gt for the other.
This strategy should give you the distinct values of a string field in alphabetical order, or distinct numerical values sorted ascendingly.
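A rough shell sketch of that walk, assuming an index on basePath and using the field names from the example dataset (the covered projection keeps each query on the index alone):
var last = null;
while (true) {
    var query = (last === null) ? {} : { basePath: { $gt: last } };
    var cur = db.coll.find(query, { _id: 0, basePath: 1 })
                     .sort({ basePath: 1 })
                     .limit(1);
    if (!cur.hasNext()) break;
    last = cur.next().basePath;
    print(last);   // each distinct basePath, in ascending order
}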

How to set array length after updating it via $addToSet in mongodb?

Document structure looks like this,
{
    blacklists: [],       // elements should be unique
    blacklistsLength: 0   // length of blacklists
}
Adding sets of value to blacklists is easy.
db.posts.update({_id:...}, {$addToSet: {blacklists: {$each: ['peter', 'bob', 'steven']}}});
But How can I update blacklistLength at the same time to reflect the changes?
This is not possible. Either you update the length separately, using a subsequent findAndModify command, or you do it per name and rewrite the query, using a negation in your criteria and $push rather than $addToSet (not strictly necessary, but a lot faster with large blacklists, since $addToSet is always O(n) regardless of indexes):
db.posts.update({_id:..., blacklists: {$ne: 'peter'}}, {$push: {blacklists: 'peter'}, $inc: {blacklistsLength: 1}});
The latter is perfectly safe, since the list and the length are adjusted atomically, though it obviously requires one update per name. Since it also has the benefit of better overall performance due to the $push versus $addToSet issue on large arrays (blacklists tend to become huge, and remember that the $push version of the update uses an index on blacklists in the update criteria, while $addToSet will NOT use an index during its set scan), it is generally the best solution.
Would the following not work?
db.posts.update({_id:...}, {
    $addToSet: {blacklists: {$each: ['peter', 'bob', 'steven']}},
    $set: {blacklistsLength: ['peter', 'bob', 'steven'].length}
});
I had a similar problem; please see the discussion here: google groups mongo
As you can notice from that discussion, a bug was opened:
Mongo Jira
As you upsert items into the database, simply query the item to see if it's in your embedded array. That way, you're avoiding pushing duplicate items, and only incrementing the counter as you add new items.
q = {'blacklists': {'$nin': ['blacklist_to_insert'] }}
u = {
    '$push': {'blacklists': 'blacklist_to_insert'},
    '$inc': {'total_blacklists': 1}
}
o = { 'upsert' : true }
db.posts.update(q,u,o)