How to set array length after updating it via $addToSet in MongoDB?

Document structure looks like this,
{
    blacklists: [],       // elements should be unique
    blacklistsLength: 0   // length of blacklists
}
Adding a set of values to blacklists is easy:
db.posts.update({_id:...}, {$addToSet: {blacklists: {$each: ['peter', 'bob', 'steven']}}});
But how can I update blacklistsLength at the same time to reflect the change?

This is not possible in one step: with $addToSet you cannot know how many elements were actually added, so there is nothing reliable to $inc by. Either you:
Update the length separately using a subsequent findAndModify command, or
Do it per name and rewrite the query using a negation in your criteria and $push rather than $addToSet (not necessarily needed, but a lot faster with large blacklists, since $addToSet is always O(n) regardless of indexes):
db.posts.update({_id:..., blacklists:{$ne:'peter'}}, {$push: {blacklists: 'peter'}, $inc: {blacklistsLength: 1}});
The latter is perfectly safe, since the list and the length are adjusted atomically, though it obviously has slightly degraded performance per call. Since it also has the benefit of better overall performance due to the $push versus $addToSet issue on large arrays (blacklists tend to become huge, and remember that the $push version of the update uses an index on blacklists in the update criteria, while $addToSet will NOT use an index during its set scan), it is generally the best solution.
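For several names, the same pattern can be repeated one name at a time; a rough sketch in shell syntax (postId is a placeholder for your document's _id):
['peter', 'bob', 'steven'].forEach(function (name) {
    db.posts.update(
        { _id: postId, blacklists: { $ne: name } },                      // only match when the name is absent
        { $push: { blacklists: name }, $inc: { blacklistsLength: 1 } }   // append and bump the counter atomically
    );
});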

Would the following not work?
db.posts.update({_id:...}, {
    $addToSet: {blacklists: {$each: ['peter', 'bob', 'steven']}},
    $set: {blacklistsLength: ['peter', 'bob', 'steven'].length}
});

I had a similar problem; please see the discussion here: google groups mongo
As you can see from that discussion, a bug was opened:
Mongo Jira

As you upsert items into the database, simply query the item to see if it's in your embedded array. That way, you're avoiding pushing duplicate items, and only incrementing the counter as you add new items.
q = {'blacklists': {'$nin': ['blacklist_to_insert']}}
u = {
    '$push': {'blacklists': 'blacklist_to_insert'},
    '$inc': {'total_blacklists': 1}
}
o = {'upsert': true}
db.posts.update(q, u, o)

Related

MongoDB pagination using cursors without sorting

When looking for pagination techniques on the internet, one usually finds two ways:
Offset-based pagination:
Collection.find(
{ where_this: "equals that" },
{ skip: 15, limit: 5 }
)
Cursor-based pagination:
Collection.find(
{ where_this: "equals that", _id: { $gt: cursor }},
{ sort: { _id: 1 }}
)
But is there a way to have cursor-based pagination without sorting the collection according to that cursor? Like telling Mongo, "Alright, I want the 5 next items after that _id, no matter in which order the _ids are, just give me 5 items after you see that _id". Something along those lines:
Collection.find(
{ where_this: "equals that", _id: { $must_come_after: cursor }},
{ sort: { other_field: 1 }}
)
It is not always possible to use the field you're sorting with as the cursor. First of all, because these fields can be of different types and you might allow your app users to sort, for example, tables as they please. With a strongly-typed API framework like GraphQL, this would be a mess to handle. Secondly, you could have two or more equal values for that field following each other in the sorted collection. If your pages split in the middle, asking for the next page will either give you duplicates or ignore items.
Is there a way to do that? How is it usually done to allow custom sorting fields without offset-based pagination? Thanks.
When we talk about "paginating a result set", the implicit assumption is that the result set stays the same throughout the process. In order for the result set to stay the same, it must normally be sorted. Without specifying an order, the database is free to return the documents in any order, and this order may change from one retrieval to the next. Reordering of documents that a user is paginating through creates a poor user experience.
you could have two or more equal values for that field following each other in the sorted collection. If your pages split in the middle, asking for the next page will either give you duplicates or ignore items.
The database can return the documents which compare equal in any order, and this order can change between adjacent queries. This is why when sorting on a field that has a low cardinality, it is a good idea to add another field with a high cardinality to the sort expression to ensure the documents are returned in a stable order.
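One common way to get a stable cursor on an arbitrary sort field, sketched here in shell syntax rather than taken from the answer itself (lastValue and lastId stand for the sort-field value and _id of the last document on the previous page), is to add _id as the high-cardinality tiebreaker and build the cursor from both fields:
db.collection.find({
    where_this: "equals that",
    $or: [
        { other_field: { $gt: lastValue } },                 // strictly after the previous page
        { other_field: lastValue, _id: { $gt: lastId } }     // ties broken by _id
    ]
}).sort({ other_field: 1, _id: 1 }).limit(5)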
But is there a way to have cursor-based pagination without sorting the collection according to that cursor?
You can encode offset in a cursor identifier and use skip/limit.
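As a rough sketch of that idea (all names here are illustrative), the "cursor" handed to the client can simply be an encoded offset:
function nextPage(cursorToken) {
    var offset = cursorToken ? parseInt(cursorToken, 10) : 0;   // decode the token back into an offset
    var docs = db.collection.find({ where_this: "equals that" })
                            .sort({ other_field: 1 })
                            .skip(offset)
                            .limit(5)
                            .toArray();
    return { docs: docs, nextCursor: String(offset + docs.length) };   // token for the next request
}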

Iterating over distinct items in one field in MongoDB

I have a very large collection (~7M items) in MongoDB, primarily consisting of documents with three fields.
I'd like to be able to iterate over all the unique values for one of the fields, in an expedient manner.
Currently, I'm querying for just that field, and then processing the returned results by iterating on the cursor for uniqueness. This works, but it's rather slow, and I suspect there must be a better way.
I know mongo has the db.collection.distinct() function, but this is limited by the maximum BSON size (16 MB), which my dataset exceeds.
Is there any way to iterate over something similar to the db.collection.distinct(), but using a cursor or some other method, so the record-size limit isn't as much of an issue?
I think maybe something like the map/reduce functionality would possibly be suited for this kind of thing, but I don't really understand the map-reduce paradigm in the first place, so I have no idea what I'm doing. The project I'm working on is partially to learn about working with different database tools, so I'm rather inexperienced.
I'm using PyMongo if it's relevant (I don't think it is). This should be mostly dependent on MongoDB alone.
Example:
For this dataset:
{"basePath" : "foo", "internalPath" : "Neque", "itemhash": "49f4c6804be2523e2a5e74b1ffbf7e05"}
{"basePath" : "foo", "internalPath" : "porro", "itemhash": "ffc8fd5ef8a4515a0b743d5f52b444bf"}
{"basePath" : "bar", "internalPath" : "quisquam", "itemhash": "cf34a8047defea9a51b4a75e9c28f9e7"}
{"basePath" : "baz", "internalPath" : "est", "itemhash": "c07bc6f51234205efcdeedb7153fdb04"}
{"basePath" : "foo", "internalPath" : "qui", "itemhash": "5aa8cfe2f0fe08ee8b796e70662bfb42"}
What I'd like to do is iterate over just the basePath field. For the above dataset, this means I'd iterate over foo, bar, and baz just once each.
I'm not sure if it's relevant, but the DB I have is structured so that while each field is not unique, the aggregate of all three is unique (this is enforced with an index).
The query and filter operation I'm currently using (note: I'm restricting the query to a subset of the items to reduce processing time):
self.log.info("Running path query")
itemCursor = self.dbInt.coll.find({"basePath": pathRE}, fields={'_id': False, 'internalPath': False, 'itemhash': False}, exhaust=True)
self.log.info("Query complete. Processing")
self.log.info("Query returned %d items", itemCursor.count())
self.log.info("Filtering returned items to require uniqueness.")
items = set()
for item in itemCursor:
    # print item
    items.add(item["basePath"])
self.log.info("total unique items = %s", len(items))
Running the same query with self.dbInt.coll.distinct("basePath") results in OperationFailure: command SON([('distinct', u'deduper_collection'), ('key', 'basePath')]) failed: exception: distinct too big, 16mb cap
Ok, here is the solution I wound up using. I'd add it as an answer, but I don't want to detract from the actual answers that got me here.
reStr = "^%s" % fqPathBase
pathRE = re.compile(reStr)
self.log.info("Running path query")
pipeline = [
    { "$match" :
        {
            "basePath" : pathRE
        }
    },
    # Group the keys
    {"$group":
        {
            "_id": "$basePath"
        }
    },
    # Output to a collection "tmp_unique_coll"
    {"$out": "tmp_unique_coll"}
]
itemCursor = self.dbInt.coll.aggregate(pipeline, allowDiskUse=True)
itemCursor = self.dbInt.db.tmp_unique_coll.find(exhaust=True)
self.log.info("Query complete. Processing")
self.log.info("Query returned %d items", itemCursor.count())
self.log.info("Filtering returned items to require uniqueness.")
items = set()
retItems = 0
for item in itemCursor:
    retItems += 1
    items.add(item["_id"])
self.log.info("Received items = %d", retItems)
self.log.info("total unique items = %s", len(items))
General performance compared to my previous solution is about 2X in terms of wall-clock time. On a query that returns 834273 items, with 11467 uniques:
Original method (retrieve, stuff into a Python set to enforce uniqueness):
real 0m22.538s
user 0m17.136s
sys 0m0.324s
Aggregate pipeline method :
real 0m9.881s
user 0m0.548s
sys 0m0.096s
So while the overall execution time is only ~2X better, the aggregation pipeline is massively more performant in terms of actual CPU time.
Update:
I revisited this project recently, and rewrote the DB layer to use a SQL database, and everything was much easier. A complex processing pipeline is now a simple SELECT DISTINCT(colName) WHERE xxx operation.
Realistically, MongoDB and NoSQL databases in general are very much the wrong database type for what I'm trying to do here.
From the discussion points so far I'm going to take a stab at this. And I'm also noting that as of writing, the 2.6 release for MongoDB should be just around the corner, good weather permitting, so I am going to make some references there.
Oh and the FYI that didn't come up in chat, .distinct() is an entirely different animal that pre-dates the methods used in the responses here, and as such is subject to many limitations.
And this solution is finally a solution for 2.6 and up, or any current development release over 2.5.3.
The alternative for now is to use mapReduce, because the only restriction there is the output size.
Without going into the inner workings of distinct, I'm going to go on the presumption that aggregate is doing this more efficiently [and even more so in the upcoming release].
db.collection.aggregate([
// Group the key and increment the count per match
{$group: { _id: "$basePath", count: {$sum: 1} }},
// Hey you can even sort it without breaking things
{$sort: { count: 1 }},
// Output to a collection "output"
{$out: "output"}
])
So we are using the $out pipeline stage to get the final result, which is over 16MB, into a collection of its own. There you can do what you want with it.
As 2.6 is "just around the corner" there is one more tweak that can be added.
Use allowDiskUse from the runCommand form, where each stage can use disk and not be subject to memory restrictions.
The main point here is that this is nearly live for production. And the performance will be better than the same operation in mapReduce. So go ahead and play. Install 2.5.5 for your own use now.
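For reference, a minimal sketch of that runCommand form, assuming MongoDB 2.6 and the same pipeline as above:
db.runCommand({
    aggregate: "collection",                                    // source collection name
    pipeline: [
        { $group: { _id: "$basePath", count: { $sum: 1 } } },
        { $out: "output" }
    ],
    allowDiskUse: true                                          // let each stage spill to disk
})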
A mapReduce, in the current version of Mongo, would avoid the problem of the results exceeding 16MB.
map = function() {
    if (this['basePath']) {
        emit(this['basePath'], 1);
    }
    // if basePath always exists you can just call the emit:
    // emit(this.basePath, 1);
};
reduce = function(key, values) {
    return Array.sum(values);
};
For each document the basePath is emitted with a single value representing the count of that value. The reduce simply creates the sum of all the values. The resulting collection would have all unique values for basePath along with the total number of occurrences.
And, as you'll need to store the results to prevent an error, use the out option, which specifies a destination collection:
db.yourCollectionName.mapReduce(
map,
reduce,
{ out: "distinctMR" }
)
Neil Lunn's answer could be simplified:
field = 'basePath' # Field I want
db.collection.aggregate( [{'$project': {field: 1, '_id': 0}}])
$project filters fields for you. In particular, '_id': 0 filters out the _id field.
Result still too large? Batch it with $limit and $skip:
field = 'basePath' # Field I want
db.collection.aggregate( [{'$project': {field: 1, '_id': 0}}, {'$limit': X}, {'$skip': Y}])
I think the most scalable solution is to perform a query for each unique value. The queries must be executed one after the other, and each query will give you the "next" unique value based on the previous query result. The idea is that the query will return you one single document, that will contain the unique value that you are looking for. If you use the proper projection, mongo will just use the index loaded into memory without having to read from disk.
You can define this strategy using the $gt operator in Mongo, but you must take into account values like null or empty strings, and potentially discard them using the $ne or $nin operator. You can also extend this strategy using multiple keys, using operators like $gte for one key and $gt for the other.
This strategy should give you the distinct values of a string field in alphabetical order, or distinct numerical values sorted ascendingly.
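A rough sketch of that strategy in the shell, assuming string values and an index on { basePath: 1 } (collection and field names are illustrative):
var last = "";                                     // starts before every string value
while (true) {
    var cur = db.collection.find(
        { basePath: { $gt: last } },               // the next value strictly greater than the last one
        { _id: 0, basePath: 1 }                    // projection keeps the query covered by the index
    ).sort({ basePath: 1 }).limit(1);
    if (!cur.hasNext()) break;                     // no distinct values left
    last = cur.next().basePath;
    print(last);                                   // each distinct value, in ascending order
}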

How to do efficent query to replace $exists in MongoDB

I have a MongoDB collection with various data in it (millions of documents).
The documents have a structure like {k: {a:1, b:2, c:{}, ...}} and I don't know exactly what is in k.
Now I want to count the documents in the collection where k is not empty, using {k:{$exists:true}}, but that turns out to be very slow...
Then I added an index on k and tried querying with {k:{$gt:{}}}, but that does not return the correct results.
So, how can I do this count on the collection?
Note that I don't know the data structure of k.
If you are using a version before 2.0, $exists is not able to use an index.
See this answer: https://stackoverflow.com/a/7503114/131809
So, try upgrading your version of MongoDB
From the docs:
Before v2.0, $exists is not able to use an index. Indexes on other fields are still used.
$exists is not very efficient even with an index, and esp. with {$exists:true} since it will effectively have to scan all indexed values.
The second part of that is perhaps the important bit.
It sounds like a sparse index may be the key here...
db.collection.count({k:{$ne:null}})
By the way, use a sparse index on k:
db.collection.ensureIndex({k:1}, {sparse: true});
Try using $ne : null
So, as per your code example:
{k:{$ne : null}}

Time Complexity of $addToset vs $push when element does not exist in the Array

Given: the connection is Safe=True, so the update's return will contain update information.
Say I have a documents that look like:
[{'a': [1]}, {'a': [2]}, {'a': [1,2]}]
And I issue:
coll.update({}, {'$addToSet': {'a':1}}, multi=True)
The result would be:
{u'connectionId': 28,
u'err': None,
u'n': 3,
u'ok': 1.0,
u'updatedExisting': True
}
Even when some documents already have that value. To avoid this I could issue the command:
coll.update({'a': {'$ne': 1}}, {'$push': {'a':1}}, multi=True)
What's the time complexity comparison for $addToSet vs. $push with a $ne check?
Looks like $addToSet is doing the same thing as your command, a $push with a $ne check. Both would be O(N).
https://github.com/mongodb/mongo/blob/master/src/mongo/db/ops/update_internal.cpp
If speed is really important, then why not use a hash?
Instead of:
{'$addToSet': {'a': 1}}
{'$addToSet': {'a': 10}}
use:
{$set: {'a.1': 1}}
{$set: {'a.10': 1}}
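As a hedged illustration of the snippets above (docId and the collection name are placeholders), the array becomes an embedded document whose keys act as set members:
db.coll.update({ _id: docId }, { $set: { 'a.1': 1, 'a.10': 1 } });   // membership is just a field write
// the document then looks like { a: { "1": 1, "10": 1 } }, and membership can be
// checked with db.coll.find({ _id: docId, 'a.10': { $exists: true } })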
Edit
OK, since I read your question wrong all along, it turns out that you are actually looking at two different queries and judging the time complexity between them.
The first query being:
coll.update({}, {'$addToSet': {'a':1}}, multi=True)
And the second being:
coll.update({'a': {'$ne': 1}}, {'$push': {'a':1}}, multi=True)
The first problem that springs to mind here: no indexes. $addToSet, being an update modifier, does not (I believe) use an index, so you are doing a full table scan to accomplish what you need.
In reality you are looking for all documents that do not have 1 in a already and looking to $push the value 1 to that a array.
So two points go to the second query even before we get into time complexity here, because the first query:
Does not use indexes
Would be a full table scan
Would then do a full array scan (with no index) to $addToSet
So I have pretty much made up my mind here that the second query is what you're looking for, before any of the Big O notation stuff.
There is a problem to using big O notation to explain the time complexity of each query here:
I am unsure of what perspective you want, whether it is per document or for the whole collection.
I am unsure of indexes as such. Using indexes will actually create a log algorithm on a, however not using indexes does not.
However the first query would look something like: O(n) per document since:
The $addToSet would need to iterate over each element
The $addToSet would then need to do an O(1) op to insert the set if it does not exist. I should note I am unsure whether the O(1) is cancelled out or not (light reading suggests my version), I have cancelled it out here.
Per collection, without the index, it would be O(2n^2), since the complexity of iterating a will exponentially increase with every new document.
The second query, without indexes, would look something like O(2n^2) (O(n) per document), I believe, since $ne would have the same problems as $addToSet without indexes. However, with indexes, I believe this would actually be O(log n log n) (O(log n) per document), since it would first find all documents with a in them and then all documents without 1 in their set, based upon the B-tree.
So based upon time complexity and the notes at the beginning I would say query 2 is better.
If I am honest I am not used to explaining in "Big O" Notation so this is experimental.
Hope it helps,
Adding my observation on the difference between $addToSet and $push from a bulk update of 100k documents.
When you are doing a bulk update, $addToSet will be executed separately.
For example,
bulkInsert.find({x:y}).upsert().update({"$set":{..},"$push":{ "a":"b" } , "$setOnInsert": {} })
will first insert and set the document, and then it executes the $addToSet query.
I saw a clear difference of 10k between
db.collection_name.count() #gives around 40k
db.collection_name.count({"a":{$in:["b"]}}) # it gives only around 30k
But when I replaced $addToSet with $push, both count queries returned the same value.
Note: when you're not concerned about duplicate entries in the array, you can go with $push.
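For context, a sketch of the kind of bulk upsert described above, using the shell Bulk API (collection and field names are illustrative):
var bulk = db.collection_name.initializeUnorderedBulkOp();
bulk.find({ x: "y" }).upsert().updateOne({
    $set: { updatedAt: new Date() },            // applied on every matching document
    $push: { a: "b" },                          // or $addToSet: { a: "b" } to keep the array unique
    $setOnInsert: { createdAt: new Date() }     // only applied when the upsert inserts
});
bulk.execute();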

"Field name duplication not allowed with modifiers" on update

I get a "Field name duplication not allowed with modifiers" error while trying to update a field(s) in Mongo. An example:
> db.test.insert({test: "test1", array: [0]});
> var testFetch = db.test.findOne({test: "test1"});
> db.test.update(testFetch,
{$push: {array: 1}, //push element to end of key "array"
$pop: {array: -1} //pop element from the start of key "array"
});
Field name duplication not allowed with modifiers
Is there no way to perform this atomic operation? I don't want to be doing two separate updates for this.
There's an outstanding issue for this on Mongo's ticket system: http://jira.mongodb.org/browse/SERVER-1050
Looks like it's scheduled for this year. Your scenario is definitely a sensible scenario, but it's also tied to a bunch of edge cases. What if you $push and $pop on an empty array? What's expected? What do you want if you $push and $pull?
I don't want to be doing two separate updates for this.
I know that doing this really has "code smell", but is it a complete blocker for using this solution? Is the "double-update" going to completely destroy server performance?
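For completeness, the two-update workaround discussed above would look something like this; each statement is atomic on its own, but the pair is not:
db.test.update({ test: "test1" }, { $push: { array: 1 } });    // append 1 to the end of "array"
db.test.update({ test: "test1" }, { $pop: { array: -1 } });    // remove the first element of "array"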