mongodb: field sorted by number of occurrences

I have a collection with each document representing a virtual auction. I want to find the most common item ID for a given time period. In SQL, I'd SELECT item, COUNT(*) as count with GROUP BY item and the usual sorting and limits. Is there a mongodb equivalent to this?

MongoDB has several options here.
In version 2.1.0+ you can use the new Aggregation Framework. There's a SQL-to-aggregation conversion chart in the MongoDB documentation.
In older versions you can use Map/Reduce.
For simple aggregations you can use the special aggregation operators.
Each of these options will have a different syntax and a different speed.
In any of these cases, you will likely find these options relatively slow. Map / Reduce jobs are intended to be run "off-line", generally as a "cron job" or "scheduled task". Note that if you plan to do this a lot, you will likely want to pre-aggregate this data.
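For reference, here is a rough sketch of how this could look in the aggregation framework once it is available to you (the collection name auctions and the time bounds are hypothetical placeholders; adjust them to your schema):
var startTime = ISODate("2012-01-01T00:00:00Z");   // placeholder period start
var endTime   = ISODate("2012-02-01T00:00:00Z");   // placeholder period end
db.auctions.aggregate([
    // Restrict to the time period of interest
    { $match: { time: { $gte: startTime, $lte: endTime } } },
    // Count how many times each item appears
    { $group: { _id: "$item", count: { $sum: 1 } } },
    // Most common items first
    { $sort: { count: -1 } },
    // Keep only the top item
    { $limit: 1 }
])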

Alternatively, you can use MapReduce and then sort the output collection, or you can use the group command, but you will need to do most of the sorting and limiting on the client side.
Example of the group command:
db.coll.group({
    key: { a: true, b: true },
    cond: { active: 1 },
    reduce: function(obj, prev) { prev.csum += obj.c; },
    initial: { csum: 0 }
});
It's also worth mentioning that the next stable version of MongoDB (2.2) will include a new aggregation framework, which will make these operations much easier.

Related

This question is regarding the $match and $sort optimization in MongoDB.

{
    "_id" : ObjectId("62c3aa311984f666ef75d1n7"),
    "eventCode" : "332",
    "time" : 1657008013000.0,
    "dat" : "61558575921c023a93f81362"
}
This is what a document looks like. Now I need to calculate some values, for which I am using an aggregation pipeline, and I am using the $match and $sort stages first. What I am using is:
{
    $match: {
        dat: { $regex: "^" + eventStat.dat },
        time: {
            $gte: eventStat.time.from,
            $lte: eventStat.time.to,
        },
    },
},
{ $sort: { time: 1 } }
So I am using these two stages first in the pipeline.
Now the MongoDB documentation says that aggregation will always perform the $match first, before the $sort, but in some cases it performs the $sort first. I am not sure, but I think that happens when there is an index on the field used in the $sort that is not present in the $match, and MongoDB decides it is better to sort first.
Here I am using time in both the $match and the $sort, so I want to know: is there still any case where the $sort might happen before the $match?
If yes, I read that a dummy $project stage can force it to match first, but what exactly is a dummy $project stage?
Most questions about how the database is executing a query can be answered (or at least further reasoned about) by inspecting the explain plan(s) associated with the operation(s). Let's first address a few of your statements directly before turning to inspect explain plans ourselves.
Now the MongoDB documentation says that aggregation will always perform the $match first, before the $sort
Where does it say this?
In general, all databases are required to provide results that are semantically valid relative to the query that the client issued. This gets mentioned often when SQL is being discussed as it is a "declarative language". This means that users describe what data they want rather than how to retrieve that data.
MongoDB's aggregation framework is a bit less declarative than SQL. Or said another way, the aggregation framework is a little more descriptive in how to do things. This is because the ordering that the stages are defined in for a pipeline help define the semantics of the results. If, for example, one were to $project out a field first and then attempt to use that (no longer present) field in a subsequent stage (such as a $match or $group), MongoDB would not make any adjustments to how it processes the pipeline to make that field available to that later stage. This is because the user specifically requested the removal of that stage earlier in the pipeline which is part of the semantics for the overall pipeline.
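As a small illustrative sketch of that point (the collection and field names here are hypothetical), the following pipeline would match no documents, because the status field has already been removed by the time the $match stage sees each document:
db.orders.aggregate([
    // Exclude the status field from every document
    { $project: { status: 0 } },
    // This filter can never be satisfied: status is no longer present
    { $match: { status: "active" } }
])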
Based on this (and another factor that we will talk about next), I would be surprised to see any documentation suggesting that the database always performs a match stage before a sort stage.
but in some cases it performs the $sort first. I am not sure, but I think that happens when there is an index on the field used in the $sort that is not present in the $match, and MongoDB decides it is better to sort first.
Again returning to generalizations about all databases, one of their primary jobs is to return data to clients as efficiently as possible. So as long as their approach at executing the query does not logically change the results based on the semantics expressed by the client in the query, the database can gather the results in any manner that it thinks will be the most effective.
For aggregation specifically, this most commonly means that stages will either get reordered or combined altogether for execution. Some of the changes that the database will attempt to do are outlined on the Aggregation Pipeline Optimization page.
Logically, filtering data and then sorting it yields the same results as sorting the data and then filtering it. Indeed, one of the optimizations outlined on that page is reordering $match and $sort stages.
The important thing to keep in mind here is mentioned at the very top of that page. The database "attempts to reshape the pipeline for improved performance", but how effective these adjustments are depends on other factors. The biggest factor for many of these is the presence (or absence) of an associated index to support the (reordered) pipeline.
Here I am using time in both the $match and the $sort, so I want to know: is there still any case where the $sort might happen before the $match?
Unless you are explicitly forcing the database to use a particular plan (such as by hinting it), there is always a chance that it will choose to do something unexpected. Databases are quite good at picking optimal plans though, and are always improving with each new release, so ideally we'd leave the system to do its work and not try to do that work for the database (with hints or otherwise). In your particular situation, I believe we can design an approach that is highly optimized for both the $match and the $sort, setting it up for success.
If yes, I read that a dummy $project stage can force it to match first, but what exactly is a dummy $project stage?
It sounds like this is also asking about other ways in which we could manually influence plan selection. We are going to stay away from that as it is fragile, not something we should rely on long term, and unnecessary for our purposes anyway.
Inspecting Explain
So what happens if we have an index on { time: 1 } and we run the aggregation? Well, the explain output (on 6.0) shows us the following:
queryPlanner: {
    parsedQuery: {
        '$and': [
            { time: { '$lte': 100 } },
            { time: { '$gte': 0 } },
            { dat: { '$regex': '^ABC' } }
        ]
    },
    ...
    winningPlan: {
        stage: 'FETCH',
        filter: { dat: { '$regex': '^ABC' } },
        inputStage: {
            stage: 'IXSCAN',
            keyPattern: { time: 1 },
            indexBounds: { time: [ '[0, 100]' ] }
            ...
        }
    },
Notice that there is no $sort stage at all. What has happened is that the database realized that it could use the { time: 1 } index to do two things at the same time:
Filter the data according to the range predicates on the time field.
Walk the index in the requested sort order without having to manually do so.
So if we go back to the main original question of whether aggregation will perform the match or the sort first, we now see that a third option is for the database to do both activities at the same time!
At the very least, you should have an index on { time: 1 }.
Ideally you would instead have a compound index on the other field (dat) as well. There is a bit of a wrinkle here in that you are currently applying a regex operator against the field. If the filter were a direct equality match, the guidance would be easy (prepend dat: 1 as the first key in the compound index).
Without knowing more about your situation, it's unclear which of the two compound indexes the database could use more effectively to support this operation. If the regex filter on dat is highly selective, then { dat: 1, time: 1 } will probably be ideal. It will require a manual sort, but that can all be done after scanning the index before retrieving the full documents. If the regex filter on dat is not very selective, then { time: 1, dat: 1 } may be ideal. This would prevent the need to manually sort, but will result in some additional index key scanning.
In either case, examining explain output may be helpful in finding the approach that is best suited for your particular situation.
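For example (the collection name events is a placeholder, and the regex and time values are illustrative), you could create one of the candidate indexes and ask the database to explain the pipeline itself:
// One of the two candidate compound indexes discussed above
db.events.createIndex({ dat: 1, time: 1 })

// Inspect how the pipeline would be executed with that index in place
db.events.explain("executionStats").aggregate([
    { $match: {
        dat: { $regex: "^61558575" },
        time: { $gte: 1657000000000, $lte: 1657100000000 }
    } },
    { $sort: { time: 1 } }
])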

MongoDB - how to get fields fill-rates as quickly as possible?

We have a very big MongoDB collection of documents with some pre-defined fields that can either have a value or not.
We need to gather the fill-rates of those fields. We wrote a script that goes over all documents and counts the fill-rate for each field; the problem is it takes a long time to process all the documents.
Is there a way to use db.collection.aggregate or db.collection.mapReduce to run such a script server-side?
Should it have significant performance improvements?
Will it slow down other usages of that collection (e.g. holding a major lock)?
Answering my own question: I was able to migrate my script from a cursor that scans the whole collection to a map-reduce query, and running it on a sample of the collection it seems at least twice as fast using map-reduce.
Here's how the old script worked (in node.js):
var cursor = collection.find(query, projection).sort({_id: 1}).limit(limit);
var next = function() {
    cursor.nextObject(function(err, doc) {
        processDoc(doc, next);
    });
};
next();
and this is the new script:
collection.mapReduce(
    function () {
        var processDoc = function(doc) {
            ...
        };
        processDoc(this);
    },
    function (key, values) {
        return Array.sum(values)
    },
    {
        query: query,
        out: { inline: 1 }
    },
    function (error, results) {
        // print results
    }
);
processDoc stayed basically the same, but instead of incrementing a counter on a global stats object, I do:
emit(field_name, 1);
Running old and new on a sample of 100k documents, the old took 20 seconds and the new took 8.
Some notes:
map-reduce's limit option doesn't work on sharded collections; I had to query for _id : { $gte, $lte } to create the sample size needed.
map-reduce's performance-boost option jsMode: true doesn't work on sharded collections either (it might have improved performance even more); it might work to run the job manually on each shard to gain that feature.
As I understand it, what you want to achieve is to compute something on your documents, after which you have a new "document" that can be queried. You don't need to store the computed "new values".
If you don't need to write your "new values" back into those documents, you can use the Aggregation Framework.
Aggregations operations process data records and return computed results. Aggregation operations group values from multiple documents together, and can perform a variety of operations on the grouped data to return a single result.
https://docs.mongodb.com/manual/aggregation/
Since the Aggregation Framework has a lot of features, I can't give you more specific information about how to resolve your issue.
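As a rough, hedged sketch of what a server-side fill-rate computation could look like (fieldA and fieldB are placeholders for your pre-defined fields):
db.collection.aggregate([
    { $group: {
        _id: null,
        total: { $sum: 1 },
        // Count documents where the field is present and not null
        fieldA: { $sum: { $cond: [ { $ne: [ { $ifNull: [ "$fieldA", null ] }, null ] }, 1, 0 ] } },
        fieldB: { $sum: { $cond: [ { $ne: [ { $ifNull: [ "$fieldB", null ] }, null ] }, 1, 0 ] } }
    } }
])
Dividing each per-field count by total gives the fill-rate. Note that a single $group over the whole collection still scans every document, so this mainly saves on network and driver overhead rather than on server-side work.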

How to make distinct operation more quickly in mongodb

There are 30,000,000 records in one collection.
When I use the distinct command on this collection from Java, it takes about 4 minutes; the result count is about 40,000.
Is mongodb's distinct operation so inefficient?
And how can I make it more efficient?
Is mongodb's distinct operation so inefficient?
At 30m records? I would say 4 minutes is actually quite good; I think that's just as fast as, maybe a little faster than, SQL would do it.
I would probably test this in other databases before saying it is inefficient.
However, one way of looking at performance is to see if the field is indexed first and if that index is in RAM or can be loaded without page thrashing. Distinct() can use an index so long as the field has an index.
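For example (the field name is illustrative), a minimal check would be:
// With an index on the field, distinct() can read the values
// from the index rather than scanning every document
db.collection.createIndex({ item: 1 })
db.collection.distinct("item")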
and how can I make it more efficient?
You could use a couple of methods:
Incremental map-reduce to distinct the main collection into a unique collection every, say, 5 minutes (see the sketch below)
Pre-aggregate the unique collection on save by saving to two collections, one detail and one unique
Those are the two most viable methods of getting around this performantly.
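A hedged sketch of the incremental map-reduce option (the field and collection names are placeholders; lastRun would be whatever timestamp your scheduler recorded for the previous run):
// Placeholder for the timestamp of the previous scheduled run
var lastRun = ISODate("2013-01-01T00:00:00Z");

db.detail.mapReduce(
    function () { emit(this.field, 1); },
    function (key, values) { return Array.sum(values); },
    {
        // Only process documents written since the last run
        query: { createdAt: { $gt: lastRun } },
        // Merge the new counts into the existing unique collection
        out: { reduce: "unique_values" }
    }
)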
Edit
Distinct() is not outdated, and if it fits your needs it is actually more performant than $group since it can use an index.
The .distinct() operation is an old one, as is .group(). In general these have been superseded by .aggregate() which should be generally used in preference to these actions:
db.collection.aggregate([
    { "$group": {
        "_id": "$field",
        "count": { "$sum": 1 }
    }}
])
Substituting "$field" with whatever field you wish to get a distinct count from. The $ prefixes the field name to assign the value.
Look at the documentation and especially $group for more information.

Iterating over distinct items in one field in MongoDB

I have a very large collection (~7M items) in MongoDB, primarily consisting of documents with three fields.
I'd like to be able to iterate over all the unique values for one of the fields, in an expedient manner.
Currently, I'm querying for just that field, and then processing the returned results by iterating on the cursor for uniqueness. This works, but it's rather slow, and I suspect there must be a better way.
I know mongo has the db.collection.distinct() function, but this is limited by the maximum BSON size (16 MB), which my dataset exceeds.
Is there any way to iterate over something similar to the db.collection.distinct(), but using a cursor or some other method, so the record-size limit isn't as much of an issue?
I think maybe something like the map/reduce functionality would possibly be suited for this kind of thing, but I don't really understand the map-reduce paradigm in the first place, so I have no idea what I'm doing. The project I'm working on is partially to learn about working with different database tools, so I'm rather inexperienced.
I'm using PyMongo if it's relevant (I don't think it is). This should be mostly dependent on MongoDB alone.
Example:
For this dataset:
{"basePath" : "foo", "internalPath" : "Neque", "itemhash": "49f4c6804be2523e2a5e74b1ffbf7e05"}
{"basePath" : "foo", "internalPath" : "porro", "itemhash": "ffc8fd5ef8a4515a0b743d5f52b444bf"}
{"basePath" : "bar", "internalPath" : "quisquam", "itemhash": "cf34a8047defea9a51b4a75e9c28f9e7"}
{"basePath" : "baz", "internalPath" : "est", "itemhash": "c07bc6f51234205efcdeedb7153fdb04"}
{"basePath" : "foo", "internalPath" : "qui", "itemhash": "5aa8cfe2f0fe08ee8b796e70662bfb42"}
What I'd like to do is iterate over just the basePath field. For the above dataset, this means I'd iterate over foo, bar, and baz just once each.
I'm not sure if it's relevant, but the DB I have is structured so that while each field is not unique, the aggregate of all three is unique (this is enforced with an index).
The query and filter operation I'm currently using (note: I'm restricting the query to a subset of the items to reduce processing time):
self.log.info("Running path query")
itemCursor = self.dbInt.coll.find({"basePath": pathRE}, fields={'_id': False, 'internalPath': False, 'itemhash': False}, exhaust=True)
self.log.info("Query complete. Processing")
self.log.info("Query returned %d items", itemCursor.count())
self.log.info("Filtering returned items to require uniqueness.")
items = set()
for item in itemCursor:
    # print item
    items.add(item["basePath"])
self.log.info("total unique items = %s", len(items))
Running the same query with self.dbInt.coll.distinct("basePath") results in OperationFailure: command SON([('distinct', u'deduper_collection'), ('key', 'basePath')]) failed: exception: distinct too big, 16mb cap
Ok, here is the solution I wound up using. I'd add it as an answer, but I don't want to detract from the actual answers that got me here.
reStr = "^%s" % fqPathBase
pathRE = re.compile(reStr)
self.log.info("Running path query")
pipeline = [
    { "$match":
        {
            "basePath": pathRE
        }
    },
    # Group the keys
    { "$group":
        {
            "_id": "$basePath"
        }
    },
    # Output to a collection "tmp_unique_coll"
    { "$out": "tmp_unique_coll" }
]
itemCursor = self.dbInt.coll.aggregate(pipeline, allowDiskUse=True)
itemCursor = self.dbInt.db.tmp_unique_coll.find(exhaust=True)
self.log.info("Query complete. Processing")
self.log.info("Query returned %d items", itemCursor.count())
self.log.info("Filtering returned items to require uniqueness.")
items = set()
retItems = 0
for item in itemCursor:
    retItems += 1
    items.add(item["_id"])
self.log.info("Received items = %d", retItems)
self.log.info("total unique items = %s", len(items))
General performance compared to my previous solution is about 2X in terms of wall-clock time. On a query that returns 834273 items, with 11467 uniques:
Original method (retrieve, stuff into a python set to enforce uniqueness):
real 0m22.538s
user 0m17.136s
sys 0m0.324s
Aggregate pipeline method :
real 0m9.881s
user 0m0.548s
sys 0m0.096s
So while the overall execution time is only ~2X better, the aggregation pipeline is massively more performant in terms of actual CPU time.
Update:
I revisited this project recently, and rewrote the DB layer to use a SQL database, and everything was much easier. A complex processing pipeline is now a simple SELECT DISTINCT(colName) WHERE xxx operation.
Realistically, MongoDB and NoSQL databases in general are very much the wrong database type for what I'm trying to do here.
From the discussion points so far I'm going to take a stab at this. And I'm also noting that as of writing, the 2.6 release for MongoDB should be just around the corner, good weather permitting, so I am going to make some references there.
Oh and the FYI that didn't come up in chat, .distinct() is an entirely different animal that pre-dates the methods used in the responses here, and as such is subject to many limitations.
This solution is finally a solution for 2.6 and up, or any current dev release over 2.5.3.
The alternative for now is to use mapReduce, because the only restriction is the output size.
Without going into the inner workings of distinct, I'm going to go on the presumption that aggregate is doing this more efficiently [and even more so in upcoming release].
db.collection.aggregate([
// Group the key and increment the count per match
{$group: { _id: "$basePath", count: {$sum: 1} }},
// Hey you can even sort it without breaking things
{$sort: { count: 1 }},
// Output to a collection "output"
{$out: "output"}
])
So we are using the $out pipeline stage to get the final result that is over 16MB into a collection of its own. There you can do what you want with it.
As 2.6 is "just around the corner" there is one more tweak that can be added.
Use allowDiskUse from the runCommand form, where each stage can use disk and not be subject to memory restrictions.
The main point here is that this is nearly live for production. And the performance will be better than the same operation in mapReduce. So go ahead and play. Install 2.5.5 for your own use now.
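For illustration, the runCommand form with allowDiskUse could look like this (reusing the pipeline above; the collection name is a placeholder):
db.runCommand({
    aggregate: "collection",
    pipeline: [
        { $group: { _id: "$basePath", count: { $sum: 1 } } },
        { $sort: { count: 1 } },
        { $out: "output" }
    ],
    // Let each stage spill to disk instead of failing on the memory limit
    allowDiskUse: true,
    cursor: {}
})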
A MapReduce, in the current version of Mongo, would avoid the problem of the results exceeding 16MB.
map = function() {
if(this['basePath']) {
emit(this['basePath'], 1);
}
// if basePath always exists you can just call emit directly:
// emit(this.basePath, 1);
};
reduce = function(key, values) {
return Array.sum(values);
};
For each document the basePath is emitted with a single value representing the count of that value. The reduce simply creates the sum of all the values. The resulting collection would have all unique values for basePath along with the total number of occurrences.
And, as you'll need to store the results to prevent an error, use the out option, which specifies a destination collection.
db.yourCollectionName.mapReduce(
map,
reduce,
{ out: "distinctMR" }
)
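You can then read the distinct values back from that output collection, for example:
// Each output document has the distinct basePath as _id and the count as value
db.distinctMR.find().forEach(printjson)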
@Neil Lunn's answer could be simplified:
field = 'basePath' # Field I want
db.collection.aggregate( [{'$project': {field: 1, '_id': 0}}])
$project filters fields for you. In particular, '_id': 0 filters out the _id field.
Result still too large? Batch it with $limit and $skip:
field = 'basePath' # Field I want
db.collection.aggregate( [{'$project': {field: 1, '_id': 0}}, {'$limit': X}, {'$skip': Y}])
I think the most scalable solution is to perform a query for each unique value. The queries must be executed one after the other, and each query will give you the "next" unique value based on the previous query result. The idea is that the query will return you one single document, that will contain the unique value that you are looking for. If you use the proper projection, mongo will just use the index loaded into memory without having to read from disk.
You can define this strategy using $gt operator in mongo, but you must take into account values like null or empty strings, and potentially discard them using the $ne or $nin operator. You can also extend this strategy using multiple keys, using operators like $gte for one key and $gt for the other.
This strategy should give you the distinct values of a string field in alphabetical order, or distinct numerical values sorted ascendingly.
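A hedged sketch of that strategy in the mongo shell, using the basePath field from the example data (this assumes an index on basePath so that each query is a cheap, covered index lookup):
// Recommended so each lookup only touches the index:
// db.collection.createIndex({ basePath: 1 })

// First query: the smallest basePath value in the collection
var cur = db.collection.find({}, { _id: 0, basePath: 1 })
    .sort({ basePath: 1 }).limit(1);
while (cur.hasNext()) {
    var value = cur.next().basePath;
    print(value);   // one distinct value per iteration
    // Next query: the smallest value strictly greater than the previous one
    cur = db.collection.find({ basePath: { $gt: value } }, { _id: 0, basePath: 1 })
        .sort({ basePath: 1 }).limit(1);
}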

MongoDB Aggregation as slow as MapReduce?

I'm just starting out with MongoDB and trying out some simple things. I filled up my database with a collection of data containing the "item" property. I wanted to try to count how many times every item occurs in the collection.
example of a document:
{ "_id" : ObjectId("50dadc38bbd7591082d920f0"), "item" : "Pons", "lines" : 37 }
So I designed these two functions for doing MapReduce (written in python using pymongo)
all_map = Code("function () {"
" emit(this.item, 1);"
"}")
all_reduce = Code("function (key, values) {"
" var sum = 0;"
" values.forEach(function(value){"
" sum += value;"
" });"
" return sum;"
"}")
This worked like a charm, so I began filling the collection. At around 30,000 documents, the mapreduce already takes longer than a second... Because NoSQL brags about speed, I thought I must have been doing something wrong!
A Question here at Stack Overflow made me check out the Aggregation feature of mongodb. So I tried to use the group + sum + sort thingies. Came up with this:
db.wikipedia.aggregate(
{ $group: { _id: "$item", count: { $sum: 1 } } },
{ $sort: {count: 1} }
)
This code works just fine and gives me the same results as the mapreduce set, but it is just as slow. Am I doing something wrong? Do I really need to use other tools like hadoop to get a better performance?
I will place an answer basically summing up my comments. I cannot speak for other techs like Hadoop since I have not yet had the pleasure of finding time to use them but I can speak for MongoDB.
Unfortunately you are using two of the worst operators for any database: computed fields and grouping (or distinct) on a full table scan. The aggregation framework in this case must compute the field, group and then in-memory ( http://docs.mongodb.org/manual/reference/aggregation/#_S_sort ) sort the computed field. This is an extremely inefficient task for MongoDB to perform, in fact most likely any database.
There is no easy way to do this in real-time, inline with your own application. Map reduce could be a way out if you didn't need to return the results immediately, but since I am guessing you don't really want to wait for this kind of stuff, the default method is just to eradicate the group altogether.
You can do this by pre-aggregation. So you can create another collection, grouped_wikipedia, and in your application you manage this using an upsert() with atomic operators like $set and $inc (to count the occurrences) to make sure you only get one row per item. This is probably the most sane method of solving this problem.
This does however raise another problem of having to manage this extra collection alongside the detail collection wikipedia, but I believe this to be an unavoidable side effect of getting the right performance here. The benefits will be greater than the loss of having to manage the extra collection.
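A minimal sketch of that pre-aggregation idea, using the grouped_wikipedia collection name from above (the exact document shape is an assumption):
// Insert the detail document as usual
db.wikipedia.insert({ item: "Pons", lines: 37 })

// Upsert one summary document per item, counting occurrences atomically
db.grouped_wikipedia.update(
    { _id: "Pons" },          // one document per distinct item
    { $inc: { count: 1 } },   // $inc creates the counter on first upsert
    { upsert: true }
)
With this in place, the "count per item" query becomes a simple indexed find/sort on grouped_wikipedia instead of a full scan of wikipedia.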