Why do MongoDB views not support MapReduce?

I am developing a database with MongoDB and would like to treat views as read-only collections. Particularly, I would really like to run map-reduce functions on a view. So my questions include:
Why do views not support map-reduce?
Are there plans to give map-reduce functionality to views in the future?
Is there a workaround that would allow me to run map-reduce on query results?

Views in MongoDB are not materialized, so querying a view runs the aggregation pipeline you specified when you defined it. This means you can apply further aggregation stages to a view, but you cannot run map-reduce against it, because map-reduce cannot execute aggregation stages as part of its processing.
Most things you can do with map-reduce in MongoDB can also be done with the aggregation pipeline (though certainly not all). I would recommend seeing how far you can get with aggregation alone rather than map-reduce.
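A minimal mongo shell sketch of that workaround, assuming a hypothetical orders collection (the collection, view, and field names are illustrative):

    // Define a read-only view over a hypothetical "orders" collection.
    db.createView("paidOrders", "orders", [
      { $match: { status: "paid" } }
    ])

    // The view cannot be the input of mapReduce, but it can be aggregated further:
    db.paidOrders.aggregate([
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
    ])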

Related

Flexibility of map-reduce in mongoDB

The MongoDB documentation says about map-reduce:
For most aggregation operations, the Aggregation Pipeline provides better performance and a more coherent interface. However, map-reduce operations provide some flexibility that is not presently available in the aggregation pipeline.
Does it mean that there are some aggregation operations that cannot be performed in the usual MongoDB aggregation framework but are possible using map-reduce?
In particular, I'm looking for an example of a map-reduce operation that cannot be implemented in the MongoDB aggregation framework.
Thanks!
An example of "flexibility". Basically, if you have any logic, that does not fit into standard aggregation operators, map-reduce is the only option to do it serverside.

Pipeline vs MapReduce in MongoDB

When should I prefer MapReduce over Pipeline in MongoDB, or vice versa? I feel most aggregation operations are suitable for the pipeline. What kind of problem complexity, or what use case, should make me go for MapReduce?
As a general rule of thumb: When you can do it with the aggregation pipeline, you should.
One reason is that the aggregation pipeline is able to use indexes and internal optimizations between the aggregation steps which are just not possible with MapReduce.
Aggregation is also a lot more secure when the operation is triggered by user input. When there are user-supplied parameters in your query, MapReduce forces you to build JavaScript functions through string concatenation, which opens the door to dangerous JavaScript injection vulnerabilities. The APIs used for creating aggregation pipeline objects (in most programming languages!) usually have fewer such obvious pitfalls.
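A sketch of the difference, assuming a hypothetical user-supplied minAge value: in the pipeline the value stays an ordinary parameter inside a query object, while embedding it into map-reduce typically means assembling JavaScript source as a string.

    var minAge = 21;  // imagine this came from user input

    // Aggregation: the value is passed as data, never as code.
    db.users.aggregate([
      { $match: { age: { $gte: minAge } } },
      { $group: { _id: "$city", count: { $sum: 1 } } }
    ])

    // MapReduce built by string concatenation: a malicious value becomes code.
    var mapSrc = "function () { if (this.age >= " + minAge + ") emit(this.city, 1); }";

(Map-reduce does offer a scope option for passing values in safely, but the string-building pattern above is the one that commonly causes trouble.)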
There are, however, still a few cases that cannot be done easily, or at all, with aggregation. For those cases, MapReduce still has a reason to exist.
Another limitation of the aggregation framework is that each aggregation stage is limited to 100 MB of RAM unless you use the allowDiskUse option, which really slows down the query. MapReduce usually behaves a lot better when you need to work with a really large dataset.
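For reference, the option mentioned above is passed like this (collection and field names are illustrative):

    db.events.aggregate(
      [
        { $sort: { timestamp: 1 } },
        { $group: { _id: "$userId", events: { $push: "$$ROOT" } } }
      ],
      { allowDiskUse: true }  // spill large stages to temporary files instead of failing
    )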

(Real time) Small data aggregation MongoDB: triggers?

What is a reliable and efficient way to aggregate small data in MongoDB?
Currently, my data that needs to be aggregated is under 1 GB, but can go as high as 10 GB. I'm looking for a real time strategy or near real time (aggregation every 15 minutes).
It seems like the likes of Map/Reduce, Hadoop, and Storm are all overkill. I know that triggers don't exist, but I found one post that may be ideal for my situation. Is creating a trigger in MongoDB an ideal solution for real-time small data aggregation?
MongoDB has two built-in options for aggregating data - the aggregation framework and map-reduce.
The aggregation framework is faster (executing as native C++ code as opposed to a JavaScript map-reduce job) but more limited in the sorts of aggregations that are supported. Map-reduce is very versatile and can support very complex aggregations but is slower than the aggregation framework and can be more difficult to code.
Either of these would be a good option for near real time aggregation.
One further consideration to take into account is that as of the 2.4 release the aggregation framework returns a single document containing its results and is therefore limited to returning 16MB of data. In contrast, MongoDB map-reduce jobs have no such limitation and may output directly to a collection. In the upcoming 2.6 release of MongoDB, the aggregation framework will also gain the ability to output directly to a collection, using the new $out operator.
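A small sketch of the $out usage described above (2.6+, names are illustrative):

    db.events.aggregate([
      { $group: { _id: "$userId", count: { $sum: 1 } } },
      // write the full result set to a collection instead of returning one 16MB document
      { $out: "user_event_counts" }
    ])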
Based on the description of your use case, I would recommend using map-reduce as I assume you need to output more than 16MB of data. Also, note that after the first map-reduce run you may run incremental map-reduce jobs that run only on the data that is new/changed and merge the results into the existing output collection.
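A rough sketch of such an incremental run, assuming hypothetical pageview documents with a ts timestamp and a lastRun bookmark that the application keeps between runs:

    db.pageviews.mapReduce(
      function () { emit(this.page, 1); },
      function (key, values) { return Array.sum(values); },
      {
        query: { ts: { $gt: lastRun } },   // only documents new since the last run
        out: { reduce: "pageview_totals" } // re-reduce into the existing output collection
      }
    )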
As you know, MongoDB doesn't support triggers, but you may easily implement triggers in the application by tailing the MongoDB oplog. This blog post and this SO post cover the topic well.

Running MongoDB Queries in Map/Reduce

Is it possible to run MongoDB commands, like a query to grab additional data or an update, from within MongoDB's MapReduce command, either in the Map or the Reduce function?
Is this completely ludicrous to do anyway? Currently I have some documents that refer to separate collections using MongoDB DBRefs.
Thanks for the help!
Is it possible to run MongoDB commands... from within MongoDB's MapReduce command.
In theory, this is possible. In practice there are lots of problems with this.
Problem #1: exponential work. M/R is already pretty intense and poorly logged. Adding queries can easily make M/R run out of control.
Problem #2: context. Imagine that you're running a sharded M/R and you are querying into an unsharded collection. Does the current context even have that connection?
You're basically trying to implement JOIN logic and MongoDB has no joins. Instead, you may need to build the final data in a couple of phases by running a few loops on a few sets of data.
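A rough sketch of that two-phase approach, assuming hypothetical orders and customers collections linked by customerId: aggregate first, then loop over the result and query the second collection from the application.

    // Phase 1: summarise orders per customer with map-reduce (inline output).
    var totals = db.orders.mapReduce(
      function () { emit(this.customerId, this.amount); },
      function (key, values) { return Array.sum(values); },
      { out: { inline: 1 } }
    ).results;

    // Phase 2: a plain loop that fetches the related documents afterwards.
    totals.forEach(function (t) {
      var customer = db.customers.findOne({ _id: t._id });
      print(customer.name + ": " + t.value);
    });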

When do I need map reduce for database queries?

In CouchDB you always have to use map-reduce to query results.
In MongoDB you can use its query methods for retrieving data, but it also lets you do map-reduce.
I wonder, when do I actually need map-reduce?
Are those query methods different from map-reduce or are they just wrappers for map-reduce functions?
MapReduce is needed for aggregations in MongoDB. The normal queries follow a very different (and much faster) code path and they should always be used for real-time operations. MapReduce is definitely not intended for real-time, it's more for batch jobs.
Technically, you could write all your queries using MapReduce, but that would be both painful and slow.
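To make the contrast concrete (names are illustrative): a real-time lookup belongs in a normal query, while a periodic batch rollup is where map-reduce fits.

    // Real-time: a plain, index-backed query.
    db.users.find({ email: "a@example.com" })

    // Batch: a periodic rollup written to its own collection.
    db.users.mapReduce(
      function () { emit(this.country, 1); },
      function (key, values) { return Array.sum(values); },
      { out: "users_per_country" }
    )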