Can you add an aggregate pipeline to document .save() action? - mongodb

I use mongoose with mongodb and while updating a document, I first find the document, modify the resultant document object and then do a .save() on the document.
Now I want to add an aggregate pipeline to the save operation so as to better control the document response, so I was wondering if this is possible.
I read that the update query can have the pipeline attached to it but does that also apply to the save action?

As far as I know, in the current version of MongoDB (4.4) the only commands that accept aggregation pipelines for updates are findAndModify and update, which limits what Mongoose can expose here. What I would recommend in your case is to use the aggregation pipeline with Model.findOneAndUpdate(). Here is an example that you might follow: example of aggregate using Model.findOneAndUpdate()
You might also notice that this is the MongoDB documentation rather than the Mongoose documentation. I tend to find it difficult to locate useful information for specific use cases like this one in the Mongoose docs, hence the MongoDB link. It will work the same with a Mongoose model, so give it a shot!
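As a minimal sketch of the suggestion above: since MongoDB 4.2 the update argument of the update-family commands can itself be an aggregation pipeline (an array of stages). The User model and all field names here are hypothetical:

```javascript
// The update is given as a pipeline (array of stages) instead of a plain
// update document; $$NOW and the field references are evaluated server-side.
const updatePipeline = [
  { $set: {
      lastLogin: "$$NOW",
      loginCount: { $add: [{ $ifNull: ["$loginCount", 0] }, 1] },
  } },
  { $unset: ["tempToken"] }, // drop a field as part of the same update
];

// With a live connection this would be passed where a plain update
// document normally goes (not run here):
// const doc = await User.findOneAndUpdate({ email }, updatePipeline, { new: true });
```

Note that .save() itself takes no pipeline; per the above, the pipeline form only applies to the update-family commands.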

Related

Does mongodb use index search in lookup stage?

I'm querying a collection with aggregate function in MongoDB and I have to look up some other collections in its aggregation. But I have a question about it:
Does MongoDB use indexes for the foreignField? I wasn't able to figure this out, and I searched everywhere but didn't find an answer. It must surely use indexes for it, but I just want to be sure.
The best way to determine how the database is executing a query is to generate and examine the explain output for the operation. With aggregations that include the $lookup stage specifically you will want to use the more verbose .explain("executionStats") mode. You may also utilize the $indexStats operator to confirm that the usage count of the intended index is increasing.
The best answer we can give based on the limited information in the question is: MongoDB will probably use the index. Query execution behavior, including index usage, depends on the situation and the server version. If you provide more information in your question, we can give a more specific answer. There are also some details about index usage on the $lookup documentation page.
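A sketch of the checks described above in mongosh syntax; the collection and field names are hypothetical:

```javascript
// A $lookup whose foreignField may be served by an index on the "items" collection.
const pipeline = [
  { $lookup: { from: "items", localField: "itemId", foreignField: "_id", as: "item" } },
];

// In a live shell session (not run here):
// db.orders.explain("executionStats").aggregate(pipeline); // inspect the $lookup plan
// db.items.aggregate([{ $indexStats: {} }]);               // check index usage counters
```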

Does Panache support pagination in MongoDB?

Does Panache support pagination? I can't seem to find any related methods; I only found .batchSize().
After this call I'm working with an AggregateIterable. (http://mongodb.github.io/mongo-java-driver/3.12/javadoc/com/mongodb/client/AggregateIterable.html)
MyPanacheMongoModel.mongoCollection().aggregate(Arrays.asList(sort1, group, sort2, project, replaceRoot))
I believe I could just add some more stages to my aggregation, but I was looking for a clean solution.
Just as you added all the other stages, you can add $skip and $limit stages as well. Since you are executing an aggregation query by supplying all the stages yourself, it does not matter that it is Panache: the pipeline will be converted to BSON and executed as-is.
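For illustration, the stage shapes for pagination, sketched in shell/JavaScript syntax (in the Java driver these correspond to the Aggregates.skip and Aggregates.limit builders); the page and pageSize parameters are hypothetical:

```javascript
// Appends $skip/$limit to an existing pipeline for page-based pagination.
function withPagination(pipeline, page, pageSize) {
  return [...pipeline, { $skip: page * pageSize }, { $limit: pageSize }];
}

const paged = withPagination([{ $sort: { createdAt: -1 } }], 2, 10);
// paged now ends with { $skip: 20 }, { $limit: 10 }
```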

In MongoDB, is there a way to update many documents and get the documents that were modified in a single call?

I'm working with the Mongo Java Driver, but looking through Mongo's documentation, it doesn't look driver specific.
update(filter, update) can update multiple documents but returns a WriteResult which only provides flags/counts.
findOneAndUpdate(filter, update) returns the actual document that was modified, but it can only update one document at a time.
Is there no way to do this in one call? If not, the client would have to call find(filter), then update(filter, update), then find(...) with a new filter matching the IDs obtained in the initial find (since the update can potentially change document values that were in the initial filter).
Is there a better way?
I am unaware of any write commands that return a cursor, which is essentially what you are asking for, nor do I see anything relevant in the driver source.
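A sketch of the three-call workaround the question describes, in Node.js driver syntax (the coll handle and helper name are hypothetical). Pinning the _ids first keeps the final read stable even if the update changes fields used in the original filter:

```javascript
// 1) find matching _ids, 2) update by _id, 3) re-read by _id.
async function updateManyAndFetch(coll, filter, update) {
  const ids = await coll
    .find(filter, { projection: { _id: 1 } })
    .map((d) => d._id)
    .toArray();
  await coll.updateMany({ _id: { $in: ids } }, update);
  return coll.find({ _id: { $in: ids } }).toArray();
}
```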

Using MapReduce as a Stage in Mongo DB Aggregation Pipeline

I want to use Mongo DB MapReduce functionality along with Aggregation Query.
Below are the stages which I see could be part of the aggregation pipeline:
1. Filter docs the user has access to, based on content in the docs and the passed security context (roles of the user), using $redact.
2. Filter based on one or more criteria, using $match.
3. Tokenize the words in the docs returned by the above filtering and populate a collection (using mapReduce), or return the docs inline.
4. Query the populated collection / docs returned inline for words matching user criteria with a like query ($regex), and return the words along with their locations.
I am able to achieve steps 1, 2 and 4 in the aggregation pipeline. I am able to achieve step 3 separately by using the mapReduce functionality in MongoDB.
I want to make the mapReduce operation a stage in the aggregation pipeline as well, so that it receives the filtered docs from the earlier steps and passes the processed result to the next step. The mapReduce operation is based on a sample map and reduce operation; I intend to use the map, reduce and finalize functions shared in the Stack Overflow question below.
Implement auto-complete feature using MongoDB search
My question is: can we have a mapReduce operation as part of the MongoDB aggregation pipeline, and if so, can we run it inline and pass its result to the next stage?
I am using Spring Data MongoDB to implement the aggregation solution.
If someone has implemented the same, please help me with this.
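For reference, a sketch of stages 1, 2 and 4 from the list above in shell/JavaScript pipeline syntax; field names, role values and match criteria are all hypothetical, and whether mapReduce can sit between them as a stage is exactly the open question here:

```javascript
function buildPipeline(userRoles, prefix) {
  return [
    // 1) Security filtering via $redact: keep subtrees whose allowedRoles
    //    intersect the caller's roles, prune the rest.
    { $redact: {
        $cond: {
          if: { $gt: [
            { $size: { $setIntersection: [{ $ifNull: ["$allowedRoles", []] }, userRoles] } },
            0,
          ] },
          then: "$$DESCEND",
          else: "$$PRUNE",
        },
    } },
    // 2) Ordinary criteria filtering.
    { $match: { status: "published" } },
    // 4) Like-style word search on the tokenized field.
    { $match: { token: { $regex: "^" + prefix } } },
  ];
}
```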

Why aren't defaults, setters, validators and middleware applied for findOneAndUpdate in mongoose?

While reading Mongoose's documentation, I found the following note for findOneAndUpdate:
Although values are cast to their appropriate types when using the
findAndModify helpers, the following are not applied:
defaults
setters
validators
middleware
The documentation goes on to explain that, in order to get these, one should follow the traditional approach, which uses findOne and save.
My question: why aren't these functions applied? I understand that this can be simply a design decision of the Mongoose developers, but, looking at the code for findOne and findOneAndUpdate, I don't see much difference.
Note: This is not necessarily specific to findOneAndUpdate, but applies to other methods like findOneAndRemove.
findOneAndUpdate allows you to make a raw call to MongoDB through Mongoose: it simply sends a findAndModify request to MongoDB.
Setters, validators and middleware require Mongoose to fetch the data first.
findOneAndUpdate is faster than the traditional way because it makes a single call to MongoDB, skipping all the Mongoose magic.
The only actual difference between Mongoose's findOneAndUpdate function and the raw db.collection.findAndModify operation is that Mongoose casts your update operation according to your schema.
Update: according to the API docs, it issues a MongoDB findAndModify update command.
When you use the traditional way with findOne and save, Mongoose fetches all the data and wraps it in a Mongoose document. It then catches all your update operations, applying your setters. When you call save on the document, it runs all validators and hooks and issues an atomic update operation on the modified fields only. It does not replace the old document with a new one, as the raw MongoDB db.collection.save command does.
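A sketch contrasting the two paths; User is a hypothetical Mongoose model, and the function is not invoked here since it needs a live connection. Note that findOneAndUpdate can opt into validation with the runValidators option:

```javascript
async function bumpAge(User, id) {
  // One round trip: Mongoose only casts the update; validators can be
  // opted into explicitly with runValidators.
  await User.findOneAndUpdate(
    { _id: id },
    { $inc: { age: 1 } },
    { new: true, runValidators: true }
  );

  // Traditional path: hydrate, mutate, save. Defaults, setters, validators
  // and middleware all run on the hydrated document.
  const doc = await User.findOne({ _id: id });
  doc.age += 1;      // setters run on assignment
  await doc.save();  // validators + hooks, then an atomic update of changed paths
}
```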