Spring Boot MongoTemplate remove with limit query - mongodb

I am trying to delete a limited set of MongoDB documents from a collection (those with an id less than 10), but I want to remove them in batches of 3, so I tried using limit. However, it deletes all the matching documents and ignores the limit.
Query query = new Query();
query.addCriteria(Criteria.where("_id").lt(id)).limit(3);
mongoTemplate.remove(query, TestCollection.class);
When I run mongoTemplate.find(query, TestCollection.class); the limit works fine and returns 3 elements at a time, but with remove it doesn't work.
Is there any other way to delete them in a single query?

To achieve this, do it in two passes:
1. Find the 3 ids to delete, as you are doing currently.
2. Do a remove with Criteria.where("_id").in(id1, id2, id3).
I would also add a sort criterion before applying the limit; otherwise, which documents get deleted may depend on the index used. A sketch of the two passes is shown below.
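A minimal Spring Data MongoDB sketch of this two-pass batch delete (it assumes TestCollection exposes a getId() accessor; everything else follows the question's own names):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Pass 1: find the next 3 ids, sorted so each batch is deterministic.
Query findQuery = new Query(Criteria.where("_id").lt(id))
        .with(Sort.by(Sort.Direction.ASC, "_id"))
        .limit(3);
findQuery.fields().include("_id"); // only the ids are needed
List<TestCollection> batch = mongoTemplate.find(findQuery, TestCollection.class);

// Pass 2: remove exactly those documents by id.
List<Object> ids = batch.stream()
        .map(TestCollection::getId) // assumed accessor
        .collect(Collectors.toList());
mongoTemplate.remove(new Query(Criteria.where("_id").in(ids)), TestCollection.class);

Repeating the two passes until the find returns an empty list deletes all matching documents in sets of 3.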

Related

Couchbase N1QL Query getting distinct on the basis of particular fields

I have a document structure which looks something like this:
{
...
"groupedFieldKey": "groupedFieldVal",
"otherFieldKey": "otherFieldVal",
"filterFieldKey": "filterFieldVal"
...
}
I am trying to fetch all documents which are unique with respect to groupedFieldKey. I also want to fetch otherFieldKey from ANY of these documents. Its value has minor variations from one document to another, but I am comfortable with getting ANY of those values.
SELECT DISTINCT groupedFieldKey, otherFieldKey
FROM bucket
WHERE filterFieldKey = "filterFieldVal";
This query fetches all the documents because of the minor variations.
SELECT groupedFieldKey, maxOtherFieldKey
FROM bucket
WHERE filterFieldKey = "filterFieldVal"
GROUP BY groupedFieldKey
LETTING maxOtherFieldKey = MAX(otherFieldKey);
This query works as expected but takes a long time due to the GROUP BY step. Since this query is used to show products in the UI, that is not desirable. I have tried applying indexes, but they have not produced fast results.
Actual details of the records:
Number of records = 100,000
Size per record = Approx 10 KB
Time taken to load the first 10 records: 3s
Is there a better way to do this? A way of getting DISTINCT only on particular fields would be good.
EDIT 1:
You can follow the discussion thread on the Couchbase forum: https://forums.couchbase.com/t/getting-distinct-on-the-basis-of-a-field-with-other-fields/26458
GROUP BY must materialize all the matching documents. You can try a covering index, so the query can be answered from the index alone:
CREATE INDEX ix1 ON bucket(filterFieldKey, groupedFieldKey, otherFieldKey);

MongoDB: filter the records then update vs. update with filters

If I want to update multiple documents based on multiple filter criteria, which is the better approach?
1. Filter and fetch the documents (only the _id field) which need to be updated, supply the array of _ids to UpdateManyAsync as an $in filter, and update. (see below, 1)
2. Update the documents by supplying the filter criteria directly. (see below, 2)
Reason for this doubt:
1. MongoDB searches only for _id matches and updates them.
2. MongoDB evaluates the supplied multiple criteria (multiple fields) against each document and then updates.
What is the performance difference between these 2 approaches when splitting the update into 2 steps? In particular: timeouts, locks, and document availability after the update.
Please share your suggestions and views.
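For reference, a minimal Spring Data MongoDB sketch of both approaches (UpdateManyAsync suggests the .NET driver; the Java equivalent shown here is updateMulti, and the criteria, field names, and collection class are hypothetical):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

Criteria match = Criteria.where("status").is("PENDING").and("retries").gt(3); // hypothetical filter
Update update = new Update().set("status", "FAILED");

// Approach 1: fetch matching _ids first, then update by $in.
Query idQuery = new Query(match);
idQuery.fields().include("_id");
List<Object> ids = mongoTemplate.find(idQuery, TestCollection.class).stream()
        .map(TestCollection::getId) // assumed accessor
        .collect(Collectors.toList());
mongoTemplate.updateMulti(new Query(Criteria.where("_id").in(ids)), update, TestCollection.class);

// Approach 2: let the server match and update in a single pass.
mongoTemplate.updateMulti(new Query(match), update, TestCollection.class);

In general, the single-pass update saves a round trip and avoids documents changing between the read and the write; approach 1 only pays off if the _id list is reused or the filter cannot be expressed server-side.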

MongoDB - Can I have multiple sort fields in a map reduce?

I have a map/reduce that I noticed was taking nearly 10 seconds to run, even though no results were being returned. Using the mongo shell, I was able to determine that my initial query was the culprit. I was able to add a 2nd sort field to the query, which sped it up drastically.
However, when I try to add that 2nd sort field to my map reduce, I get the error "could not create cursor over [collection] for query ...". Is there any way for me to add a 2nd sort field to the map reduce?
Edit: The goal of my query is to find the first record created by each user per day. So the key that I am emitting is the user's id plus the created-on day, ignoring the time, so that all records created by a user on a given day are grouped together. In my reduce, I then take the record that was created first. I have actually ditched map/reduce and am now doing essentially the same thing with a normal find() and some JavaScript to group and reduce.
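The same first-record-per-user-per-day grouping can also be expressed as an aggregation pipeline, which usually replaces this kind of map/reduce; a minimal Spring Data MongoDB sketch (the field names userId and createdOn and the collection name records are assumptions):

import org.bson.Document;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.aggregation.DateOperators;

Aggregation agg = Aggregation.newAggregation(
        // Derive the day (ignoring the time) from the creation timestamp.
        Aggregation.project("userId", "createdOn")
                .and(DateOperators.dateOf("createdOn").toString("%Y-%m-%d")).as("day"),
        // Sort ascending so $first picks the earliest record.
        Aggregation.sort(Sort.Direction.ASC, "createdOn"),
        // One result per user per day, keeping the earliest creation time.
        Aggregation.group("userId", "day").first("createdOn").as("firstCreatedOn"));
AggregationResults<Document> results = mongoTemplate.aggregate(agg, "records", Document.class);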

MongoDB update many documents with different timestamps in one update

I have 10,000 documents in one MongoDB collection. I'd like to update all of them with datetime values that are 1 second apart (so every datetime value is unique and spaced 1 second from the next). Is there any way to do this with a single update, instead of updating each document in turn, which results in 10,000 distinct update operations?
Thanks.
No, there is no way to do this with a single update statement; there are no expressions that run on the server to allow this type of update. There is a feature request for this, but it has not been implemented, so it cannot be used.
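The per-document updates can at least be batched into a handful of round trips with a bulk write; a minimal Spring Data MongoDB sketch (the datetime field name and the pre-fetched ids list are assumptions):

import java.time.Instant;
import java.util.Date;
import java.util.List;
import org.springframework.data.mongodb.core.BulkOperations;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

Instant base = Instant.now();
BulkOperations bulk = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, TestCollection.class);
int offset = 0;
for (Object docId : ids) { // ids: the _ids of the 10,000 documents, fetched beforehand
    Date ts = Date.from(base.plusSeconds(offset++)); // each document gets a timestamp 1s after the previous
    bulk.updateOne(new Query(Criteria.where("_id").is(docId)), new Update().set("datetime", ts));
}
bulk.execute(); // sent to the server in large batches rather than 10,000 separate calls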

MongoDB: How to execute a query to result of another query (nested queries)?

I need to apply a set of filters (queries) to a collection. By default, MongoDB applies the AND operator to all queries submitted to the find function. Instead of one big AND, I need to apply each query sequentially (one by one): run the first-query and get a set of documents, run the second-query against the result of the first-query, and so on.
Is this Possible?
db.list.find({..q1..}).find({..q2..}).find({..q3..});
Instead Of:
db.list.find({..q1..}, {..q2..}, {..q3..});
Why do I need this?
Because the second-query needs to apply an aggregate function to the result of the first-query, instead of applying the aggregate to the whole collection.
Yes, this is possible in MongoDB; you can write nested queries as the requirement demands, and I have created nested MongoDB queries in my own application. If you are familiar with SQL, compare this with SQL's IN subquery syntax:
select cname from table where cid in (select .....)
In the same way, you can create nested MongoDB queries, across different collections as well.
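Since the stated goal is to run an aggregate over the result of a prior query, note that the aggregation pipeline applies its stages sequentially, each stage consuming the previous stage's output; a minimal Spring Data MongoDB sketch with hypothetical criteria and field names:

import org.bson.Document;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.query.Criteria;

Aggregation agg = Aggregation.newAggregation(
        Aggregation.match(Criteria.where("status").is("active")), // first-query
        Aggregation.match(Criteria.where("score").gt(10)),        // second-query, applied to the first's result
        Aggregation.group("category").count().as("n"));           // aggregate over the filtered set only
AggregationResults<Document> results = mongoTemplate.aggregate(agg, "list", Document.class);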