What is the best practice to find mongo documents count?

I wanted to know the performance difference between countDocuments and a find query.
I have to find the count of documents matching a certain filter; which approach is better and takes less time?
db.collection.countDocuments({ userId: 12 })
or
db.collection.find({ userId: 12 }) and then using the length of the resulting array.

You should definitely use db.collection.countDocuments() if you don't need the data. This method uses an aggregation pipeline with the filters you pass in and returns only the count, so you don't waste processing and time waiting for an array with all the results.
This:
db.collection.countDocuments({ userId: 12 })
Is equivalent to:
db.collection.aggregate([
  { $match: { userId: 12 } },
  { $group: { _id: null, n: { $sum: 1 } } }
])
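For a quick sense of the difference, here is a minimal sketch in mongosh (collection and field names taken from the question) comparing the two approaches:
// Both yield the same number, but find() ships every matching document
// to the client first, while countDocuments() returns only the count.
const viaFind = db.collection.find({ userId: 12 }).toArray().length;
const viaCount = db.collection.countDocuments({ userId: 12 });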


MongoDB, Panache, Quarkus: How to do aggregate, $sum and filter

I have a collection in MongoDB with sales transactions, each containing a userId, a timestamp and the revenue value of that specific transaction.
Now I would like to query these users and get the minimum, maximum, sum and average of all transactions per user. Only transactions between two given timestamps should be counted, and only users whose revenue sum is greater than a specified value should be included.
I have composed the corresponding query in mongosh:
db.salestransactions.aggregate([
  {
    "$match": {
      "timestamp": {
        "$gte": new ISODate("2020-01-01T19:28:38.000Z"),
        "$lte": new ISODate("2020-03-01T19:28:38.000Z")
      }
    }
  },
  {
    $group: {
      _id: { userId: "$userId" },
      minimum: { $min: "$revenue" },
      maximum: { $max: "$revenue" },
      sum: { $sum: "$revenue" },
      avg: { $avg: "$revenue" }
    }
  },
  {
    $match: { "sum": { $gt: 10 } }
  }
])
This query works absolutely fine.
How do I implement this query in a PanacheMongoRepository using Quarkus?
Any ideas?
Thanks!
A bit late, but you could do it something like this.
Define a repository (this code is in Kotlin):
class YourRepositoryReactive : ReactivePanacheMongoRepository<YourEntity> {
    fun getDomainDocuments(): List<YourView> {
        val aggregationPipeline = mutableListOf<Bson>()
        // build each stage with Document.parse("stage_json") and add it to the pipeline
        return mongoCollection().aggregate(aggregationPipeline, YourView::class.java)
    }
}
mongoCollection() automatically operates on your entity's collection.
YourView is a class that maps the properties that make up your output. Make sure this class has the
@ProjectionFor(YourEntity::class)
annotation.
Hope this helps.
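For illustration, a stage built the way the comment describes might look like this (a sketch only; the stage content is assumed from the question above and is not part of the original answer):
import org.bson.Document
import org.bson.conversions.Bson

// Each stage is written as plain JSON and parsed into a Bson document;
// \$ escapes the dollar sign inside Kotlin strings.
val aggregationPipeline = mutableListOf<Bson>(
    Document.parse("{ \"\$group\": { \"_id\": \"\$userId\", \"sum\": { \"\$sum\": \"\$revenue\" } } }"),
    Document.parse("{ \"\$match\": { \"sum\": { \"\$gt\": 10 } } }")
)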

Is there an easy way to select the most recent document per unique value for a particular field?

For example, if you had a dataset with the fields _id, uuid, and timestamp, and the data contained many thousands of documents spread across, say, 200 different uuids, and you wanted to return 200 documents, one per uuid, each being the most recent (by timestamp descending, etc.), how would you go about this?
I've tried a few solutions and searched through StackOverflow without much luck. I'm sure there is some way to do this with aggregate.
Any tips or nods in the right direction appreciated.
Thanks
Well, it turns out the solution is actually quite simple. Use the distinct field as the $group _id; for the value you want, sort first, then select the last item with $last. Like so:
db.example.aggregate([
  { $sort: { "timestamp": 1 } },
  { $group: { _id: "$uuid", timestamp: { $last: "$timestamp" } } }
])
Note that the $sort stage is what makes $last meaningful here: without it, $last simply returns whichever document happens to come last in natural order, which is not guaranteed to be the most recent one.
Well, that settles that.
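If you want the whole document per uuid rather than just its timestamp, a common variant (a sketch on the same example collection) keeps the full document with $$ROOT:
db.example.aggregate([
  { $sort: { timestamp: 1 } },
  { $group: { _id: "$uuid", doc: { $last: "$$ROOT" } } },
  { $replaceRoot: { newRoot: "$doc" } }
])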

Delete all but one duplicate from a mongo db

So I made the mistake of saving a lot of documents twice because I messed up my document id. Because I did an insert, I multiplied my documents every time I saved them. So I want to delete all duplicates except the first one that I wrote. Luckily the documents have an implicit unique key (match._id), and I should be able to tell which one was first, because I am using the ObjectId.
The documents look like this:
{
  _id: "5e8e2d28ca6e660006f263e6",
  match: {
    _id: 2345,
    ...
  },
  ...
}
So, right now I have an aggregation that tells me which elements are duplicated and stores them in a collection. There is surely a more elegant way, but I am still learning.
[{$sort: {_id: 1}},
{$group: {
  _id: "$match._id",
  duplicateIds: {$push: "$_id"},
  count: {$sum: 1}
}},
{$match: {
  count: { $gt: 1 }
}},
{$addFields: {
  deletableIds: { $slice: ["$duplicateIds", 1, 1000] }
}},
{$out: 'DeleteableIds'}]
Now I do not know how to proceed, as there does not seem to be a "delete" operation in aggregations, and I do not want to write that temp data to a collection just so I can run a delete command with it; I want to delete the duplicates in one go. Is there any other way to do this? I am still learning MongoDB and feel a little overwhelmed :/
Rather than doing all of that, you can just pick the first document in each group for _id: "$match._id" and make it the root document. Also, I don't think you need the sort in your case:
db.collection.aggregate([
  {
    $group: {
      _id: "$match._id",
      doc: { $first: "$$ROOT" }
    }
  },
  {
    $replaceRoot: { newRoot: "$doc" }
  },
  { $out: 'DeleteableIds' }
])
I think you're on the right track. However, to delete the duplicates you've found, you can use a bulk write on the collection.
So if we imagine your aggregation query saved the following in the DeleteableIds collection:
> db.DeleteableIds.insertMany([
... {deletableIds: [1,2,3,4]},
... {deletableIds: [103,35,12]},
... {deletableIds: [345,311,232,500]}
... ]);
We can now take them and write a bulk write command:
const bulkwrite = db.DeleteableIds.find().map(x => ({ deleteMany : { filter: { _id: { $in: x.deletableIds } } } }))
Then we can execute that against the original collection (collection1 in this example):
> db.collection1.bulkWrite(bulkwrite)
This will then delete all the duplicates.
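If you would rather not stage anything through a temporary collection at all, here is a minimal sketch in mongosh (the original collection name docs is assumed) that feeds the aggregation result straight into deleteMany:
// Collect every duplicate _id in memory, then delete them in one call;
// fine for a modest number of duplicates.
const ids = db.docs.aggregate([
  { $sort: { _id: 1 } },
  { $group: { _id: "$match._id", dups: { $push: "$_id" } } },
  { $match: { "dups.1": { $exists: true } } }
]).toArray().flatMap(d => d.dups.slice(1)); // keep the first, drop the rest
db.docs.deleteMany({ _id: { $in: ids } });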

Why is sorting documents by _id slower with $match than without in MongoDB?

So, I tried to query
db.collection('collection_name').aggregate([
  { $match: { owner_id: '5be9b2f03ef77262c2bd49e6' } },
  { $sort: { _id: -1 } }
])
The query above takes 20s,
but if I try
db.collection('collection_name').aggregate([{ $sort: { _id: -1 } }])
it only takes 0.7s.
Why is the query without $match actually faster than the one with $match?
Update: when I try this query
db.getCollection('callbackvirtualaccounts').aggregate([
  { $match: { owner_id: '5860457640b4fe652bd9c3eb' } },
  { $sort: { created: -1 } }
])
it only takes 0.781s.
Why is sorting by _id slower than sorting by the created field?
Note: I'm using MongoDB v3.0.0.
db.collection('collection_name').aggregate([
  { $match: { owner_id: '5be9b2f03ef77262c2bd49e6' } },
  { $sort: { _id: -1 } }
])
This collection probably doesn't have an index on owner_id. Try one of the index creation commands below and rerun your query:
db.collection('collection_name').createIndex({ owner_id: 1 }) // simple index
or
db.collection('collection_name').createIndex({ owner_id: 1, _id: -1 }) // compound index
Note: if you are not comfortable with compound indexes yet, you can create simple indexes individually on every key used in either the match or the sort; that should make the query efficient as well.
Query speed depends on many factors: the size of the collection, the size of the documents, the indexes defined on the collection (and whether the queries actually use them), the hardware (CPU, RAM, network) and the other processes running while the query runs.
For further analysis you need to share which indexes are defined on the collection; this command retrieves them: db.collection.getIndexes()
Note that the unique index on the _id field is created by default and cannot be modified or deleted.
(i)
But if I try db.collection.aggregate([ { $sort: { _id: -1 } } ]), it only takes 0.7s.
This query is faster because there is an index on the _id field and it is used in the sort. Aggregation queries use indexes in the sort stage when the sort happens early in the pipeline. You can verify whether the index is used by generating a query plan (use explain with executionStats mode); an index scan shows up as an IXSCAN stage in the generated plan.
(ii)
db.collection.aggregate([
  { $match: { owner_id: '5be9b2f03ef77262c2bd49e6' } },
  { $sort: { _id: -1 } }
])
The query above takes 20s, while this one takes only 0.781s:
db.collection.aggregate([
  { $match: { owner_id: '5860457640b4fe652bd9c3eb' } },
  { $sort: { created: -1 } }
])
Why is sorting by _id slower than sorting by the created field?
No conclusions can be drawn from the available information. In general, $match and $sort stages early in an aggregation pipeline can use any indexes created on the fields they operate on.
Generating a query plan will reveal what the issues are.
Please run explain with executionStats mode and post the query plan details for all the queries in question. There is documentation for MongoDB v3.0.0 on generating query plans with explain: db.collection.explain()
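For example, on a modern shell the plan for the updated query can be pulled like this (a sketch; executionStats for aggregate requires a newer server than 3.0):
// Look for IXSCAN (index scan) vs. COLLSCAN (full collection scan)
// in the winning plan of the output.
db.callbackvirtualaccounts.explain("executionStats").aggregate([
  { $match: { owner_id: '5860457640b4fe652bd9c3eb' } },
  { $sort: { created: -1 } }
])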

MongoDB / MongoEngine: Get 2 Counts in 1 Query

I'm trying to minimize the number of database calls in an application.
Is it possible to complete these two queries in a single call?
system_0 = System.objects(platform_id=platform_id, type=0).count()
system_1 = System.objects(platform_id=platform_id, type=1).count()
I don't know MongoEngine, but I think you will be able to translate this mongo shell answer into what's appropriate for you. Yes, you can achieve it with aggregation. For example:
db.collection.aggregate([{
  $match: { platform_id: ... }
}, {
  $group: {
    _id: "$type",
    count: { $sum: 1 }
  }
}]);
If you have more types than just 0 and 1, you can exclude the extra ones in the $match stage as well.
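Putting that together, the filter might look like this (a sketch; the collection name and the placeholder platform id are assumed):
db.system.aggregate([{
  $match: { platform_id: "your-platform-id", type: { $in: [0, 1] } }
}, {
  $group: {
    _id: "$type",
    count: { $sum: 1 }
  }
}]);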