How to speed up aggregate queries in MongoDB

I am running examples of aggregate queries similar to this:
https://www.compose.com/articles/aggregations-in-mongodb-by-example/
db.mycollection.aggregate([
  { $match: { "nested.field": "1110" } },
  {
    $group: {
      _id: null,
      total: { $sum: "$nested.field" },
      average_transaction_amount: { $avg: "$nested.field" },
      min_transaction_amount: { $min: "$nested.field" },
      max_transaction_amount: { $max: "$nested.field" }
    }
  }
]);
One collection that I created has 5,000,000 large JSON documents inserted (around 1,000 key->value pairs each, some nested).
Before adding an index on one nested field, a count on that field took around 5 minutes.
After adding the index, the count takes less than a second (which is good).
Now when I try to do a SUM or AVG or anything else like the example above, it takes minutes (not seconds).
Is there a way to improve aggregate queries in MongoDB?
Thanks!

Unfortunately, $group currently does not use indexes in MongoDB; only $sort and $match can take advantage of them. So the query as you wrote it is about as optimized as it can be.
There are a couple of things you could do. For max and min, you could just query them instead of using the aggregation framework: sort by nested.field and take just one document. If you put an index on nested.field, the same index supports sorting either ascending or descending.
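For example, a minimal sketch of that find()-based approach (collection and field names taken from the question):
// The index on the nested field supports both sort directions
db.mycollection.createIndex({ "nested.field": 1 });

// Max: sort descending on the indexed field and take one document
db.mycollection.find().sort({ "nested.field": -1 }).limit(1);

// Min: the same index also supports the ascending sort
db.mycollection.find().sort({ "nested.field": 1 }).limit(1);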
If you have any control over how the data is inserted, and the query is as simple as it looks, you could keep track of the totals yourself. You could have a separate collection keyed by the "Id" (or whatever you are grouping on) with fields for "total" and "count", increment them on each insert, and then getting the total and average becomes a fast query. Not sure if that's an option for your situation, but it's about the best you can do.
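A hypothetical sketch of that idea in the shell; the collection name mycollection_totals and the fields groupKey/amount are illustrative assumptions, not from the question:
function insertTransaction(doc) {
  // normal insert into the main collection
  db.mycollection.insert(doc);
  // keep a running sum and count per grouping value in a small side collection
  db.mycollection_totals.update(
    { _id: doc.groupKey },                      // whatever you are grouping on
    { $inc: { total: doc.amount, count: 1 } },  // running total and count
    { upsert: true }
  );
}

// Reading the report is then a cheap findOne(); average = total / count
var report = db.mycollection_totals.findOne({ _id: "1110" });
printjson({ total: report.total, average: report.total / report.count });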
Generally, Mongo is super fast. In my opinion, the only place it's not quite as good as SQL is aggregation. To me the benefits heavily outweigh the struggles; I generally maintain separate reporting collections for this kind of situation, as recommended above.

Related

What is the proper index that I should use in MongoDB for the particular scenario explained below?

I have the following query to execute on my MongoDB collection order_error, which has over 60 million documents. The main concern is that I have an $in operator within my query. I tried several possibilities of indexes, but none of them gave a big performance improvement. The query is as follows:
db.getCollection("order_error").find({
"$and":[
{
"type":"order"
},
{
"Origin.SN":{
"$in":[
"4095",
"4100",
"4509",
"4599",
"4510"
]
}
}
]
}).sort({"timestamp.milliseconds" : 1}).skip(1).limit(100).explain("executionStats")
One issue that needs to be noted is that I allow sorting on timestamp.milliseconds in both directions (ASC + DESC). I have limited the entries within the $in here; usually there are more. So what kind of index gives a performance improvement? I already tried creating the following indexes:
type_1_Origin.SN_1_timestamp.milliseconds_-1
type_1_timestamp.milliseconds_-1_Origin.SN
Is there any better way for index creation?
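For reference, in createIndex() syntax the two attempts listed above correspond roughly to the following (the sort direction of Origin.SN in the second index is an assumption, since its name does not include one):
db.getCollection("order_error").createIndex(
  { "type": 1, "Origin.SN": 1, "timestamp.milliseconds": -1 }
);
db.getCollection("order_error").createIndex(
  { "type": 1, "timestamp.milliseconds": -1, "Origin.SN": 1 }
);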

MongoDB query optimizer keeps choosing the least efficient index for the query

I have a large collection (~20M records) of moderately sized documents with ~20 indexed fields. All of those indexes are single-field. This collection also has quite a lot of write and read traffic.
MongoDB version is 4.0.9.
I am seeing at peak times that the query optimizer keeps selecting a very inefficient index for the winning plan.
In the example query:
{
  name: 'Alfred Mason',
  created_at: { $gt: ... },
  active: true
}
All of the fields are indexed:
{ name: 1 }
{ created_at: 1 }
{ active: 1 }
When I run explain(), the winning plan uses the created_at index, which scans ~200k documents before returning the 4 that match the query. Query execution time is ~6000 ms.
If I use $hint to force the name index, it will scan 6 documents before returning 4 that match the query. Execution time is ~2 ms.
Why does the query optimizer keep selecting the slowest index? It does seem suspicious that it only happens during peak hours, when there is more write activity on the collection, but what is the exact reasoning? What can I do about it?
Is it safe to use $hint in a production environment?
Is it reasonable to remove the index on the date field completely, since the $gt query doesn't seem any faster than a COLLSCAN? That could force the query optimizer to use another indexed field. But then again, it could also select another inefficient index (the boolean field).
I can't use compound indexes as there are a lot of use cases that use different combinations of all 20 indexes available.
There could be a number of reasons why Mongo appears to not be using the best execution plan, including:
The running time and execution plan estimate for the single-field index on the name field is not accurate. This could be due to bad statistics, i.e. Mongo is making an estimate using stale or out-of-date information.
While for your particular query the created_at index is not optimal, in general, for most of the possible queries on this field, the created_at index would be optimal.
My answer here is actually that you should probably be using a compound (multi-field) index, given that you are filtering on multiple fields. For the example filter you gave in the question:
{
  name: 'Alfred Mason',
  created_at: { $gt: ... },
  active: true
}
I would suggest trying both of the following indices:
db.getCollection('your_collection').createIndex(
{ "name": 1, "created_at": 1, "active": 1 } );
and
db.getCollection('your_collection').createIndex(
{ "created_at": 1, "name": 1, "active": 1 } );
Whether you would want created_at first in the index, or rather name first, depends on which field has the higher cardinality. Cardinality basically means how unique the values in a given field are. If every name in the collection is distinct, then you would probably want name first. On the other hand, if every created_at timestamp is expected to be unique, then it might make sense to put that field first. As for active, it appears to be a boolean field and, as such, can only take on two values (true/false); it should be last in the index (and you might even be able to omit it entirely).
I do not think it is necessary to index all fields, and it is better to choose the appropriate fields.
Prefixes in compound indexes may also be useful for you.
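A short sketch of how index prefixes help here (collection name from the example above; the date is a placeholder for the elided $gt value): with the compound index { name: 1, created_at: 1, active: 1 } in place, queries on a leading subset of those fields can use the same index, so they don't each need their own single-field index.
// Both of these can use the { name: 1, created_at: 1, active: 1 } index,
// because they filter on a prefix of its key pattern:
db.getCollection('your_collection').find({ name: 'Alfred Mason' });
db.getCollection('your_collection').find({
  name: 'Alfred Mason',
  created_at: { $gt: ISODate("2020-01-01") }   // placeholder date
});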

How to query data efficiently in a large MongoDB collection?

I have one big MongoDB collection (3 million docs, 50 GB), and it is very slow to query the data even though I have created indexes.
db.collection.find({"C123":1, "C122":2})
e.g. the query will time out or be extremely slow (10 s at least), even though I have created separate indexes for C123 and C122.
Should I create more indexes or increase the physical memory to accelerate the querying?
For such a query you should create a compound index covering both fields; then it should be very efficient. Creating separate indexes won't help you much, because the MongoDB engine will use the first index to get the results for the first part of the query, but the second one, even if it is used, won't help much (and in some cases can even slow your query down, because of the extra lookup in the index and then in the real data again). You can confirm which indexes are used by calling .explain() on your query in the shell.
See compound indexes:
https://docs.mongodb.com/manual/core/index-compound/
Also consider sorting directions on both your fields while making indexes.
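A minimal sketch of the compound index suggested above, using the field names from the question (the sort directions are an assumption):
db.collection.createIndex({ "C123": 1, "C122": 1 });

// Confirm the query actually uses it
db.collection.find({ "C123": 1, "C122": 2 }).explain("executionStats");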
The answer is really simple.
You don't need to create more indexes, you need to create the right indexes. An index on field c124 won't help queries on field c123, so there's no point in creating it.
Use better/more hardware. More RAM, more machines (sharding).
Create the right indexes and use compound indexes carefully. (You can have at most 64 indexes per collection and 31 fields in a compound index.)
Use Mongo-side pagination.
Try to find out the most-used queries and build compound indexes around those.
Compound indexes strictly follow field order, so read the documentation and run trials.
Also try covered queries for 'summary'-style queries (see the sketch after this list).
Learned it the hard way.
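As a sketch of a covered query, using the field names from the question: if the filter and the projection only touch indexed fields (and _id is excluded), MongoDB can answer from the index alone without touching the documents.
db.collection.createIndex({ "C123": 1, "C122": 1 });

// Covered: both the filter and the projection are satisfied by the index
db.collection.find(
  { "C123": 1, "C122": 2 },
  { "_id": 0, "C123": 1, "C122": 1 }
);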
Use skip and limit. Run a loop that processes 50,000 documents at a time.
https://docs.mongodb.com/manual/reference/method/cursor.skip/
https://docs.mongodb.com/manual/reference/method/cursor.limit/
Example:
db.collection.aggregate(
  [
    {
      $group: {
        _id: "$myDoc.homepage_domain",
        count: { $sum: 1 },
        entry: {
          $push: {
            location_city: "$myDoc.location_city",
            homepage_domain: "$myDoc.homepage_domain",
            country: "$myDoc.country",
            employee_linkedin: "$myDoc.employee_linkedin",
            linkedin_url: "$myDoc.linkedin_url",
            homepage_url: "$myDoc.homepage_url",
            industry: "$myDoc.industry",
            read_at: "$myDoc.read_at"
          }
        }
      }
    },
    // increase the $skip offset by 50000 on each pass of the loop
    { $skip: 50000 },
    { $limit: 50000 }
  ],
  { allowDiskUse: true }
).forEach(function (myDoc) {
  print(
    db.Or9.insert({
      "HomepageDomain": myDoc._id,                   // the grouped homepage_domain
      "location_city": myDoc.entry[0].location_city  // taken from the pushed entries
    })
  );
});

MongoDB Aggregate of 120M documents

I have a system that records entries by action. There are more than 120M of them, and I want to group them by id_entry with aggregate. The structure is as follows:
entry
{
  id_entry: ObjectId(...),
  created_at: Date(...),
  action: {object},
}
When I try to aggregate by id_entry, grouping its actions, it takes more than 3 hours to finish:
db.entry.aggregate([
  { '$match': { 'created_at': { $gte: ISODate("2016-02-02"), $lt: ISODate("2016-02-03") } } },
  { '$group': {
      '_id': { 'id_entry': '$id_entry' },
      actions: { $push: '$action' }
  }}
])
But in that range of days there are only around ~4M documents (id_entry and created_at both have indexes).
What am I doing wrong in the aggregate? How can I group 3-4M documents by id_entry in less than 3 hours?
Thanks
To speed up your particular query, you need an index on the created_at field.
However, the overall performance of the aggregation will also depend on your hardware specification (among other things).
If you find the query's performance to be less than what you require, you can either:
Create a pre-aggregated report (essentially a document that contains the aggregated data you require, updated every time new data is inserted), or
Utilize sharding to spread your data to more servers.
If you need to run this aggregation query all the time, a pre-aggregated report allows you to have an extremely up-to-date aggregated report of your data that is accessible using a simple find() query.
The tradeoff is that for every insertion, you will also need to update the pre-aggregated document to reflect the current state of your data. However, this is a relatively small tradeoff compared to having to run a long/complex aggregation query that could interfere with your day-to-day operations.
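A hypothetical sketch of the pre-aggregated report approach; the collection name entry_daily_report is an illustrative assumption, while the field names come from the question:
function recordEntry(newEntry) {
  // normal insert into the main collection
  db.entry.insert(newEntry);

  // keep a per-id_entry, per-day summary document up to date in the same write path
  var day = newEntry.created_at.toISOString().slice(0, 10);
  db.entry_daily_report.update(
    { _id: { id_entry: newEntry.id_entry, day: day } },
    { $push: { actions: newEntry.action } },
    { upsert: true }
  );
}

// The aggregated view is then a simple find() instead of a multi-hour pipeline
db.entry_daily_report.find({ "_id.day": "2016-02-02" });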
One caveat with the aggregation framework: once the aggregation pipeline encounters a $group or a $project stage, no index can be used. This is because MongoDB indexes are tied to how the documents are stored physically. Grouping and projecting transform the documents into a state where they no longer have a physical representation on disk.

How to make the distinct operation quicker in MongoDB

There are 30,000,000 records in one collection.
When I use the distinct command on this collection from Java, it takes about 4 minutes, and the result count is about 40,000.
Is MongoDB's distinct operation really this inefficient?
And how can I make it more efficient?
Is MongoDB's distinct operation really this inefficient?
At 30M records? I would say 4 minutes is actually quite good; I think that's about as fast as, maybe a little faster than, SQL would do it.
I would probably test this in other databases before saying it is inefficient.
However, one way of looking at performance is to see if the field is indexed first and if that index is in RAM or can be loaded without page thrashing. Distinct() can use an index so long as the field has an index.
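For example, a minimal sketch assuming the distinct field is called myField (a placeholder, since the question does not name it):
// With an index on the field, distinct() can read values from the index
// instead of scanning all 30M documents
db.collection.createIndex({ myField: 1 });
db.collection.distinct("myField");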
And how can I make it more efficient?
You could use a couple of methods:
Use incremental map-reduce to "distinct" the main collection into a unique collection once every, say, 5 minutes.
Pre-aggregate the unique collection on save, by writing to two collections: one detail and one unique (see the sketch below).
Those are the two most viable methods of getting around this performantly.
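A hypothetical sketch of the second method; the collection names (detail, unique_values) and the field name value are illustrative assumptions:
function saveRecord(doc) {
  // the full record goes into the detail collection
  db.detail.insert(doc);

  // one document per distinct value is maintained in the unique collection
  db.unique_values.update(
    { _id: doc.value },
    { $setOnInsert: { first_seen: new Date() } },
    { upsert: true }
  );
}

// "distinct" is then just a scan of the small unique collection
db.unique_values.find();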
Edit
Distinct() is not outdated, and if it fits your needs it is actually more performant than $group, since it can use an index.
The .distinct() operation is an old one, as is .group(). In general these have been superseded by .aggregate(), which should generally be used in preference to those actions:
db.collection.aggregate([
  { "$group": {
    "_id": "$field",
    "count": { "$sum": 1 }
  }}
])
Substituting "$field" with whatever field you wish to get a distinct count from. The $ prefixes the field name to assign the value.
Look at the documentation and especially $group for more information.