MongoDB Aggregation Performance

We have a problem with aggregation queries running for a long time (a couple of minutes).
Collection:
We have a collection of 250 million documents with about 20 fields per document.
The total size of the collection is 110 GB.
We have indexes on the "our_id" and "dtKey" fields.
Hardware:
Memory:
24 GB RAM (6 × 4 GB DIMMs, 1333 MHz)
Disk:
11 TB LVM volume built from four 3 TB disks:
600 MB/s maximum instantaneous data transfer.
7200 RPM spindles, average latency = 4.16 ms.
RAID 0
CPU:
2 × Xeon E5-2420 @ 1.90 GHz
Total of 12 cores with 24 threads.
Dell R420.
Problem:
We are trying to make an aggregation query of the following:
db.our_collection.aggregate([
  {
    "$match": {
      "$and": [
        { "dtKey": { "$gte": 20140916 } },
        { "dtKey": { "$lt": 20141217 } },
        { "our_id": "111111111" }
      ]
    }
  },
  {
    "$project": {
      "field1": 1,
      "date": 1
    }
  },
  {
    "$group": {
      "_id": {
        "day": { "$dayOfYear": "$date" },
        "year": { "$year": "$date" }
      },
      "field1": { "$sum": "$field1" }
    }
  }
]);
This query takes a couple of minutes to run. While it is running we can see the following:
The current Mongo operation yields more than 300K times.
iostat shows ~100% disk utilization.
Once the query is done the data appears to be in cache, and running it again completes in a split second.
After running it for 3-4 users, the data for the first one has already been evicted from the cache and the query takes a long time again.
We have tested a count on the matching part and seen that some users have 50K documents while others have 500K documents.
We also tried running only the matching part:
db.pub_stats.aggregate([
  {
    "$match": {
      "$and": [
        { "dtKey": { "$gte": 20140916 } },
        { "dtKey": { "$lt": 20141217 } },
        { "our_id": "112162107" }
      ]
    }
  }
]);
This query seems to use approximately 300-500 MB of memory,
but after running the full pipeline, memory usage seems to grow to about 3.5 GB.
Questions:
Why does the aggregation pipeline take so much memory?
How can we improve performance so it runs in a reasonable time for an HTTP request?

Why does the aggregation pipeline take so much memory?
Performing only a $match doesn't have to read the actual documents; it can be satisfied from the indexes. Once the projection accesses field1, the actual documents have to be read, and they will probably be cached as well.
Also, grouping can be expensive. Normally, it should report an error if the grouping stage requires more than 100 MB of memory - what version are you using? Grouping has to scan the entire result set before yielding anything, and MongoDB has to store at least a pointer or an index to each element in the groups. I guess the key reason for the memory increase is the former.
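If the group stage does hit that 100 MB limit, spilling to disk can be enabled explicitly. A minimal sketch against the collection from the question; the option itself is standard, but whether the extra disk I/O is acceptable here is another matter:

db.our_collection.aggregate(
  [ /* the same $match, $project, $group pipeline as above */ ],
  { allowDiskUse: true }  // lets $group spill to temporary files instead of erroring at 100 MB
);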
How can we improve performance so it runs in a reasonable time for an HTTP request?
Your dtKey appears to encode time, and the grouping is also done based on time. I'd try to exploit that fact - for instance, by precomputing aggregates for each day and our_id combination, as sketched below. That makes a lot of sense if there are no other criteria and the data doesn't change much anymore.
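A minimal sketch of such a precomputation, assuming a summary collection named daily_totals (the name is made up) and a MongoDB version recent enough to support $merge (4.2+); on older versions the same idea works with $out or a periodic batch job:

// Run periodically (e.g. nightly) to maintain per-day, per-our_id totals.
db.our_collection.aggregate([
  { "$group": {
      "_id": { "our_id": "$our_id", "dtKey": "$dtKey" },
      "field1": { "$sum": "$field1" }
  } },
  { "$merge": { "into": "daily_totals", "whenMatched": "replace" } }
], { allowDiskUse: true });

// The HTTP-facing query then only touches the (much smaller) summary collection:
db.daily_totals.find({ "_id.our_id": "111111111", "_id.dtKey": { "$gte": 20140916, "$lt": 20141217 } });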
Otherwise, I'd try to move the {"our_id":"111111111"} criterion to the first position, because an equality condition should always precede range conditions. I guess the query optimizer of the aggregation framework is smart enough, but it's worth a try. Also, you might want to try replacing your two indexes with a single compound index { our_id, dtKey }. Index intersection is supported now, but I'm not sure how efficient it really is. Use the built-in profiler and .explain() to analyze your query.
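A sketch of both suggestions; treat it as an outline rather than exact commands, since the explain helper for aggregate differs between server/shell versions:

// Compound index: equality field first, range field second.
db.our_collection.createIndex({ "our_id": 1, "dtKey": 1 });

// Check which index the $match stage actually uses. explain("executionStats")
// on aggregate needs a reasonably recent shell; older versions pass the
// { explain: true } option to aggregate instead.
db.our_collection.explain("executionStats").aggregate([
  { "$match": { "our_id": "111111111", "dtKey": { "$gte": 20140916, "$lt": 20141217 } } }
]);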
Lastly, MongoDB is designed for write-heavy use, and scanning data sets of hundreds of GB from disk in a matter of milliseconds simply isn't feasible. If your dataset is larger than your RAM, you'll face massive I/O delays on the scale of tens of milliseconds and upwards, tens or hundreds of thousands of times, because of all the required disk operations. Remember that with random access you'll never get anywhere near the theoretical sequential disk transfer rates. If you can't precompute, I guess you'll need a lot more RAM. Maybe SSDs would help, but that is all just guesswork.

Related

Aggregate $lookup so slow

1st stage:
$match
{
  delFlg: false
}
2nd stage:
$lookup
{
  from: "parkings",
  localField: "parking_id",
  foreignField: "_id",
  as: "parking"
}
Two indexes are used (_id and parking_id).
On the production MongoDB server (Linux), the Compass explain plan shows almost 9000 ms.
On the local MongoDB server, the explain plan shows 1400 ms.
Is there any problem? Why is our production server running so slowly?
Full explain plan (Compass query performance summary; the nested fields are truncated):
Documents returned: 71184
Actual query execution time (ms): 8763
Query used the following indexes: 2 (_id_, delFlg_1)
explainVersion: "1"
stages: Array
serverInfo: Object
serverParameters: Object
command: Object
ok: 1
$clusterTime: Object
operationTime: Timestamp({ t: 1666074389, i: 3 })
Production server: MongoDB 5.0.5
Local server: MongoDB 6.0.2
Unfortunately the explain output that was provided is mostly truncated (e.g. Array and Object hiding most of the important information). Even without it, there are some meaningful observations we can make.
As always, I suggest starting by defining your ultimate goals more clearly. What is considered "so slow", and how quickly would the operation need to return in order to be acceptable for your use case? As outlined below, I doubt there is much room for improvement with the current schema. You may need to rethink the approach entirely if you are looking for improvements that are several orders of magnitude in nature.
Let's consider what this operation is doing:
The database goes to an entry in the { delFlg: 1 } index that has a value of false.
It then goes to FETCH that full document from the collection.
Based on the value of the parking_id field, it performs an IXSCAN of the { _id: 1 } index on the parkings collection. If a matching index entry is discovered then the database will FETCH the full document from the other collection and add it to the new parking array field that is being generated.
This full process then repeats 71,183 more times.
Using 8763ms as the duration, that means that the database performed each full iteration of the work described above more than 8 times per millisecond (8.12 ~= 71184 (iters) / 8763 (ms)). I think that sounds pretty reasonable and is unlikely to be something that you can meaningfully improve on. Scaling up the hardware that the database is running on may provide some incremental improvements, but it is generally costly, not scalable, and likely not worthwhile if you are looking for more substantial improvements.
You also mentioned two different timings and database versions in the question. If I understood correctly, it was mentioned that the (explain) operation takes ~9 seconds on version 5.0 and about 1.4 seconds on version 6.0. While there is far too little information here to say for sure, one reason for that may be associated with improvements for $lookup that were introduced in version 6.0. Indeed from their 7 Big Reasons to Upgrade to MongoDB 6.0 article, they claim the following:
The performance of $lookup has also been upgraded. For instance, if there is an index on the foreign key and a small number of documents have been matched, $lookup can get results between 5 and 10 times faster than before.
This seems to match your situation and observations pretty directly.
Apart from upgrading, what else might improve performance here? Well, if many/most documents in the source collection have { delFlg: false } then you may wish to get rid of the { delFlg: 1 } index. Indexes provide benefits when the size of the results they retrieve is small relative to the overall size of the data in the collection. As that percentage grows, the overhead of scanning most of the index plus randomly fetching all of the data quickly becomes less effective than just scanning the full collection directly. It is mentioned in the comments that the invoices collection contains 70k documents, so this predicate on delFlg hardly removes any results at all.
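If that turns out to be the case, dropping it is a one-liner (the collection name invoices comes from the comments, and the index name delFlg_1 from the explain output above):

db.invoices.dropIndex("delFlg_1")  // only worthwhile if delFlg: false matches most documents anyway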
One other thing that really stands out is this statement from the comments: "parkings contains 16 documents". Should the information from those 16 documents just be embedded into the invoices documents directly instead? If the two are commonly accessed together and the parkings information doesn't change very often, then combining the two would remove a lot of overhead and would further improve performance.
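A rough one-off migration along those lines might look like this; collection and field names are taken from the question and comments, and which parking fields to copy is an assumption:

// Copy the referenced parking document onto every invoice, so the
// aggregation no longer needs a $lookup at read time.
db.parkings.find().forEach(function (p) {
  db.invoices.updateMany(
    { parking_id: p._id },
    { $set: { parking: p } }   // or $set only the handful of fields actually needed
  );
});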
Two additional components about explain that are worth keeping in mind:
The command does not factor in network time to actually transmit the results to the client. Depending on the size of the documents and the physical topology of the environment, this network time could add another meaningful delay to the operation. Be sure that the client really needs all of the data that is being sent.
Depending on the verbosity, explain will measure the time it takes to run the operation through completion. Since there are no blocking stages in your aggregation pipeline, we would expect your initial batch of documents to be returned to the client in much less time. Apart from networking, that time may be around 12ms (which is approximately (101 docs / 71184 ) * 8763) for 5.0.

Mongo query with $or over 10,000 $geoWithin clauses completed faster when split into sets of 1,000 each run in parallel - why?

Basically, my collection has 200,000 documents, and I'm applying $geoWithin over a bunch of selected locations to find the documents within those locations.
The query looks something like this:
{
  "$or": [
    {
      "location": {
        "$geoWithin": {
          "$centerSphere": [
            [ -12, 23 ],   // coordinates from document 1
            0.00015
          ]
        }
      }
    },
    {
      "location": {
        "$geoWithin": {
          "$centerSphere": [
            [ -43, 51 ],   // coordinates from document 2
            0.00015
          ]
        }
      }
    },
    ...
  ]
}
It took several minutes to complete when the $or covered 8,000-10,000 locations. However, when we tried splitting the request into multiple similar requests with fewer locations each and running them in parallel, we got results quickly at a certain batch size; increasing it further increased the time taken again, and so did decreasing it drastically.
My question is: why is this happening, and how can we figure out the batch size that minimizes the time taken? Are there known factors to consider?
EDIT: adding query planner
100 locations $geoWithin per call, total - 7.66 secs, Query planner + executionStats - https://hastebin.com/ayudozabaz.bash
1000 locations $geoWithin per call, total - 6.006 secs, Query planner + executionStats - https://hastebin.com/komalicosu.bash
10000 locations $geoWithin per call, total - 16.384 secs, Query planner + executionStats - https://hastebin.com/kezamedisa.bash
When the application queries the database, the following things happen:
The query must be constructed. Array appends are generally linear in the size of the array; if an array of conditions is built up one condition at a time, the entire process takes quadratic time in the number of conditions.
The query must be sent to the server. Latency is expended per query, so fewer queries means less overhead per document returned.
The server must execute the query. There are fixed costs like coming up with a query plan and variable costs including traversing the collection. When more documents are retrieved per query, the overhead per document decreases.
The driver must instantiate language-specific data structures for the returned result. Like #1, this can be quadratic in the result size.
As the batch size increases (100 -> 1,000 -> 10,000 in the stated question), the per-document overhead in steps #1 and #4 increases while that in steps #2 and #3 decreases. It appears that for your query and your data set, a batch size of 1,000 produces optimal performance.
The provided execution plans show that the time for each individual query increases monotonically with input size, which is expected behavior, but the growth is not linear in input size, which yields an apparent optimal batch size somewhere in the middle of the tested range.
With a different query structure, different query inputs, or different data in the collection, the optimal batch size may be different.
Coming up with a universally optimal batch size is hard. One way of approaching it is to randomize inputs and test various input possibilities, to arrive at a batch size that works well in most cases (or, said differently, performs poorly in few cases).
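A rough way to measure this empirically with the Node.js driver; the collection name, database name, and the candidate batch sizes below are all assumptions for illustration:

// Split the $geoWithin clauses into chunks of a given size, run the chunks in
// parallel, and time the whole batch for several candidate chunk sizes.
const { MongoClient } = require("mongodb");

async function timeBatchSizes(uri, centers, sizes = [100, 500, 1000, 5000, 10000]) {
  const client = await MongoClient.connect(uri);
  const coll = client.db("test").collection("places");
  for (const size of sizes) {
    const chunks = [];
    for (let i = 0; i < centers.length; i += size) chunks.push(centers.slice(i, i + size));
    const start = Date.now();
    await Promise.all(chunks.map(chunk =>
      coll.find({
        $or: chunk.map(c => ({ location: { $geoWithin: { $centerSphere: [c, 0.00015] } } }))
      }).toArray()
    ));
    console.log(`batch size ${size}: ${Date.now() - start} ms`);
  }
  await client.close();
}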

MongoDB: reduce read size and RAM needed with project?

I am designing a MongoDB database that looks something like this:
registry:{
id:1,
duration:123,
score:3,
text:"aaaaaaaaaaaaaaaaaaaaaaaaaaaa"
}
The text field is very big compared to the rest. I sometimes need to perform analytics queries that average the duration or the score, but never use the text.
I also have more specific queries that retrieve all the information about a single document. For those queries I could afford to spend more time and make two queries to retrieve all the data.
My question is, if I make a query like this:
db.registries.aggregate([
  {
    $group: {
      _id: null,
      averageDuration: { $avg: "$duration" }
    }
  }
])
Would it need to read the data from the text field? That would make the query much slower and take a lot of RAM. If that is the case, would it be better to split the records in two and have something like this?
registry: {
  id: 1,
  duration: 123,
  score: 3
}
registry_text: {
  id: 1,
  text: "aaaaaaaaaaaaaaaaaaaaaaaaaaaa"
}
Thanks a lot!
I don't know how the server works in this case, but I expect that, for caching reasons, the server will load complete documents into memory when it reads them from disk. Disk reads are very slow (i.e. expensive in time taken), and I expect the server will aggressively use memory if it can to avoid reads.
An important note here is that the documents are stored on disk as lists of key-value pairs comprising their contents. To not load a field from disk the server would have to rebuild the document in question as part of reading it since there are length fields involved. I don't see this happening in practice.
So, once the documents are in memory I assume they are there with all of their fields and I don't expect you can tune this.
When you are querying, the server may or may not drop individual fields but this would only change the memory requirements for the particular query. Generally these memory requirements are dwarfed by the overall database cache size and aggregation pipelines. So I don't think it really matters at what point a large field is dropped from a document during query processing (assuming you project it out in the query).
I think this isn't a worthwhile matter to try to ponder/optimize. If you have a real system with real workloads, you'll be much more pressed to optimize something else.
If you are concerned with memory usage when the amount of available memory is consumer-sized (say, under 16 GB), just get more memory - it's insanely cheap given how much time you'd spend working around the lack of it (whether we are talking about provisioning bigger AWS instances or buying more sticks of RAM).
You should be able to use $project to limit the fields read.
As general advice, don't try to normalize data in MongoDB the way you would with SQL. Also, it's often more performant to read documents plainly from the DB and do the processing on your application server.
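For example, a minimal sketch of placing a $project stage before the $group so only the small numeric field travels through the rest of the pipeline; whether this also reduces what the server reads from disk is exactly what is questioned below:

db.registries.aggregate([
  { $project: { duration: 1, _id: 0 } },   // drop the large text field early in the pipeline
  { $group: { _id: null, averageDuration: { $avg: "$duration" } } }
])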
I have found this answer, which seems to indicate that $project still needs to fetch the full documents on the database server and only reduces bandwidth:
When using projection to remove unused fields, the MongoDB server will have to fetch each full document into memory (if it isn't already there) and filter the results to return. This use of projection doesn't reduce the memory usage or working set on the MongoDB server, but can save significant network bandwidth for query results depending on your data model and the fields projected.
https://dba.stackexchange.com/questions/198444/how-mongodb-projection-affects-performance

MongoDB hanging count query on collection with large objects

I have a collection with 10,000 documents. Each document is around 500 KB in size since they include images. For statistics, I need to count documents by their creation time. Even though I have indexes, counting over the whole collection takes more than 15 seconds. When I remove the image field (i.e. the document becomes a simple JSON object), the query returns immediately. I do not understand why the size of the documents affects performance this much. Here is a sample query I have been using:
const aggregation = [
  { "$match": { "createTime": { "$gte": "2019-01-01T00:00:00.000Z" } } },
  { "$match": { "createTime": { "$lte": "2020-01-01T23:59:59.999Z" } } },
  { "$count": "value" }
];
myCollection.aggregate(aggregation).then(foo);
Is there a way to make the query faster?
One solution I could think of is to store the images in a separate collection. This will definitely make the query faster, but I am wondering about the reason behind this performance drop.
500 KB × 10,000 documents is roughly 5 GB of data to examine. That might take a few seconds, especially if your cache is smaller than that.
Try doing this with a count query instead.
Assuming there is an index on createTime, and no document in the collection contains an array for that field (i.e. the index is not multikey), this query should be able to be fully covered.
This means that the query executor should use a COUNT_SCAN stage to find the number of matching documents by scanning the index, never needing to look at a single document, which means document size no longer matters. It should also cut down on your disk I/O, cache churn, and CPU utilization.
db.myCollection.count({ "createTime": { "$gte": "2019-01-01T00:00:00.000Z", "$lte": "2020-01-01T23:59:59.999Z" } })
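If the createTime index doesn't exist yet, creating it is straightforward (assuming, as above, that no document stores an array in that field, so the index stays single-key):

db.myCollection.createIndex({ "createTime": 1 })   // enables the covered count above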

What index to use for mongodb?

I have documents that look like this:
{
  "_id": ObjectId("5444fc67931f8b040eeca671"),
  "meta": {
    "SessionID": "45",
    "configVersion": "1",
    "DeviceID": "55",
    "parentObjectID": "55",
    "nodeClass": "79",
    "dnProperty": "16"
  },
  "cfg": {
    "Name": "test"
  }
}
The names and the data are just for testing at the moment, but I have a total of 25 million documents in the DB. I'm using find() to fetch specific documents, and in this find() I use four arguments: dnProperty, nodeClass, DeviceID, and configVersion. None of them are unique.
At the moment I have the index set up as simply as:
ensureIndex([["nodeClass", 1], ["DeviceID", 1], ["configVersion", 1], ["dnProperty", 1]])
In other words, I have an index over the four arguments. I still have huge problems when a search doesn't find any document at all. In my example all the "data" is random from 1-100, so if I do a find() with one of the values > 100, it takes anywhere from 30-180 seconds to perform the search. It also uses all of my 8 GB of RAM, and since there is no RAM left the computer becomes very, very slow.
What would be better indexes? Am I using indexes correctly? Do I simply need more RAM, since it will put "all" of the DB in its working memory? Would you recommend another DB (other than Mongo) to handle this better?
Sorry for the multiple questions; I hope they are short enough that you can give me an answer.
MongoDB uses memory-mapped files, which means a copy of your data and indexes is kept in RAM, and whenever there is a query it is served from RAM. In the current scenario your queries are slow because the size of your data plus indexes is so large that it will not fit in RAM, so there will be a lot of I/O activity to fetch data from disk, which is the bottleneck.
Sharding helps solve this problem: if you partition/shard your data across, for example, 5 machines, then you will have 8 GB × 5 = 40 GB of RAM, which can hold your working set (dataset + indexes) in RAM, so the I/O overhead is reduced and performance improves.
Hence, in this case your indexes will not improve performance beyond a certain point; you will need to shard your data across multiple machines. Sharding tends to increase both read and write throughput roughly linearly.
Sharding in MongoDB
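For illustration, enabling sharding might look like the sketch below; the database/collection names and the shard key choice are assumptions, and picking a good shard key deserves its own analysis:

// Run against a mongos of an existing sharded cluster.
sh.enableSharding("mydb")
sh.shardCollection("mydb.mycollection", { "meta.DeviceID": 1, "meta.dnProperty": 1 })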