I am trying to figure out when MongoDB indexes are loaded into memory. Assuming I have n collections, each with m indexes, will all n x m indexes be loaded into memory when MongoDB starts?
The docs mention that if the indexes fit in RAM, all of them are kept there; if not, some of them are swapped out to secondary storage. But I couldn't find anywhere that clarifies whether all indexes are loaded on MongoDB startup.
This is important because it would let us estimate how much RAM the db needs to function optimally.
PS: I am using aws-documentdb, which I assume has similar behaviour for indexes, since they haven't covered this part in their docs anywhere either.
Thank you for asking the question.
With most databases, including Amazon DocumentDB, index pages are paged into memory based on the queries that are run against the database (think of this as a lazy load). On start-up, the buffer cache is empty and fills with pages as your workload issues queries against the database. When an index is so large that it can't fit into memory, the database has to evict pages and read from disk in order to iterate through the index and respond to a query. The same goes for data pages as well. Ideally, you want enough RAM on your instance so that both your data pages and index pages fit in memory; reads from disk add additional latency. The best thing to do here is run your workload until it reaches steady state and then observe the BufferCacheHitRatio metric to see whether your queries are being served mainly from the buffer cache or whether you need to read from disk a lot. For more information, see: https://docs.aws.amazon.com/documentdb/latest/developerguide/best_practices.html
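If it helps with sizing, here is a minimal sketch of how you could sum up index sizes from the mongo shell to get a rough lower bound on the RAM you'd want for indexes. The database name is a placeholder, and on DocumentDB the server-side stats may differ slightly, so treat this only as an estimate:

```js
// Rough estimate of total index size across all collections in a database.
// "forum_db" is a placeholder name; totalIndexSize() reports bytes.
var target = db.getSiblingDB("forum_db");
var totalBytes = 0;
target.getCollectionNames().forEach(function (name) {
    var idxBytes = target.getCollection(name).totalIndexSize();
    print(name + ": " + (idxBytes / 1024 / 1024).toFixed(1) + " MB of indexes");
    totalBytes += idxBytes;
});
print("Total index size: " + (totalBytes / 1024 / 1024).toFixed(1) + " MB");
```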
Related
Do indexes always persist in RAM?
Hence, does scanning indexes require first retrieving the index from disk?
EDITED:
My question is more about whether or not MongoDB will keep an index in RAM at all times, assuming there is enough space. Actual data is pushed out of RAM when it hasn't been accessed recently, to make room for more recently accessed data. Is this the case with indexes as well? Will indexes be pushed out of RAM based on recency? Or does MongoDB treat indexes with priority and always keep them in RAM if there is enough room?
That is not guaranteed.
MongoDB does store indexes in the same cache as documents, and that cache evicts on an LRU basis.
It does not load the entire structure into memory; it loads pages as they are needed, so the amount of the index in memory will depend on how it is accessed.
Indexes do get a bit of priority, but that is not absolute, so index pages can be evicted.
An insert into a collection will likely need to update all of the indexes, so it would be a reasonable assumption that any collection that is not totally idle will have at least the root page of each index in the cache.
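On a stock MongoDB/WiredTiger deployment you can check this yourself. Here is a rough sketch, assuming the per-index cache statistics exposed by collStats with indexDetails (the collection name is a placeholder, and this does not apply to DocumentDB):

```js
// Sketch: how many bytes of each index on a collection currently sit in the WiredTiger cache.
// "posts" is a hypothetical collection; field names come from WiredTiger's stats output.
var stats = db.posts.stats({ indexDetails: true });
Object.keys(stats.indexDetails).forEach(function (idx) {
    var cachedBytes = stats.indexDetails[idx].cache["bytes currently in the cache"];
    print(idx + ": " + (cachedBytes / 1024).toFixed(0) + " KB in cache");
});
```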
I'm trying to put a forum-like structure in a MongoDB 4.0 database, which consists of multiple threads under the same "topic"; each thread consists of a bunch of posts. Usually there are no limits on the numbers of threads and posts. I want to fully utilize the benefits of NoSQL, grabbing the list of posts under any specified thread in one go, without having to scan and match the "thread_id" and "post_id" in an RDBMS table in the traditional way. So my idea is to put all the threads in a database as collections, with the code-generated thread_id as the collection name, and all the posts of a thread as normal documents under that collection, so the way to access a post may look like:
forum_db [database name].thread_id [collection name].post_id [document ID]
But my concern is the somewhat vague statement at https://docs.mongodb.com/manual/reference/limits/#data:
Number of Collections in a Database
Changed in version 3.0.
For the MMAPv1 storage engine, the maximum number of collections in a database is a function of the size of the namespace file and the number of indexes of collections in the database.
The WiredTiger storage engine is not subject to this limitation.
Is it safe to do it this way in terms of performance and scalability? Can we safely assume that there is no limit on the number of collections in a WiredTiger database (MongoDB 4.0+) today, just as there is practically no limit on the number of documents in a collection? Many thanks in advance.
To calculate how many collections one can store in a MongoDB database, you need to figure out the number of indexes in each collection.
The WiredTiger engine keeps an open file handle for each used collection (and each of its indexes). A large number of open file handles can cause extremely long checkpoint operations.
Furthermore, each handle takes roughly 22KB of memory outside the WT cache; this means that just to keep the files open, the mongod process will need approximately NUM_OF_FILE_HANDLES * 22KB of RAM.
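As a back-of-the-envelope check on that figure, here is a sketch that counts one handle per collection plus one per index across all non-system databases (the 22KB constant is the rough estimate quoted above, not an exact number):

```js
// Sketch: estimate the file-handle memory overhead outside the WT cache.
// One handle per collection plus one per index; 22KB per handle is the rough figure from above.
var handles = 0;
db.adminCommand({ listDatabases: 1 }).databases.forEach(function (d) {
    if (["admin", "local", "config"].indexOf(d.name) !== -1) return;  // skip system databases
    var cur = db.getSiblingDB(d.name);
    cur.getCollectionNames().forEach(function (name) {
        handles += 1 + cur.getCollection(name).getIndexes().length;
    });
});
print(handles + " handles ~= " + (handles * 22 / 1024).toFixed(1) + " MB outside the WT cache");
```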
High memory swapping will lead to a decrease in performance.
As you probably understand from the above, different hardware (RAM size & Disk speed) will behave differently.
From my point of view, you first need to understand the behavior of your application and then calculate the required hardware for your MongoDB database server.
I have made a test with 10M rows of data. Each row has 3 integer and 2 string columns. First I import this data into MongoDB, which is a single shard. Then I do a simple "where" query with db.table.find() on a non-indexed column. The query fetches a single row, which takes roughly 7 seconds.
On the same hardware I load the same data into a C# list, which is in memory. I do a while loop to scan all 10M rows and do a simple equality check to emulate the where query. It takes only around 650 ms, which is much faster than MongoDB.
I have a 32 GB machine, so MongoDB has no problem memory-mapping the table.
Why is MongoDB so much slower? Is it because MongoDB keeps the data in a data structure that is hard to full-scan, or is it because memory mapping is not the same as keeping data in a variable?
As Remon pointed out, you are definitely comparing apples to oranges in this test.
To understand a bit more on what is happening behind the scenes in that table scan, read through the MongoDB internals here. (Look under the Storage model)
There is the concept of extents, which represent contiguous disk space.
Each extent points to a linked list of docs.
The doc contains the data in BSON format. So now you can imagine how we would retrieve data.
Now the beauty of having an index is aptly shown at the top right corner of that diagram. MongoDB uses a BTree structure to navigate, which is pretty fast.
Try changing your test to have some warm up runs and use an index.
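For example, something along these lines in the mongo shell (on a reasonably recent shell; the collection and field names are placeholders matching the test described above) shows the difference between a full scan and an index lookup:

```js
// Sketch: compare the query plan before and after indexing the filtered column.
// "table" and "someColumn" are placeholders for the poster's collection and field.
db.table.find({ someColumn: 42 }).explain("executionStats");   // expect COLLSCAN, millions of docs examined
db.table.createIndex({ someColumn: 1 });
db.table.find({ someColumn: 42 }).explain("executionStats");   // expect IXSCAN, only a handful examined
```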
UPDATE: I have done some testing as part of my day job to compare the performance of JBoss Cache (an in-memory Java cache) with MongoDB as an application cache (queries against _id). The results are quite comparable.
Where to start...
First of all the test is completely apples and oranges. Loading a dataset into memory and doing a completely in-memory scan of it is in no way equal to a table scan on any database.
I'm also willing to bet you're doing your test on cold data and MongoDB performance improves dramatically as it swaps hot data into memory. Please note that MongoDB doesn't preemptively swap data into memory. It does so if, and only if, the data is accessed frequently (or at all, depending). Actually it's more accurate to say the OS does since MongoDB's storage engine is built on top of MMFs (memory mapped files).
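If you want to time warm reads rather than cold ones, here is a sketch of one way to pre-warm the cache, assuming an MMAPv1-era server where the touch command is still available (it was removed in later versions):

```js
// Sketch: ask the server to page a collection's data and index extents into memory before timing.
// Only meaningful on the old MMAP-based storage engine; "table" is the poster's collection name.
db.runCommand({ touch: "table", data: true, index: true });
```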
So in short, your test isn't a good test and the way you're testing MongoDB isn't producing accurate results. You're testing a theoretical best case with your C# equivalent, which on top of that is considerably less complex than the database code.
In doing some preliminary tests of MongoDB sharding, I hoped and expected that the time to execute queries that hit only a single chunk of data on one shard/machine would remain relatively constant as more data was loaded. But I found a significant slowdown.
Some details:
For my simple test, I used two machines to shard and tried queries on similar collections with 2 million rows and 7 million rows. These are obviously very small collections that don’t even require sharding, yet I was surprised to already see a significant consistent slowdown for queries hitting only a single chunk. Queries included the sharding key, were for result sets ranging from 10s to 100000s of rows, and I measured the total time required to scroll through the entire result sets. One other thing: since my application will actually require much more data than can fit into RAM, all queries were timed based on a cold cache.
Any idea why this would be? Has anyone else observed the same or contradictory results?
Further details (prompted by Theo):
For this test, the rows were small (5 columns including _id), and the key was not based on _id, but rather on a many-valued text column that almost always appears in queries.
The command db.printShardingStatus() shows how many chunks there are as well as the exact key values used to split ranges for chunks. The average chunk contains well over 100,000 rows for this dataset and inspection of key value splits verifies that the test queries are hitting a single chunk.
For the purpose of this test, I was measuring only reads. There were no inserts or updates.
Update:
Upon some additional research, I believe I determined the reason for the slowdown: MongoDB chunks are purely logical, and the data within them is NOT physically located together (source: "Scaling MongoDB" by Kristina Chodorow). This is in contrast to partitioning in traditional databases like Oracle and MySQL. This seems like a significant limitation, as sharding will scale horizontally with the addition of shards/machines, but less well in the vertical dimension as data is added to a collection with a fixed number of shards.
If I understand this correctly, if I have 1 collection with a billion rows sharded across 10 shards/machines, even a query that hits only one shard/machine is still querying from a large collection of 100 million rows. If values for the sharding key happen to be located contiguously on disk, then that might be OK. But if not and I'm fetching more than a few rows (e.g. 1000s), then this seems likely to lead to lots of I/O problems.
So my new question is: why not organize chunks in MongoDB physically to enable vertical as well as horizontal scalability?
What makes you say the queries only touched a single chunk? If the result sets ranged up to 100,000 rows, that sounds unlikely. A chunk is at most 64 MB, and unless your objects are tiny, that many won't fit. Mongo has most likely split your chunks and distributed them.
I think you need to tell us more about what you're doing and the shape of your data. Were you querying and loading at the same time? Do you mean shard when you say chunk? Is your shard key something other than _id? Do you do any updates while you query your data?
There are two major factors when it comes to performance in Mongo: the global write lock and its use of memory mapped files. Memory mapped files mean you really have to think about your usage patterns, and the global write lock makes page faults hurt really badly.
If you query for things that are all over the place, the OS will struggle to page things in and out; this hurts especially badly if your objects are tiny, because whole pages have to be loaded just to access a small piece, and lots of RAM is wasted. If you're doing lots of writes, they will lock reads (but usually not that badly, since writes happen fairly sequentially) -- but if you're doing updates you can forget about any kind of performance: updates block the whole database server for significant amounts of time.
Run mongostat while you're running your tests; it can tell you a lot (run mongostat --discover | grep -v SEC to see the metrics for all your shard masters, and don't forget to include --port if your mongos is not running on 27017).
Addressing the questions in your update: it would be really nice if Mongo kept chunks physically together, but that is not the case. One of the reasons is that sharding is a layer on top of mongod, and mongod is not fully aware of being a shard. It's the config servers and mongos processes that know about shard keys and which chunks exist. Therefore, in the current architecture, mongod doesn't even have the information that would be required to keep chunks together on disk. The problem is even deeper: Mongo's disk format isn't very advanced. It still (as of v2.0) does not have online compaction (although compaction got better in v2.0); it can't compact a fragmented database and still serve queries. Mongo has a long way to go before it's capable of what you're suggesting, sadly.
The best you can do at this point is to make sure you write the data in order so that chunks will be written sequentially. It probably helps if you create all chunks beforehand too, so that data will not be moved around by the balancer. Of course this is only possible if you have all your data in advance, and that seems unlikely.
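For what it's worth, here is a rough sketch of what pre-creating chunks can look like from the mongo shell; the namespace, shard key, and split points below are made up, and you would choose split points from your own key distribution:

```js
// Sketch: pre-split chunks on a hypothetical namespace so the balancer has less to move later.
sh.enableSharding("mydb");                                   // hypothetical database
sh.shardCollection("mydb.mycoll", { myShardKey: 1 });        // hypothetical namespace and shard key
["f", "k", "p", "u"].forEach(function (splitPoint) {
    sh.splitAt("mydb.mycoll", { myShardKey: splitPoint });   // creates a chunk boundary at each value
});
```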
Disclaimer: I work at Tokutek
So my new question is: why not organize chunks in MongoDB physically to enable vertical as well as horizontal scalability?
This is exactly what is done in TokuMX, a replacement server for MongoDB. TokuMX uses Fractal Tree indexes which have high write throughput and compression, so instead of storing data in a heap, data is clustered with the index. By default, the shard key is clustered, so it does exactly what you suggest, it organizes the chunks physically, by ensuring all documents are ordered by the shard key on disk. This makes range queries on the shard key fast, just like on any clustered index.
I run a single MongoDB instance which receives log inserts from an app server. The current insert rate in production is 10 inserts per second, and it's a capped collection. I don't use any indexes. Queries were running faster when there was a small number of records. Only one collection has that amount of data, yet even querying a collection that has very few rows has become very slow. Is there any means to improve the performance?
-Avinash
This is a very difficult question to answer because we don't know much about your configuration or your document structure.
One thing that immediately pops into my head is that you are running out of memory. 10 inserts per second doesn't mean much because we do not know how big the inserted documents are.
If you are inserting larger documents at 10 per second, you could be eating up memory, causing the operating system to push some of your records to disk.
When you query without using an index, you are forced to scan every document. If your documents have been pushed to disk by the OS, you will begin having page faults. Mongo will need to fetch pages of data off the hard disk, and load them into memory so that they can be scanned. Before doing this, the operating system will need to make room for that data in memory by flushing other parts of memory out to disk.
It sounds like you are I/O bound, and the two biggest things you can do to fix this are:
Add more memory to the machine running mongod
Start using indexes so that the database does not need to do full collection scans (see the sketch below)
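As a concrete (hypothetical) sketch of the second point: if your log queries typically filter on a timestamp or a level field, indexes along these lines avoid the full collection scan (the field names are assumptions, not taken from your schema):

```js
// Sketch: index the fields the log queries actually filter and sort on (hypothetical field names).
db.logs.createIndex({ timestamp: 1 });                   // range queries by time
db.logs.createIndex({ level: 1, timestamp: -1 });        // e.g. "most recent errors"
db.logs.find({ level: "ERROR" }).sort({ timestamp: -1 }).limit(100).explain("executionStats");
```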
Use proper indexes, though that will have some effect on the efficiency of insertion in a capped collection.
It would be better if you could share the collection structure and the query you are using.