I read about this topic at
http://docs.mongodb.org/manual/faq/storage/#faq-storage-memory-mapped-files
But I didn't understand the point. Is it used to keep query data in physical memory? How is it related to virtual memory? Why is it important, and how does it affect performance?
I'll try to explain in a simple way.
I'll try to explain in a simple way.
MongoDB (and other storage systems) stores data in files. Each database has its own files, created as they are needed. The first file weighs 64 MB, the next 128 MB, and so on up to 2 GB; from then on, each new file weighs 2 GB. Each of these files is logically divided into blocks, and each block corresponds to one block of virtual memory.
When MongoDB needs to access a file or a part of it, it loads the virtual memory blocks corresponding to that file (or the relevant parts of it) into memory using mmap. mmap is also how applications leverage the system page cache (on Linux).
So what really happens when you run a query is that MongoDB "tells" the OS to load the parts of the files holding the requested data, so the next time they are requested, access will be faster. As you can imagine, this is a very important feature for boosting performance in databases like MongoDB, because accessing RAM is far faster than accessing the hard drive.
Another benefit of using mmap is that MongoDB's memory usage grows as needed, while that memory stays reclaimable: the OS can take cached pages back whenever other processes need them.
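To make the mmap idea concrete, here is a minimal Python sketch of the same OS facility MongoDB relies on. This is not MongoDB code; "mydb.0" is a hypothetical file name mimicking MongoDB's per-database data files:

```python
import mmap

# A minimal sketch of the mechanism, assuming "mydb.0" exists and is
# non-empty (hypothetical name, like MongoDB's dbname.0 data files).
with open("mydb.0", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)  # map the whole file; length 0 = entire file

    # Reading a slice touches those pages; the OS page-faults them
    # into RAM, and they stay in the page cache for later reads.
    header = mm[:16]

    # Writing through the mapping dirties the page; the OS writes it
    # back to disk later, or flush() forces it (similar to fsync).
    mm[0:4] = b"test"
    mm.flush()
    mm.close()
```

Note that the program never issues an explicit read() or write(): it just touches memory, and the OS does the disk I/O behind the scenes. That is exactly the trade MongoDB's MMAP engine makes.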
A storage engine acts as an interface between the MongoDB server and the physical disk; it decides how much memory is required, and it also supports collection-level locking. My question is: what happened before version 3.0? Who allocated memory before pluggable storage engines existed? And how did the locking mechanism work before MMAPv1?
We call it MMAPv1, the original storage engine of MongoDB, because internally it uses the mmap system call under the covers to implement storage management. Let's look at what the mmap system call does. On Linux, the man page describes it as mapping files or devices into memory: it causes the pages starting at a given address and continuing for at most length bytes to be mapped from the object described by a file descriptor, starting at a given offset. So, what does that practically mean?
Well, MongoDB needs a place to put documents, and it puts the documents inside files. To do that, it initially allocates a large file; let's say it allocates a 100GB file on disk, so we wind up with a 100GB file on disk. The file may or may not be physically contiguous on the actual disk, because there are algorithms beneath that layer that control the actual allocation of space, but from our point of view it's a 100GB contiguous file.
If MongoDB calls the mmap system call, it can map this 100GB file into 100GB of virtual memory. To get that much virtual memory, we need to be on an x64 machine. The mapping is page-sized: pages on an OS are either 4KB or 16KB large, so there are a lot of them inside that 100GB of virtual memory.
The operating system decides what can fit in physical memory. If the physical memory of the box is, say, 32GB, then any given page in this memory space may or may not be resident at a given time; the OS decides which of these pages are in memory (think of resident pages as "green" and non-resident ones as "white"). When we go to read a document, if it hits a page that's in memory, we get it right away. If it hits a page that's not in memory (a white one), the OS has to bring it in from disk.
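A scaled-down sketch in Python of the same page-granularity behavior (file name and sizes are illustrative; 1GB stands in for the transcript's 100GB):

```python
import mmap

# Sketch: pre-allocate a "large" sparse file and map it. Requires a
# 64-bit OS for very large mappings; names/sizes are illustrative.
SIZE = 1 << 30  # 1 GB

with open("bigfile.dat", "wb") as f:
    f.truncate(SIZE)  # sparse file: reserves the size, writes no blocks

with open("bigfile.dat", "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)  # 1 GB of virtual memory, ~0 resident

    print("OS page size:", mmap.PAGESIZE)  # commonly 4096 bytes

    # Touching one byte faults exactly one page into RAM
    # (a "white" page turning "green" in the lecture's terms).
    _ = mm[123_456_789]
    mm.close()
```

The key point the example shows: mapping costs virtual address space, not RAM; physical memory is only consumed page by page as the mapping is actually touched.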
The MMAPv1 storage engine provides:
Collection-level concurrency (locking). Each database's data lives in its own set of files (visible in ~/data/db by default). If multiple writes are fired at the same collection, one has to wait for the other to finish. It is a multiple-reader, single-writer lock: only one write can happen at a time to a particular collection.
In-place updates. If a document is sitting in one of the resident (green) pages and we update it, we try to update it right in place. If we can't, we mark its old location as a hole, move the document somewhere else where there is space, and complete the update there. To make it more likely that a document can be updated in place without having to move it, we use
Power-of-2 sizes when allocating the initial storage for a document. So if we try to create a 3-byte document, we get a 4-byte allocation; a 7-byte document gets 8 bytes; a 19-byte document gets 32 bytes. This leaves room to grow a document a little in place, and the holes that open up when documents do move can be re-used more easily.
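The rounding rule in those examples can be sketched in a few lines of Python (illustrative only, not MongoDB's actual allocator):

```python
def power_of_2_allocation(doc_size: int) -> int:
    """Round a document size (in bytes) up to the next power of 2,
    mirroring the allocation strategy described above."""
    alloc = 1
    while alloc < doc_size:
        alloc *= 2
    return alloc

# The examples from the text:
assert power_of_2_allocation(3) == 4
assert power_of_2_allocation(7) == 8
assert power_of_2_allocation(19) == 32
```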
Also, notice that since the OS decides what is in memory and what is on disk, we cannot do much about it; the OS is smart enough at memory management.
There was only one storage engine before 3.0 - MMAP, which has been the storage engine for MongoDB since the beginning (now usually referred to as MMAPv0, with the version in 3.0 being MMAPv1, though that versioning is not really official the way the database's own versioning is).
You couldn't plug in new engines prior to 3.0, nor were there any alternatives built in, so you didn't see a lot of discussion about storage engines as a result. Any presentations (here's a good one if you are interested) prior to 3.0 that discuss storage are implicitly talking about the MMAP storage engine; it just didn't have that name yet.
MMAP was improved to include collection-level locking in 3.0; before that release (in 2.6) the locking granularity was database-level, and before that (prior to 2.2) it was a global lock.
I am involved in a project where they get enough RAM to store the entire database in memory. According to the manager, that is what 10gen recommended. This is counterintuitive. Is that really the way you want to use MongoDB?
It is not counterintuitive... I find it quite intuitive, actually.
In "How much faster is the memory usually than the disk?" you can read:
(...) memory is only about 6 times faster when you're doing sequential access (350 Mvalues/sec for memory compared with 58 Mvalues/sec for disk); but it's about 100,000 times faster when you're doing random access.
So if you can fit all your data in RAM, that is quite good, because reading your data is going to be really fast.
Regarding MongoDB, from the FAQ:
It’s certainly possible to run MongoDB on a machine with a small amount of free RAM.
MongoDB automatically uses all free memory on the machine as its cache. System resource monitors show that MongoDB uses a lot of memory, but its usage is dynamic. If another process suddenly needs half the server’s RAM, MongoDB will yield cached memory to the other process.
Technically, the operating system’s virtual memory subsystem manages MongoDB’s memory. This means that MongoDB will use as much free memory as it can, swapping to disk as needed. Deployments with enough memory to fit the application’s working data set in RAM will achieve the best performance.
The problem is that you usually have much more data than available memory. Then you have to go to disk, and disk I/O is slow. Regarding database performance, avoiding full-scan queries is key (and much more important when you have to go to disk). Therefore, if your data set does not fit in memory, you should aim to have indexes for the vast majority of your access patterns and try to fit those indexes in memory:
If you have created indexes for your queries and your working data set fits in RAM, MongoDB serves all queries from memory.
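For instance, with pymongo you would index the fields your common queries filter on and then check how much RAM those indexes need (all names here are hypothetical placeholders):

```python
from pymongo import MongoClient, ASCENDING

# Sketch with hypothetical database/collection/field names.
client = MongoClient("mongodb://localhost:27017")
orders = client.mydb.orders

# Index the fields your common queries filter/sort on, so those
# queries can be served from the index you are keeping in RAM.
orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

# collStats reports how much RAM the collection's indexes require.
stats = client.mydb.command("collStats", "orders")
print("total index size in bytes:", stats["totalIndexSize"])
```

Comparing totalIndexSize across your collections against the machine's RAM tells you whether the "indexes fit in memory" goal is realistic.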
It all depends on the size of your database. I am guessing your database is actually quite small; otherwise I cannot see how someone at 10gen gave such advice. I mean, not even @Stennie gives such advice (and he is from 10gen, by the way).
Even if your database is small, I don't see why the manager recommended that. MongoDB does not do memory management of its own; as such, it does not "pin" data into pages the way memcached or other memory-based databases do.
This means that the paging of mongod's data can be quite unpredictable, i.e. you may spend more time trying to keep things in RAM than actually paging data in. This is why it is better to just make sure your working set fits in RAM and can be loaded quickly; that depends on your hardware and your queries.
@Stennie's comment pretty much sums up the stance you should take with MongoDB.
I am using MongoDB only for inserting documents. There are no indexes created for the collection I use, but I see that the memory used by MongoDB keeps growing. The machine has 20GB of RAM, which is completely used. I would like to know the reason for this, and whether it is normal.
There is an excellent discussion of how MongoDB uses storage and memory here:
http://docs.mongodb.org/manual/faq/storage/
Specifically to your case: Mongo memory-maps the files that data and indexes live in (the ones inside the /data/db directory by default), which means the files are mapped into the OS's virtual address space. Whenever MongoDB accesses a page (any part of a file), that page gets pulled into RAM, and it stays there until all of the RAM made available to the mongod process by the OS is used (at which point the least-recently-used pages are evicted from RAM).
You are inserting data, and therefore you are writing to data files; the pages you are writing to need to be in RAM (Mongo writes to files, but since they are memory-mapped, it gets to interact with the files as if they were memory). If mongod is using 20GB+ of RAM, that means your data plus your indexes (plus some overhead for other things) come to 20GB or more.
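You can verify this from the server's own counters; a quick pymongo sketch (connection details assumed):

```python
from pymongo import MongoClient

# Sketch: inspect mongod's memory counters to confirm the behavior above.
client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

mem = status["mem"]
print("resident MB:", mem["resident"])  # pages currently held in RAM
print("virtual  MB:", mem["virtual"])   # total mapped address space
# On MMAP-era builds, mem["mapped"] reports the memory-mapped data size.
```

A large gap between "virtual" and "resident" is normal: it is the mapped-but-not-yet-touched portion of the data files.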
For more than a month now I have been at war with MongoDB. So far I am losing =] ...
Battle 1. Battle 2.
And now a new problem. Again, not enough memory.
Initially, this was solved by simply increasing the memory by upgrading the VPS plan. Then by setting journal = false. But now I have reached the top plan, and increasing the memory further is not possible.
My database is about 4 GB short of memory.
When I was choosing a database for this project, it was written nowhere that MongoDB needs so much memory. With about 10 million records, MongoDB is 4 GB short of memory, while my MySQL database with 10 million records easily copes with 1.4 GB of memory.
The problem, as I understand it, is a large number of indexed fields. But since I cannot start the database, I cannot remove them either. I needed them in the early stages of development; now they are not important to me.
Please tell me, can I remove them somehow?
I have a dump of the database: the complete /data/db folder.
On my PC with 4 GB of memory the database does not start, and the same happens on a VPS with 4 GB.
As an alternative, I am thinking of taking a trial period at some VPS/VDS with enough memory to run mongo and delete the indexes.
Do you know of a hosting provider with a trial period and 6 GB of memory?
Or, if there is another alternative, could you tell me what it is?
The issue has very little to do with the size of your data set. MongoDB uses memory-mapped files for its storage engine, so it swaps pages of hot data into memory when it can, and it does so fairly aggressively (or, more accurately, the OS's memory management does).
Basically, it uses as much memory as is available to it, and there is very little you can do to avoid that. All data pages (be they actual data or indexes) that are accessed during operation will be swapped into memory if there is space available.
There are plenty of references to this on the internet and on mongodb.org, by the way. Saying it isn't mentioned anywhere isn't really true.
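As for the direct question of removing the indexes: once you can get mongod running somewhere with enough RAM, dropping them is straightforward; a pymongo sketch (database, collection, and index names are placeholders):

```python
from pymongo import MongoClient

# Sketch with placeholder names; run against a mongod that did start.
client = MongoClient("mongodb://localhost:27017")
coll = client.mydb.mycollection

# See what exists first; "_id_" is the mandatory index and cannot be dropped.
print(coll.index_information())

# Drop one index by name, or all non-_id indexes at once:
coll.drop_index("some_field_1")
coll.drop_indexes()
```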
Can I use MongoDB like Redis or Memcache?
My goal is to have everything in memory and make it faster to access. We already use MongoDB but we need to improve the speed of reads.
What's the best way to do that?
You can't force MongoDB to keep everything in RAM. It will keep hot and recently used data in RAM and page out the rest. If you can't afford to suffer a delay on a page fault, then use Redis/memcached.
Or, alternatively, you can put MongoDB's data dir on a RAM disk. That will effectively keep everything in memory, but you'll duplicate some data (one copy on the RAM disk, another in Mongo's memory-mapped files).
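A middle ground worth knowing about: MMAP-era MongoDB servers had a touch command that pre-warms a collection's data and indexes into RAM (it loads pages but cannot pin them there, and it was deprecated and later removed); a hedged sketch:

```python
from pymongo import MongoClient

# Sketch: pre-warm a collection on an MMAP-era server. The connection
# string and collection name are placeholders; `touch` is deprecated
# and removed in modern MongoDB versions.
client = MongoClient("mongodb://localhost:27017")
client.mydb.command({"touch": "mycollection", "data": True, "index": True})
```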