MongoDB is running out of memory on a 96GB root server when adding a single index on a timestamp field for a 50GB collection.
Does MongoDB have any option to run a query or task in "safe-mode", e.g. without cutting the memory too much? It seems to be very touchy and can be crashed, e.g. by running some find queries with $lte/$gt on a non-indexed timestamp field.
I can't control that, but shouldn't there be a MongoDB config setting for "safety" that makes sure RAM is released once it breaks a limit? Maybe even before it blocks other processes or gets stopped by the OOM killer?
MongoDB does not use its own memory management. Instead it relies on the OS's LRU page cache. The OS is paging documents so heavily because it has exhausted the memory available to mongod; in other words, your working set is bigger than the RAM you have spare for MongoDB, so MongoDB is page-faulting for most if not all of your data (a good reference on paging: http://en.wikipedia.org/wiki/Paging ).
I would strongly recommend against restricting MongoDB in this case, since it will only run worse. However, especially on Linux, you can use ulimit on the user account that runs mongod: http://docs.mongodb.org/manual/administration/ulimit/
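As a rough sketch (assuming a Linux host and a process named mongod), you can inspect the limits that currently apply to the running process before deciding whether to change anything:

# Show the limits the running mongod process is actually subject to
cat /proc/$(pidof mongod)/limits
# Show the limits of the current shell user (the account that launches mongod)
ulimit -a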
Does MongoDB have any option to run a query or task in "safe-mode", e.g. without cutting the memory too much?
Not really.
It seems to be very touchy and can be crashed, e.g. by running some find queries with $lte/$gt on a non-indexed timestamp field.
Naturally this shouldn't cause MongoDB to be killed for running out of memory; it could indicate a memory leak somewhere. As noted at http://docs.mongodb.org/manual/administration/ulimit/ :
If you limit the resident memory size on a system running MongoDB you risk allowing the operating system to terminate the mongod process under normal situations. Do not set this value. If the operating system (i.e. Linux) kills your mongod, with the OOM killer, check the output of serverStatus and ensure MongoDB is not leaking memory.
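A sketch of that check, assuming the mongo shell can reach the instance on the default localhost:27017:

# Print the memory counters (in MB): resident, virtual and mapped data files
mongo --eval 'printjson(db.serverStatus().mem)'

With journaling enabled on the MMAPv1 engine, virtual is normally around twice mapped; a virtual size that keeps growing far beyond that while mapped stays flat is one hint of a leak.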
It seems to be very touchy and can be crashed e.g. by running some find queries with $lte/$gt on a non-indexed timestamp field.
It's the OOM killer that's killing it, because your mongod instance is paging a lot of data into RAM. You probably have a lot of processes contending for RAM. You can instruct Linux not to kill the mongod daemon as follows:
echo -17 | sudo tee /proc/<process id of mongod>/oom_adj
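The write needs root privileges, which is why it is piped through tee above; on newer kernels the file is oom_score_adj and takes values down to -1000 instead. To verify the setting took effect (assuming pidof can find the process):

# Confirm the adjustment was applied to the running mongod
cat /proc/$(pidof mongod)/oom_adj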
You can't control how much memory MongoDB uses, unfortunately. I suggest looking at the background indexing docs for MongoDB (a sketch follows the link below). One more useful link:
See the related thread on Stack Overflow: How do I limit the cache size?
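Coming back to the original problem of adding a single index on a timestamp field, here is a minimal sketch of a background build (the database, collection and field names are hypothetical):

# Build the index in the background so the database stays responsive (the build itself is slower)
mongo mydb --eval 'db.events.ensureIndex({ ts: 1 }, { background: true })'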
There is a spike in the memory utilization of MongoDB on our CentOS 7 server. It has 64 GB of RAM. This is a standalone MongoDB instance with no application running on it, and housekeeping scripts are enabled to keep only the relevant data. We haven't indexed the data. The total size of the data on disk is 81 GB. This issue was not seen before we tried enabling replication, after which the node was using a lot of memory, so replication was disabled and we brought up a fresh standalone instance of Mongo. The memory usage hasn't come down since; we tried restarting the Mongo server, but that hasn't helped. Is there any reason for MongoDB to use so much memory? Below is a link to a snapshot of the memory usage taken from the site server.
The mongo version is 2.6.5
Image link
This is not surprising. See the Memory Use section in the docs for the MMAPv1 storage engine (which is what MongoDB 2.6 uses):
With MMAPv1, MongoDB automatically uses all free memory on the machine as its cache. System resource monitors show that MongoDB uses a lot of memory, but its usage is dynamic. If another process suddenly needs half the server’s RAM, MongoDB will yield cached memory to the other process.
It is also not surprising that the usage spiked after enabling replication. It sounds like you had a fully populated database and then added a replica member, which means the new member had to perform an initial sync of the data from this node; that requires reading every document and therefore "primes" MongoDB's cache.
I am using MongoDB as my database. The total size of all the databases is around 19 GB.
My RAM usage shows mongod using 64% of 2 GB even when no query is running.
The FAQ says:
MongoDB automatically uses all free memory on the machine as its cache. System resource monitors show that MongoDB uses a lot of memory, but its usage is dynamic. If another process suddenly needs half the server’s RAM, MongoDB will yield cached memory to the other process.
Is it just because of that, or am I doing something wrong?
MongoDB only allocates that memory; it does not necessarily use all of it. It tells your system that other applications can have that memory whenever they need it, so if another application asks for more memory, the system will give it up. You can check the real memory usage in the mongo shell; check out the relevant commands in the documentation to learn more.
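For example (assuming a local mongod on the default port), the mem section of serverStatus distinguishes what is actually resident in RAM from the total virtual allocation:

# resident = MB actually held in physical RAM; virtual = total MB the process has mapped
mongo --eval 'var m = db.serverStatus().mem; print("resident MB: " + m.resident + ", virtual MB: " + m.virtual)'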
We are doing indexing on MongoDB. We now have nearly 350 GB of data in the database, and it is deployed as a Windows service on AWS EC2.
We are building the indexes for some experimentation, but every time I run the indexing command the memory usage goes to 99%, and even after the indexing is done it stays there until I restart the service.
The instance has 30 GB of RAM and an SSD drive. Right now the DB is set up as a standalone (not sharded so far), and we are using the latest version of MongoDB.
Any feedback related to this will be helpful.
Thanks,
Arpan
That's normal behavior for MongoDB.
MongoDB grabs all the RAM it can get to cache each accessed document for as long as possible. When you add an index to a collection, each document has to be read once to build the index, which causes MongoDB to load every document into RAM. It then keeps them in RAM in case you want to access them later. But MongoDB will not hoard the RAM: when another process needs memory, MongoDB will willingly release it.
This is explained in the FAQ:
Does MongoDB require a lot of RAM?

Not necessarily. It’s certainly possible to run MongoDB on a machine with a small amount of free RAM.

MongoDB automatically uses all free memory on the machine as its cache. System resource monitors show that MongoDB uses a lot of memory, but its usage is dynamic. If another process suddenly needs half the server’s RAM, MongoDB will yield cached memory to the other process.

Technically, the operating system’s virtual memory subsystem manages MongoDB’s memory. This means that MongoDB will use as much free memory as it can, swapping to disk as needed. Deployments with enough memory to fit the application’s working data set in RAM will achieve the best performance.

See also: FAQ: MongoDB Diagnostics for answers to additional questions about MongoDB and memory use.
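If you want to watch the index build itself while memory climbs (a sketch; adjust the connection details for your deployment), the in-progress operations report the build's progress:

# Dump in-progress operations; a background index build shows up with an "Index Build" message and a progress counter
mongo --eval 'printjson(db.currentOp())'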
I'm using MongoDB on a cloud server with 10GB of storage and 1GB RAM. After importing about 4.4 GB of data into a MongoDB database, whenever I type "mongo" on the commandline to test some queries, the server freezes.
Is there a cap on the memory resource allocation to MongoDB that I can remove? Or is it simply a matter of increasing RAM?
MongoDB uses memory mapped files, which are allocated by the OS. This means that there is no specific resource that you can free up to make more room for a Mongo console to run.
There are a couple of things to note about your environment. Firstly, the amount of RAM you have for the amount of data you have loaded is on the small side. MongoDB is going to try and keep as much of the working set in memory as it can, to avoid page faults as the disc seeks are a real killer for performance. Secondly, there will be some initial work going on when the data is loaded which could affect performance.
You can check out the Wiki page Checking Server Memory Usage for information on how much memory Mongo is using up, and general information on the Memory Usage of Mongo.
Can you try and connect to the MongoD from another machine, so as to remove this burden from the DB Server?
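For example (the hostname and database name below are placeholders), running the shell from another box keeps the shell's own memory footprint off the 1 GB server:

# Connect from a separate machine instead of starting a local shell on the DB server
mongo db-server.example.com:27017/mydatabase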
I am concerned about my server machine's performance. The application deals with a gazillion records from a RETS server feed. Whenever the server starts the mongod service, performance takes a hit and the PF usage shoots up to ~3.59 GB, although the machine has a good configuration (Server 2008, 4 GB RAM) and is running the latest 64-bit MongoDB release (2.0.6). Please enlighten me on this.
Thanks
I'm not sure how much you know about MongoDB but Mongo uses memory mapped files to access data, which results in large numbers being displayed for the mongod process. This is normal when using memory-mapped files. The amount of mapped datafile is shown in the virtual size parameter and resident bytes shows how much data is being cached in RAM. The larger your data files, the higher the vmsize of the mongod process.
If other processes need more RAM, the operating system’s virtual memory manager will relinquish some memory from the cache, and the resident bytes of the mongod process will drop.
It is recommended to use a fixed pagefile size. If you use a dynamic page file, the OS doesn't increase it fast enough to keep up with the (private) mapped memory calls. There's actually an open ticket to add a special warning if the page file is dynamic or its minimum is set too small.
This document explains how memory usage works on MongoDB.
Here are some tools that can help you diagnose system issues with MongoDB -
mongostat
Monitoring and Diagnostics
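As a quick illustration of the first of those (options as in the 2.x-era mongostat; adjust the host as needed):

# Print 10 samples at 5-second intervals; watch the faults, mapped, vsize and res columns
mongostat --host localhost:27017 --rowcount 10 5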
To be honest, I'd recommend moving this issue to the MongoDB User Google Group and posting your issue there along with the mongostat output during the issue as well as information from perfmon as this will likely be a longer discussion.
Something else to consider is setting up MMS on your mongod instances.
https://mms.10gen.com