Is there any option to limit mongodb memory usage? - mongodb

I am using MongoDB v1.8.1. My server has 4 GB of RAM, but MongoDB is using more than 3 GB of it. Is there a memory limitation option in MongoDB?

If you are running MongoDB 3.2 or a later version, you can limit the size of the WiredTiger cache as mentioned in the other answers.
In /etc/mongod.conf, add the wiredTiger section:
...
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
...
This will limit the cache size to 1 GB; more info in the docs.
This solved the issue for me, running Ubuntu 16.04 and MongoDB 3.2.
PS: after changing the config, restart the mongo daemon:
$ sudo service mongod restart
# check the status
$ sudo service mongod status
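To verify that the new limit took effect, you can check the configured maximum from the mongo shell (the field below comes from the serverStatus output and is reported in bytes; 1 GB is 1073741824 bytes):
db.serverStatus().wiredTiger.cache["maximum bytes configured"]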

Starting in 3.2, MongoDB uses WiredTiger as the default storage engine. Previous versions used MMAPv1 as the default storage engine.
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.
In MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:
60% of RAM minus 1 GB, or
1 GB.
For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting (For MongoDB 3.0, the WiredTiger internal cache uses either 1 GB or half of the installed physical RAM, whichever is larger).
For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.
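For example, on the 4 GB server from the question, the 3.2 default works out to:
max(0.6 × 4 GB - 1 GB, 1 GB) = max(1.4 GB, 1 GB) = 1.4 GB
plus whatever the filesystem cache holds on top of that.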
To limit the WiredTiger cache, add the following line to your config file:
wiredTigerCacheSizeGB = 1
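If you cannot restart the server right away, the cache size can also be adjusted at runtime with the wiredTigerEngineRuntimeConfig parameter (a minimal sketch; the 1G value is just an example):
db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=1G" })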

This question has been asked a couple of times...
See this related question/answer (quoted below): how to release the caching which is used by Mongodb?
MongoDB will (at least appear to) use up a lot of available memory, but it actually leaves it up to the OS's VMM to tell it to release the memory (see Caching in the MongoDB docs).
You should be able to release any and all memory by restarting MongoDB.
However, to some extent MongoDB isn't really "using" the memory.
For example, from the MongoDB docs on Checking Server Memory Usage:
Depending on the platform you may see the mapped files as memory in the process, but this is not strictly correct. Unix top may show way more memory for mongod than is really appropriate. The operating system (the virtual memory manager specifically, depending on OS) manages the memory where the "Memory Mapped Files" reside. This number is usually shown in a program like "free -lmt". It is called "cached" memory.
MongoDB uses an LRU (Least Recently Used) cache algorithm to determine which "pages" to release; you will find some more information in these two questions:
MongoDB limit memory
MongoDB index/RAM relationship
Mongod start with memory limit (You can't.)

You can limit mongod process usage using cgroups on Linux.
Using cgroups, our task can be accomplished in a few easy steps.
Create control group:
cgcreate -g memory:DBLimitedGroup
(make sure that cgroups binaries installed on your system, consult your favorite Linux distribution manual for how to do that)
Specify how much memory will be available for this group:
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
This command limits memory to 16 GB (conveniently, the limit applies to both malloc allocations and the OS page cache).
Now it is a good idea to drop the pages that are already sitting in the cache:
sync; echo 3 > /proc/sys/vm/drop_caches
And finally assign the running server to the control group you created:
cgclassify -g memory:DBLimitedGroup $(pidof mongod)
This will assign the running mongod process to a group limited to only 16 GB of memory.
source: Using Cgroups to Limit MySQL and MongoDB memory usage
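On newer, systemd-based distributions you can get a similar cap without manual cgroup commands by setting a resource-control directive on the service unit (a sketch, assuming the unit is named mongod; adjust the value to your needs):
# sudo systemctl edit mongod, then add:
[Service]
MemoryMax=16G
# then restart: sudo systemctl restart mongod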

I don't think you can configure how much memory MongoDB uses, but that's OK (read below).
To quote from the official source:
Virtual memory size and resident size will appear to be very large for the mongod process. This is benign: virtual memory space will be just larger than the size of the datafiles open and mapped; resident size will vary depending on the amount of memory not used by other processes on the machine.
In other words, Mongo will let other programs use memory if they ask for it.

You can also pass the cache limit directly on the command line when starting mongod:
mongod --wiredTigerCacheSizeGB 2

Adding to the top-voted answer, in case you are on a low-memory machine and want to configure the WiredTiger cache in MBs instead of whole GBs, use this:
storage:
  wiredTiger:
    engineConfig:
      configString: cache_size=345M
Source - https://jira.mongodb.org/browse/SERVER-22274
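If I remember correctly, the same configuration string can also be passed on the command line via --wiredTigerEngineConfigString (treat the flag name as something to verify against your mongod version):
mongod --wiredTigerEngineConfigString "cache_size=345M"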

For Windows it seems possible to control the amount of memory MongoDB uses, see this tutorial at Captain Codeman:
Limit MongoDB memory use on Windows without Virtualization

Not really; there are a couple of tricks to limit memory, like using the Windows System Resource Manager (WSRM) on Windows, but generally Mongo works best on a dedicated server where it is free to use memory without much contention with other systems.
Although the operating system will try to allocate memory to other processes as they need it, in practice this can lead to performance issues if other systems have high memory requirements too.
If you really need to limit memory, and only have a single server, then your best bet is virtualization.

This can be done with cgroups, by combining knowledge from these two articles:
https://www.percona.com/blog/2015/07/01/using-cgroups-to-limit-mysql-and-mongodb-memory-usage/
http://frank2.net/cgroups-ubuntu-14-04/
You can find here a small shell script which will create config and init files for Ubuntu 14.04:
http://brainsuckerna.blogspot.com.by/2016/05/limiting-mongodb-memory-usage-with.html
Just like that:
sudo bash -c 'curl -o- http://brains.by/misc/mongodb_memory_limit_ubuntu1404.sh | bash'

There is no reason to limit the MongoDB cache: by default the WiredTiger cache takes roughly half of the memory on the machine (50% of RAM minus 1 GB) and no more. The default storage engine is WiredTiger. "With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache."
You are probably looking at top and assuming that Mongo is using all the memory on your machine. That is virtual memory. Use free -m:
total used free shared buff/cache available
Mem: 7982 1487 5601 8 893 6204
Swap: 0 0 0
Only when the available metric goes to zero is your computer swapping memory out to disk. In that case your database is too large for your machine. Add another mongodb instance to your cluster.
Use these two commands in the mongo shell to get information about how much virtual and physical memory MongoDB is using:
var mem = db.serverStatus().tcmalloc;
mem.tcmalloc.formattedString
------------------------------------------------
MALLOC: 360509952 ( 343.8 MiB) Bytes in use by application
MALLOC: + 477704192 ( 455.6 MiB) Bytes in page heap freelist
MALLOC: + 33152680 ( 31.6 MiB) Bytes in central cache freelist
MALLOC: + 2684032 ( 2.6 MiB) Bytes in transfer cache freelist
MALLOC: + 3508952 ( 3.3 MiB) Bytes in thread cache freelists
MALLOC: + 6349056 ( 6.1 MiB) Bytes in malloc metadata
MALLOC: ------------
MALLOC: = 883908864 ( 843.0 MiB) Actual memory used (physical + swap)
MALLOC: + 33611776 ( 32.1 MiB) Bytes released to OS (aka unmapped)
MALLOC: ------------
MALLOC: = 917520640 ( 875.0 MiB) Virtual address space used
MALLOC:
MALLOC: 26695 Spans in use
MALLOC: 22 Thread heaps in use
MALLOC: 4096 Tcmalloc page size
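For a quicker summary, the mem section of serverStatus reports resident and virtual memory in megabytes:
db.serverStatus().mem
// fields include "resident" and "virtual" (in MB); the exact set varies by version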

One thing you can limit is the amount of memory MongoDB uses while building indexes. This is set using the maxIndexBuildMemoryUsageMegabytes parameter. An example of how it's set is below:
mongo --eval "db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 70000 } )"
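To make this persistent across restarts, the same parameter can go into the YAML config file instead (a sketch; the 70000 MB value mirrors the command above, so adjust it to your machine):
setParameter:
  maxIndexBuildMemoryUsageMegabytes: 70000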

This worked for me on an AWS instance, at least to clear the cached memory Mongo was using. After this you can see how your settings have taken effect.
ubuntu@hongse:~$ free -m
total used free shared buffers cached
Mem: 3952 3667 284 0 617 514
-/+ buffers/cache: 2535 1416
Swap: 0 0 0
ubuntu@hongse:~$ sudo su
root@hongse:/home/ubuntu# sync; echo 3 > /proc/sys/vm/drop_caches
root@hongse:/home/ubuntu# free -m
total used free shared buffers cached
Mem: 3952 2269 1682 0 1 42
-/+ buffers/cache: 2225 1726
Swap: 0 0 0

If you're using Docker: reading the Docker image documentation (in the Setting WiredTiger cache size limits section), I found out that by default the image lets mongod size its cache from all available memory, regardless of any memory limits you may have imposed on the container, so you have to limit RAM usage directly in the DB configuration.
Create your mongod.conf file:
# Limits cache storage
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1 # Set the size you want
Now you can assign that config file to the container: docker run --name mongo-container -v /path/to/mongod.conf:/etc/mongo/mongod.conf -d mongo --config /etc/mongo/mongod.conf
Alternatively you could use a docker-compose.yml file:
version: '3'
services:
  mongo:
    image: mongo:4.2
    # Sets the config file
    command: --config /etc/mongo/mongod.conf
    volumes:
      - ./config/mongo/mongod.conf:/etc/mongo/mongod.conf
    # Other settings...
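Alternatively, if all you need is to cap the cache, you can skip the config file entirely and pass the flag straight through to mongod (the official mongo image forwards extra arguments to the mongod entrypoint; the 1 GB value is just an example):
docker run -d --name mongo-container mongo:4.2 --wiredTigerCacheSizeGB 1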

Related

Increase MongoDB memory limit (or any resources it uses)

I will rent a cloud server (12 GB RAM, 240 GB NVMe SSD). I read that MongoDB's WiredTiger uses a limited amount of system memory.
Since MongoDB 3.2, MongoDB has used WiredTiger as its default storage engine, and by default MongoDB will reserve 50% of (available memory minus 1 GB) for the WiredTiger cache, or 256 MB, whichever is greater.
Since I will rent this server just for MongoDB (I will have high throughput), I want WiredTiger to use all available system resources. How can I achieve this? Thank you.
Edit your mongo.conf. This file is usually located at /etc/mongo.conf
change this section:
storage:
  dbPath: /var/lib/mongodb # (example) - don't change this
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 12
You may want to experiment with the size to test the stability of your server (may want to make it 1 GB less). Remember to restart the mongod service.
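For reference, with 12 GB of RAM the default WiredTiger cache would be roughly:
0.5 × (12 GB - 1 GB) = 5.5 GB
so raising cacheSizeGB to 11 or 12 roughly doubles the cache, leaving little headroom for the filesystem cache and the rest of the mongod process (hence the suggestion to make it 1 GB less).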

SWAP memory of my solaris server is used more than its threshold

The swap space on my Solaris server is being used beyond its threshold. Is it possible to free some space by restarting the processes that are using most of the swap?
$ swap -s
total: 10820256k bytes allocated + 453808k reserved = 11274064k used, 11911648k available
$ swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d210 85,210 16 20972720 20971152
You are only using about 800 KB of swap out of 10 GB (i.e. 0.008%), which is insignificant, so there is probably nothing to worry about according to the reported statistics.
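That figure follows directly from the swap -l output (assuming the usual 512-byte blocks):
(20972720 - 20971152) blocks × 512 bytes ≈ 784 KB used, out of 20972720 × 512 bytes ≈ 10 GB total.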
You might have a look at /tmp (or other tmpfs-backed directories) to free some virtual memory.

MongoDB can't set 'maxIncomingConnections'

I have a problem setting the maximum incoming connections for my MongoDB.
I ran ulimit -n 1000000 and restarted mongo. The last ping in my MMS dashboard shows:
"maxIncomingConnections": 1000000,
however:
"connections": {
"current": 701,
"totalCreated": 712,
"available": 118
},
As you can see, current + available is 819, which is the system default (80% of 1024).
Any ideas?
I don't know which user you ran the ulimit command as, but keep in mind that it is only valid for the current user in the current environment.
A better approach is to set the open file limit in /etc/security/limits.conf like this:
# Max is 64k anyway, and there is a hard limit
# of 20k connections in MongoDB anyway
# 40k open files should be more than enough
# unless you have _very_ large disks and a _shitload_ of datafiles
mongodb soft nofile 40000
mongodb hard nofile 64000
# Make sure we don't get throttled CPU wise
mongodb soft cpu unlimited
mongodb hard cpu unlimited
# This is kind of useless, since the maximum size
# of a file created by MongoDB is 2GB for now,
# but it is safe and the docs say to do so
mongodb soft fsize -1
mongodb hard fsize -1
# We don't want our resident stack to be limited...
mongodb soft rss -1
mongodb hard rss -1
# ... nor the address space
mongodb soft as -1
mongodb hard as -1
# Last but not least, we want the number of processes at a reasonable limit
mongodb soft nproc 32000
mongodb hard nproc 32000
However, this is only a fallback in case you start MongoDB manually, since the upstart script should set these limits itself. After adding these values, a reboot is needed, IIRC. The number of available connections should then increase.
Note: Keep in mind that each connection gets about 1MB of stack allocated on the server, which then can not be used for holding indices and data within RAM.
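To confirm which limits the running mongod actually got (as opposed to what your current shell reports), you can inspect its /proc entry (assuming a single mongod process):
grep "Max open files" /proc/$(pidof mongod)/limits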

How to bypass a 2TB core dump file system limit?

I have a huge VM address space in a multi-threaded program. Typically it runs in ~5TB of virtual, and of that only touches up to 16GB of resident set. I map HugePages, and allow them to be dumped from their sections.
When the RedHat 6.3 + Kernel 3.0.29 system forces a crash dump,
most of the resident set gets dumped, but the core file stops at 2TB.
-rw------- 1 root root 2.0T Aug 21 21:15 core-primes-6-1377119514.21645
top reports ~4TB in this instance.
21726 root -91 0 4191g 7.1g 13m R 99.7 5.6 2:39.06 50 primes
gdb reports the core is truncated
BFD: Warning: /export0/crash/core-primes-6-1377119514.21645 is truncated: expected core file size >= 4488958177280, found: 2133738614784.
Is there some magic foo to allow the kernel to dump more than 2TB of this
process? The filesystem is ext3, and has plenty of space.
The VM for the unused memory is never touched.
The underlying ext3 filesystem with a 4 KB block size tops out at a 2 TB max file size. Switching to an XFS filesystem gives a much larger max file size, and the core dump completes up to ~16 TB; however, it takes 30 minutes to dump. Switching to ext4, the native max file size is 16 TB and the dump completes in under 4 minutes. You just have to remember to update fstab. mkfs.ext4 -LEXPORT0 -Tlargefile /dev/sda10. The largefile usage type hints to mkfs that the filesystem will mostly hold large files, so it creates fewer inodes and leaves more space for file data.
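To check which filesystem (and therefore which maximum file size) the dump directory sits on, something like this helps (the path is the crash directory from the question):
df -T /export0/crash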
ulimit -c is the command to verify and change the allowed maximum size of core files created; it is documented in the bash man page. Use something like
ulimit -c 16000000
As root, you can run
ulimit -c unlimited

MongoDb replica-server getting killed due to less memory?

I need some serious help here, since this is affecting our production instance.
One of the replica servers is failing due to lack of memory (see the excerpt from kern.log below):
kernel: [80110.848341] Out of memory: kill process 4643 (mongod) score 214181 or a child
kernel: [80110.848349] Killed process 4643 (mongod)
UPDATE
kernel: mongod invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
kernel: [85544.157191] mongod cpuset=/ mems_allowed=0
kernel: [85544.157195] Pid: 7545, comm: mongod Not tainted 2.6.32-318-ec2
Insight:
The primary server's DB size is 50 GB, of which 30 GB is indexes.
The primary server has 7 GB of RAM whereas the secondary server has 3.1 GB.
Both servers are 64-bit machine and running Debian/Ubuntu respectively.
Running Mongo 2.0.2 on both servers
Note:
I see that a similar issue was created on the MongoDB Jira site recently; there is no answer to it yet.
Have you got swap enabled on these instances? While generally not needed for MongoDB operation, it can prevent the process from being killed by the kernel when you hit an OOM situation. That is mentioned here:
http://www.mongodb.org/display/DOCS/Production+Notes#ProductionNotes-Swap
The issue referenced is happening during a full re-sync rather than ongoing production replication; is that what you are doing as well?
Once you get things stable, take a look at your resident (res) memory in mongostat or MMS; if that is close to or exceeding 3 GB you should consider upgrading your secondary.
I had a similar issue. One of the things to check is how many open connections you have. Run the lsof command to see the open files associated with the mongod process. Try disabling journaling and see if the number of open files gets smaller. If so, let the replica catch up and then re-enable journaling. That might help. Adding swap should help too, or, if possible, temporarily increase the RAM.
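Since the OOM kill is the immediate problem, here is a minimal sketch of adding a swap file on the secondary (the 2 GB size and /swapfile path are just examples; adjust them for your box):
sudo fallocate -l 2G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# add "/swapfile none swap sw 0 0" to /etc/fstab to make it permanent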