I don't know why, but I'm seeing some very interesting stats.
First of all, I installed MongoDB on an EC2 c5.4xlarge instance.
Then I ran some test writes against the DB.
The funny thing is, when I write 10 documents concurrently, MongoDB uses almost 100% CPU per core, but memory usage does not increase at all!
If I understand correctly, MongoDB writes to memory first, so I expected memory usage to grow.
To reduce per-core CPU usage, I tried the following:
disabled atime (see the fstab sketch after this list)
moved the MongoDB data folder onto an XFS file system
stored the MongoDB data on EBS with provisioned IOPS
upgraded glibc to the latest version
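For reference, the atime and XFS changes can be combined into a single mount entry. A sketch of an /etc/fstab line, assuming the data volume is an EBS device exposed as /dev/nvme1n1 (the device name and mount point are examples):

/dev/nvme1n1  /home/ubuntu/mongodb/data  xfs  defaults,noatime  0  0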
The MongoDB version is 4.4 and the storage engine is WiredTiger.
Please help if you have experience running MongoDB on AWS under a heavy, continuous write load.
Below is my mongodb.conf:
# Where and how to store data.
storage:
  dbPath: "/home/ubuntu/mongodb/data"
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 24

# Where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: "/home/ubuntu/mongodb/log/mongod.log"

# Network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

# How the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

replication:
  replSetName: "rs-exchange"
There is no requirement that a database use a specific amount of memory, nor that its memory usage grow constantly.
For example, if I perform a replace operation in an infinite loop, a lot of CPU would naturally be consumed, but the amount of memory used may well stay flat.
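To make that concrete, here is a minimal mongo shell sketch of such a loop; the cpuTest collection and payload field are made up for illustration:

db.cpuTest.insertOne({ _id: 1, payload: "x" })
while (true) {
  // Each iteration rewrites the same single document, so the working set
  // (and the WiredTiger cache) never grows, while the server burns CPU
  // applying the replacements.
  db.cpuTest.replaceOne({ _id: 1 }, { payload: new Date().toISOString() })
}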
Related
I am going to rent a cloud server (12 GB RAM, 240 GB NVMe SSD). I have read that MongoDB's WiredTiger engine uses a limited amount of system memory.
Since MongoDB 3.2, MongoDB has used WiredTiger as its default storage engine, and by default MongoDB reserves 50% of (RAM − 1 GB) for the WiredTiger cache, or 256 MB, whichever is greater.
Since I will rent this server just for MongoDB (and I will have high throughput), I want WiredTiger to use all available system resources. How can I achieve this? Thank you.
Edit your mongod.conf. This file is usually located at /etc/mongod.conf.
Change this section:
storage:
  dbPath: /var/lib/mongodb  # (example) - don't change this
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 12
You may want to experiment with the size to test the stability of your server (you may want to make it 1 GB less). Remember to restart the mongod service.
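After the restart, you can confirm the value WiredTiger actually picked up from the mongo shell; serverStatus reports the configured cache maximum in bytes:

db.serverStatus().wiredTiger.cache["maximum bytes configured"]
// 12884901888 (12 GB) if cacheSizeGB: 12 was applied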
I am using MongoDB. I have around 700,000 documents in a collection in my DB. When I make a find call, MongoDB crashes. Looking at dmesg, I found that the OOM killer is killing mongod after it consumes a lot of RAM.
We have just 1 GiB of RAM. I'd like to know how I can make MongoDB use less RAM than it currently does for a find query. Is there a way to configure MongoDB to do this?
I've looked at another SO answer and tried uncommenting the wiredTiger section and setting it to:
wiredTiger:
  engineConfig:
    cacheSizeGB: 0.1
and ran systemctl restart mongod, but it still crashes on a find with a query.
I've also found many questions and blog posts describing this problem, but none of them offer a solution. Can anything be done in the config to prevent MongoDB from crashing, or to make mongod use less memory for all operations?
This may help you:
storage:
  wiredTiger:
    engineConfig:
      configString: cache_size=345M
From Here:
Is there any option to limit mongodb memory usage?
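If you prefer not to touch the config file, the same WiredTiger setting can be passed on the mongod command line via the --wiredTigerEngineConfigString flag (the dbpath here is an example):

mongod --dbpath /var/lib/mongodb --wiredTigerEngineConfigString 'cache_size=345M'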
Is there a way to prevent mongod from pre-allocating these 100 MB files in the journal folder?
WiredTigerPreplog.0000000001
WiredTigerPreplog.0000000002
I want journaling to be enabled.
Below are some notes from the MongoDB documentation on journaling:
WiredTiger will pre-allocate journal files.
Preallocation Lag
MongoDB may preallocate journal files if the mongod process determines that it is more efficient to preallocate journal files than to create new journal files as needed.
Preallocation can be avoided only with the MMAPv1 storage engine, using the --noprealloc option when starting mongod; it is not applicable to the WiredTiger storage engine.
A few more references:
Meaning of WiredTigerPreplog files
I often need to run test instances of mongod, and these preallocated WiredTiger journal (log) files just waste 200 MB each time.
They can be disabled by adding this parameter to your mongod command line:
--wiredTigerEngineConfigString 'log=(prealloc=false)'
Or this to your mongod.conf file:
storage:
  wiredTiger:
    engineConfig:
      configString: log=(prealloc=false)
Of course this should never be done in production, only when testing things that are unrelated to journaling. Journal file preallocation is a deliberate performance feature that is almost always a win in the real world (which is why it defaults to true).
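A usage sketch for the test-instance case described above (all paths here are examples):

mkdir -p /tmp/mongo-test
mongod --dbpath /tmp/mongo-test --fork --logpath /tmp/mongo-test/mongod.log --wiredTigerEngineConfigString 'log=(prealloc=false)'
ls -lh /tmp/mongo-test/journal   # no 100 MB WiredTigerPreplog files now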
I did not find an answer in the MongoDB documentation. What happens when memory is full while using the MongoDB In-Memory Storage Engine?
Is there an eviction (LRU)?
Is there an error message?
Is it configurable ?
Thank you
By default, the in-memory storage engine uses 50% of physical RAM minus 1 GB.
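For example, on a host with 16 GB of physical RAM, that default works out to 0.5 × 16 GB − 1 GB = 7 GB.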
If a write operation would cause the data to exceed the specified memory size, MongoDB returns the error:
"WT_CACHE_FULL: operation would overflow cache"
To specify a new size, use the storage.inMemory.engineConfig.inMemorySizeGB
setting in the YAML configuration file format:
storage:
  engine: inMemory
  dbPath: <path>
  inMemory:
    engineConfig:
      inMemorySizeGB: <newSize>
Or use the command-line option --inMemorySizeGB:
mongod --storageEngine inMemory --dbpath <path> --inMemorySizeGB <newSize>
By the way, I found this in the official documentation; you may want to explore it further.
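You can also confirm which engine is active from the mongo shell:

db.serverStatus().storageEngine.name   // returns "inMemory" when the in-memory engine is running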
When backing up the MongoDB file system using tar on a secondary in a replica set, tar reports that files have changed during the tar process, even though the lock command has been run. For reliable backups this should not happen. What am I missing?
devtest:SECONDARY> use admin
switched to db admin
devtest:SECONDARY> db.fsyncLock()
{
"info" : "now locked against writes, use db.fsyncUnlock() to unlock",
"seeAlso" : "http://dochub.mongodb.org/core/fsynccommand",
"ok" : 1
}
Using the find command to look for changed files while the tar process is running confirms this, and comparing before-and-after versions of these files with diff also confirms it. It always appears to be these files:
/var/lib/mongo # find -cmin 1
.
./WiredTiger.turtle
./WiredTiger.wt
./diagnostic.data
./diagnostic.data/metrics.interim
This is MongoDB 3.2 with WiredTiger configured.
/etc/mongo.conf
storage:
  directoryPerDB: true
  dbPath: /var/lib/mongo
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
  journal:
    enabled: true
The documentation seems to imply the files will not be changed. Maybe only the "data" files will not change...
https://docs.mongodb.com/v3.2/reference/method/db.fsyncLock/
Changed in version 3.2: db.fsyncLock() can ensure that the data files do not change for MongoDB instances using either the MMAPv1 or the WiredTiger storage engines, thus providing consistency for the purposes of creating backups.
In previous MongoDB versions, db.fsyncLock() cannot guarantee a consistent set of files for low-level backups (e.g. via file copy cp, scp, tar) for WiredTiger.
I'm guessing MongoDB still needs to keep writing some data while it's running, but from what you say it seems safe to back up your data, as nothing is changing your collection data.
However, if you're unsure, you could always run db.shutdownServer(), which forces a flush to disk and stops the service; then back up the files and start the mongod process again.
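A sketch of that shutdown-based approach on the secondary (paths and the backup destination here are examples; excluding diagnostic.data is optional, since those FTDC metrics files are not needed for a restore):

devtest:SECONDARY> use admin
devtest:SECONDARY> db.shutdownServer()
# then, from the OS shell:
tar czf /backup/mongo-backup.tar.gz --exclude='./diagnostic.data' -C /var/lib/mongo .
sudo systemctl start mongod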