I am running tests on a server, and as a result MongoDB starts using a lot of RAM. Mongo version is 3.6.8, the server is ARM architecture, and there is no swap file. I've tried all the solutions from here: "Is there any option to limit mongodb memory usage?". In particular, when using cgroups, mongo exceeds the limit and dies.
I will also say that the memory MongoDB uses for its cache is not released when other processes need it.
I would be glad of any help.
I have two instances running on AWS (EC2). One instance runs only the MongoDB server, while the other runs a multi-process Python program that pulls data from the remote Mongo server.
On the Python instance I am using pymongo, and each process establishes its own connection (MongoClient).
While monitoring the CPU utilization of the mongo's instance, I get very low CPU usage (about 2%).
In the free monitoring tool (https://cloud.mongodb.com/freemonitoring/cluster), I get about 40% CPU utilization.
Why is there such a big difference between the two values?
Does MongoDB need to be specially configured in order to utilize multiple CPU cores?
Does MongoDB need to be specially configured in order to utilize multiple CPU cores?
No.
Why is there such a big difference between the two values?
You have not described where the 2% value came from or what it is measuring, so this question cannot be answered as it stands.
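If it helps to reconcile the two figures, the server itself can tell you what it is doing. A minimal mongo shell sketch, run against the mongod instance (what you choose to print from it is only illustrative):
// Connection and operation counters as reported by the server
var status = db.serverStatus()
printjson(status.connections)   // current vs. available connections
printjson(status.opcounters)    // queries, inserts, updates, etc. since startup
// Operations in progress right now, which is roughly what drives CPU at any moment
db.currentOp({ active: true })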
I have a MongoDB instance in the cloud on an AWS EC2 t2.micro (30 GB storage, 1 GB RAM) running in Docker. In that database I have a single collection which stores 411 thousand documents, and this takes ~700 MB of disk space.
On my local computer, if I run this in mongo shell:
db.my_collection.find().skip(200000).limit(1)
then I get the correct results, but if I run this
db.my_collection.find().skip(220000).limit(1)
then MongoDB shuts down. Why? What should I do to access this data?
It appears that your system doesn't have enough RAM to satisfy MongoDB's demand. When a Linux system is critically low on memory, the kernel starts killing processes to avoid crashing the system itself.
I believe this is what is happening in your case too: MongoDB is not even getting a chance to write a log entry. I'd recommend increasing the RAM or, if that's not feasible, adding more swap space. That will prevent the crash, but MongoDB will keep working very, very slowly.
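Before adding RAM or swap, it may be worth confirming from the mongo shell (while mongod is still up) how much memory it actually holds. A rough sketch; the field names are those reported by serverStatus:
// Resident and virtual memory of the mongod process, in MB
printjson(db.serverStatus().mem)
// Page faults are a good hint that the working set no longer fits in RAM
printjson(db.serverStatus().extra_info)
// WiredTiger internal cache: configured maximum vs. what is currently held (bytes)
var cache = db.serverStatus().wiredTiger.cache
print(cache["maximum bytes configured"])
print(cache["bytes currently in the cache"])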
Please visit these excellent resources on Linux and its behavior.
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
https://serverfault.com/questions/480266/how-to-know-if-the-server-runs-out-of-ram-before-crashing-down
Server: Ubuntu 16.04
Database: MongoDB 3.2.10
Configuration: replica set (3 nodes)
Engine: WiredTiger
We are having performance problems with querying: query run time degrades over time, and it does not seem to be related to load, as no users are accessing the node. The current fix is restarting the mongod instance. I had the same issue on version 3.2.9, and it keeps happening on 3.2.10 as well; the degradation occurs on all nodes.
A few tips for verifying performance issues in MongoDB:
With the MongoDB profiler you can check the slow-running queries (see the shell sketch after this list).
You can try indexing the documents (using the output of the step above).
Since the instances are replicated, please revisit the write concern settings: https://docs.mongodb.com/manual/core/replica-set-write-concern/
Check whether MongoDB's in-memory storage engine can help: https://docs.mongodb.com/manual/core/inmemory/
You can find a few more important tips here: https://docs.mongodb.com/manual/administration/analyzing-mongodb-performance/
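As a rough illustration of the first two tips, something along these lines can be run in the mongo shell (the 100 ms threshold, the collection name and the field name are only examples):
// 1. Enable the profiler for operations slower than 100 ms
db.setProfilingLevel(1, 100)
// Later, inspect the slowest recorded operations
db.system.profile.find().sort({ millis: -1 }).limit(5).pretty()
// 2. Create an index matching a slow query's filter/sort
db.my_collection.createIndex({ createdAt: 1 })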
I've been using MongoDB on my Windows Server 2012 machine for more than two years. Since the last update, some weird issues have started to happen which in the end led to the entire RAM being used.
The service I've configured for MongoDB is as follows:
logpath=d:\data\log\mongod.log
dbpath=d:\data\db
storageEngine=wiredTiger
rest=true
#override port
port=27017
#configsvr = true
shardsvr = true
And in order to limit the cache memory usage I've added the following line:
wiredTigerCacheSizeGB=10
And this is where the weird stuff started happening. When I check Task Manager it says that mongod is indeed limited to the 10 GB I defined in the service, yet overall far more than 10 GB of memory is in use.
In the first image you can see the memory consumption, with processes sorted by RAM usage, while in fact the machine I'm using has 28 GB in total.
This crazy consumption leads to failures in the scripts I'm running, even the most basic ones, even when I only run simple queries like 'count' or 'distinct'. I believe this is a direct result of the memory consumption.
When I checked the log files I saw many open connections; even after a session ends, the log still shows the same number of connections open.
So in the end I have two major questions:
1. Is there a way of solving this issue without downgrading the MongoDB version?
2. Does the config file look right? Is everything in it necessary?
Memory usage in WiredTiger involves a two-level cache:
First is the WiredTiger cache as controlled by --wiredTigerCacheSizeGB
Second is the operating system filesystem cache. MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.
See also WiredTiger memory usage
For the OS filesystem cache, MongoDB doesn't manage the memory it uses directly - it lets the OS manage it. Windows will try to use every last scrap of physical memory if it can, but much of it can and will be released if other processes request memory.
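To see how full the first-level (WiredTiger) cache is, and to shrink it without a restart, something like the following can be run in the mongo shell; the 8G value is purely illustrative and should be sized for your machine. The second-level (filesystem) cache cannot be controlled this way - the OS owns it:
// Inspect the WiredTiger internal cache (first level), values in bytes
var cache = db.serverStatus().wiredTiger.cache
print(cache["maximum bytes configured"])
print(cache["bytes currently in the cache"])
// Adjust the internal cache at runtime (illustrative size)
db.adminCommand({ setParameter: 1, wiredTigerEngineRuntimeConfig: "cache_size=8G" })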
An alternative is to run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system.
Having said the above:
You are also running another database on the server, i.e. mysqld. MongoDB, like many databases, will perform better on a dedicated server, which reduces memory contention.
Task Manager shows mongod using 10 GB, although the machine is using up to ~28 GB. This may or may not be due to mongod, as you have other processes running as well.
Useful resources:
FAQ: Memory diagnostics for WiredTiger
FAQ: MongoDB Cache Handling
MongoDB Production Notes
We are currently setting up a MongoDB database for our environment. We are running into specific collections which will initially be more than 2 GB in size.
Our deployment environment is a 64-bit Ubuntu machine.
We are trying to find out what the size limit is for a specific collection and for a shard in a MongoDB sharded environment.
As far as I know, there is no limit to the size of a collection within MongoDB. The only limit would be the amount of disk space available to you. In the case of sharding, it would be the total amount of disk space available on all shards. And according to the docs, you can only have 1000 shards.
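If you want to check how close a specific collection is getting to the disk space you have, its stats are easy to read from the shell (the collection name here is just an example):
// Logical data size, on-disk storage size and total index size, in bytes
var s = db.my_collection.stats()
print(s.size)            // uncompressed data size
print(s.storageSize)     // size on disk
print(s.totalIndexSize)  // size of all indexes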