When we deployed our Java app using Spring Data + MongoDB, we noticed that the CPU spiked to 100%. After removing the Mongo configuration it went back to normal. The application is literally doing nothing with Mongo; we merely have it added to the context file:
<mongo:mongo id="mongo" host="127.0.0.1" port="27017" />
Any ideas what could be causing this? The CPU slowly gets eaten up until it reaches 150%.
I am using Meteor JS in production. At first it looked like we were suffering from a memory leak in our application, but now it seems like MongoDB is the culprit, or perhaps Meteor itself?
We're heavy users of Meteor methods and don't manage connections to Mongo ourselves, since Meteor does that for us. However, in production it seems like mongo processes just hang around forever and consume more and more memory until the application becomes unresponsive.
As you can clearly see, there are a lot of mongo processes here, some of which have been alive for hours. Yesterday, after a reboot of the server, memory consumption was about 900MB; now it's closing in on 4GB.
Since rebooting every once in a while is unacceptable, what can we do to fix this? I don't really know where to start, since Meteor manages the connections to Mongo and I am uncertain what's causing this issue.
Any tips or direction would be helpful, even if it's just a pointer on how to debug our application.
MeteorJS version: 1.10
Mongo version: 4.2.1
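As a starting point for debugging, it can help to ask mongod itself what it thinks it is holding. A minimal mongo shell session along these lines (standard serverStatus/currentOp output, nothing Meteor-specific) shows memory, connection counts, and long-running operations:
// Memory as seen by mongod (resident/virtual, in MB) and open connections
db.serverStatus().mem
db.serverStatus().connections
// Operations that have been running for more than 60 seconds
db.currentOp({ secs_running: { $gte: 60 } })
If mongod's own numbers stay flat while the process list keeps growing, the leak is more likely on the Meteor/Node side than inside mongod.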
I've installed BorPred under a local IIS Express on a clean Server 2019 Core machine. Debug in web.config is disabled, and the log4net setup was changed to show only ERROR/FATAL.
BorPred started with memory usage of less than 20 MB; once I connect to it, memory usage starts growing, which is fine.
If I leave BorPred alone for an hour it keeps running, which is also normal given the periodic api/admin_WebApi/GetChangesSince calls.
But after that hour the memory usage has increased to about 600 MB.
I use the TASKLIST command to check it.
Question: is this normal behavior, or could it be a memory leak?
Are there any settings to change or check that could help decrease the memory usage?
Thank you
The new name for this product is MDrivenServer.
MDrivenServer has client synchronization; this builds up a list of changed identities, so a build-up of memory is expected as update operations keep the recently changed objects in memory.
MDrivenServer also has internal EcoSpaces to handle its own administration and server-side jobs; these are garbage collected and recreated after they have been in use for a certain period of time.
.NET does not necessarily release memory from a process that has needed that memory in the past, which means the reported memory usage tends to equal the worst-case need: if a server-side job that pushes memory usage runs once a day, the reported memory usage may still reflect that peak long afterwards.
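As an illustration of that last point (general .NET behavior, not anything MDrivenServer-specific): the managed heap a process actually needs can be far smaller than the working set that TASKLIST reports. A small diagnostic along these lines, run in any .NET process (the class name here is just for illustration), makes the gap visible:
using System;
using System.Diagnostics;

class MemCheck
{
    static void Main()
    {
        // Managed heap actually in use (forces a full collection first)
        long managedBytes = GC.GetTotalMemory(forceFullCollection: true);

        // Memory the OS has committed to the process - roughly what TASKLIST shows
        long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine("Managed heap: " + managedBytes / (1024 * 1024) + " MB");
        Console.WriteLine("Working set:  " + workingSetBytes / (1024 * 1024) + " MB");
    }
}
A large gap between the two usually means the runtime is holding on to memory it no longer needs, rather than the application leaking.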
I have a MongoDB instance in the cloud on an AWS EC2 t2.micro (30GB storage, 1GB RAM) running in Docker, and in that database I have a single collection which stores 411 thousand documents, taking ~700MB of disk space.
On my local computer, if I run this in the mongo shell:
db.my_collection.find().skip(200000).limit(1)
then I get the correct result, but if I run this:
db.my_collection.find().skip(220000).limit(1)
then MongoDB shuts down. Why? What should I do to access this data?
It appears that your system doesn't have enough RAM to satisfy MongoDB's demand. When a Linux system runs critically low on memory, the kernel starts killing processes (the OOM killer) to avoid crashing the system itself.
I believe this is what's happening in your case too; MongoDB is not even getting a chance to write a log entry. I'd recommend increasing the RAM or, if that's not feasible, adding more swap space. This will prevent the crash, but MongoDB will keep working very, very slowly.
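You can usually confirm this from the kernel log; on a typical Linux host, something like the following will show whether the OOM killer fired:
# Look for OOM-killer activity in the kernel log
dmesg | grep -iE 'killed process|out of memory'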
Please visit these excellent resources on Linux and its behavior:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
https://serverfault.com/questions/480266/how-to-know-if-the-server-runs-out-of-ram-before-crashing-down
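On the second half of the question (how to actually reach those documents): a large skip() still makes mongod scan through every skipped document, and with only 1GB of RAM that alone can push the system into the OOM situation described above. A common workaround is to page with a range query on an indexed field instead of skipping; a rough sketch in the mongo shell, assuming _id order is acceptable:
// Read one page and remember where it ended
var page = db.my_collection.find().sort({ _id: 1 }).limit(1000).toArray();
var lastId = page[page.length - 1]._id;
// Next page: resume from the last _id instead of skipping 220000 documents
db.my_collection.find({ _id: { $gt: lastId } }).sort({ _id: 1 }).limit(1000);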
I've been using MongoDB on my Windows Server 2012 machine for more than two years. Since the last update, some weird issues have started to happen which in the end lead to the entire RAM being used up.
The service I've configured for MongoDB is as follows:
logpath=d:\data\log\mongod.log
dbpath=d:\data\db
storageEngine=wiredTiger
rest=true
#override port
port=27017
#configsvr = true
shardsvr = true
And in order to limit the cache memory usage I've added the following line:
wiredTigerCacheSizeGB=10
And this is where the weird stuff started happening. When I check Task Manager it shows that mongod really is limited to the 10GB I defined in the service, but overall a lot more than 10GB is actually in use.
In the first image you can see the processes sorted by RAM consumption, while the machine I'm using has 28GB in total.
This crazy consumption leads to failures in the scripts I'm running, even the most basic ones that only run simple queries like 'count' or 'distinct'. I believe this is a direct result of the memory consumption.
When I checked the log files I saw a lot of open connections; even when a session ends, the log still indicates the same number of open connections.
So in the end I have two major questions:
1. Is there a way of solving this issue without downgrading the MongoDB version?
2. Does the config file look right? Is everything in it necessary?
Memory usage with WiredTiger is effectively a two-level cache:
First is the WiredTiger internal cache, as controlled by --wiredTigerCacheSizeGB.
Second is the operating system filesystem cache. MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.
See also WiredTiger memory usage
For the OS filesystem cache, MongoDB doesn't manage the memory it uses directly - it lets the OS manage it. Windows will try to use every last scrap of physical memory if it can, but much of it should and will be released if other processes request memory.
An alternative is to run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system.
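To see how much memory the WiredTiger cache itself is actually holding, serverStatus reports it directly; a quick check from the mongo shell (field names as reported in WiredTiger's cache statistics):
// WiredTiger internal cache: configured limit vs. what is actually in it (bytes)
var wt = db.serverStatus().wiredTiger.cache;
print("configured max:     " + wt["maximum bytes configured"]);
print("currently in cache: " + wt["bytes currently in the cache"]);
// Resident and virtual memory of the mongod process itself (MB)
db.serverStatus().mem
If the cache numbers stay near the 10GB limit while Task Manager's total keeps climbing, the extra memory is coming from the filesystem cache or from other processes rather than from the WiredTiger cache.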
Having said the above:
You are also running another database on the server, i.e. mysqld. MongoDB, like most databases, will perform better on a dedicated server with less memory contention.
Task Manager shows mongod using 10GB, although the machine is using up to ~28GB. The remainder may or may not be related to mongod, as you have other processes running as well.
Useful resources:
FAQ: Memory diagnostics for WiredTiger
FAQ: MongoDB Cache Handling
MongoDB Production Notes
I am doing load testing on my application using JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it's understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM, Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySql 5.6 (in a different machine)
Jmeter: 2.13
My scenario is that I have 20 users log into the application and perform normal activity that should not create a huge load. Some minutes into the test, the CPU usage of the JBoss JVM goes up and never comes back down. It stays like that until the JVM is killed.
To help you understand better, here are a few screenshots.
I found a few posts where the CPU was at 100%, but nothing there matched my situation and I could not find a solution.
Any suggestions on what to do would be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to look at the CPU data and thread dumps at the same time.
Capture 5-6 thread dumps at the time of the issue. Similarly, capture the CPU consumption on a thread-by-thread basis.
Generally the root cause of a CPU issue like this is a problem with threads: BLOCKED threads, long-running threads, deadlocks, long-running loops, etc. It can be identified by going through the stacks of those threads.
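The thread dumps themselves can be taken with jstack <pid> from the JDK. For the per-thread CPU numbers, one option is a small helper built on the standard java.lang.management API (available on JDK 1.6); this is a sketch written as a standalone class for clarity - to inspect the JBoss threads, the same loop would need to run inside that JVM, for example from a debug JSP:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuReport {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (mx.isThreadCpuTimeSupported() && !mx.isThreadCpuTimeEnabled()) {
            mx.setThreadCpuTimeEnabled(true);
        }
        // List every live thread with its state and accumulated CPU time
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            long cpuNanos = mx.getThreadCpuTime(info.getThreadId());
            if (cpuNanos < 0) {
                continue; // thread died or CPU timing is unavailable
            }
            System.out.printf("%-50s %-15s cpu=%d ms%n",
                    info.getThreadName(), info.getThreadState(), cpuNanos / 1000000);
        }
    }
}
Threads whose CPU time keeps climbing between two snapshots are the ones whose stacks are worth reading first.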