MongoDB close to max memory on a PaaS

We run a small Node.js app that manages subscriptions for our mobile apps. Its backend is a MongoDB instance with only 100MB of memory, while the data size is currently around 120MB. It's all hosted on a PaaS called Nodechef.
After running for about a week, the Mongo server hit 98MB of its 100MB memory allowance. Not knowing what would happen, we forced a restart and usage dropped back to 70MB or so. It's slowly creeping back up.
A few questions:
Is this normal behavior for Mongo to keep growing in memory up to the max?
What happens when it hits max? Will it reboot or crash, or do some kind of garbage collection?
Is restarting weekly a pretty normal fix for this type of issue?

According to this you can try setting hostInfo.system.memLimitMB, but I am surprised MongoDB runs at all with only 100 MB of available memory (if that figure is accurate).
If the MongoDB process runs out of memory (i.e. a memory allocation request is denied), it is likely to terminate immediately.
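If you can get a shell on the instance, serverStatus shows how much memory mongod believes it may use for its WiredTiger cache versus what it is actually holding. A small sketch for the mongo shell; these are standard serverStatus fields:

```javascript
// mongo shell: compare the configured WiredTiger cache ceiling
// with what is actually resident and in cache.
const status = db.serverStatus();
print("resident MB:      " + status.mem.resident);
print("cache max bytes:  " + status.wiredTiger.cache["maximum bytes configured"]);
print("cache used bytes: " + status.wiredTiger.cache["bytes currently in the cache"]);
```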

This is a Nodechef-specific answer based on how they handle this; other PaaS providers may handle it differently:
"When it hits 125%, swap included, it will auto-restart itself. It does a graceful shutdown, so there should not be any problems there.
As to whether this is normal: it depends. I have seen cases where the app does not close cursors properly, causing a cursor leak on the database server, which results in memory continuously increasing. Another possibility is memory fragmentation on the server itself. As long as the restarts are not happening hourly, you are more than fine. Taking a couple of weeks to hit its peak is OK."
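On the cursor-leak point: in a Node.js app, every find() cursor should be either fully iterated or explicitly closed. A minimal sketch with the official mongodb driver; the database, collection, and query names are placeholders:

```typescript
import { MongoClient } from 'mongodb';

// Placeholder db/collection/query, purely for illustration.
async function listActiveSubscriptions(client: MongoClient) {
  const cursor = client
    .db('billing')
    .collection('subscriptions')
    .find({ status: 'active' });
  try {
    const docs = [];
    // Fully iterating the cursor lets the driver release it server-side;
    // abandoning it mid-iteration leaks it until the server-side timeout.
    for await (const doc of cursor) {
      docs.push(doc);
    }
    return docs;
  } finally {
    // An explicit close covers the early-return and thrown-error paths.
    await cursor.close();
  }
}
```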


Server collapses when memory reaches 88%

I'm using server-side rendering with Angular Universal, and PM2 as the process manager, on a Digital Ocean droplet with 8 GB memory / 80 GB disk / Ubuntu 16.04.3 x64 / 4 vCPUs.
I use a 6GB swap file, and the output of "free -m" is the following:
```
              total    used    free    shared  buff/cache   available
Mem:           7983    1356    5290        16        1335        6278
Swap:          6143      88    6055
```
The RAM usage looks fine. There are 4 processes running in PM2's cluster mode.
Every 6 to 8 hours, when memory reaches ~88% in my Digital Ocean panel, the CPU goes very high, the web application stops responding correctly, and PM2 has to restart the processes; I'm not sure for how long the web application misbehaves.
Here is an image of what happens:
Performance is fine when working normally:
I think I'm missing some sort of configuration or something, since this always happens at regular intervals.
EDIT1 So far I have fixed some incompatibilities in my code (the app was working, but sometimes failed because of them) and added a 1GB memory limit in PM2. I'm not sure this is the right approach, since I'm a bit new to process management, but the CPU levels are fine now. Any comment is appreciated. I leave a picture of the current behaviour; every time one of the four processes reaches 1GB, it is restarted:
EDIT2 I've added 3 more images, 2 showing the top processes from Digital Ocean and one showing the Keymetrics status:
EDIT3 I found some memory leaks in my Angular app (I forgot to unsubscribe from a couple of subscriptions) and the system behaviour improved, but the memory line is still trending upwards. I'll keep investigating memory leaks in Angular and see if I've made any other mistakes:
It looks like your Angular Universal app is leaking memory; it should not grow over time as you observe, but stay mostly flat.
You can try to find the memory leak (it looks like you already found one issue and have a suspicion about what else it could be).
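For the unsubscribe class of leak you mention in EDIT3, the standard pattern is to collect subscriptions and tear them down in ngOnDestroy. A minimal sketch; the component name and stream are made up:

```typescript
import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subscription, interval } from 'rxjs';

@Component({ selector: 'app-ticker', template: '{{ tick }}' })
export class TickerComponent implements OnInit, OnDestroy {
  tick = 0;
  private subs = new Subscription();

  ngOnInit(): void {
    // Every long-lived subscription is added to one parent Subscription...
    this.subs.add(interval(1000).subscribe(n => (this.tick = n)));
  }

  ngOnDestroy(): void {
    // ...and torn down when the component goes away. Forgetting this keeps
    // the component (and everything it references) alive: a memory leak.
    this.subs.unsubscribe();
  }
}
```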
Another thing you can try is to periodically restart your app.
See here for an example of how to restart your pm2 process every couple of hours to reset it and prevent the OOM situation you've been running into; a sketch of what that looks like follows.
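For reference, pm2 can apply both the memory cap from EDIT1 and a scheduled restart from one ecosystem file. The max_memory_restart and cron_restart options are documented pm2 settings; the app name and script path below are placeholders:

```javascript
// ecosystem.config.js (pm2 requires plain JS here).
// App name and script path are placeholders for illustration.
module.exports = {
  apps: [{
    name: 'universal-app',
    script: 'dist/server.js',
    instances: 4,
    exec_mode: 'cluster',
    max_memory_restart: '1G',   // restart a worker when it reaches 1 GB
    cron_restart: '0 */6 * * *' // and proactively restart every 6 hours
  }]
};
```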
In our (edge) case, the Kubernetes healthcheck was the cause of the issue. The healthcheck accessed the main page via an internal IP. The page used the caller's URL (in this case, its IP) to load some resources, which it couldn't find that way. This led to an error that was somehow cached and slowly used up all the memory. We had the same very linear rise in memory, even during the night, because of the regularity of the healthcheck.
We solved the problem by pointing the healthcheck at "/health", where we only return a 200 code, as one should do anyway.
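A dedicated health endpoint is tiny. Sketched here with Express purely as an illustration; the post doesn't say what actually served the page:

```typescript
import express from 'express';

const app = express();

// The probe hits only this route, so it can't trigger the page's
// URL-dependent resource loading that was leaking memory.
app.get('/health', (_req, res) => {
  res.sendStatus(200);
});

app.listen(3000);
```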

MongoDB memory consumption - it keeps rising

I run a MongoDB server on an EC2 instance on AWS. For a while it ran flawlessly, never spiking above 50% memory usage.
Suddenly, the behaviour changed to an ever-increasing curve: it never goes down significantly, and it keeps growing throughout the day until it reaches a peak (sometimes 100%, sometimes 90% or 80%) and suddenly drops to 50% (perhaps some hypervisor activity here?).
This behavior is NOT, in any way, compatible with the usage pattern of the application server this database is serving. Below is a graph comparing incoming requests vs. db memory usage.
What should I be looking into to diagnose this memory issue? I have already looked at the number of open connections at any given time, and it is very low (<20), so I don't think it could be that. Any other ideas?
The issue doesn't seem to impact database performance, but I can't run maintenance jobs (like archiving and backups) with memory that high; I have had cases in which the database crashed.
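A reasonable first pass on this kind of creep is to split resident memory into the usual suspects with serverStatus. All of the fields below are standard; the tcmalloc section only appears on builds using the tcmalloc allocator:

```javascript
// mongo shell: is the growth cache, cursors, connections,
// or allocator fragmentation?
const s = db.serverStatus();
printjson({
  residentMB:  s.mem.resident,
  cacheBytes:  s.wiredTiger.cache["bytes currently in the cache"],
  openCursors: s.metrics.cursor.open.total,
  connections: s.connections.current,
  // heap_size minus current_allocated_bytes is memory held but unused,
  // i.e. fragmentation (tcmalloc builds only).
  heapBytes:      s.tcmalloc ? s.tcmalloc.generic.heap_size : null,
  allocatedBytes: s.tcmalloc ? s.tcmalloc.generic.current_allocated_bytes : null
});
```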

Unusual spikes in CPU utilization in CentOS 6.6 while starting pycharm

For the last couple of days, my system has been behaving strangely. I am a regular user of the PyCharm software, and it used to run very smoothly on my system, with no hiccups at all. But for the last couple of days, whenever I start PyCharm, my CPU utilization behaves strangely, as in the image: Unusual CPU util
I am confused because when I look at the process list, or run ps/top in a terminal, there is no process using more than 1% or 2% CPU. So I am not sure where these resources are being consumed.
By unusual CPU utilization I mean that CPU1 is pegged at 100% for a couple of minutes or so, then CPU2. That is, only one CPU's utilization goes to 100% for some time, followed by another's. This goes on for 10 to 20 minutes; then the system returns to normal.
P.S.: I don't think this problem is specific to PyCharm, as I face similar issues while doing other work as well; it's just that I can always reproduce it with PyCharm.
POSSIBLE CAUSE: I suspect you have a thrashing problem. The CPU usage of your applications is low because none of them are actually getting much useful work done; all the processing is being taken up by moving memory pages to and from the disk. Your CPU usage probably settles down after a while because your application has entered a state where its working set has shrunk to the point where it can all be held in memory at one time.
This has probably happened because one of the apps on your machine is handling a larger data set than before, and so requires more addressable memory. Another possibility is that, for some reason, many more apps are running on your machine.
POTENTIAL SOLUTION: There are several ways you can address this. The simplest is to put more RAM in your machine. If that doesn't work or isn't possible, you'll have to figure out which app is the memory hog. You may simply have to work with smaller problems/data sets, or offload some of the apps onto a different box.
MIGRATING CPU LOAD: Operating systems move tasks (user apps, kernel threads) between CPUs for many different reasons, ranging from plain randomness to certain apps having more of their addressable memory in one bank than in another. Given that you are probably thrashing heavily, I'm not surprised that the CPU your app runs on appears random over time.
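To confirm thrashing, watch the swap-in/swap-out counters, e.g. the si/so columns of vmstat 1. The sketch below samples the same pswpin/pswpout counters from /proc/vmstat with Node, purely as an illustration of what to look for; sustained non-zero deltas while apps show ~0% CPU are the signature:

```typescript
import { readFileSync } from 'fs';

// Read the cumulative swap-in/swap-out page counters from /proc/vmstat.
function swapCounters(): { pswpin: number; pswpout: number } {
  const lines = readFileSync('/proc/vmstat', 'utf8').split('\n');
  const get = (key: string) =>
    Number((lines.find(l => l.startsWith(key + ' ')) ?? '0 0').split(' ')[1]);
  return { pswpin: get('pswpin'), pswpout: get('pswpout') };
}

const before = swapCounters();
setTimeout(() => {
  const after = swapCounters();
  console.log(`pages swapped in : ${after.pswpin - before.pswpin} in 5s`);
  console.log(`pages swapped out: ${after.pswpout - before.pswpout} in 5s`);
}, 5000);
```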

OSB: Analyzing memory of proxy service

I have multiple proxies in a message flow. Is there a way in OSB to monitor the memory utilization of each proxy? I'm getting OOM errors and want to investigate which proxy is eating most of the memory.
Thanks!
If you're getting OOMEs, then it's either because a proxy is not freeing up all the memory it uses (so it will eventually fail even with one request at a time), or because you use too much memory per invocation, so it dies above a certain load threshold but is fine under low load. Do you know which it is?
Either way, you will want to generate a heap dump on OOME so you can investigate what's going on. It's annoying but sometimes necessary. A colleague had to do that recently to fix some issues (one problem was an SB-transport platform bug, one was a thread-starvation issue due to a platform work-manager bug, and the last one was a Muxer bug when used on Exalogic).
If it just performs poorly under load, then you'll need to do the usual OSB optimisations, like using fewer Assign steps (but assigning more variables per step) and doing a lot more in XQuery rather than in proxy steps, especially loops that don't need a service callout, since they can easily be rolled into a for loop in XQuery; you know, all the standard stuff.

iphone - open and close sqlite database every time I use it

I am writing an iPhone app that uses SQLite. I am used to opening and closing my connections every time I use a database, but I do not know whether that is good practice in the iPhone/SQLite environment. Should I open the database once, or is it OK to open and close it each time I use it? Please let me know.
I believe you should keep it open as long as you can, so data stays cached in DRAM. Of course, you should also organize your transactions so that you commit at logical points in time and maintain transactional integrity.
I would do as Matthew suggested: keep one connection opened for as long as your program is running.
Both answers seem right, but it actually depends on how often you use the database and how large it is. If the DB is large, you should set a larger page cache, but that leads to larger memory consumption, and if access is rare there is no reason to hold it open all the time (though if usage is light, a large page cache won't help you much either).
If it's small, there is no reason to open and close it each time, even with infrequent use; on average, your resource consumption is higher with regular open/close. So, all in all: don't reopen the db each time you use it. A sketch of the open-once pattern follows.
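To make the open-once advice concrete: keep one handle for the life of the app and close it only on shutdown. This sketch uses Node's better-sqlite3 rather than iPhone code, purely to illustrate the pattern; the same idea applies to sqlite3_open/sqlite3_close on iOS. The file and table names are placeholders:

```typescript
import Database from 'better-sqlite3';

// One shared handle for the life of the process (placeholder file name).
const db = new Database('app.db');

export function getSubscription(id: string) {
  // Prepared statements are reused against the long-lived handle,
  // a benefit you lose by reopening the database per query.
  return db.prepare('SELECT * FROM subscriptions WHERE id = ?').get(id);
}

// Close once, at shutdown.
process.on('exit', () => db.close());
```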