Reserving RAM for OS on a CentOS server running MongoDB

Given that a mongod process will consume quite a bit of the available RAM, is there a method to 'protect' a certain amount of RAM for the CentOS OS's use?
Or is this even necessary from the OS's perspective? I assume that, like most operating systems, CentOS will take what it needs regardless.
I understand that if you are seeing this in practice it's time to scale out/up; this is a purely theoretical question at this point, as I am learning CentOS.

MongoDB doesn't manage memory itself; it delegates that responsibility to the OS. It's explained well in the link below.
MongoDB Memory Management
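If the goal is a hard ceiling on mongod's memory so the OS always keeps some headroom, one common knob on WiredTiger-based versions is the storage engine's cache size. A minimal sketch, assuming a systemd-managed mongod and the stock /etc/mongod.conf shipped with the CentOS packages:
    # Add (or merge into the existing storage: section of) /etc/mongod.conf:
    #   storage:
    #     wiredTiger:
    #       engineConfig:
    #         cacheSizeGB: 1   # leave the rest of RAM to the OS and its filesystem cache
    # then restart mongod so the new ceiling takes effect:
    sudo systemctl restart mongod
    # Verify the limit the running server actually applied (reported in bytes):
    mongo --quiet --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'
Note that this only caps the WiredTiger cache; mongod still uses additional memory for connections and query execution, and the OS will keep using whatever is free as filesystem cache.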

Related

Cassandra and MongoDB minimum system requirements for Windows 10 Pro

RAM: 4GB
Processor: i3-5010U CPU @ 2.10 GHz
64-bit OS
Can Cassandra and MongoDB be installed on such a laptop? Will it run successfully?
The hardware configuration proposed does not meet the minimum requirements. For Cassandra, the documentation requests a minimum of 8GB of RAM and at least 2 cores.
MongoDB's documentation also states that it will need at least 2 real cores or one multi-core physical CPU. With 4GB of RAM, WiredTiger will allocate 1.5GB for its cache. Please also note that MongoDB will require BIOS changes to memory interleaving on Non-Uniform Memory Access (NUMA) hardware; such changes will impact the laptop's performance for other processes.
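For context on that 1.5GB figure: by default WiredTiger sizes its cache as the larger of 50% of (RAM - 1 GB) and 256 MB, so 4GB of RAM works out to 0.5 × 3 GB = 1.5 GB. A quick sketch of the arithmetic (the value 4 below is just the laptop's RAM in GB):
    # Default WiredTiger cache = max(0.5 * (RAM - 1 GB), 256 MB)
    awk 'BEGIN { ram=4; c=0.5*(ram-1); if (c<0.25) c=0.25;
                 printf "With %d GB RAM the default cache is %.2f GB\n", ram, c }'
    # Or compute it from the RAM actually installed on the current box:
    awk '/MemTotal/ { r=$2/1024/1024; c=0.5*(r-1); if (c<0.25) c=0.25;
                      printf "%.1f GB RAM -> %.2f GB default cache\n", r, c }' /proc/meminfo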
Will it run successfully?
This will depend on the workload you expect to run; there are documented examples where Cassandra was installed on a Raspberry Pi array, which by design was expected to have slow performance and to hold only a limited amount of data in the cluster.
If you are looking for a small sandbox to start using these databases, there are other options. MongoDB has a service named Atlas, a database-as-a-service offering with a free tier for a 3-node replica set and up to 512MB of storage. For Cassandra there are similar options: AWS offers a small cluster of their Managed Cassandra Service (MCS) in the free tier, and Datastax is also planning to offer similar services with Constellation.

Setting up the optimal number of processors/cores per processor virtual machine (VMware)

I was looking for an answer but didn't find one.
I'm trying to create a new VM to develop a web application. What would be the optimal processor settings?
I have i7 (6th gen) with hyperthreading.
Host OS: Windows 10. Guest OS: CentOS.
Off topic: should the RAM I give to the VM be 50% of my memory? Would that be OK? (I have 16GB RAM)
Thanks!
This is referred to as 'right-sizing' a VM, and it depends on the application workload that will run inside it. Ideally, you want to provide the VM with the minimum amount of resources the app requires to run correctly. "Correctly" is subjective, based upon your expectations.
Inside your VM (CentOS) you can run top to see how much memory and CPU % is being used. You can also install htop, which you may find friendlier than top.
RAM
If you see a low % of RAM being used, you can probably reduce what you're giving the VM. If you are seeing any swap memory used (paging to disk), you may want to increase the RAM. Start with 2GB and see how the app behaves.
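Inside the CentOS guest, the stock tools are enough to see whether RAM is actually tight; for example:
    # Overall memory; the "available" column is what can still be handed out without swapping
    free -h
    # The si/so columns are pages swapped in/out per second; sustained non-zero values under
    # load mean the guest is paging and could use more RAM
    vmstat 5 3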
CPU
You may want to start with no more than 2 vCPUs, check top to see how utilized the application is under load, and then make an assessment for more or fewer vCPUs.
The way a hosted hypervisor (VMware Workstation) handles guest CPU usage is through a CPU scheduler. When you give a VM x vCPUs, the VM needs to wait until that many cores are free on the host CPU to do 'work'. The more vCPUs you give it, the harder (slower) it is to schedule. It's more complicated than this, but I'm trying to keep it high level. CPU scheduling deep dive.
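To gauge whether the guest really needs more vCPUs, a few quick checks from inside the VM (again, just stock tools):
    # Interactive per-process CPU view; press '1' inside top to see each vCPU separately
    top
    # How many vCPUs the guest currently sees
    nproc
    # Load average vs. vCPU count: a load consistently above the core count suggests CPU
    # pressure; a load well below it suggests you can hand vCPUs back
    uptime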

Docker instead of multiple VMs

So we have around 8 VMs running on a server with 32 GB of RAM and 8 physical cores. Six of them each run a mail server (Zimbra), and two of them run multiple web applications. The load on the server is very high, primarily because of the heavy load on each of the VMs.
We recently came across Docker. It seems like a cool idea to create containers of applications. Do you think it's viable to run the applications from each of these VMs inside 8 Docker containers? Currently the server is heavily utilized because multiple VMs have serious I/O issues.
Or can Docker only be utilized in cases where we are running web applications, and not email or any other infra apps? Please advise...
Docker will certainly alleviate your server's CPU load, since it removes the hypervisor's overhead in that respect.
Regarding I/O, my tests revealed that Docker has its own I/O overhead, due to how AUFS (or, more recently, device mapper) works. On that front you will still gain some benefit over the hypervisor's I/O overhead, but not bare-metal I/O performance. My observations, for my own needs, indicated that Docker was not "bare-metal like" when dealing with I/O-intensive services.
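If you do experiment with it, it is worth checking which storage driver your Docker host uses, since the I/O overhead mentioned above differs between AUFS, device mapper and the overlay drivers, and you can still cap each container much like you would size a VM. A rough sketch; the image name and the limits here are placeholders, not recommendations:
    # Which storage driver this host is running (aufs, devicemapper, overlay2, ...)
    docker info | grep -i 'storage driver'
    # Start a container with explicit memory and CPU ceilings, similar in spirit to sizing a VM
    # ('zimbra-mail:latest' is a placeholder image name; --cpus needs a reasonably recent
    # Docker, older releases use --cpu-shares instead)
    docker run -d --name mail1 --memory=4g --cpus=2 zimbra-mail:latest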
Have you thought about adding more RAM, 64GB or more? For a large Zimbra deployment, 4GB per VM may not be enough. Zimbra, like all messaging and collaboration systems, is an I/O-bound application.
Having zmdiaglog (/opt/zimbra/libexec/zmdiaglog) data to see if you are allocating memory correctly would help, as described here:
http://wiki.zimbra.com/wiki/Performance_Tuning_Guidelines_for_Large_Deployments#Memory_Allocation

Are there major downsides to running MongoDB locally on VPS vs on MongoLab?

I have an account with MongoLab for MongoDB, and the constant calls to this remote server from my app slow it down quite a lot. When I run the app locally on my computer against a local mongod instance it's far, far faster, as would be expected.
When I deploy my app (running on Node/Express) it will run from a VPS on CentOS. I have plenty of storage space available on my VPS; are there any major downsides to running MongoDB locally rather than remotely on MongoLab?
Specs of the VPS:
1024MB RAM
1024MB VSwap
4 CPU Cores @ 3.3GHz+
60GB SSD space
1Gbps Port
3000GB Bandwidth
Nothing apart from the obvious:
You will have to worry about storage yourself. MongoDB does tend to take a lot of disk space, and upgrading storage will probably be harder to manage than letting MongoLab take care of it.
You will have to worry about making sure the Mongo server doesn't crash and that it keeps running fine.
You will have scaling issues in the future once the load on your application and your database increases.
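For the first two points, a couple of hedged one-liners, assuming a systemd-based CentOS and the default dbPath used by the MongoDB RPM packages:
    # Make sure mongod starts at boot and is currently running
    sudo systemctl enable mongod
    sudo systemctl start mongod
    systemctl status mongod
    # Keep an eye on how much disk the data files consume (default dbPath for the RPM packages)
    sudo du -sh /var/lib/mongo
    df -h /var/lib/mongo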
Using a "database-as-a-service" like MongoLab frees you from worrying about a lot of hardware/OS/system-level requirements and configuration. Memory optimization? Which file system? Connection limits? Virtualization and I/O ops issues? (thanks to Nikolay for pointing that one out)
If your VPS provider doesn't count local traffic against your bandwidth, then you can set up another VPS for MongoDB. That way the server will be closer, so requests will be faster, and it will also have the benefit of being an independent server. It still won't be fully managed like MongoLab, though.
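If you go the separate-VPS route, note that a packaged mongod typically listens only on localhost, so you would bind it to the private interface and open the port. A minimal sketch for CentOS with firewalld, where 10.0.0.5 stands in for whatever private address your provider assigns the database VPS:
    # In /etc/mongod.conf, bind mongod to localhost plus the private interface, e.g.:
    #   net:
    #     bindIp: 127.0.0.1,10.0.0.5
    # (enable authentication before exposing the port beyond localhost), then:
    sudo systemctl restart mongod
    sudo firewall-cmd --permanent --add-port=27017/tcp
    sudo firewall-cmd --reload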
[ Edit: As Chris pointed out, MongoLab also helps you with your database schema design and bundles MongoDB support with their plans, so that's also nice. ]
Also, this is a good question, but probably not appropriate for StackOverflow. dba.stackexchange.com and serverfault.com are good places for this question.

mongodb install - requirements?

Anyone know how much disk space and RAM a standard MongoDB install on Ubuntu needs? Trying to map out my VPS needs...
There are no minimum requirements as such, but I wouldn't recommend running Mongo on the same box as your web server.
MongoDB automatically uses all free memory on the machine as its cache
(http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram)
This, along with the reasons mentioned in this answer (albeit for an RDBMS), makes running MongoDB on a separate box (even a small one) a much better idea.
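To see how much of the box's memory mongod is actually holding at any given moment, two quick checks (assuming the mongo shell is on the PATH):
    # Resident memory of the mongod process, converted from kB to MB
    ps -o rss=,comm= -C mongod | awk '{ printf "%s resident: %.0f MB\n", $2, $1/1024 }'
    # MongoDB's own report of its resident/virtual memory (values are in MB)
    mongo --quiet --eval 'printjson(db.serverStatus().mem)'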