MongoDB Performance in Docker

I ran an experiment with a Python app that writes 2,000 records into MongoDB.
The details of my experimental setup are as follows:
Test 1: Local PC - Python app running on the local PC with MongoDB on the local PC (baseline)
Test 2: Docker - Python app in a Linux container with MongoDB in a Linux container with a persistent volume
Test 3: Docker - Python app in a Linux container with MongoDB in a Linux container without a persistent volume
I've plotted the results in a chart: on average, writing the data on the local PC takes about 30 seconds, whereas on Docker it takes 80-plus seconds. So writing in Docker appears to be almost three times slower than writing on the local PC itself.
If I want to improve the write speed or performance of MongoDB in a Docker container, what is the recommended practice? Or should I run MongoDB outside Docker on an external volume?
Thank you!
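For reference, here is a minimal sketch of the kind of test described above; pymongo and the database/collection names are assumptions, and the original app may differ:

```python
# Minimal sketch of the test described above: insert 2,000 records one by one
# and time the whole run. Assumes pymongo; names below are placeholders.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.testdb.records

start = time.perf_counter()
for i in range(2000):
    coll.insert_one({"index": i, "payload": "some record data"})
print(f"elapsed: {time.perf_counter() - start:.1f} s")
```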

Your system is not consistent in many ways: dynamic storage and CPU performance, other processes, dynamic system settings, etc. There is a LOT going on underneath the storage layer alone.
60-second tests are not enough for anything.
Simple operations are not a good enough baseline for comparisons.
There is ZERO performance impact on storage and CPU in the case of containers; there is an impact on networking, but I assume that is not applicable here.
Databases and database management systems must be tuned in specific ways; there is no "install and run" approach. We sysadmins/DB admins usually need days to get one running smoothly. Also, performance changes over time.
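Along the lines of this answer, a less trivial baseline would repeat the run several times and use batched writes rather than single-document inserts. A rough sketch, assuming pymongo and a throwaway "benchmark" database, to be run against each environment:

```python
# A slightly more robust comparison than one short run: several repetitions,
# batched inserts, and the median reported. Assumes pymongo; the "benchmark"
# database name is a placeholder.
import statistics
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.benchmark.writes

def timed_runs(runs=5, docs=10_000):
    results = []
    for _ in range(runs):
        coll.drop()  # start each run from an empty collection
        batch = [{"n": i, "payload": "x" * 100} for i in range(docs)]
        start = time.perf_counter()
        coll.insert_many(batch)  # batched round trips instead of one-by-one
        results.append(time.perf_counter() - start)
    return results

runs = timed_runs()
print(f"median {statistics.median(runs):.2f}s over {len(runs)} runs")
```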

After a couple of weeks of testing and troubleshooting, I finally got the answer, and I shall share my findings with the rest of the DevOps community or anyone facing the same issue as me.
Correct this statement if needed: Docker containers started off on Linux; Microsoft joined the container bandwagon late, and in order for (Linux) containers to work on Windows, the DevOps team needs to install WSL2. That costs extra overhead, which shows up in the processing speed.
So to improve performance with containers, the setup should be on a Linux OS instead of Windows. (And yes, the run time dropped drastically.)

Related

Why is MongoDB very slow on update requests when running using docker compose?

I'm running a single-instance MongoDB node using docker-compose.
I'm trying to update 40k documents.
I tried running this on three AWS instance sizes: 2x4 (CPU/mem), 4x8, and 16x32.
It seems that even after adding more CPU, the task doesn't run faster.
Also, top shows that MongoDB's CPU is always at 100% during this task.
Any way to improve this?
Use iostat to determine what the CPU time is spent on.
If it is spent on iowait, your system is I/O-bound and adding more CPU won't help; you need faster disks.
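If you prefer to check this from Python rather than watching iostat, a rough stand-in is sampling CPU times with psutil; this assumes psutil is installed and a Linux host, where the iowait field is reported:

```python
# Rough Python stand-in for "watch iostat while the update runs".
# Assumes psutil is installed and the host is Linux (iowait is Linux-only).
import psutil

for _ in range(10):
    t = psutil.cpu_times_percent(interval=1)  # sample CPU usage over 1 second
    print(f"user={t.user:5.1f}%  system={t.system:5.1f}%  iowait={t.iowait:5.1f}%")
# If iowait dominates, the workload is I/O-bound and faster disks (not more CPU) will help.
```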

how can we run multiple mongodb versions on same system?

How can I run Mongo 2.6 and Mongo 3.0 at the same time on my Linux system?
I need this because I have two projects: one works on Mongo 2.6 and one works on Mongo 3.0.
Any help is appreciated !!
If you really want to do this, and I recommend you DO NOT, then the best way is to use a container technology; Docker can work quite well, as can LXC or one of the others.
However, do note that if your server is in the "cloud" there is a high chance it is already virtualised, so you will constantly be losing power and resources by sub-virtualising everything over and over.
DO NOT put them on the same host without some kind of separation; there is a high chance of resource contention and conflict that tools like ulimit just cannot solve (well, technically they could, but it would be a hairball).
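As a concrete illustration of the container route, here is a sketch using the Docker SDK for Python. The docker package, a running daemon, and the availability of the mongo:2.6 and mongo:3.0 image tags are assumptions, not something from the original answer:

```python
# Sketch: run two MongoDB versions side by side in separate containers.
# Assumes the "docker" Python package, a running Docker daemon, and that the
# mongo:2.6 / mongo:3.0 image tags can still be pulled.
import docker

client = docker.from_env()

# Each version gets its own container, named data volume, and host port,
# so the two mongod processes never share files or a listening port.
client.containers.run("mongo:2.6", detach=True, name="mongo26",
                      ports={"27017/tcp": 27017},
                      volumes={"mongo26-data": {"bind": "/data/db", "mode": "rw"}})
client.containers.run("mongo:3.0", detach=True, name="mongo30",
                      ports={"27017/tcp": 27018},
                      volumes={"mongo30-data": {"bind": "/data/db", "mode": "rw"}})
```

Each project then connects to its own port (27017 for 2.6, 27018 for 3.0), which keeps the separation the answer insists on.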

Are there major downsides to running MongoDB locally on VPS vs on MongoLab?

I have an account with MongoLab for MongoDB, and the constant calls to this remote server from my app slow it down quite a lot. When I run the app locally on my computer with a local mongod instance it's far, far faster, as would be expected.
When I deploy my app (running on Node/Express) it will be run from a VPS on CentOS. I have plenty of storage space available on my VPS; are there any major downsides to running MongoDB locally rather than remotely on MongoLab?
Specs of the VPS:
1024MB RAM
1024MB VSwap
4 CPU Cores @ 3.3GHz+
60GB SSD space
1Gbps Port
3000GB Bandwidth
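To quantify how much of the slowdown is round-trip latency to the remote server, one option is to time a trivial command against both deployments. A sketch assuming pymongo, with placeholder connection strings:

```python
# Compare round-trip latency to a local mongod and to the remote deployment.
# Assumes pymongo; both connection strings are placeholders.
import time
from pymongo import MongoClient

def ping_ms(uri, samples=20):
    client = MongoClient(uri)
    client.admin.command("ping")  # warm up the connection first
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        client.admin.command("ping")  # minimal round trip to the server
        times.append((time.perf_counter() - start) * 1000)
    return sum(times) / len(times)

print("local :", ping_ms("mongodb://localhost:27017"), "ms")
print("remote:", ping_ms("mongodb://<mongolab-host>:27017"), "ms")
```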
Nothing apart from the obvious:
You will have to worry about storage yourself. MongoDB does tend to take a lot of disk space, and upgrading storage will probably be harder to manage than letting MongoLab take care of it.
You will have to worry about making sure the Mongo server doesn't crash and is running fine.
You will have scaling issues in the future once the load on your application and your database increases.
Using a "database-as-a-service" like Mongolab frees you from worrying about a lot of hardware/OS/system level requirements and configuration. Memory optimization? Which file system? Connection limits? Visualization and IO ops issues? (thanks to Nikolay for pointing that one out)
If your VPS provider doesn't account for local traffic in your bandwidth, then you can set up another VPS for MongoDB. That way, the server will be closer so the requests will be faster, and also, it will have the benefits of being an independent server. It still won't be fully managed like MongoLab though.
[ Edit: As Chris pointed out, MongoLab also helps you with your database schema design and bundles MongoDB support with their plans, so that's also nice. ]
Also, this is a good question, but probably not appropriate for StackOverflow. dba.stackexchange.com and serverfault.com are good places for this question.

mongodb install - requirements?

Does anyone know how much disk space and RAM a standard Ubuntu install of Mongo needs? Trying to map out my VPS needs ...
There are no minimum requirements as such, but I wouldn't recommend running Mongo on the same box as your web server.
MongoDB automatically uses all free memory on the machine as its cache
(http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram)
This, along with the reasons mentioned in this answer (albeit for an RDBMS), makes running MongoDB on a separate box (even a small one) a much better idea.
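To see how much memory a running mongod is actually occupying when sizing a VPS, a small check (assuming pymongo and a mongod on the default local port) is to read the mem section of serverStatus:

```python
# Check how much memory a running mongod currently occupies.
# Assumes pymongo is installed and mongod is listening on the default local port.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
mem = client.admin.command("serverStatus")["mem"]
print(f"resident: {mem['resident']} MB, virtual: {mem['virtual']} MB")
```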

Running JIRA on a VM

Anyone have any success or failure running Jira on a VM?
I am setting up a new source control and defect tracking server. My server room is near full and my services group suggested a VM. I saw that a bunch of people are running SVN on VM (including NCSA). The VM would also free me from hardware problems and give me high availability. Finally, it frees me from some red tape and it can be implemented faster.
So, does anyone know of any reason why I shouldn't put Jira on a VM?
Thanks
We just did the research on this; here is what we found:
If you are planning to have a small number of projects (10-20) with 1,000 to 5,000 issues in total and about 100-200 users, a recent server (2.8+GHz CPU) with 256-512MB of available RAM should cater for your needs.
If you are planning for a greater number of issues and users, adding more memory will help. We have reports that allocating 1GB of RAM to JIRA is sufficient for 100,000 issues.
For reference, Atlassian's JIRA site (http://jira.atlassian.com/) has over 33,000 issues and over 30,000 user accounts. The system runs on a 64bit Quad processor. The server has 4 GB of memory with 1 GB dedicated to JIRA.
For our installation (<10,000 issues, <20 concurrent sessions at a time) we use very little server resources (<1GB RAM; running on a quad-core processor we typically use <5% with <30% peak), and the VM didn't impact performance by any measurable amount.
I don't see why you shouldn't run jira off a vm - but jira needs a good amount of resources, and if your vm resides on a heavily loaded machine, it may exhibit poor performance. Why not log a support request (support.atlassian.com) and ask?
We run Jira on a virtual machine - VMWare running Windows Server 2003 SE and storing data on our SQL Server 2000 server. No problems, works well.
My company moved our JIRA instance from a hosted physical server to an Amazon EC2 instance recently, and everything is holding up pretty well. We're using an m1.large instance (64-bit o/s with 4 virtual cores and 8GB RAM), but that's way more than we need just for JIRA; we're also hosting Confluence and our corporate Web site on the same EC2 instance.
Note that we are a relatively small outfit; our JIRA instance has 25 users (with maybe 15 of them active) and about 1000 JIRA issues so far.
We run our JIRA (and other Atlassian apps) instance on Linux-based VM instances. Everything runs very nicely.
Disk access speed with JIRA on VM...
http://confluence.atlassian.com/display/JIRA/Testing+Disk+Access+Speed
I'm wondering if the person who is using JIRA on a VM (Chris Latta) is running ESX underneath; that may be faster than a Windows host.
I have managed to run Jira, Bamboo, and FishEye on a set of virtual machines all hosted on the same server, although I would not recommend this setup for production in most shops. Jira has fairly low requirements by today's standards. Just be sure you can allocate enough resources from your host machine and things should run fine.
If, by VM, you mean a virtual instance of an OS, such as an instance of linux running on Xen, VMWare, or even Amazon EC2, then Jira will run just fine. The only time you need to worry about virtual systems is if you're doing something that depends on hardware, such as running graphical 3D apps, or say something that uses a fax modem or a Digium telephony card with Asterisk.