How to monitor a MongoDB cluster locally

I am looking for a free solution to monitor some key stats for my MongoDB cluster:
1) Number of reads/writes happening
2) Replication lag
3) Memory status
4) Time taken by reads/writes
etc.
After spending a couple of days on this, I know the commands I can use to get these stats.
There are also a lot of hosted options which directly solve the above problem,
e.g. the new free cloud monitoring: https://docs.mongodb.com/manual/administration/free-monitoring/
Since I am not using the Enterprise edition, I am unable to find any good monitoring tool that can be set up locally. I can't share the data over the network, so I need suggestions on how to set up local monitoring for these stats.
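For reference, here is a minimal sketch of pulling those numbers locally with pymongo via the serverStatus and replSetGetStatus commands (the connection string and the idea of polling from a cron job are assumptions to adapt, not a finished tool):

```python
# Minimal local stats poller; assumes a mongod/mongos reachable on localhost.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string

status = client.admin.command("serverStatus")
print("opcounters (reads/writes):", status["opcounters"])   # insert/query/update/delete counters
print("memory (MB):", status["mem"])                         # resident/virtual memory
print("op latencies:", status.get("opLatencies"))            # read/write latency totals (MongoDB 3.4+)

# Replication lag: difference between the primary's optimeDate and each secondary's.
repl = client.admin.command("replSetGetStatus")
primary_optime = next(m["optimeDate"] for m in repl["members"] if m["stateStr"] == "PRIMARY")
for m in repl["members"]:
    if m["stateStr"] == "SECONDARY":
        lag = (primary_optime - m["optimeDate"]).total_seconds()
        print(f"{m['name']} replication lag: {lag:.0f}s")
```

The bundled mongostat and mongotop command-line tools expose much of the same information if you prefer not to script it, and both run entirely against the local deployment.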

Related

MongoDB Atlas - Replica Set Has No Primary

I'm fairly new to MongoDB (Atlas - free tier), where I have created a project using it for storing my data. I had it set up and working fine for a couple of weeks, when suddenly I received an email with: An alert is open for your Atlas project: Replica set has no primary. I have no idea what this means and I don't believe I have done anything in the last couple of days/weeks that could warrant this alert. However, after checking my project, it seems that I can no longer connect to my cluster and access my data.
After checking on MongoDB Cloud, it seems that my cluster has stopped working and only the secondary shard (don't know if this is the right terminology) is running, while the other two seem to be down. Can anyone explain what this means, why it is happening or how to fix it? Thanks.
To troubleshoot issues like this, read the server logs and act based on the information therein.
For free and shared tiers in Atlas the logs are apparently not available. Therefore:
For a free tier cluster (M0), delete this cluster and create a new one. If you don't have a backup, you should be able to dump the data via a direct connection to any of the operational secondary nodes or by using a secondary read preference (see the sketch below).
For a shared tier cluster (M2/M5), use the official MongoDB support channels for assistance.
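For the M0 case above, a rough sketch of exporting data with a secondary read preference using pymongo (the URI, database name, and JSON output are placeholder assumptions; mongodump with readPreference=secondaryPreferred in its connection string achieves the same thing):

```python
# Export collections from a replica set that has lost its primary,
# reading from whichever secondary is still up.
import json
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net/",  # placeholder URI
    readPreference="secondaryPreferred",  # allow reads even when no primary exists
)

db = client["mydb"]  # placeholder database name
for name in db.list_collection_names():
    docs = list(db[name].find())
    with open(f"{name}.json", "w") as f:
        json.dump(docs, f, default=str)  # crude per-collection export
```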

MongoDB clone to another cluster

The idea here is: I have a MongoDB cluster deployed in the managed cloud service Atlas, and I have enabled Continuous Backup.
Now what I want to do is:
1) I want to use the existing backup.
2) Using this existing backup, I want to create a similar cluster (having the same data as the backup).
3) I want to automate this process so that every day my new cluster is brought up to date from the original cluster.
Note: the reason for cloning the cluster is that the original cluster holds production data. I want to create a DB with the same data that I can plug into any analytics tool and run different operations on, without affecting production data and load.
So far what I have found is to use mongodump and mongorestore. But mongodump puts load on the production DB even though my backup is enabled. I want to use the same backup to clone the data to another DB cluster.
Since you are deployed on Atlas, your cluster must be a replica set.
Here are two solutions:
If you only need to read data: connect your tools to a secondary node (ideally a dedicated one with priority 0, so it never becomes primary).
If you need to read/write data: against that same node, run your mongodump command with the --oplog option. That way you're dumping your data from a read-only node and avoid slowing down your main servers (see the sketch below).
In the latter case, what you need falls under backup strategies; take a look at the documentation to learn more.
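A rough sketch of what that dump step could look like when wrapped in a small Python script for daily scheduling (host name, credentials, and paths are placeholders, not values from the question):

```python
# Illustrative only: run mongodump with --oplog against a low-priority secondary.
import subprocess

subprocess.run(
    [
        "mongodump",
        "--host", "secondary-0.example.internal:27017",   # the priority-0 secondary
        "--username", "backupUser", "--password", "<password>",  # placeholder credentials
        "--oplog",                                         # capture oplog entries for a consistent snapshot
        "--out", "/backups/clone-latest",
    ],
    check=True,
)
# The dump can then be loaded into the analytics cluster with
# `mongorestore --oplogReplay /backups/clone-latest`, scheduled daily.
```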
There's an offering for this purpose in Atlas called an analytics node (link).
An analytics node is a read replica of your database, and it will not interfere with your production traffic, which makes it safer.
Also, you can connect BI connectors to this node and build your analytics platform.
We used Redash.
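As a sketch of how a reporting job can be pointed at the analytics node (the URI is a placeholder, and the nodeType:ANALYTICS read-preference tag is my reading of how Atlas labels these nodes; check your cluster's connection options):

```python
# Route heavy aggregations to the Atlas analytics node, never the primary.
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net/"  # placeholder URI
    "?readPreference=secondary"
    "&readPreferenceTags=nodeType:ANALYTICS"  # assumed Atlas tag for analytics nodes
)

pipeline = [{"$group": {"_id": "$status", "count": {"$sum": 1}}}]
for row in client["mydb"]["orders"].aggregate(pipeline):  # placeholder db/collection
    print(row)
```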

MongoDB Atlas or AWS - internal or external connection

I am currently working on my next project, which runs 100% on MongoDB.
My past projects ran on SQL + MongoDB, for which I used AWS RDS + AWS EC2 and could connect them both over AWS internal IPs, which gave me a much faster connection.
Now for MongoDB there are a lot of fancy cloud services like mLab and MongoDB Atlas, which are actually cheaper than AWS.
My concern is that moving to an external DB connection will be slower and consume more network bandwidth than the internal connection to RDS.
Has anyone experienced such an issue? Maybe the difference isn't as big as I make it out to be, but I need it to be optimized.
This depends on your setup. Many of the "fancy" services also host stuff on AWS, so latency is minimal. Some even offer "private environments" or such, so you can hide your databases from public view.
The only thing left to care about is the amount of network traffic. But this will be your problem regardless of your database host. You can test this relatively easily (e.g. get a trial from one of the providers and test for throughput, or spin up your own MongoDB Docker cluster to use as a test) just to get an idea of the performance range you'll be in; a rough latency probe is sketched below.
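For example, a quick-and-dirty round-trip probe with pymongo might look like this (both URIs are placeholders; run it from the EC2 instance that will host your application):

```python
# Compare average round-trip time to an internal DB vs. a hosted cluster.
import time
from pymongo import MongoClient

def probe(uri, label, rounds=20):
    client = MongoClient(uri, serverSelectionTimeoutMS=5000)
    client.admin.command("ping")               # force the initial connection
    start = time.perf_counter()
    for _ in range(rounds):
        client.admin.command("ping")
    avg_ms = (time.perf_counter() - start) / rounds * 1000
    print(f"{label}: {avg_ms:.1f} ms average round trip")

probe("mongodb://10.0.0.12:27017", "EC2-internal")                       # placeholder
probe("mongodb+srv://user:pass@cluster0.example.mongodb.net/", "Atlas")  # placeholder
```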

Tool to create a MongoDB sharded cluster

I need a tool to manage a cluster of MongoDB instances. With an increasing number of machines, it is hard to maintain each machine without a tool.
More details:
The database grows by around 50 MB per day, so roughly 1.5 GB per month. MongoDB is great for this because just adding a machine to the cluster solves the size problem. The problem is that this change requires logging into the host and making the configuration changes manually. I'd like to save the team's time with a tool that allows remote execution, for example running and saving scripts or similar.
You can use Juju to create a MongoDB cluster:
https://github.com/charms/mongodb
http://www.youtube.com/playlist?list=PLyZVZdGTRf8g-5E7ppGGpxrraStyi493V
http://www.jorgecastro.org/2014/03/17/introducing-juju-bundles/
You need a provisioning tool like Vagrant, or an SSH wrapper like Fabric (a small Fabric sketch follows).
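As an illustration of the Fabric route, a small sketch (host names, user, and the commands being run are placeholders) that executes the same maintenance commands on every node instead of logging in machine by machine:

```python
# Run the same check/maintenance command on every cluster node over SSH (Fabric 2).
from fabric import Connection

HOSTS = ["mongo-01.example.com", "mongo-02.example.com", "mongo-03.example.com"]

for host in HOSTS:
    conn = Connection(host, user="deploy")  # assumes SSH key auth is already set up
    conn.run("df -h /data && systemctl status mongod --no-pager", warn=True)
```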
MongoDB Cloud Manager has a Public API that you can use for cluster deployment automation.
Here's the link to the (very well-written) official docs.
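A sketch of what driving that API from Python might look like; the automationConfig endpoint and digest-auth scheme here are my reading of the public docs rather than something from this answer, so verify them against the current API reference:

```python
# Read (and optionally push back) the Cloud Manager automation configuration.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/public/v1.0"   # assumed base URL
GROUP_ID = "<project-id>"                            # placeholder project/group ID
auth = HTTPDigestAuth("<api-user>", "<api-key>")     # placeholder API credentials

# Fetch the current automation configuration for the project...
config = requests.get(f"{BASE}/groups/{GROUP_ID}/automationConfig", auth=auth).json()
print(config)

# ...modify it (e.g. add a shard or replica set member) and push it back:
# requests.put(f"{BASE}/groups/{GROUP_ID}/automationConfig", auth=auth, json=config)
```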

Deploy Zend Application to the cloud

I was wondering if anyone out there has any experience in deploying a Zend community app to the cloud (e.g. AWS or similar)?
I'm new to cloud hosting, having always been fortunate enough in the past to work for folks who have dedicated servers. My main concern (non-Zend specific) is how you manage resilience at the database level. For example, in a traditional setup I would have two boxes running the DB (MySQL) in master/slave mode, with the master replicating to the slave. Given any HD failure on the master, I could swap the DB connection over from the master to the slave and rebuild the master at a later point. Is this done differently in the cloud?
Any help/pointers greatly appreciated!
It depends on the type of cloud service that you use. If you're using AWS to get your own virtual machine (Amazon EC2), then it's basically the same as having a dedicated server, and you can keep a master/slave setup and work it in much the same way.
However, if you plan on using Amazon's cloud database service (Amazon SimpleDB), then you don't have to worry about masters and slaves, since Amazon does this for you and makes sure that you always have access to your data. The only thing is that it's in beta.
One of the points of the cloud is to take your mind off the hardware. Amazon worries about that.
You might still want to have two virtual machines in case Amazon is doing maintenance that might cause your VM to become unavailable. However, Amazon stresses that it will be highly available and essentially never go down, as long as you pay.