MongoDB Atlas - Replica Set Has No Primary

I'm fairly new to MongoDB (Atlas, free tier), and I have created a project that uses it for storing my data. I had it set up and working fine for a couple of weeks when I suddenly received an email saying: "An alert is open for your Atlas project: Replica set has no primary." I have no idea what this means, and I don't believe I have done anything in the last couple of days or weeks that could warrant this alert. However, after checking my project, it seems that I can no longer connect to my cluster and access my data.
After checking on MongoDB Cloud, it seems that my cluster has stopped working and only the secondary node (don't know if this is the right terminology) is running, while the other two seem to be down. Can anyone explain what this means, why it is happening, or how to fix it? Thanks.

To troubleshoot issues like this, read the server logs and act based on the information therein.
For free and shared tiers in Atlas the logs are apparently not available. Therefore:
For a free tier cluster (M0), delete the cluster and create a new one. If you don't have a backup, you should still be able to dump your data via a direct connection to any of the operational secondary nodes, or by using the secondary read preference (see the sketch after these options).
For a shared tier cluster (M2/M5), use the official MongoDB support channels for assistance.
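If you go the dump route, here is a minimal Java sketch (matching the driver used elsewhere on this page) of reading data through a secondary while the set has no primary. The connection string, database, and collection names are placeholders for your own; the mongodump tool accepts the same readPreference=secondary option in its connection string.

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.ReadPreference;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class SecondaryRead {
    public static void main(String[] args) {
        // Placeholder Atlas connection string; substitute your own cluster address.
        ConnectionString uri = new ConnectionString(
                "mongodb+srv://user:password@cluster0.example.mongodb.net/test");

        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(uri)
                // Read from a secondary, which still works while there is no primary.
                .readPreference(ReadPreference.secondary())
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            MongoCollection<Document> coll =
                    client.getDatabase("test").getCollection("mycollection");
            // Copy the documents out while the cluster is degraded.
            for (Document doc : coll.find()) {
                System.out.println(doc.toJson());
            }
        }
    }
}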

Related

Mongodb localhost:27017 replica set switched to secondary localhost:27027

I'm fairly new to mongodb, just a couple of months. I just converted my mongodb database to a replica set so I can watch collections. I only added one secondary, which I'm guessing now may not be the best idea after reading that you should create an odd number of members, but it is all on localhost on one machine. I went through the instructions and had replication working fine for half a day while running my programs. But at some point it switched the member on port 27017 from primary to secondary. The primary was previously on localhost:27017 and the secondary on localhost:27027. Now my normal program can't connect to localhost:27017 without an error, which I believe is because that member is now a secondary when it was the primary before (assuming you can only write to a primary). Here is the error message:
Exception in thread "main" com.mongodb.MongoNotPrimaryException: Command failed with error 10107 (NotWritablePrimary): 'not master' on server localhost:27017.
My questions:
1. I'm perplexed as to why mongodb switched the replica set's primary in the first place. I doubt an error occurred (though it's certainly possible), as I haven't had a single "localhost" error in months of development.
2. For now, how would I ideally switch 27017 back to being the primary, so that my existing programs can function again?
3. Eventually, when in production, what is the best methodology to handle this, assuming the application looks up a DNS entry to an IP address and suddenly the primary changes because of a failover?
4. Given that question 3 is a bit more involved, is there something I can do in my development environment to better simulate a production environment?
I use StackOverflow extensively, but this is my first post, so thanks to anyone who can provide advice.
Without knowing more about the replica set configuration and the circumstances of the switchover, I'm not sure anyone could confidently answer question 1, but it may not be important compared to question 3.
When you want to manually switch the primary you can manipulate the priority settings:
https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/
Or run manual commands to freeze or step down the current primary:
https://docs.mongodb.com/manual/tutorial/force-member-to-be-primary/#force-a-member-to-be-primary-using-database-commands
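For the database-command route, here is a minimal sketch using the Java driver this thread already uses; rs.stepDown(60) in the mongo shell issues the same replSetStepDown command. Run it against the current primary (localhost:27027, per the question):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class StepDownPrimary {
    public static void main(String[] args) {
        // Connect directly to the current primary (localhost:27027 in this question).
        try (MongoClient client = MongoClients.create("mongodb://localhost:27027")) {
            // Ask the primary to step down and stay ineligible for 60 seconds,
            // giving the member on 27017 a chance to win the election
            // (it needs an equal or higher priority for that to happen).
            client.getDatabase("admin")
                  .runCommand(new Document("replSetStepDown", 60));
        } catch (Exception e) {
            // On older servers the primary closes all connections when it steps
            // down, so the driver may throw here even though the command worked.
            System.out.println("Step-down requested: " + e.getMessage());
        }
    }
}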
The safest option is to ensure your application is aware of all members of the replica set. Then, when something unexpected happens, the application will fail over to the writable member without any issues.
https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/connect-to-mongodb/#connect-to-a-replica-set
I can only suggest setting up some VMs or containers as replica set members to better represent a production environment.
https://hub.docker.com/_/mongo
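For example, a rough sketch with the official image from the link above; the network, container, and set names are all arbitrary:

# create a network so the members can reach each other by name
docker network create mongo-rs

# start three members of a replica set named rs0
docker run -d --name rs1 --network mongo-rs mongo --replSet rs0
docker run -d --name rs2 --network mongo-rs mongo --replSet rs0
docker run -d --name rs3 --network mongo-rs mongo --replSet rs0

# initiate the set once all three are up
# (newer images ship mongosh instead of mongo)
docker exec rs1 mongo --eval 'rs.initiate({_id: "rs0", members: [
    {_id: 0, host: "rs1:27017"},
    {_id: 1, host: "rs2:27017"},
    {_id: 2, host: "rs3:27017"}]})'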
I was able to solve the problem by using a connection string that includes both replica set members, which I was unaware I needed to do. For example, in Java:
mongoClient = MongoClients.create("mongodb://localhost:27017,localhost:27027");
This also worked for MongoDB Compass, so I was able to connect to the secondary database. I didn't know you needed to provide the addresses of all the replica set members when connecting, but in retrospect it makes good sense in case something is down.
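A slightly fuller form also names the replica set, which lets the driver discover members and reject hosts that belong to a different set; "rs0" below is a placeholder for whatever your set is actually called:

mongoClient = MongoClients.create("mongodb://localhost:27017,localhost:27027/?replicaSet=rs0");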
If you need a replica set for testing, you can create a single-node replica set: follow the instructions for creating a replica set, but only add one node. For example:
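# start a standalone mongod as the sole member of a set named rs0
# (the dbpath is an example; point it at any empty directory)
mongod --replSet rs0 --dbpath /data/rs0 --port 27017

# then, once, from a second terminal:
mongo --port 27017 --eval "rs.initiate()"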

How to monitor a MongoDB cluster locally

I am looking for a free solution to monitor some key stats for my MongoDB cluster:
1) Number of reads/writes happening
2) Replication lag
3) Memory status
4) Time taken by reads/writes
etc.
After spending a couple of days on this, I know the commands I can use to get these stats.
There are also a lot of hosted options that directly solve the above problem, e.g. the new free cloud monitoring: https://docs.mongodb.com/manual/administration/free-monitoring/
Since I am not using the enterprise edition, I am unable to find any good locally installed monitoring tool. I can't share the data over the network, so I need suggestions on how to set up local monitoring for these stats.
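For reference, here is a minimal Java sketch of polling the built-in status commands that cover the stats above; it assumes a plain local connection, and any scheduler or dashboard of your choice can sit on top of output like this:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

public class StatusPoller {
    public static void main(String[] args) {
        // Local cluster member; nothing leaves your network.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            Document server = client.getDatabase("admin")
                    .runCommand(new Document("serverStatus", 1));
            Document repl = client.getDatabase("admin")
                    .runCommand(new Document("replSetGetStatus", 1));

            System.out.println(server.get("opcounters")); // 1) reads/writes per operation type
            System.out.println(repl.get("members"));      // 2) per-member optime, for replication lag
            System.out.println(server.get("mem"));        // 3) memory status
            // 4) per-operation timings come from the profiler:
            //    enable it with db.setProfilingLevel(1, <slow-ms threshold>)
            //    and read the system.profile collection, or the slow-query log lines.
        }
    }
}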

MongoDB: preparing for sharded clusters

We are currently setting up our MongoDB environment for production. At the moment we only have one dedicated MongoDB database server. We will expand this in the near future with a second server, and I have already indicated to management that, ideally, we should get a third server as well.
Since I already know we're going to use sharding and replication in the near future, I want to be prepared for it.
The idea I have now is to start with the Development Configuration (as MongoDB's documentation calls it).
Whenever our second server becomes available, I would like to expand this setup to a configuration with two configuration servers and two shards (replica sets).
And of course, when our third server becomes available, move to the fully functional sharded cluster configuration.
While reading MongoDB's documentation, I was given pause by the note that the Development setup should not be used in production:
MongoDB Development Configuration
Keeping in mind that we will add more servers soon, would it be a bad idea to configure the Development Configuration now, so that we can easily add the second server to the cluster when it becomes available?
After setting up the 'development sharded setup' I found my answer. Of course I'm happy to share it, in case anybody runs into the same questions I did when starting out.
In my case it was OK to start with the development setup until my new servers arrived. It was a temporary situation, and when the new servers arrived I was able to easily expand my replica sets. There are a number of reasons why this isn't advised for production:
To state the obvious: there is no replication yet. Since I was running all shards on one machine, there was a single point of failure. If the machine, or one node, goes down, the cluster stops working.
Now this part is interesting. After I added a second server, I did have primary and secondary nodes: primaries were used for writing and secondaries for reading. That eliminated the lack of replication AND gave my data higher availability. However, I noticed that with 2-member replica sets, if one member went down (even if it was a secondary), the primary stepped down to a secondary as well. This is due to the voting mechanism MongoDB uses: a member can only become or remain primary while it can see a strict majority of the set's votes, and in a 2-member set the lone survivor holds only one vote out of two. See Markus' more detailed answer on this. Since there was then no primary left in the replica set, my cluster stopped functioning. If I were to add an arbiter, I could eliminate this problem as well.
When you have a 3-member replica set, automatic failover kicks in: whenever a node goes down, a new primary is elected automatically and the cluster continues performing as before.
During my tests I also got to a point where one of my mongod.exe instances stopped working with an out-of-memory exception. I was running a cluster with 3 replica sets, meaning every machine had at least 4 mongod.exe processes running (3 for the shard replica sets and one for the config server replica set). Besides having a query that wasn't optimized yet, I also noticed that the WiredTiger storage engine by default can use up to 50% of (RAM minus 1 GB) for its cache, and that is per mongod process. Perhaps it wasn't the best choice to run multiple shard replica sets on one machine, but I was able to eliminate the problem by capping WiredTiger's memory usage.
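For reference, the cap goes into each mongod's configuration file (or the equivalent --wiredTigerCacheSizeGB command-line flag); the 1 GB below is only an example value:

# mongod.conf
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1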
I hope this answer helps anybody who's starting to set up replication and sharding for MongoDB.

Local MongoDB instance with index in remote server

One of our clients has a server running a MongoDB instance, and we have to build an analytical application using the data stored in their MongoDB database, which changes frequently.
The client's requirements are:
That we do not connect to their MongoDB instance directly or run another instance of MongoDB on their server, but instead somehow run our own MongoDB instance on a machine in our office, remotely using their MongoDB database directory with read-only access.
We've suggested deploying a REST application or getting a copy of their database dump, but they did not want that. They just want us to run our own MongoDB instance hooked up to their instance's database directory. Is this even possible?
I've been searching for a solution for the past two days and we have to submit a solution by Monday. I really need some help.
I think this is a normal request, because analytical queries could put too much load on the production server. It is pretty normal to separate production and analytical databases.
The easiest option is to use MongoDB replication. Set up a replica set with the production database instance as the primary and the analytical database instance as a secondary, configured so that it can never become primary.
If it is not possible to use replication (for example, the client doesn't want it, or the servers cannot connect directly to each other), there is another option: you can read the oplog from the remote database and apply its operations to your own instance. This is exactly the low-level mechanism by which replica sets work, but you can drive it manually too. For example, MMS (MongoDB Monitoring Service) Backup uses oplog reading for online backups of MongoDB. A rough sketch of the idea follows below.
Update: mongooplog could be the right tool for applying, in near real time, a replication oplog pulled from a remote server to a local server.
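The sketch below tails the remote oplog with the Java driver used elsewhere on this page; the remote address is a placeholder, authentication is omitted, and applying the operations locally is left as a stub:

import com.mongodb.CursorType;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.BsonTimestamp;
import org.bson.Document;

public class OplogTail {
    public static void main(String[] args) {
        // Placeholder address; the account only needs read access to the
        // "local" database, where the replication oplog lives.
        try (MongoClient remote = MongoClients.create("mongodb://client-host:27017")) {
            MongoCollection<Document> oplog =
                    remote.getDatabase("local").getCollection("oplog.rs");
            BsonTimestamp lastSeen = new BsonTimestamp(0, 0); // persist this between runs
            // A tailable-await cursor blocks and streams new entries as they appear.
            for (Document op : oplog.find(Filters.gt("ts", lastSeen))
                                    .cursorType(CursorType.TailableAwait)) {
                // Each document describes one insert/update/delete;
                // re-apply it against your local instance here.
                System.out.println(op.toJson());
                lastSeen = op.get("ts", BsonTimestamp.class);
            }
        }
    }
}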
I don't think that running two databases that point to the same database files is possible, or even recommended.
You could use mongodump's old --dbpath option to dump from their data files directly, but this only works if their mongod instance is not running (the tool needs to lock the data directory); you would then mongorestore the dump into your local instance.
Another solution would be to take file system snapshots of their data directory and then restore them into your local database.
The downside of these backup/restore solutions is that your data will not be synced all the time.
Probably the best solution would be to use a replica set with a hidden member.
You can create a replica set with just two members:
Primary - this will be the client's server.
Secondary - hidden, with votes and priority set to 0. This will be your local instance.
Their server will always be the primary, because a hidden member must have priority 0 and so can never be elected. Clients cannot see hidden members, so for all intents and purposes your instance will be read-only.
Another upside to this is that MongoDB replication will do all the heavy work of syncing the data between the servers, and your instance will always have the latest data.
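A minimal mongo-shell sketch of that member configuration, assuming member 1 of the config document is your local instance:

cfg = rs.conf()
cfg.members[1].priority = 0   // required for hidden members; can never be elected
cfg.members[1].votes = 0      // the client's server keeps a voting majority on its own
cfg.members[1].hidden = true  // invisible to clients, so effectively read-only
rs.reconfig(cfg)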

mongodb automatic failover / high availability on aws

I need a proper failover mechanism for MongoDB on AWS EC2. I know failover can be accomplished with replica sets, but what is the best way to fire up a new mongo-installed Ubuntu EC2 AMI node, add it to the replica set again automatically (with zero manual operation), and return the replica set to its proper state?
EBS has some problems, but if I use local instance storage I will lose the dead node's data. Does a replica hold all of the master's data, so that the replica alone is enough to recover everything (on mongo 1.8 with journaling), or do I have to use EBS only?
How should I start the mongo instances? If I should start with the repair option, how can I separate a node's first run from a failover restart?
The easiest way to bring up new nodes is to start a new node from a recent backup.
So now it's a question of how you take your backups and how quickly you can restore from them.
The MongoDB site has a write-up for backups in general and for backups on EC2 specifically. There's also a write-up on adding a new set member.
You can do this with instance storage or EBS drives, but you'll need different strategies for each. There's really no single way to do this, so I would check out the docs I've linked to for a primer.
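Once the replacement node has been seeded from the backup and started with the same --replSet name, rejoining it to the set is a single mongo-shell command on the current primary; the hostname below is a placeholder:

// on the current primary, after the replacement node is seeded and running
rs.add("new-node.example.com:27017")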
I highly recommend reading Sean Coates' article on multi-node MongoDB elections, failover, and AWS; note specifically the subtlety about distributed arbiter nodes (e.g., make sure to give yourself a voting majority when an AZ goes down). A similar recommendation can be found in a comment on this (now-closed) MongoDB vs. Cassandra thread.