Convert a Shard Cluster to a Replicated Shard Cluster - mongodb

I've been working with mongo for a few weeks, building out my environment in dev. I started with a single node, then moved to a sharded cluster, and now want to move to a replicated sharded cluster. From what I read, a replicated sharded cluster is the best of the best: scalability, durability, better performance, etc.
I've read most of the (very good) tutorials in their help. It seems their lessons advise going from single node, to replica set, to sharded replica set, which, unfortunately, is the opposite of the order I did it in. I can't seem to find anything on upgrading a sharded cluster to a replicated sharded cluster.
Here are the 5 hosts that I have:
APPSERVER
CONFIGSERVER
SHARD1
SHARD2
SHARD3
I started each of the shard servers with:
mongod --shardsvr
Then I started the config server with:
mongod --configsvr
Then I started the mongos process on the APPSERVER with:
mongos --configdb CONFIGSERVER
Then in mongos, I added the shards, enabled sharding on my database, and defined a shard key for a collection:
sh.addShard("SHARD1:27018"); // again for SHARD2 and SHARD3
sh.enableSharding("beacon");
sh.shardCollection("beacon.alpha2", {"ip":1});
I want each of the shards replicated on each of the other two. (Right?) Do I need to bring down the mongod processes on the shards and restart them with different command-line parameters? What commands do I need to issue in my mongos shell to get it to replicate? Should I export all my data, take everything down, restart, and reimport? Again, I see a lot of tutorials on how to create a replica set, but I don't really see anything on how to build replica sets given a sharded system to start with.
Thanks!

For each shard, you will need to restart the current member with the --replSet command line option and start two new members (or one new member and an arbiter) with the same option. You could add more members than that, but three is the smallest workable set. Then from inside what will become the new primary (your current SHARD1, for example) you could do the following:
rs.add("newmember1:port")
rs.add("newmember2:port")
rs.initiate();
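For reference, the restarts themselves would look something like the line below; the set name shard1, the data path, and the default --shardsvr port are illustrative assumptions, and the same command is run on the current member's host and on each new member's host:
mongod --shardsvr --replSet shard1 --dbpath /data/db --port 27018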
You would then need to check and make sure that the sh.status() output has been updated to reflect the new members of the replica set. In 2.2 this has become slightly easier, as it should be automatic; for prior versions it was necessary to manually save the shard information in the config database, as described in the sharded cluster documentation. If it has been added automatically you will see the replica set listed in the sh.status() output, similar to the following:
{ "_id" : "shard1", "host" : "shard1/SHARD1:port,newmember1:port,newmember2:port" }
If this does not happen automatically you will need to follow the procedure outlined in the documentation link above, namely from inside mongos:
db.getSiblingDB("config").shards.save({_id:"<name>", host:"<rsName>/member1,member2,..."})
Following the above example it would look like:
db.getSiblingDB("config").shards.save({_id:"shard1", host:"shard1/SHARD1:port,newmember1:port,newmember2:port"})
You would need to do this procedure for each shard, and you should do them one at a time so that you have all of SHARD1 as a replica set before moving on to SHARD2. You will also need to be aware that each replica set will become read-only while the initial election takes place, so at the very least you should schedule this during a downtime or maintenance window. Ideally, test it first in a staging environment.
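To confirm a shard is healthy before moving on to the next one, a quick check along these lines should do; the hostnames are the illustrative ones used above:
// from the shard's primary: every member should reach PRIMARY, SECONDARY or ARBITER state
rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr); });
// from a mongos: the shard's host string should now list the full replica set
sh.status()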

Related

One Shard with Multiple Mongos

Can we have this type of configuration?
Two servers, each running the following things:
1. Mongo config server.
2. Mongo router (mongos).
3. Application.
In total, 4 EC2 servers:
First server: running the web application & mongos.
Second server: running the web application & mongos.
Third server: running the first shard with a complete DB (say, for example, Demo).
Fourth server: running the second shard with a complete DB (say, for example, Demo).
Should both mongos instances point to one shard named Shard1?
Yes, you can have multiple mongos instances running against a single shard. Think of the mongos instances as clients for the sharded cluster which have to run as daemon processes in order to keep metadata and heartbeats up to date.
Edit: as for having a complete DB, this is only possible for a single DB. You can have one DB on shard1 and another DB on shard2, for example, but you can never have a complete copy of a single DB on both shards. To achieve the goal of having db1 on shard1 and db2 on shard2, you simply make the respective shard the primary shard of the respective database and don't shard any collection. Please read the docs for the movePrimary command for details.
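For illustration only (the database names db1 and db2 and the shard names are assumptions), setting the primary shard from the mongos shell would look roughly like this:
// run against the admin database through mongos; the shard names must match what sh.status() shows
db.adminCommand({ movePrimary: "db1", to: "shard0000" })
db.adminCommand({ movePrimary: "db2", to: "shard0001" })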
A bit off-topic:
However, running a single config server is strongly advised against, and for a good reason. If the single config server goes down or gets corrupted, your cluster will be impossible to use, and recreating the sharded cluster will not be an easy task; it is going to be a lengthy process. So please, use three config servers.
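As a sketch of what that looks like (the hostnames cfg1, cfg2 and cfg3 and the paths are illustrative), each mongos is then pointed at all three config servers:
mongod --configsvr --port 27019 --dbpath /data/configdb    # run one of these on cfg1, cfg2 and cfg3
mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019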

Anyone know why a mongodb replica set would have a primary and secondary swap in rather quick succession?

We have 3 mongo servers set up as a replica set, one of which can never become the primary (it is a backup). Let's say mongo1 is primary and mongo[2..3] are secondaries. Randomly, mongo1 will not be reachable by mongo2 and mongo3, which results in mongo2 being elected as primary. Mongo1 then sees this and steps down to secondary. Then mongo2 and 3 see mongo1 again after a few seconds or so and it gets re-elected.
Is there a reason why it would re-elect mongo1 so quickly? Both mongo 1 and 2 have the same priority.
The issue with this is that it disconnects the mongos routers on each of our web servers, which then take a while to rediscover which member is the primary and reconnect to it.
Also, should the mongodb routers be on the application servers or on a separate server? The MongoDB manual suggests putting one on each application server, but what are the benefits of doing it this way? What would be the benefits and issues of having router servers in between the application and mongo servers?
I should mention this is in AWS (ec2), if that makes a difference.
edit:
running mongo 2.4.6 as a sharded replica set. Sorry about that, I forgot to mention that portion. They are under rather high load. The mongo instances are all in the same region and same availability zone in EC2.
Why should you put mongos on the same instance as your application server?
Network hops / latency. Querying a local mongos for the authoritative shard is quicker than doing a remote query; both approaches will then need to fetch the data from the right shard.
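As a small sketch (the hostnames and ports are made up), each application server runs its own mongos and the application simply connects locally:
mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019 --port 27017    # started on every application server
# the application then connects to localhost:27017 instead of a remote router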
On a side note: other databases like Couchbase do not have a separate mongos / router component, which avoids this overhead altogether.
I assume you have split your primary and secondaries across multiple availability zones? Maybe the network is a bit shaky, although it shouldn't be? Unfortunately, there's not much you can do against failovers if you have a network split; not failing over quickly would open bigger rollback windows, which you'd want to avoid, unless you want to disable automatic failover altogether.
And a node should only get re-elected if the new primary loses its majority. So in your case it looks like 2 and 3 cannot connect to 1, so 2 becomes the new primary. But soon afterwards 1 and 3 cannot connect to 2, and since they have a majority, they elect 1 to be the primary again.
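On the point that one member should never become primary: a hedged sketch of how that is usually configured, run from the current primary (using member index 2 to stand in for your backup node is an assumption):
cfg = rs.conf()
cfg.members[2].priority = 0    // priority 0 means this member can never be elected primary
rs.reconfig(cfg)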

mongodb failover Demonstration! Help needed

Here is a newbie trying to play around with MongoDB. I am trying to demonstrate scaling in my class, meaning I need to show that I have 2 instances of MongoDB up and running, replicate them, and set one as master and the other as secondary.
Can any of you suggest a simple way to demonstrate that if the primary/master fails, the slave/secondary comes up as the master?
Please keep it as simple as possible, as I am teaching a class of fairly new MongoDB beginners.
MongoDB replica sets are not master/slave. In order to achieve automatic failover you need to have a majority of nodes in the replica set able to elect a new primary. The minimum number of nodes in your replica set should be 3, which can either be 3 data-bearing nodes or 2 data-bearing nodes and an arbiter, which is a node that votes in elections.
A demo using replication alone is more about failover and redundancy than scaling (better demo'd with sharding).
If you want a very simple (and non-production) way to stand up a replica set or sharded cluster in a development environment, I would suggest using the mlaunch script which is part of mtools.
For example, to create a 3-node replica set with an arbiter:
mlaunch --replicaset --nodes 2 --arbiter
To create a sharded cluster with 3 shards backed by a replica set (plus mongos and config server):
mlaunch --replicaset --sharded 3
As mentioned in the other comments here, the free MMS Monitoring service is a good way to visualise activity in your MongoDB deployment, and you can use db.shutdownServer() to shut down specific nodes and see the outcome.
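For example (the port is illustrative), connect to the member you want to stop and ask it to shut down; the shell will report the dropped connection, and MMS will show the failover:
mongo --port 27021
use admin
db.shutdownServer()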
The easiest way would be to set up the MongoDB monitoring service. Stop the mongod process on one and watch the other take over. But use replica sets rather than master/slave, as they are the recommended approach.
Actually, it is pretty easy:
Set up a replica set with 2 "normal" mongods and an arbiter
Connect to both of the normal mongods using mongo
Show the output of rs.status() (note the self field)
Shut down the current primary
Show the output of rs.status() again and again, until the former secondary is elected primary
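A minimal sketch of those steps on one machine, assuming a set named demo and made-up ports and data paths:
mongod --replSet demo --port 27021 --dbpath /data/demo1     # normal member 1
mongod --replSet demo --port 27022 --dbpath /data/demo2     # normal member 2
mongod --replSet demo --port 27023 --dbpath /data/demoarb   # arbiter
mongo --port 27021 --eval 'rs.initiate({_id: "demo", members: [
  {_id: 0, host: "localhost:27021"},
  {_id: 1, host: "localhost:27022"},
  {_id: 2, host: "localhost:27023", arbiterOnly: true}
]})'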
Another option would be to write a simple Java app which utilizes the driver: put it in an infinite loop which writes one entry every second and prints out the number of objects in the database, and catch exceptions and write out that a problem occurred. Start the replica set, then start your application. Shut down the primary while the program is running; during the election, exceptions may be thrown. As soon as the former secondary is elected primary, the document count should start to rise again.
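A rough mongo-shell equivalent of that loop, if you'd rather not write Java for the demo; the database name demo and collection ticks are made up, and it assumes the shell was started with a replica-set connection string such as mongo --host demo/localhost:27021,localhost:27022 so it can follow the primary:
while (true) {
    try {
        db.getSiblingDB("demo").ticks.insert({ts: new Date()});   // one write per second
        print(db.getSiblingDB("demo").ticks.count());             // current document count
    } catch (e) {
        print("problem occurred: " + e);                          // thrown while the election is in progress
    }
    sleep(1000);
}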

64-bit mongodb multiple-shards Issue

I am using 64-bit MongoDB and I am testing with multiple shards. If I keep multiple shards on a single machine it works fine, but if I put the shards on different machines, sharding to the second shard fails. I have restricted the first shard's size to 10MB; once the first shard reaches that limit it should start moving data to the second shard, but that is not happening. Instead of storing to the second shard, writes keep going to the first shard. The following are my shard details. In my environment I initially have two shards. The first shard is on my first machine, running along with my application. The second shard is on my second machine.
Configuration is as follows:
On both machines I run a shard server, a config server, and a mongos, and I have connected to mongo through mongos as follows: ./mongo hostname:27017/admin. I have added both shards (the first and the second) and enabled sharding for the database and for the collection using a shard key.
Please let me know if I have gone wrong anywhere in the configuration.
Thanks in advance,
Your post could use some editing; this is very difficult to read.
It looks like you have 2 machines. On each machine you have:
mongod process serving as one shard
mongod process serving as a config server
mongos process
a copy of your application connecting to localhost:27017/admin
Please let me know if I have gone wrong anywhere in the configuration.
There are several possible problems here. Please check the following:
You can only have 1 or 3 config processes. It looks like you have 2, which will not work.
When you connect to localhost:27017/admin, are you connecting to mongos or mongod? Either one could be running on that port. Can you specify the ports for each process to help clarify (see the sketch after this list)? You must connect to mongos or the sharding will not happen.
Please look at the logs, they generally have output indicating what the server is doing. If there is no indication of "splits" or "chunks" happening, then your database may be configured incorrectly.
Your best bet is to start from the top and test each piece one at a time.
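As a sketch of what giving each process its own port could look like on each machine (the ports and data paths are illustrative, and remember that 2 config servers is not a valid count):
mongod --shardsvr  --port 27018 --dbpath /data/shard     # shard data
mongod --configsvr --port 27019 --dbpath /data/config    # cluster metadata
mongos --configdb machine1:27019 --port 27017            # router
./mongo machine1:27017/admin                             # this connects to the mongos, not a mongod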

is this the optimal minimum setup for mongodb to allow for sharding/scaling?

3 instances for config servers
1 instance for webserver & mongos
1 instance for shard 1
Then when I need to start more shards, can I just add more instances?
Also, what is a replica set? If I had, say, 3 servers for shard 1, would that be a replica set?
A replica set is a set of computers that are clones of each other (i.e. replicas). Within a given set there is an elected primary. By default, reads and writes go to this elected primary and the replicas just "tail" the changes to stay up-to-date copies. If the primary fails, a new one is elected and the system just keeps going. The documentation is here.
So you ask about scaling with MongoDB. There are two types of scaling:
Read Scaling: use Replica Sets (see here)
Write Scaling: use Sharding
The minimum config for Replica Sets is
- 2 full replicas
- 1 arbiter (lightweight process, breaks ties when voting)
The minimum config for Sharding is
- 1 config server
- 1 mongod process (only one shard)
- 1 or more mongos (generally on the app server)
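A hedged sketch of that minimum sharding config on a single box, with made-up ports and data paths:
mongod --configsvr --port 27019 --dbpath /data/config
mongod --shardsvr  --port 27018 --dbpath /data/shard0
mongos --configdb localhost:27019 --port 27017
mongo localhost:27017/admin --eval 'sh.addShard("localhost:27018")'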
However, you probably don't want to run like this in production. Running only a single DB server means that you only have one source for the data, which can result in long down-times or total data loss. This is generally solved by using replica sets.
Additionally, the config server is quite important. MongoDB supports 1 or 3 config servers. Most production deployments use 3. Note that config servers and arbiters are very lightweight and can live on other boxes or on Amazon micro instances.
Most production deployments with sharding also involve replica sets. In fact, they usually start as replica sets.
Then when I need to start more shards, can I just add more instances?
From a sharding perspective it should be as easy as:
- start new shard server
- run the addshard command from a mongos
Note that when you add a shard, you will need to allow for time and resources as data migrates between shards and everything re-balances.
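For example (the hostname newshardhost and the path are hypothetical), those two steps would look roughly like:
mongod --shardsvr --port 27018 --dbpath /data/shard_n    # on the new instance
then, from a mongos shell:
sh.addShard("newshardhost:27018")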