MongoDB replica set

I am trying to understand the concept of replica sets in MongoDB.
Taking a simple example of 2 MongoDB instances A (primary) and B (secondary).
If my client is happily querying A I understand that writes get replicated to B but what happens if server A becomes inaccessible?
Whilst in terms of mongo replication I can see that B gets elected as the new primary, how does the client know to now channel its queries to B and not A? Is this all done internally to mongo?
I ask because my client's configuration has a uri pointing to A yet nothing for B. Is that correct?

Taking a simple example of 2 MongoDB instances A (primary) and B (secondary).
You should not use only two instances as part of your replica set. You should either have an arbiter or another secondary so that elections can take place.
Please see here for gotchas, here for replication fundamentals and here for information on elections.
If my client is happily querying A I understand that writes get replicated to B but what happens if server A becomes inaccessible?
The service should fail over, with server B becoming primary. However, with only two instances, server B will not be able to elect itself as primary because it cannot see a majority of the set.
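If adding a full third data node is too costly, an arbiter is enough to restore a majority. A minimal shell sketch, assuming a third small host is available (the hostname and port are placeholders):
// Run from a mongo shell connected to the current primary (A).
// The arbiter stores no data; it only votes, so either remaining data node plus
// the arbiter can still form a majority and elect a primary if the other goes down.
rs.addArb("arbiterHost:27017")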
Whilst in terms of mongo replication I can see that B gets elected as the new primary, how does the client know to now channel its queries to B and not A? Is this all done internally to mongo?
This is typically performed through the driver as the driver is aware of the replica set configuration. Please see here for the relevant driver documentation (depending on what driver you are using).
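For instance, with the official Node.js driver the URI should name more than one member so the driver can discover the whole set and follow the current primary. A sketch where the hostnames, the database name and the set name "rs0" are placeholders:
// Node.js driver sketch; hostA/hostB, mydb and rs0 are placeholders for your deployment.
const { MongoClient } = require('mongodb');

const uri = 'mongodb://hostA:27017,hostB:27017/mydb?replicaSet=rs0';
const client = new MongoClient(uri);

async function main() {
  await client.connect();   // the driver discovers every member of rs0 from either seed host
  // reads and writes are then routed to whichever member is primary, even after a failover
}
main().catch(console.error);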

Related

How is a MongoDB primary selected when the environment is restarted?

I have set up a Mongo cluster with 3 servers, say A, B and C. My application writes a lot of data to the db, hence oplog usage is always very high.
When the environment is restarted, Server A is primary (we always run rs.add(A), rs.add(B), rs.add(C) on A). Then Server C becomes primary for some reason, maybe a server reboot or a temporary connection loss. The application has then written data to Server C, and Server A is still trying to stay in sync with Server C.
Now the environment is restarted again; my understanding is that server C should be chosen as primary. However, server A is being chosen as primary.
Could anyone please explain why Server A is chosen as primary?
Assuming that you are using majority write concern for data that you actually care about, server C would not have acknowledged the writes from the application until at least one of the secondary nodes had also written the data.
A 3-node replica set needs to have at least 2 nodes online in order to elect a primary, and the election process will favor the node(s) with the most recent oplog entry (optime).
If nodes A and B came online slightly faster than C, they might complete an election before C is available.
In this situation, if you were using majority write concern there would be no data loss as either A or B would also contain everything that had been written to and acknowledged by C.
However, if you were using a write concern of w:1 or w:0, there might be writes that C acknowledged that neither A nor B contain. When A and B come online, they would still elect a primary because neither knows anything about the additional data on C. When C then becomes available, if there have been any new writes in the meantime, C will have to roll back its unreplicated writes in order to rejoin the replica set.
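As an illustration, a hedged shell sketch of a write that is only acknowledged once a majority of the set has replicated it (the collection name is made up for the example):
// mongo shell sketch; "orders" is an illustrative collection name.
// With w: "majority", C could not acknowledge a write that A or B had not also persisted,
// so an acknowledged write cannot later be lost to a rollback.
db.orders.insertOne(
  { item: "example" },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)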
As to why server A became primary in your cluster, check the mongod logs. In modern versions of MongoDB, election information is logged, so you will be able to determine which nodes participated in the election and which nodes voted for server A.
If you want a specific server to be preferred for the primary role, you can set member priorities accordingly. Absent priorities, MongoDB is free to have any of the members of the replica set as the primary.
Setting priorities does not guarantee that the server you want is ALWAYS going to be the primary - if this server gets restarted, some other node will become primary, and when this server comes back up it will not be eligible to become primary again until it has caught up with the current primary.
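For reference, a minimal shell sketch of raising one member's priority; it assumes the preferred server happens to be members[0] in your configuration:
// Run against the current primary; adjust the members[] index to match your preferred server.
cfg = rs.conf()
cfg.members[0].priority = 2   // higher priority => favored in elections once it is caught up
rs.reconfig(cfg)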

MongoDB replication factors

I'm new to Mongo and have been using Cassandra for a while. I didn't find any clear answers in Mongo's docs. My questions below are all closely related.
1) How to specify the replication factors in Mongo?
In C* you can define replication factors (how many replicas) for the entire database and/or for each table. Mongo's newer replica set concept is similar to C*'s idea. But, I don't see how I can define the replication factors for each table/collection in Mongo. Where to define that in Mongo?
2) Replica sets and replication factors?
It seems you can define multiple replica sets in a Mongo cluster for your databases. A replica set can have up to 7 voting members. Does a 5-member replica set mean the data is replicated 5 times? Or are replica sets only for electing the primary?
3) Replica sets for what collections?
The replica set configuration doc didn't mention anything about databases or collections. Is there a way to specify what collections a replica set is intended for?
4) Replica sets definition?
If I wanted to create a 15-node Mongo cluster while keeping 3 copies of each record, how would I partition the nodes into multiple replica sets?
Thanks.
Mongo replication works by replicating the entire instance. It is not done at the individual database or collection level. All replicas contain a copy of the data except for arbiters. Arbiters do not hold any data and only participate in elections of a new primary. They are usually deployed to create enough of a majority that if an instance goes down a new instance can be elected as the primary.
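Put differently, the "replication factor" is simply the number of data-bearing members you place in the set, and it applies to every database and collection on those instances. A sketch of initiating a 3-member set (the set name and hostnames are placeholders):
// mongo shell sketch; rs0 and host1-3 are placeholders for your own deployment.
// Everything written to the primary is replicated to all data-bearing members of the set.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "host1:27017" },
    { _id: 1, host: "host2:27017" },
    { _id: 2, host: "host3:27017" }
  ]
})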
It's pretty well explained here: https://docs.mongodb.com/manual/replication/
Replication refers to the process of ensuring that the same data is available on more than one MongoDB server. This is done to increase data availability.
If your main MongoDB server goes down for any reason, there will be no access to the data. But if the data is replicated to another server, you will still be able to access it even if the primary server fails.
Another purpose of replication is the possibility of load balancing. If there are many users connecting to the system, instead of having everyone connect to one system, users can be connected to multiple servers so that there is an equal distribution of the load.
In MongoDB, multiple MongoDB servers are grouped into sets called replica sets. The replica set has a primary server which accepts all the write operations from clients. All other instances added to the set are called secondary instances, which can be used for read operations.

MongoDB load balancing in multiple AWS instances

We're using Amazon Web Services for a business application which uses a node.js server and mongodb as the database. Currently the node.js server is running on an EC2 medium instance, and we're keeping our mongodb database in a separate micro instance. Now we want to deploy a replica set for our mongodb database, so that if mongodb gets locked or becomes unavailable, we can still run our database and get data from it.
So we're trying to keep each member of the replica set in a separate instance, so that we can get data from the database even if the instance of the primary member shuts down.
Now I want to add a load balancer to the database, so that the database works fine even under a huge traffic load. In that case I can balance reads by adding the slaveOK config to the replicaSet. But that won't load balance the database if there is a huge write load on the database.
To solve this problem I got two options till now.
Option 1: I have to shard the database and keep each shard in a separate instance, and under each shard there will be a replica set in the same instance. But there is a problem: as the shard divides the database into multiple parts, each shard will not keep the same data within it. So if one instance shuts down, we'll not be able to access the data from the shard within that instance.
To solve this problem I'm trying to divide the database into shards, and each shard will have a replicaSet spread across separate instances. So even if one instance shuts down, we won't face any problem. But if we have 2 shards and each shard has 3 members in the replicaSet, then I need 6 AWS instances. So I think it's not the optimal solution.
Option 2: We can create a master-master configuration in mongodb, meaning all the databases will be primary and all will have read/write access, but I would also like them to auto-sync with each other every so often, so they all end up being clones of each other. And all these primary databases will be in separate instances. But I don't know whether mongodb supports this structure or not.
I've not found any mongodb doc/blog for this situation. So please suggest what the best solution for this problem would be.
This won't be a complete answer by far; there are too many details, and I could write an entire essay about this question, as could many others. However, since I don't have that kind of time to spare, I will add some commentary about what I see.
Now I want to add a load balancer to the database, so that the database works fine even under a huge traffic load.
Replica sets are not designed to work like that. If you wish to load balance, you might in fact be looking for sharding, which will allow you to do this.
Replication is for automatic failover.
In that case I can balance reads by adding the slaveOK config to the replicaSet.
Since, to stay up to date, your members will be getting just as many ops as the primary, it seems like this might not help too much.
In reality, instead of having one server with many connections queued, you have many connections on many servers queueing for stale data, since member consistency is eventual, not immediate, unlike ACID technologies; however, that being said, they are only eventually consistent by around 32 ms, which means they are not lagging enough to give decent throughput if the primary is loaded.
Since reads ARE concurrent, you will get the same speed whether you are reading from the primary or a secondary. I suppose you could delay a slave to create a pause in ops, but that would bring back massively stale data in return.
Not to mention that MongoDB is not multi-master, so you can only write to one node at a time; that makes slaveOK not the most useful setting in the world any more, and I have seen numerous times where 10gen themselves recommend you use sharding over this setting.
Option 2: We can create a master-master configuration in mongodb,
This would require your own coding. At that point you may want to consider actually using a database that supports http://en.wikipedia.org/wiki/Multi-master_replication
This is because the speed you are looking for is most likely in fact in writes, not reads, as I discussed above.
Option 1: I have to shard the database and keep each shard in a separate instance.
This is the recommended way, but you have found the caveat with it. This is unfortunately something that remains unsolved and that multi-master replication is supposed to solve; however, multi-master replication does add its own ship of plague rats to Europe, and I would strongly recommend you do some serious research before you conclude that MongoDB cannot currently service your needs.
You might be worrying about nothing, really, since the fsync queue is designed to deal with the IO bottleneck slowing down your writes as it would in SQL, and reads are concurrent, so if you plan your schema and working set right you should be able to get a massive number of ops.
There is in fact a linked question around here from a 10gen employee that is very good to read: https://stackoverflow.com/a/17459488/383478 and it shows just how much throughput MongoDB can achieve under load.
It will grow soon with the new document-level locking that is already in the dev branch.
Option 1 is the recommended way, as pointed out by @Sammaye, but you would not need 6 instances and can manage it with 4 instances.
Assuming you need the configuration below:
2 shards (S1, S2)
1 copy for each shard (Replica set secondary) (RS1, RS2)
1 Arbiter for each shard (RA1, RA2)
You could then divide your server configuration as below:
Instance 1: runs S1 (primary node)
Instance 2: runs S2 (primary node)
Instance 3: runs RS1 (secondary node of S1) and RA2 (arbiter node of S2)
Instance 4: runs RS2 (secondary node of S2) and RA1 (arbiter node of S1)
You could run arbiter nodes alongside your secondary nodes, which would help with elections during failovers.
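Once both replica sets (data nodes plus arbiters) are running, a rough sketch of registering them as shards via mongos; the set names s1/s2 and the instance hostnames are placeholders:
// Run from a mongo shell connected to mongos; s1/s2 and the hostnames are placeholders.
sh.addShard("s1/instance1:27017")   // shard S1: primary on instance 1; RS1 and RA1 are discovered from the set config
sh.addShard("s2/instance2:27017")   // shard S2: primary on instance 2; RS2 and RA2 are discovered from the set config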

Mongo sharding confusion with replication turned on

Let's say I have 3 machines; call them A, B, and C. I have Mongo replication turned on, so assume A is primary and B and C are secondaries.
But now I would like to shard the database. I believe all writes still go to A since A is primary. Is that true? Can somebody explain the need for sharding and how it increases write throughput?
I understand mongos will find the right shard, but the writes will still have to be done on the primary, and the other machines B and C will follow. Am I missing something?
I also don't understand the statement: "Each shard is a replica set." The shard seems incomplete unless it is also the primary.
Sharding is completely different to replication.
I am now going to attempt to explain your confusion without making you more confused.
When sharding is taken into consideration, a replica set is a replicated range of the sharded information.
As such, this means that every shard in the cluster is actually a replica set in itself, holding a range of the sharded data (it can hold different range chunks, but that's another topic) and replicating itself across the members of that self-contained replica set.
Imagine that a shard is a primary of its own replica set, with its own set of members and whatnot. This is what is meant by each shard being a replica set.
As such, each shard will have its own self-contained primary, but at the same time the primary of each shard's replica set can receive writes from a mongos if the range that the replica set holds matches the shard key target being sent down by the client.
I hope that makes sense.
Edit
This post might help: http://www.kchodorow.com/blog/2010/08/09/sharding-and-replica-sets-illustrated/
Does that apply here? Say I want writes acknowledged from 2 mongod instances. Is that possible? I still have trouble understanding. Can you please explain with an example?
OK, let's take an example. You have 3 shards, which are in fact 3 replica sets called rs1, rs2 and rs3, each consisting of 3 nodes (1 primary and 2 secondaries). If you want a write to be acknowledged by 2 of the members of the replica set, then you can do that as you normally would:
db.users.insert({username: 'sammaye'}, {writeConcern: {w: 2}})
Where username is the shard key.
The mongos this query is sent to will seek out the correct shard (which is in fact a replica set), connect to that replica set, and then perform the insert as normal.
So, as an example, rs2 actually holds the range of all usernames starting with the letters m-s. The mongos will use its internal mapping of rs2 (obtained when connecting to the replica set), along with your write/read concern, to judge which members to write to and read from.
Tags will apply all the same too.
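For completeness, a sketch of how the username shard key in this example would have been declared on the mongos (the database name "test" is illustrative):
// Run against mongos; "test" is an illustrative database name.
sh.enableSharding("test")
sh.shardCollection("test.users", { username: 1 })   // range-sharded on username, so e.g. m-s can live on rs2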
If mongos finds a shard that is not the "primary" (as in replication), is the write still performed on a secondary?
You are still confusing yourself here. There is no cluster-wide "primary" like in replication; there is a primary shard (sometimes called the master) for each sharded database, but that is a different thing entirely. The mongos does not have to write to that node in any way; it is free to write to any part of the sharded cluster that your queries allow it to.

What is the advantage of explicitly connecting to a Mongo Replica Set?

Obviously, I know why to use a replica set in general.
But, I'm confused about the difference between connecting directly to the PRIMARY mongo instance and connecting to the replica set. Specifically, if I am connecting to Mongo from my node.js app using Mongoose, is there a compelling reason to use connectSet() instead of connect()? I would assume that the failover benefits would still be present with connect(), but perhaps this is where I am wrong...
The reason I ask is that, in Mongoose, the connectSet() method seems to be less documented and less widely used. Yet, I cannot imagine a scenario where you would NOT want to connect to the set, since it is recommended to always run Mongo on a 3x+ replica set...
If you connect only to the primary then you get failover (that is, if the primary fails, there will be a brief pause until a new master is elected). Replication within the replica set also makes backups easier. A downside is that all writes and reads go to the single primary (a MongoDB replica set only has one primary at a time), so it can be a bottleneck.
Allowing connections to slaves, on the other hand, allows you to scale for reads (not for writes - those still have to go to the primary). Your throughput is no longer limited by the spec of the machine running the primary node but can be spread around the slaves. However, you now have a new problem of stale reads; that is, there is a chance that you will read stale data from a slave.
Now think hard about how your application behaves. Is it read-heavy? How much does it need to scale? Can it cope with stale data in some circumstances?
Incidentally, the point of a minimum 3 members in the replica set is to offer resiliency and safe replication, not to provide multiple nodes to connect to. If you have 3 nodes and you lose one, you still have enough nodes to elect a new primary and have replication to a backup node.
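As a sketch, assuming a reasonably recent Mongoose release where a single connect() call accepts a multi-host URI (the hostnames, the set name rs0 and the database name are placeholders), this gives you the failover behaviour plus the optional secondary reads discussed above:
// Mongoose sketch; host1-3, rs0 and mydb are placeholders for your own replica set.
const mongoose = require('mongoose');

mongoose.connect(
  'mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0',
  { readPreference: 'secondaryPreferred' }   // optional: route reads to secondaries, accepting possibly stale data
);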