Replica set architecture - Arbiter requirement - MongoDB

What should be the number of replica set members required to handle a Disaster Recovery (DR) situation effectively? Currently we are using a 3-node replica set (1 primary and 1 secondary in the same region, and 1 secondary in the DR region).
We are planning to add 2 arbiters to it to increase its fault tolerance.
Is it good practice to use more than one arbiter instance?
Would it be better to create the arbiter instance in the DR zone?

As JJussi points out, adding more than one arbiter will not help at all, but it might be useful to add further nodes (data-bearing and/or arbiter) to achieve maximum resilience and availability.
Your current arrangement is two voting nodes in region 1 and one in the DR region. If your datacentre in region 1 goes down, the node in the DR region won't be able to step up to primary, because on its own it cannot command a majority (1 vote out of 3).
Even if you added a further data-bearing node and an arbiter, you would run into the same problem if they were in the same two regions.
Instead, what I recommend is this: keep your existing two nodes in region 1, add a fourth data-bearing node in the DR region, and also add an arbiter, but make sure the arbiter is in a third region.
That way, even if the datacentre in region 1 or the DR region goes down, the nodes in the other region, with the help of the arbiter, will be able to command a majority and continue working.
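For illustration, a minimal mongosh sketch of that reconfiguration might look like this (the hostnames and port are placeholders, not your actual servers):

```
// Run against the current primary.
// Add a second data-bearing member in the DR region and an arbiter in a third region,
// giving five votes spread across three locations.
rs.add({ host: "dr-node-2.example.net:27017" })          // 4th data-bearing member, DR region
rs.addArb("arbiter.region3.example.net:27017")           // arbiter, third region

// Check the resulting membership and votes
rs.conf().members.forEach(m =>
  print(m.host, "arbiterOnly:", !!m.arbiterOnly, "votes:", m.votes))
```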

Arbiters don't increase fault tolerance, because they don't hold data. You don't have any need to add arbiters to the current setup, because you already have an odd number of votes. Your current node count (three) is perfect for DR, especially if all three nodes are in different data centers, even if two of those are in the same geographical region.
Of course you can always add one more node (and then you need an arbiter) in some other region, but normally three separated nodes is a perfectly good DR setup. If all your current nodes are in the USA, you could have "half" of the nodes (enough for a majority) located in Europe...
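If you want to double-check that the current configuration already has an odd number of votes, a quick mongosh check (run while connected to any member of the set) is:

```
// Sum the votes in the current replica set configuration.
var votes = rs.conf().members.reduce((sum, m) => sum + m.votes, 0);
print("voting members:", votes, "- majority needed:", Math.floor(votes / 2) + 1);
```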

Related

Mongodb Replicaset on AZURE with an Arbiter

I want to use MongoDB with replication; I created VMs with 2 secondary nodes and 1 arbiter:
1 Primary
2 Secondary
1 Arbiter
I'm trying to understand how this system works, so I have some questions:
1) According to the documentation, "If a replica set has an even number of members, add an arbiter," so I added an arbiter. But I'm not sure if I have done it correctly. Does this even number apply to the secondaries or to all members in total?
2) What does this arbiter do? I actually don't understand its job.
3) I created public IP addresses for each VM, in order to connect to them from outside. I successfully connected from my application, using this connection string:
mongodb://username:password@vm0:27017,vm1:27017,vm2:27017/dbname?replicaSet=xxx&readPreference=primaryPreferred
I didn't add the arbiter to this connection string. Should I add it or not?
4) When I shut down the primary machine, one of the secondary machines successfully became primary, as I expected. There was no problem in that case; but when I also shut down the new primary machine, my application throws an error and the remaining secondary node does not become primary - why is this happening?
5) If all VMs are working but I shut down the arbiter, my application again throws an error and I cannot connect to the db. I'm testing this because I'm considering the case where something goes wrong on the arbiter machine and it has to be shut down in the future for maintenance or other problems.
Maybe this is because I don't understand the role of an arbiter, but why doesn't the replica set convert one of the secondary machines into an arbiter? And why does the whole system stop working when I shut down the arbiter?
Thanks.
1) If you have 1 Primary and 2 Secondaries, you have 3 members in your replica set. Therefore you should not be adding an arbiter. You already have an odd number of nodes.
2) An arbiter is a node which doesn't hold data and can't be elected as Primary. It is only used to elect a new Primary if the current Primary goes down.
For example, say you have 1 Primary and 1 Secondary. The replica set has 2 members. If the primary goes down, the replica set will attempt to vote to elect a new Primary. In order for a node to be elected, it needs to win over half the votes. But if the Secondary votes for itself, it will only get 1 out of 2 votes. That's not more than half so it will not be elected. Thus the replica set will not be able to elect a new Primary and your whole replica set will go down.
To fix this, you can add an arbiter to the replica set. This is usually a much smaller machine since it doesn't need to hold data. It just has one job, voting for the Secondary to be the new Primary in the case of elections.
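For example, with a 1 primary + 1 secondary set, the arbiter can be added from mongosh like this (the hostname is a placeholder for whatever small machine runs the arbiter):

```
// Run against the primary of the two-member replica set.
rs.addArb("arbiter-vm:27017")

// The arbiter shows up in the config with arbiterOnly: true and one vote,
// but it never holds data and can never become primary.
rs.conf().members.filter(m => m.arbiterOnly)
```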
But, since you already have 3 data-bearing nodes, you won't want to add an arbiter. You can read more about arbiters in the MongoDB documentation on replica set arbiters.
3) You can add arbiters to connection strings but in general you won't need to. Adding the data-bearing nodes is just fine. That's what people usually do.
4) You have 4 members in the replica set. You took down 2 of them. That means there are only 2 votes left. The final secondary won't be able to get more than 50% of the votes so no Primary will be elected.
In general, testing two nodes going down is overkill. You probably want a 3 member replica set. Each member should be in a different availability zone (Availability Set in Azure). If two nodes go down your replica set will be unavailable. But two nodes going down at the same time is very unlikely if all nodes are in different availability zones. So don't worry too much about more than one node going down. If that's a real concern (in most applications it really isn't), you want to make a 5 member replica set.
5) That's weird. This sounds like your replica set might be configured incorrectly. As I said, you don't need an arbiter anyway. So you could just try setting it up again without the arbiter and see if it works. Open a new question if you're still having issues. Make sure to include the output of running rs.status() in your question.
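For reference, a quick way to look at each member's state in mongosh (these are standard rs.status() fields, nothing specific to your deployment) is:

```
// Print name, replica set state and health for every member,
// so an unreachable or misconfigured member stands out.
rs.status().members.forEach(m => print(m.name, m.stateStr, "health:", m.health));
```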

Requires simple explanation on Arbiter's role in a given MongoDB replica set

I came across the MongoDB official site explaining why a replica set should have an odd number of members. I also came across the term Arbiter on the same site; based on my understanding, it will not be elected as primary but it does participate in elections (from https://docs.mongodb.com/manual/core/replica-set-arbiter/).
There is also a post related to arbiters, Why do we need an 'arbiter' in MongoDB replication?, which relates to the CAP theorem and makes things even more complicated.
First of all, why do we need to make the number of members odd? Also, can someone explain to me what this Arbiter is and what its role is in a given replica set, in simple layman's English?
Thanks in advance.
In short: it is to stop the two normal nodes of the replica set getting into a split-brain situation if they lose contact with each other.
MongoDB replica sets are designed so that, if one or more members goes down or loses contact, the other members are able to keep going as long as between them they have a majority. The majority clause is important: without that, you might have a situation where the network is split in two, and the nodes on each side of the partition think that they're still carrying on the replica set, and end up with different sets of data.
So to avoid the split-brain problem, the nodes of a replica set will not continue if they can't command an absolute majority. An example of this is a replica set with just two data-bearing nodes and no arbiter. If they lose communication with each other, the outcome is symmetrical.
Each one will reason the same way:
realise it has lost communication with the other
assess whether it is possible to keep the replica set going
realise that 1 node (out of 2) does not constitute a majority
revert to Secondary mode
The difference an Arbiter makes
If there is a third node, then even if the two main nodes lose contact with each other then there will still be one of them in contact with the arbiter. This allows the two main nodes to make different decisions, and keep the replica set going while avoiding the split-brain problem.
Consider the following example of a 3-node replica set: two data-bearing nodes, A and B, plus an arbiter.
Whichever way the network partition goes, one node will still be in contact with the arbiter; suppose, for example, that node A is cut off while node B can still reach the arbiter.
Node A will:
realise it can contact neither node B nor the arbiter
assess whether it is possible to keep the replica set going
realise that 1 node (out of 3) does not constitute a majority
revert to Secondary mode
Whereas node B is able to react differently:
realise it cannot contact node A, but still has contact with the arbiter
assess whether it is possible to keep the replica set going
realise that 2 nodes (out of 3) do constitute a majority
take over as Primary
This also illustrates how you should deploy an arbiter to get that benefit:
try to put the arbiter on a system independent of both the data-bearing nodes, to maximise the chance of it still being able to communicate with either throughout network problems
it doesn't need to store data, so you don't need high-spec hardware
Just 1 arbiter is enough to break the deadlock; you don't get any benefit from multiple arbiters
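As a rough sketch, a two-data-node-plus-arbiter set along those lines could be initiated like this in mongosh (hostnames are placeholders; the arbiter sits on a third, independent machine):

```
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node-a.example.net:27017" },                      // data-bearing
    { _id: 1, host: "node-b.example.net:27017" },                      // data-bearing
    { _id: 2, host: "arbiter.example.net:27017", arbiterOnly: true }   // tie-breaker only
  ]
})
```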
Take the example of a 2-member replica set: in the event of a network partition, i.e., the 2 members lose touch with each other, who gets to become the primary? There will be a tie and a need for a tie-breaker. That would not be the case if we had a 3-member replica set: the group that contains two nodes will win and one of them will become primary. That is the basis of the requirement for an odd number of nodes in a replica set. As for the arbiter, it happens to be lightweight, so one can save money by putting it on a smaller machine, since we do not expect it to hold any data; we just need it to be present to vote for a primary.

Number of arbiters in replication set

In the MongoDB tutorial on deploying a geographically distributed replica set it is said that
Ensure that a majority of the voting members are within a primary facility, “Site A”. This includes priority 0 members and arbiters.
I am confused by the plural "arbiters" there, since elsewhere in the documentation I found that
There should only be at most one arbiter configured in any replica set.
So how many arbiters at most can there be in a replica set? If more than one arbiter is allowed, then what is the point of having more than one arbiter in a replica set?
Introduction
The fact that "arbiters" is written in the plural in the first sentence is a matter of style, not a technical statement.
You really should have at most 1 arbiter. Iirc, you technically could have more, but to be honest with you, I never tried it. But let's assume you could for the sake of the explanation below.
You seem to be a bit unsure here, but correctly assume that it does not make any sense to have more than one arbiter.
Recap: What are arbiters there for?
An arbiter exists to provide a quorum in elections.
Take a replica set with two data bearing nodes. That setup will run as expected as long as both instances are up – they form a quorum of 2 votes of 2 original members of a replica set. If one machine goes down, however, we only have 1 vote of 2 originally present, which is not a qualified majority, and the data bearing node still running will subsequently revert to secondary state, making writes impossible.
To prevent that, an arbiter is added to the mix. An arbiter does nothing more than to track which of the available data bearing nodes has the most current data set available and vote for that member in case of an election. So with our replica set with two data bearing nodes, in order to get a qualified majority of votes in case 1 of the nodes forming the replica set goes down, we only need 1 arbiter, since 2/3 votes provides a qualified majority.
Arbiters beyond 2 data bearing nodes
If we had a replica set with 3 data bearing nodes, we would not need an arbiter, since we have 3 voting members, and if 1 member goes down, the others still form a qualified majority needed to hold an election.
A bit more abstractly, we can find out whether we need an arbiter by putting the number of votes present in a replica set into the following "formula":
needArbiter = originalVotes - floor(originalVotes/2) <= originalVotes / 2
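To make the pattern visible, here is a quick JavaScript evaluation of that rule for a few vote counts (nothing beyond the formula above):

```
// Need an arbiter exactly when the number of votes is even.
[2, 3, 4, 5].forEach(function (originalVotes) {
  var needArbiter = originalVotes - Math.floor(originalVotes / 2) <= originalVotes / 2;
  print(originalVotes + " voting members -> need arbiter: " + needArbiter);
});
```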
If we put in an additional arbiter, the number of votes would be 4: 3 data bearing nodes and 1 arbiter. One node goes down, no problem. Second node goes down, and the replica set will revert to secondary state. Now let's assume that one of the two nodes that are down is the arbiter: we would be stuck in secondary state even though two data bearing nodes are still running, which on their own, without the extra arbiter vote, would have been enough for a quorum. We'd have to pay for and maintain an additional arbiter without gaining anything from it. So in order to provide a qualified majority again, we would need to add yet another arbiter (making 2 now), without any benefit other than the fact that two arbiters can go down. You basically would need additional arbiters to prevent situations in which the existence of the arbiter you did not need in the first place becomes a problem.
Now let's assume we have 4 data bearing nodes. Since they cannot form a qualified majority when 2 of them go down, that would pretty much be the same situation as with a replica set with 3 data bearing nodes, just more expensive. So in order to allow 2 nodes of the replica set to be down at the same time, we simply add an arbiter. Do more arbiters make sense? No, even less than with a replica set with two or 3 data bearing nodes, since the probability that 2 data bearing nodes and the arbiter will fail at the same time is very low. And you'd need an uneven number of arbiters (to keep the total number of votes odd).
Conclusion
Imho, with 4 data bearing nodes, the arbiter reaches its limit of usefulness. If you need a high replication factor, the percentage of money saved by using an arbiter instead of a data bearing node becomes smaller and smaller. Remember, the next step would be 6 data bearing nodes plus an arbiter, so the cost you save is less than 1/6 of your overall costs.
So more generally speaking, the more data bearing nodes you have (the higher your "replication factor" in Mongo terms) the less reasonable it becomes to have additional arbiters. Both from the technical point of view (the probability of a majority of nodes failing the same time becomes lower and lower) and the business point of view (with a high replication factor, the money saved with an arbiter in comparison to the overall costs becomes absurdly small).
Mnemonic:
The lowest uneven number is 1.
I have a scenario where I think having more than 1 Arbiter makes sense.
Problem
I have 3 data bearing nodes in a replicaset. Now I want to distribute my replicaset geographically so that I can mitigate the risk of a datacenter outage.
A 3-node replica set does not solve the problem
Primary Datacenter => 2 Data bearing Nodes
Backup Datacenter => 1 Data bearing Node
If the primary datacenter goes down, two out of the three nodes in the replica set become unavailable, and the data-bearing node in the backup datacenter cannot become primary since a majority is not available. So the 3-node configuration does not solve the problem of a datacenter outage.
5-node replica set
Primary Datacenter => 2 Data bearing Nodes
Backup Datacenter => 1 Data bearing Node
Third Datacenter => 2 Arbiters
In this configuration I am able to sustain outage of any of the three datacenters and still be able to operate.
Obviously, a more ideal configuration would be to have 4 data bearing nodes and 1 arbiter. That would give me redundancy in the backup datacenter as well. However, since a data-bearing node is a much more expensive proposition than an arbiter, going with 3 data-bearing nodes and 2 arbiters makes more sense, and I am happy to forgo the redundancy in the backup datacenter in favor of the cost saving.
For our special case it makes sense to have 2 arbiters. Let me explain: we have 3 data centers, but 1 of these 3 data centers is not suitable for hosting data bearing members. That's why we host 2 arbiters for each replica set in this data center. The 3 data bearing members of the replSet are hosted in the two other data centers (we want to have 3 instead of 2 data bearing members for resilience reasons). If 1 of the 3 data centers goes down or is not reachable due to a network partition, the replSet is still able to elect a primary, so it remains readable and writable. This wouldn't be possible with only 1 or 0 arbiters. Hence, 2 arbiters may make sense.
Let's see how it might look. Here are 2 replSets, each with 3 data bearing members and 2 arbiters across 3 data centers, where DC3 is the restricted data center:
| |DC1 |DC2 |DC3 |
|----|-----|-----|-----|
|rs1 |m1,m2|m3 |a1,a2|
|rs2 |m1 |m2,m3|a1,a2|
If one data center goes down, which replSet member would become primary?
DC1 goes down:
rs1: m3
rs2: m2 or m3
DC2 goes down:
rs1: m1 or m2
rs2: m1
DC3 goes down:
rs1: m1, m2 or m3
rs2: m1, m2 or m3
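For concreteness, the rs1 row of the table above could be initiated like this (a sketch only; hostnames are placeholders, and note that MongoDB generally recommends at most one arbiter per replica set):

```
rs.initiate({
  _id: "rs1",
  members: [
    { _id: 0, host: "dc1-m1.example.net:27017" },                      // DC1
    { _id: 1, host: "dc1-m2.example.net:27017" },                      // DC1
    { _id: 2, host: "dc2-m3.example.net:27017" },                      // DC2
    { _id: 3, host: "dc3-a1.example.net:27017", arbiterOnly: true },   // DC3, restricted
    { _id: 4, host: "dc3-a2.example.net:27017", arbiterOnly: true }    // DC3, restricted
  ]
})
```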

MongoDB sharding, arbiter and cluster setup

Could someone help validate our setup?
Setting up a 4-node MongoDB cluster:
1 primary (writes), 3 secondaries (reads); if the primary goes down, the 3 secondaries can break the tie and elect a secondary to primary.
Will this setup work?
Is an arbiter required in such a scenario?
Once I set it up this way at the outset, then as load increases all I need to do is keep adding nodes in pairs to the cluster. (Adding nodes in pairs will help us keep up with performance and reduce the frequency of cluster changes; also, we are more read-heavy than write-heavy, though at some point we will have to consider scaling out writes as well.)
Help is very much appreciated.
thanks.
Yes, an arbiter is required, otherwise if 2 nodes go down or are otherwise unavailable, you will not have a primary - MongoDB requires a strict majority (>50%) of votes to elect a primary, and in your case that majority number is 3 out of 4 (two out of 4 is not greater than 50%). That number will still be 3 if you add an arbiter, but you will be able to have a primary with 2 data bearing nodes down.
As for why, consider the following possibility:
2 nodes are isolated from the other 2 - they are still up, functional, but cannot talk to each other. There are now 2 votes on either side of this "split" and no way to break the tie - each side is just as valid in terms of voting for a primary, and without the strict majority rule you end up with 2 primaries and no way to resolve writes once the split resolves itself. Add an arbiter to either side of the split and you have no such ambiguity.
This type of scenario has a number of permutations when the number of votes are even which I won't go into here. Suffice it to say that the best practice when running a replica set is to always have an odd number of votes and hence avoid these situations.

Mongodb replica set odd member count vs even member count + an arbiter

I've read quite a bit about MongoDB replica sets and how elections work on a failover. My question is: assuming the client uses a readPreference of primary only, is there any advantage to having an odd number of data-bearing members versus an even number of members plus 1 arbiter?
For example if you have a 3 member replica set you can set all 3 members to be replicas or you can have only 2 replicas and an arbiter (which you can install on a smaller machine). The safety is basically the same, any machine can go down and the replica set is still ok, but if two of them go down then the replica set is in stalemate (it cannot elect a new primary).
The only difference is that in the second case you can use a way smaller machine for the arbiter.
It's actually not true that three data holding nodes provide the same "safety" net as two data holding nodes plus an arbiter.
Consider these cases:
1) one of your nodes loses its disk and you need to fully resync it. If you have three data holding nodes, you can resync off the other secondary instead of the primary (which reduces the load on the primary).
2) one of your nodes loses its disk and it takes you a while to locate a new one. While that secondary is down you are running with ZERO safety net if you had two nodes and an arbiter, since you only have one node with data left; if anything happens to it, you are toast.
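If you do end up resyncing off the other secondary, one way to steer the sync source (a sketch; the hostname is a placeholder, and rs.syncFrom only sets a temporary preference) is:

```
// Run on the member being resynced: ask it to pull its data
// from the surviving secondary rather than the primary.
rs.syncFrom("other-secondary.example.net:27017")
```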