Two-node MongoDB replica set without an arbiter

Is it possible to create a MongoDB replica set consisting of only 1 primary and 1 secondary member?
I would like to have a delayed replica set member that copies data from the primary with a delay of 24 hours. I know I can put an arbiter on one of the servers (primary or secondary; I know this is not advised, but my only wish is to run this configuration on two servers) and it would run fine, but I want to know if it is possible to leave the arbiter out completely.

Short answer: don't.
Long answer: the way automatic failover works in MongoDB is that a replica set needs a qualified majority to successfully elect a new primary. Delayed members do have votes in elections. So if either of your nodes fails, the replica set finds out that it doesn't have this majority and the current primary steps down, even if it didn't fail itself. So what you essentially do is double the chances of making your replica set fail. An arbiter is a very cheap process in terms of RAM, CPU, and even disk space when run with --smallfiles --nojournal --noprealloc or the equivalent options set in the config file. Note that the mentioned options are safe to use, since an arbiter essentially only checks the heartbeats of the data-bearing nodes. You could put the arbiter on the application server, for example.
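A minimal sketch of that setup, assuming a hypothetical application-server host name (appserver.example.com), a replica set named rs0, and a scratch --dbpath; adjust all of these to your environment:

// On the application server, start the arbiter process (OS shell command shown as a comment):
//   mongod --replSet rs0 --port 27017 --dbpath /data/arbiter --smallfiles --nojournal --noprealloc
// Then, from a mongo shell connected to the current primary:
rs.addArb("appserver.example.com:27017")  // adds the member with arbiterOnly: true
rs.status()                               // the new member should report as ARBITER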
Disclaimer: the following procedure is strongly discouraged. Proceed at your own risk.
You could set the votes of the delayed server to 0. This way, if the delayed member fails, the undelayed node will call for an election, come to the conclusion that it is the only online node of the replica set and that it holds the majority of votes (1/1), and will continue to work as expected. This course of action needs some attention, since if you later add a member to the replica set you will have an even number of votes again, which makes it necessary to reconfigure the replica set. It also has serious implications in the case of network partitions. Again: use at your own risk.
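A hedged sketch of that reconfiguration in the mongo shell; it assumes the delayed member sits at index 1 of the members array (check rs.conf() first):

// Run on the primary:
cfg = rs.conf()
cfg.members[1].votes = 0     // the delayed member no longer votes
cfg.members[1].priority = 0  // a non-voting member must not be electable
rs.reconfig(cfg)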

Yes, it is possible, but not recommended. The caveat of this approach is that there are no automatic failovers.
If your primary goes down, you will have to manually promote the other server to primary.
If you are keeping your secondary only as a mirror of your primary and you are fine with manual failover, then it should work for you.
More info here:
http://openmymind.net/Does-My-Replica-Set-Need-An-Arbiter/

Yes, you can; all you really need to do is make the member ineligible to be elected primary.
There is documentation on how to make sure a member cannot be elected as primary here: http://docs.mongodb.org/manual/tutorial/configure-secondary-only-replica-set-member/
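Following that tutorial, a sketch of a non-electable member that is also delayed by 24 hours, as the question asks. Field names are from the MongoDB 2.x era of this thread (newer versions rename slaveDelay to secondaryDelaySecs), and index 1 is assumed:

cfg = rs.conf()
cfg.members[1].priority = 0        // can never be elected primary
cfg.members[1].hidden = true       // recommended for delayed members: hides it from clients
cfg.members[1].slaveDelay = 86400  // replicate with a 24-hour delay
rs.reconfig(cfg)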

In this case, the best option is to add an arbiter. I tried the votes approach before, but with a two-node replica set you can run into some issues with sync.

ReplicaSet with only 2 nodes

I've got two servers with a mongo instance on each.
On the first server I set the mongo instance as primary, and on the second, mongo is secondary.
I don't have the possibility of getting another server to use as an arbiter.
How can I use MongoDB with just two servers?
If the primary fails, does the secondary automatically become primary?
Thanks!
How can I use MongoDB with just two servers?
If you really want to go down this road, which, may I add, is a very bad road, then you can set your primary to have no votes, in which case the only voting member would be the secondary in the event of a failover. However, this then causes another problem: in the event of a secondary failure, you cannot have a primary elected (failure of any member will trigger an election).
So even though with 2 members you can account for one failure, you cannot account for both equally.
It is not good practice to have an even number of members in a replica set, because it leads to election problems. In order to be elected, a node is required to get a majority of votes. If you have two members, a node needs to get two votes, which is impossible when at least one node is down. There are several options:
add a lightweight arbiter node to the replica set on the first or second server, so you would have three members in the replica set. It doesn't protect you in every network-partition scenario, but it is a bit better than just having a two-node replica set.
use the replica set in master-slave mode, i.e. without automatic recovery; you could achieve this by setting votes: 2 for the primary. If the primary goes down, you need to reconfigure the replica set and set votes: 2 for the secondary, after which the secondary would be elected as primary. So you would have the option of manual recovery, as sketched below.
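A hedged sketch of that manual-recovery sequence in the mongo shell. Note that votes greater than 1 were allowed in the MongoDB versions this thread dates from but are rejected by modern releases, so treat this as historical; member indexes are assumed:

// Normal operation: the primary (index 0) holds 2 of the 3 votes on its own.
cfg = rs.conf()
cfg.members[0].votes = 2
rs.reconfig(cfg)

// After the primary dies, run this on the surviving secondary:
cfg = rs.conf()
cfg.members[0].votes = 1
cfg.members[1].votes = 2         // the secondary now holds the majority
rs.reconfig(cfg, {force: true})  // force is required because there is no primary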

MongoDB ReplicaSet - PRIMARY role falls to SECONDARY when only PRIMARY is left

I am investigating using MongoDB ReplicaSet for high availability.
But I just discovered that in a ReplicaSet with 3 nodes, if the PRIMARY mongod is the only one left (that is, the 2 other mongod instances died or were shut down), then after several seconds it switches its role to SECONDARY and accepts writes no more. That makes the Replica Set worth less than a single instance.
I know & understand about PRIMARY election, but if the PRIMARY role is fixed to a server (by setting its priority to, say, 10) and the other servers become inaccessible (for example due to network problems), why does the main server just give up?!
Tested with 2.4.8 on Windows (mongodb-win32-x86_64-2008plus-2.4.8) and Linux (CentOS), and with 2.0.x on Linux.
BOUNTY STARTED:
If the replica set gives up when the PRIMARY feels alone, what are the alternatives to ensure 100% availability? Or maybe there is a special configuration needed for this case. The current implementation makes the ReplicaSet fragile in case of network problems.
UPDATED:
Alas, I did not describe this scenario before: first #3 goes down (PRIMARY & SECONDARY are left), and then after a while the SECONDARY goes down as well. Then the PRIMARY really does just "give up", because it already knows that #3 has been unavailable for some time. This was actually tested in my test environment.
var rsconfig = {
  "_id": "rs4",
  "members": [
    { "_id": 0, "host": "localhost:27041", "priority": 10 },
    { "_id": 1, "host": "localhost:27042" },
    { "_id": 2, "host": "localhost:27043", "arbiterOnly": true }
  ]
}
printjson(rsconfig)
rs.initiate(rsconfig)
We initially thought to put the SECONDARY and #3 (that is, the ARBITER) on the same server, but because of the issue in the title, we cannot use such a configuration.
Thanks to Alan Spencer for first explaining the logic that MongoDB takes.
This is expected: since a majority of the members are down, MongoDB does not assume the last remaining member is consistent.
When you have a majority of the members down, there are a couple of options: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
You say that when the primary is cut off from the other two nodes it should stay up, otherwise write availability is lost, but that's not necessarily the case. If the other two nodes are actually up and on the other side of the network partition, then they have elected a new primary (as two out of three are a majority) and it is that primary that is accepting new writes.
If the previous primary continued to accept writes, you would have potentially conflicting data, for which there is no mechanism to resolve. Since a MongoDB replica set is a single-primary architecture (as opposed to a multi-master system), the election mechanism assures that there cannot be two primaries at the same time.
From the point of view of the two secondaries, a network partition is the same as the primary being unavailable, and from the primary's point of view, a network partition is indistinguishable from "both other nodes are down". The primary steps down because, in the case of a network partition, there may already be another primary on the other side of it; by stepping down it assures there cannot be two primaries.
It is not the case that the "replica set" gives up when the primary feels alone - the reason the primary steps down when it feels alone is precisely to preserve the integrity of the replica set as a whole. It is not true that setting a high priority score fixes a role to a node - a primary can only be elected via consensus among a majority; all priority scores do is influence the election when all other things are equal.
I highly recommend the excellent "call me maybe" series as reading to understand the challenges of write availability in a distributed system: http://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions
Just to chime in on the answers: the behavior in this scenario is expected. MongoDB uses a leader-election algorithm to elect the new leader, so if there is no majority you cannot elect a leader, and hence no writes.
Your only option at the point where 2 nodes are down is to reconfigure your replica set as a 1-node replica set to make it writeable. You can do this using the rs.reconfig() command with just one server. However, please note that this should only be a temporary, emergency configuration. For the longer term, you should have an odd total number of nodes (3+) in your replica set configuration.
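A hedged sketch of that emergency step, following the reconfigure-with-unavailable-members tutorial linked above; the surviving member is assumed to sit at index 0 of the members array:

// Run in the mongo shell on the surviving node (currently SECONDARY):
cfg = rs.conf()
cfg.members = [cfg.members[0]]   // keep only the surviving member
rs.reconfig(cfg, {force: true})  // force is required because there is no primary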
Try using arbiters. Most documents say to use just one, but in your case you need enough voting members left to win the election.
From http://docs.mongodb.org/manual/core/replica-set-architectures/ :
Fault tolerance for a replica set is the number of members that can become unavailable and still leave enough members in the set to elect a primary. In other words, it is the difference between the number of members in the set and the majority needed to elect a primary. Without a primary, a replica set cannot accept write operations. Fault tolerance is an effect of replica set size, but the relationship is not direct.
More on elections: http://docs.mongodb.org/manual/core/replica-set-elections/
More on arbiters: http://docs.mongodb.org/manual/faq/replica-sets/#how-many-arbiters-do-replica-sets-need

What is the advantage to explicitly connecting to a Mongo Replica Set?

Obviously, I know why to use a replica set in general.
But, I'm confused about the difference between connecting directly to the PRIMARY mongo instance and connecting to the replica set. Specifically, if I am connecting to Mongo from my node.js app using Mongoose, is there a compelling reason to use connectSet() instead of connect()? I would assume that the failover benefits would still be present with connect(), but perhaps this is where I am wrong...
The reason I ask is that, in Mongoose, the connectSet() method seems to be less documented and less widely used. Yet, I cannot imagine a scenario where you would NOT want to connect to the set, since it is recommended to always run Mongo on a 3x+ replica set...
If you connect only to the primary then you get failover (that is, if the primary fails, there will be a brief pause until a new primary is elected). Replication within the replica set also makes backups easier. A downside is that all writes and reads go to the single primary (a MongoDB replica set has only one primary at a time), so it can become a bottleneck.
Allowing connections to slaves, on the other hand, allows you to scale for reads (not for writes - those still have to go to the primary). Your throughput is no longer limited by the spec of the machine running the primary node but can be spread across the slaves. However, you now have the new problem of stale reads; that is, there is a chance that you will read stale data from a slave.
Now think hard about how your application behaves. Is it read-heavy? How much does it need to scale? Can it cope with stale data in some circumstances?
Incidentally, the point of a minimum of 3 members in the replica set is to offer resiliency and safe replication, not to provide multiple nodes to connect to. If you have 3 nodes and you lose one, you still have enough nodes to elect a new primary and keep replicating to a backup node.
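On the Mongoose side of the question, a minimal sketch of a set-aware connection; the host names, database name, and set name are made up for illustration, and current Mongoose versions accept a multi-host URI through plain connect() rather than the old connectSet():

var mongoose = require('mongoose');

// Seed list of several members: the driver discovers the full set from any
// reachable member and follows failover to whichever node becomes primary.
mongoose.connect('mongodb://db1.example.com,db2.example.com,db3.example.com/mydb?replicaSet=rs0');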

How to turn off Mongo (in Replica Set) properly without DB down?

My server setup is like this:
2 servers. MongoDB has a replica set spanning both servers; each is one node.
And then I have my node.js server connect to MongoDB.
What happened was: when I killed the secondary server (shutting the server down), the MongoDB on the primary was still up, but the Node.js server then had connection issues with MongoDB. Even after I added the server back, it didn't work. I use mongoose and connect-mongo.
So, what happened? How do I shut down a Mongo node properly?
If you have a replica set with 2 nodes, when one node goes down the other will demote itself to secondary. If you aren't connecting with slaveOk set to true, then you won't be able to read (and in either case you won't be able to write); see the slaveOk sketch at the end of this answer.
This is a safety measure imposed by MongoDB, which requires that a majority (meaning half plus one) of a replica set be able to see one another in order to ensure that a primary can be safely elected. If a majority cannot be seen, the nodes in the minority cannot know whether the "other half" have elected a primary. Having two primaries at the same time would be Very Bad (TM), as that could lead to conflicting updates.
In situations where you only want to run two nodes, you can also run an arbiter to break ties in the case that one node goes down or becomes otherwise invisible to the replica set. An arbiter is a normal mongod process, but does not store any data -- essentially it only participates in elections, and is idle otherwise. In a replica set with 2 "normal" nodes and one arbiter, either one of the two data-holding nodes can go down without losing a majority.
For more, see the MongoDB documentation on replica sets and the documentation on arbiters.
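As mentioned above, a hedged sketch of slaveOk in the 2.x-era mongo shell (drivers of that era expose an equivalent option; the collection name is made up):

rs.slaveOk()                     // allow this shell session to read from a SECONDARY
db.mycollection.find().limit(1)  // reads now work; writes still fail without a primary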
If your primary is still primary after you take down the secondary, it's a Node driver issue. In any case, you should always have an arbiter with an even number of replica nodes; the "why" is well documented in MongoDB's docs.
In case this is a node.js issue: which version of node-mongodb-native are you using? I had some different replica-set issues 2 months ago, but they have been fixed in the latest versions. The last replica-set issue of the driver was closed on the 9th of September; you should give it a try with the latest tagged version (V0.9.6.18 as I'm writing this).