MongoDB - All nodes secondary

All of the nodes in our cluster are "secondary" and no node is stepping up as "primary".
How do I force a node to become primary?
===SOLUTION===
We had 4 nodes in our replica set, when a replica set is supposed to have an odd number of voting members.
Remove a node so that you have an odd number of nodes:
rs.config()
Edit the member list in a text editor, removing one of the servers.
config = POST_MODIFIED_LIST_HERE
rs.reconfig(config, {force:true})
Restart the 'mongod' service on all nodes and bring them back up.
Done.
If this doesn't fix it, try setting a higher priority on one of the nodes.
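For illustration, here is roughly what that sequence looks like in the mongo shell, including the optional priority tweak. The member index is a placeholder for whichever member you are removing, and {force: true} should only be used when the set has no primary:

// Run on one of the secondaries; the member index below is an example
config = rs.config()                 // fetch the current configuration
config.members.splice(3, 1)          // drop the 4th member (index 3) to get back to an odd count
config.members[0].priority = 2       // optional: favour this node in elections (default priority is 1)
rs.reconfig(config, {force: true})   // force is required when the set has no primary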

You can use the following instructions available at MongoDB's website:
http://www.mongodb.org/display/DOCS/Forcing+a+Member+to+be+Primary

If you have an even number of nodes, one answer is to remove one. Another is to add an arbiter, which doesn't hold a copy of the data but participates in the cluster purely to vote and break ties. That way you get an odd number of votes and guaranteed elections, while keeping the availability and capacity of four data-bearing nodes.
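For example, adding an arbiter is a two-step affair. On the arbiter host (replica set name, port, and dbpath here are placeholder examples):

mongod --replSet rs0 --port 27017 --dbpath /data/arb

Then, from a mongo shell connected to the primary (hostname is a placeholder):

rs.addArb("arbiter.example.net:27017")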

Related

Standard availability for a MongoDB replica set cluster of 3 nodes

I have set up a MongoDB replica set with one primary, one secondary, and one arbiter node, installed on three independent AWS instances. I need to document the overall availability of the replica set cluster formed per the aforementioned configuration, but I don't have any reliable/standard data to establish it.
Is there any standard data that can be referred to in order to establish the availability of the overall cluster/individual nodes in the above case?
Your configuration will guarantee continued availability, even after one node goes down. However, availability after that depends on how quickly you can replace the downed node, and that is up to your monitoring and maintenance abilities.
If you do not notice for a while that a node is down, or if your procedure for replacing the node takes a long time (you may need to commission a new VM, install MongoDB, reconfigure the replica set, and allow time for the new node to sync), then another node may go down and leave you with no availability.
So your actual availability depends on the answers to four questions:
Which replica set configuration do you use? That determines how many nodes can go down before the replica set stops being available.
How likely is it that any single node will go down or lose its connection to the rest?
How good is your monitoring, so you notice there is a problem?
How fast are your processes for repairing the problem?
The answer to the first one is straightforward; you have decided on the minimum of two data-bearing nodes and one arbiter.
The answer to the second one is not quite straightforward; it depends on the reliability of each node, the connections between them, and whether two or more are likely to go down together (for example, if they are in the same availability zone).
The third and fourth, we can't help you with; you'll have to assess those for yourself.
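To put rough numbers on this, you can estimate the probability that the set can still elect a primary if you assume each member fails independently with the same uptime p. That is a simplifying assumption (correlated failures, e.g. nodes sharing an availability zone, break it), but it gives a baseline. A minimal sketch in shell JavaScript:

// P(a primary can be elected) = P(all 3 members up) + P(exactly 2 of 3 up)
function replicaSetAvailability(p) {
  return p * p * p + 3 * p * p * (1 - p);
}
print(replicaSetAvailability(0.999)); // ~0.999997 under this per-node failure model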

MongoDB two-node deployment

I know MongoDB suggests a minimum of 3 members for a replica set.
Can I use two servers to run MongoDB with 4 replica set members, in order to prevent write failure when one node is down?
My idea is to run two instances on each server to work around it:
Mongodb 1
Replica set member 1 (Primary)
Replica set member 2 (Secondary)
Mongodb 2
Replica set member 3 (Secondary)
Replica set member 4 (Arbiter)
If one server goes down, the other still has two replica set members running.
Can anyone comment on my idea?
I would like to know whether there are any issues/risks to consider.
Thanks!
This is a bad idea. Your suggested configuration would be counter-productive for two main reasons:
Running two instances of MongoDB on the same host will lead to both of them running slowly, competing for memory, CPU, disk I/O, etc.
If one of the hosts goes down, the two nodes on the other will not be able to continue the replica set, because with only two of the four nodes running they don't have the majority necessary to elect a primary.
MongoDB is intended to be run effectively on multiple hosts, so it is designed for that; even if you could find a way to shoe-horn a functioning replica set onto only two hosts, you would be working against MongoDB's design and struggling to make it work effectively.
Really, your best option is to run a single mongod instance on each of your two hosts, and to run an arbiter on a third (separate) host; that kind of normal 3-host configuration is an effective and straightforward way of achieving both data replication and high availability.
Please read https://docs.mongodb.com/manual/replication/
An even number of members in a set is not recommended. The only reason to add an arbiter is to resolve this issue: it increases the number of members by 1 to make it odd. It makes no sense to add 2 arbiters.
Similarly, there is no point in putting an arbiter on the same server as any other member. It is quite lightweight, so you can co-locate it with an application server, for example.
With 2 DB servers, the best replica set configuration would be:
Mongodb 1
Primary
Mongodb 2
Secondary
Any other server
Arbiter
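As a minimal sketch, that layout expressed as a replica set configuration (all hostnames are placeholders) would look like:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb1.example.net:27017" },  // data-bearing
    { _id: 1, host: "mongodb2.example.net:27017" },  // data-bearing
    { _id: 2, host: "appserver.example.net:27017", arbiterOnly: true }  // votes only, stores no data
  ]
})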
The simple answer is: no, you cannot do it.
The reason is "majority". If you lose one of those two servers, you don't have a majority of votes online.
That is why 3 is the minimum. It can be 3 data-bearing nodes, or 2 data-bearing nodes and one arbiter. Each of those has one vote, and if you lose one of them, the replica set still has 2/3 of the votes, which is a majority. The same holds for 3/5 or 4/7.
Values of 1/2 (or 2/4) are not a majority.
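The arithmetic behind "majority" is simply more than half of the configured voters, which you could sketch as:

// Strict majority of v configured voting members
function majority(v) {
  return Math.floor(v / 2) + 1;
}
print(majority(2)); // 2 -> losing either node loses the set
print(majority(3)); // 2 -> one node may fail
print(majority(4)); // 3 -> still only one node may fail, no better than 3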

Simple explanation of the Arbiter's role in a given MongoDB replica set

I came across the official MongoDB site explaining why a replica set should have an odd number of members. I also came across the term Arbiter on the same site; based on my understanding, it cannot be elected as primary, but it does participate in elections (from https://docs.mongodb.com/manual/core/replica-set-arbiter/).
There is also a related post, Why do we need an 'arbiter' in MongoDB replication?, which brings in the CAP theorem and makes things even more complicated.
First of all, why do we need an odd number of members? Also, can someone explain what this Arbiter is and what its role is in a given replica set, in simple layman's English?
Thanks in advance.
In short: it is to stop the two normal nodes of the replica set getting into a split-brain situation if they lose contact with each other.
MongoDB replica sets are designed so that, if one or more members go down or lose contact, the other members are able to keep going as long as between them they have a majority. The majority clause is important: without it, you might have a situation where the network is split in two, and the nodes on each side of the partition think they are still carrying on the replica set, ending up with different sets of data.
So to avoid the split-brain problem, the nodes of a replica set will not continue if they can't command an absolute majority. For example, take a replica set of two nodes, A and B. If they lose communication with each other, the outcome is symmetrical: each one will reason the same way:
realise it has lost communication with the other
assess whether it is possible to keep the replica set going
realise that 1 node (out of 2) does not constitute a majority
revert to Secondary mode
The difference an Arbiter makes
If there is a third node, then even if the two main nodes lose contact with each other, one of them will still be in contact with the arbiter. This allows the two main nodes to reach different decisions, keeping the replica set going while avoiding the split-brain problem.
Consider a 3-node replica set: data-bearing nodes A and B, plus an arbiter. Whichever way the network partitions, one data node will still be in contact with the arbiter; suppose node A is cut off while node B can still reach the arbiter.
Node A will:
realise it can contact neither node B nor the arbiter
assess whether it is possible to keep the replica set going
realise that 1 node (out of 3) does not constitute a majority
revert to Secondary mode
Whereas node B is able to react differently:
realise it cannot contact node A, but still has contact with the arbiter
assess whether it is possible to keep the replica set going
realise that 2 nodes (out of 3) do constitute a majority
take over as Primary
This also illustrates how you should deploy an arbiter to get that benefit:
try to put the arbiter on a system independent of both the data-bearing nodes, to maximise the chance of it still being able to communicate with either throughout network problems
it doesn't need to store data, so you don't need high-spec hardware
Just 1 arbiter is enough to break the deadlock; you don't get any benefit from multiple arbiters
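You can watch this decision play out with rs.status(): during a partition, the member that can still see a majority reports itself PRIMARY, while the isolated one drops to SECONDARY. For example, from any reachable member:

// Print each member's current state as this node sees it
rs.status().members.forEach(function (m) {
  print(m.name + " -> " + m.stateStr);
});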
Take the example of a 2-member replica set: in the event of a network partition, i.e. the 2 members lose touch with each other, who gets to become the primary? There will be a tie, and a need for a tie-breaker. That would not be the case with a 3-member replica set: the group that contains two nodes wins, and one of them becomes primary. That is the basis of the requirement for an odd number of nodes in a replica set. As for the arbiter, it is lightweight, so you can save money by running it on a smaller machine, since it is not expected to hold any data; it just needs to be present to vote for a primary.

How MongoDB determines majority if a node fails during a write operation

Suppose I have set w=majority as the write concern and a node fails during a write operation.
Will the majority change according to the currently alive nodes?
That is, suppose there are 3 nodes, so the majority is 2. If a node fails during a write operation, will the majority decrease, or will it remain the same and wait for the node to come back up?
The majority of a replica set is determined based on replica set configuration, not its current running state.
In other words, if you have a three-node replica set configured, then the majority is always two. If one node is offline, two is still the majority. If two nodes are offline, two is still the majority, and it cannot be satisfied until one of the offline nodes comes back online.
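Concretely, on a three-member set a write with w: "majority" always waits for two acknowledgements, however many members are currently up; adding a wtimeout stops it blocking forever when that majority cannot currently be met. A sketch (collection name is an example; syntax per MongoDB 2.6+ shells):

// Succeeds with one node down (2 of 3 is still a majority);
// with two nodes down it waits until wtimeout and then reports an error.
db.orders.insert(
  { item: "abc", qty: 1 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)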

How to turn off Mongo (in a replica set) properly without taking the DB down?

My server setup is like this:
2 servers; MongoDB runs a replica set across both servers, each being one node.
I then have my Node.js server connect to MongoDB.
What happened was: when I killed the secondary server (shutting the machine down), the MongoDB on the primary was still up, but the Node.js server then had connection issues with MongoDB. Even after I added the server back, it didn't work. I use mongoose and connect-mongo.
So, what happened? How do I shut down a Mongo node properly?
If you have a replica set with 2 nodes, when one node goes down the other will demote itself to secondary. If you aren't connecting with slaveOk true, then you won't be able to read (and in either case you won't be able to write).
This is a safety measure imposed by MongoDB, which requires that a majority (meaning half plus one) of a replica set be able to see one another in order to ensure that a primary can be safely elected. If a majority cannot be seen, the nodes in the minority cannot know whether the "other half" have elected a primary. Having two primaries at the same time would be Very Bad (TM), as that could lead to conflicting updates.
In situations where you only want to run two nodes, you can also run an arbiter to break ties in the case that one node goes down or becomes otherwise invisible to the replica set. An arbiter is a normal mongod process, but does not store any data -- essentially it only participates in elections, and is idle otherwise. In a replica set with 2 "normal" nodes and one arbiter, either one of the two data-holding nodes can go down without losing a majority.
For more, see the MongoDB documentation on replica sets and the documentation on arbiters.
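On the "how to shut down properly" part of the question: step the node down first if it is primary, then stop mongod cleanly from the shell rather than killing the process. A minimal sketch:

rs.stepDown()                                // only needed if this node is currently primary
db.getSiblingDB("admin").shutdownServer()    // cleanly stops this mongod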
If your primary is still primary after you take down the secondary, it's a Node driver issue. In any case, you should always have an arbiter with an even number of replica nodes; the "why" is well documented in MongoDB's docs.
In case this is a Node.js issue: which version of node-mongodb-native are you using? I had some different replica set issues 2 months ago, but those have been fixed in the latest versions. The last replica set issue in the driver was closed on the 9th of September; you should give the latest tagged version a try (V0.9.6.18 as I'm writing this).