In a 3-node replica set, why, when 2 nodes are down, does the third become SECONDARY and not PRIMARY?
I want to have 2 mongod instances inside a data center and one outside, so that if the data center fails the third (outside) mongod becomes the primary.
Is this possible without an arbiter?
OK, found the answer:
http://tebros.com/2010/11/mongodb-arbiters-with-only-two-replicas/
What happened?! It turns out that when a mongod instance is isolated, it cannot vote for itself to be primary. This makes sense when you think about it: if a network link went down and separated your two replicas, you wouldn't want them both to elect themselves as primary. So in my case, when rep1-1 noticed that it was isolated from the rest of the replica set, it made itself secondary and stopped accepting writes.
Whenever (cluster_participants / 2) + 1 or more nodes are down (assuming you have an odd number of participants), the cluster enters read-only mode. A candidate node needs a majority of all nodes to be elected as primary.
For example, if you have a 5-node cluster and 3 nodes go down, the remaining ones will stay secondary, because none of them is able to get the 3 votes needed.
For more information: http://docs.mongodb.org/manual/core/replication-internals/#replica-set-election-internals
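If you want to confirm this from the isolated member's own shell, a quick check along these lines (a sketch using the standard replica set helpers) shows it reporting a non-primary state:
rs.isMaster().ismaster   // false: this member is not primary
rs.status().myState      // 2 (SECONDARY) rather than 1 (PRIMARY)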
Let's say we have a Mongo cluster (3 or more nodes). We realized that even a quick restart of a secondary node affects the primary. We need to shut down a secondary for a short time for some reason. What is the best/correct procedure (with specific command examples, please)? Should we remove the node from the cluster, or is it enough to just put it into maintenance mode, like this?:
mongocluster:SECONDARY> db.adminCommand({"replSetMaintenance":true})
Does this command affect only the particular secondary node where it was applied?
Do we also need to switch that node to hidden mode for maintenance?
Do we also need to switch that node to delayed replica mode for maintenance?
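For reference, a minimal enter/exit sequence run on the secondary itself might look like the following (a sketch; replSetMaintenance is issued against the admin database of that member):
db.adminCommand({ replSetMaintenance: true })   // member goes into RECOVERING state and stops serving reads
// ... perform the maintenance work ...
db.adminCommand({ replSetMaintenance: false })  // member returns to SECONDARY and catches up from the oplog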
I had a replica set with 5 mongo nodes. I shut down 3 nodes for DR testing and added a new node to the replica set. However, even though one node has a higher priority, I still have 3 secondary nodes and no primary.
Do you know why, what should be done, and how can I fix it?
When you shut down 3 out of 5 nodes, you lost the ability to have a primary since a majority of nodes must vote for the primary.
When you added a new node, the new node had to sync data from the primary before it could become a new primary. Since there was no primary this sync couldn't have happened.
Your DR plans need to ensure there is always a majority of nodes operational.
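As a quick sanity check, you can compare the reachable members against the total from any surviving member's shell (a sketch that assumes every member carries a vote):
rs.status().members.filter(function (m) { return m.health === 1; }).length   // reachable members
rs.status().members.length                                                   // total members
// a primary can only be elected while reachable voting members > total voting members / 2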
Here is a newbie trying to play around with MongoDB. I am trying to demonstrate scaling in my class, meaning I need to show that I have 2 instances of MongoDB up and running, and I need to replicate them, setting one as master and the other as secondary.
Can any of you suggest a simple way to demonstrate that if the primary/master fails, the slave/secondary comes up as the master?
Please keep it as simple as possible, as I am teaching fairly new beginners to MongoDB.
MongoDB replica sets are not master/slave. In order to achieve automatic failover you need to have a majority of nodes in the replica set able to elect a new primary. The minimum number of nodes in your replica set should be 3: either 3 data-bearing nodes, or 2 data-bearing nodes plus an arbiter, which is a node that votes in elections but does not hold data.
A demo using replication alone is more about failover and redundancy than scaling (scaling is better demonstrated with sharding).
If you want a very simple (and non-production) way to stand up a replica set or sharded cluster in a development environment, I would suggest using the mlaunch script which is part of mtools.
For example, to create a 3-node replica set with an arbiter:
mlaunch --replicaset --nodes 2 --arbiter
To create a sharded cluster with 3 shards backed by a replica set (plus mongos and config server):
mlaunch --replicaset --sharded 3
As mentioned in the other comments here, the free MMS Monitoring service is a good way to visualise activity in your MongoDB deployment, and you can use db.shutdownServer() to shut down specific nodes and see the outcome.
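For the shutdown step, a minimal sequence from a mongo shell connected directly to the node you want to stop would be along these lines (a sketch):
use admin
db.shutdownServer()   // or equivalently: db.adminCommand({ shutdown: 1 })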
The easiest way would be to set up the MongoDB monitoring service. Stop the mongod process on one node and watch the other take over. But use replica sets rather than legacy master/slave replication, as they are the recommended approach.
Actually, it is pretty easy
Set up a replica set with 2 "normal" mongods and an arbiter (a configuration sketch follows these steps)
Connect to both of the normal mongods using mongo
Show the output of rs.status(). (Note the "self" field.)
Shut down the current primary
Show the output of rs.status() again and again, until the former secondary is elected primary
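A minimal way to stand up that demo replica set locally might look like this (a sketch with hypothetical ports and dbpaths; create the data directories first and adjust to your environment):
mongod --replSet demo --port 27017 --dbpath /data/demo1
mongod --replSet demo --port 27018 --dbpath /data/demo2
mongod --replSet demo --port 27019 --dbpath /data/demoarb
Then, from a mongo shell connected to localhost:27017:
rs.initiate({
  _id: "demo",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27018" },
    { _id: 2, host: "localhost:27019", arbiterOnly: true }
  ]
})
rs.status()   // run this repeatedly after shutting down the primary to watch the failover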
Another option would be to write a simple Java app which utilizes the driver; put it in an infinite loop which writes one entry every second and prints the number of objects in the database. Catch exceptions and report that a problem occurred. Start the replica set, then start your application. Shut down the primary while the program is running; during the election, exceptions may be thrown. As soon as the former secondary is elected primary, the document count should start to rise again.
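If you prefer not to write a Java program for the demo, a rough mongo shell equivalent of the same idea (hypothetical database and collection names) can be run against the replica set connection string:
while (true) {
  try {
    db.demo.insert({ ts: new Date() });        // one write per second
    print("documents: " + db.demo.count());    // count stalls during the election, then rises again
  } catch (e) {
    print("problem occurred: " + e);
  }
  sleep(1000);
}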
The question might seem ridiculous but it seems to me that a "yes" would be a little crazy.
MongoDB suggests having replica sets of 3 machines. So if the database can run on 1 computer, I need 3 machines, and if tomorrow I need to shard and need 2 machines, I will actually need 6, right?
Or is there something smarter that can be done and that comes for free with MongoDB? (With coding theory, like Hamming codes, the number of extra bits needed is not linear in the total number of bits.)
Please don't hesitate to ask me to reformulate if what I say is not clear.
Thanks in advance for your answers,
Thomas
So there is some really good documentation on the recommended cluster setup in terms of physical instance separation. There are (at least) two things to consider separately. One is replication; for that, see this documentation: http://docs.mongodb.org/manual/core/replica-set-members/
This means you have to have at least two data nodes in a replica set (for HA), and you can have one arbiter, which does not hold data and just participates in elections, as described in the docs linked above. You need an odd number of set members because the primary has to be elected by a majority inside the replica set.
The other aspect is sharding. Sharding needs an additional metadata-maintaining layer, which is achieved through additional processes: config servers and mongos routers. For a sharded production cluster see: http://docs.mongodb.org/manual/core/sharded-cluster-architectures-production/. In this setup the three config servers have to be on separate instances. Also, the two mongos processes cannot reside on the same instance.
So for the minimal alignment, the following have to be considered:
You must not collocate data nodes (the two data nodes in each shard have to be on separate instances)
The arbiter node belonging to a specific shard's replica set has to be on a separate instance from that shard's two data nodes
The three config servers should reside on instances separated from each other
The minimal two mongos processes have to reside on nodes separated from each other
However, while data nodes cannot be collocated, config servers and mongos processes can be on the same instances as the data nodes.
So theoretically one can lay out a sharded cluster with two shards on 4 instances, without breaking any of the recommendations, like this:
Instance 1:
datanode replicaset 1, configserver 1, arbiter replicaset 2
Instance 2:
datanode replicaset 1, configserver 2, mongos 1
Instance 3:
datanode replicaset 2, configserver 3, arbiter replicaset 1
Instance 4:
datanode replicaset 2, mongos 2
Where replicaset 1 represents the first shard and replicaset 2 represents the second.
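Purely as an illustration (hypothetical ports and dbpaths; hostnames and exact flags will depend on your MongoDB version), the processes on Instance 1 in this layout could be started roughly like this:
mongod --replSet shard1 --port 27018 --dbpath /data/rs1    # data node of replicaset 1
mongod --configsvr --port 27019 --dbpath /data/cfg1        # config server 1
mongod --replSet shard2 --port 27020 --dbpath /data/arb2   # arbiter of replicaset 2, added to that set with rs.addArb()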
"Data node" is not terminology that is used for MongoDB in general; I just like to use this name for the mongod processes which handle real data (the primaries and secondaries in a replica set).
Just as a side note, I would not do this. Just start micro instances for the config servers and keep the mongos processes on the application servers.
I'd like to use mongodb to distribute a cached database to some distributed worker nodes I'll be firing up in EC2 on demand. When a node goes up, a local copy of mongo should connect to a master copy of the database (say, mongomaster.mycompany.com) and pull down a fresh copy of the database. It should continue to replicate changes from the master until the node is shut down and released from the pool.
The requirements are that the master need not know about each individual slave being fired up, nor should the slave have any knowledge of other nodes outside the master (mongomaster.mycompany.com).
The slave should be read only, the master will be the only node accepting writes (and never from one of these ec2 nodes).
I've looked into replica sets, and this doesn't seem to be possible. I've done something similar to this before with a master/slave setup, but it was unreliable. The master/slave replication was prone to sudden catastrophic failure.
Regarding replica sets: while I don't imagine you could have a set member that is invisible to the primary (and the other nodes), due to the need for replication, you can tailor a particular node to come pretty close to what you want:
Set the newly-launched node to priority 0 (meaning it cannot become primary)
Set the newly-launched node to 'hidden'
See the MongoDB manual for more information on priority 0 and hidden members.
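A minimal sketch of how that could be configured from the primary's shell (the member index 2 is hypothetical; use the index of the newly launched node as shown by rs.conf()):
cfg = rs.conf()
cfg.members[2].priority = 0    // this member can never become primary
cfg.members[2].hidden = true   // invisible to clients reading through the replica set
rs.reconfig(cfg)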