Let's say we have a Mongo cluster (3 or more nodes). We've realized that even a quick restart of a Secondary node affects the Primary. We need to shut down a Secondary for a short time for some reason. What is the best/correct procedure (with specific command examples, please)? Should we remove the node from the cluster, or is it enough to just put it into maintenance mode, like this?:
mongocluster:SECONDARY> db.adminCommand({"replSetMaintenance":true})
Does this command affect only the particular Secondary node it was run on?
Do we also need to switch that node to hidden mode for maintenance?
Do we also need to switch that node to delayed replica mode for maintenance?
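For reference, a minimal sketch of what maintenance mode looks like from the mongo shell, connected directly to the secondary in question (as far as I know, the command only changes the state of the member it is run on):
mongocluster:SECONDARY> db.adminCommand({"replSetMaintenance": true})
// the member reports RECOVERING while you do the maintenance work
mongocluster:SECONDARY> db.adminCommand({"replSetMaintenance": false})
// the member returns to SECONDARY and resumes replication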
I am trying to configure an Active/Passive cluster with two nodes (using OpenShift). The second, passive node should be a hot standby; in other words, it is up and running but not doing anything until the first node dies. Then the passive node becomes active and a new passive node is started.
I have read the High Availability documentation, however it just seems to cover the theory. Furthermore, it seems like overkill (I am thinking there might be an easier way to meet my goal).
Where would I start?
What you are asking for goes against the usual practice for how Kubernetes/OpenShift is used. You wouldn't have hot standby nodes; you would always use all nodes in the cluster. You would then allow for enough additional capacity in your cluster such that losing a node doesn't cause a problem, as the other nodes would have enough capacity to then run the applications. In this scenario the Kubernetes scheduler would automatically restart any applications which were on a failed node on the other nodes in the cluster, without you needing to perform any explicit failover steps.
So don't try to do anything special: set up your cluster with the two nodes, with applications distributed across both. If you need the ability to run with only a single node, make sure it has enough capacity to run everything. If over time you add more applications and one node is not enough, add a third node, with all three being used in the normal case. You can then again handle the failure of a single node.
Here is a newbie trying to play around with MongoDB. I am trying to demonstrate scaling in my class, meaning I need to show that I have 2 instances of MongoDB up and running, and I need to replicate them, setting one as master and the other as secondary.
Can any of you suggest a simple way to demonstrate that if the primary/master fails, the slave/secondary comes up as the master?
Please keep it as simple as possible, as I am teaching fairly new beginners to MongoDB.
MongoDB replica sets are not master/slave. In order to achieve automatic failover you need to have a majority of nodes in the replica set able to elect a new primary. The minimum number of nodes in your replica set should be 3, which can either be 3 data-bearing nodes or 2 data-bearing nodes and an arbiter, which is a node that votes in elections.
A demo using replication alone is more about failover and redundancy than scaling (better demo'd with sharding).
If you want a very simple (and non-production) way to stand up a replica set or sharded cluster in a development environment, I would suggest using the mlaunch script which is part of mtools.
For example, to create a 3-node replica set with an arbiter:
mlaunch --replicaset --nodes 2 --arbiter
To create a sharded cluster with 3 shards, each backed by a replica set (plus a mongos and config server):
mlaunch --replicaset --sharded 3
As mentioned in the other comments here, the free MMS Monitoring service is a good way to visualise activity in your MongoDB deployment, and you can use db.shutdownServer() to shut down specific nodes to see the outcome.
The easiest way would be to set up the MongoDB monitoring service. Stop the mongod process on one and watch the other take over. But use replica sets rather than master/secondary, as they are the recommended approach.
Actually, it is pretty easy
Set up a replica set with 2 "normal" mongods and an arbiter
Connect to both of the normal mongods using mongo
Show the output of rs.status(). (Note the self field.)
Shut down the current primary
Show the output of rs.status() again and again, until the former secondary is elected primary
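A rough sketch of what those steps look like in the mongo shell (the setup and ports are a hypothetical local test deployment):
// in a shell connected to the current primary
rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr); })
db.getSiblingDB("admin").shutdownServer()   // take the primary down
// in a shell connected to the other data-bearing node, repeat until it reports PRIMARY
rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr); })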
Another option would be to write a simple Java app which utilizes the driver: put it in an infinite loop which writes one entry every second and prints the number of documents in the database. Catch exceptions and report that a problem occurred. Start the replica set, then start your application. Shut down the primary while the program is running; during the election, exceptions may be thrown. As soon as the former secondary is elected primary, the document count should start to rise again.
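The original suggestion is a Java app, but the same idea can be sketched directly in the mongo shell against a hypothetical test collection; start it against the replica set and then kill the primary while it runs:
// connect using the replica set seed list, e.g. mongo --host demo/host1:27017,host2:27018 test
while (true) {
    try {
        db.demo.insert({ ts: new Date() });              // one write per second
        print("documents so far: " + db.demo.count());
    } catch (e) {
        print("a problem occurred: " + e);               // expected while the election runs
    }
    sleep(1000);
}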
We are using MongoDB in a production environment and now, due to some issues with the current servers, I'm going to change servers and start a new MongoDB instance.
We have a replica set and a single mongod instance (two different MongoDB networks for different purposes). Now, first I should migrate the single mongod instance and then the whole replica set to the new server.
What I want to know is, how can I migrate both instances with no down-time? I don't want to shutdown the server or stop write operations.
Thanks in advance.
So first of all, you should never run MongoDB as a single instance in production. At a minimum you should have 1 primary, 1 secondary and 1 arbiter.
Second, even with a replica set you will always have a bit of write downtime when you switch primaries, as writes are not possible during the election process. From the docs:
IMPORTANT: Elections are essential for independent operation of a replica set; however, elections take time to complete. While an election is in process, the replica set has no primary and cannot accept writes. MongoDB avoids elections unless necessary.
Elections are going to occur when, for example, you bring down the primary to move it to a new server or virtual instance, or when you upgrade the database version (like going from 2.4 to 2.6).
You can keep downtime to a minimum with an existing replica set by setting the appropriate options to allow queries to run against secondaries. Again from the docs:
Maintaining availability during a failover. Use primaryPreferred if you want an application to read from the primary under normal circumstances, but to allow stale reads from secondaries in an emergency. This provides a “read-only mode” for your application during a failover.
This takes care of reads at least. Writes are best dealt with by having your application retry failed writes, or queue them up.
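For example (a sketch; the collection name is hypothetical), the read preference can be set from the mongo shell, or via the equivalent option in your driver:
db.getMongo().setReadPref("primaryPreferred")    // for the whole connection
db.orders.find().readPref("primaryPreferred")    // or per query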
Regarding your standalone instance, the documented procedure for converting to a replica set is well tested and can be completed very quickly with minimal downtime:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
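In outline, that tutorial boils down to something like this (host names, paths and the replica set name here are hypothetical):
// restart the standalone with a replica set name:
//   mongod --port 27017 --dbpath /srv/mongodb/db0 --replSet rs0
// then, from a mongo shell connected to it:
rs.initiate()                            // the existing node becomes the first member and primary
rs.add("new-member.example.com:27017")   // add further members once they are running
rs.status()                              // verify the set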
You cannot have zero downtime (the new mongod will run on a new IP, so you at least need to reconnect to it), but you can minimize downtime by building a geographically distributed replica set.
Please read:
http://docs.mongodb.org/manual/tutorial/deploy-geographically-distributed-replica-set/
Use the given process but please note:
Do not set priority 0 on the instances at the New Location, so that they can become primary when the old ones at the Old Location step down.
You still need to restart mongod in replica set mode at the Old Location.
You need 3 instances, including an arbiter, at the New Location if you want it to be a replica set.
When the data is fully in sync with the instances at the New Location, step down the instances at the Old Location (one by one). Now everything will go to the New Location, but the problem is that traffic is directed through a distant mongod.
So stop the mongod at the Old Location and start a new one at the New Location. Connect your applications to the New Location mongod.
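A sketch of the shell commands that process boils down to (host names are hypothetical):
// on the current (Old Location) primary: add the New Location members
rs.add("new-dc-1.example.com:27017")
rs.add("new-dc-2.example.com:27017")
rs.addArb("new-dc-arb.example.com:27017")
// once rs.status() shows the new members caught up, hand over the primary role
rs.stepDown()
// then remove the Old Location members one by one
rs.remove("old-dc-1.example.com:27017")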
Note: I have not done this myself so far. I planned it once but then ran into a problem that was not related to the hosting provider, and dropped it. In practice you may run into some issues.
A replica set is the feature provided by the MongoDB database to achieve high availability and automatic failover.
It is kind of like a traditional master-slave configuration, but with the capability of automatic failover.
It is basically a group/cluster of mongod instances which communicate and replicate to each other to provide high availability and to perform automatic failover.
Basically, a replica set can contain a minimum of 2 and a maximum of 12 mongod instances.
In a replica set the following types of servers exist; out of all of them, one server is always primary.
http://blog.ajduke.in/2013/05/31/setup-mongodb-replica-set-in-4-steps/
John's answer is right. By the way, in your case you have no way to avoid downtime; you can just try to make it as short as possible.
You can prepare the new replica set and save its configuration.
Same for the single mongod instance: prepare a js file with the specific configuration (i.e. stuff going on in the admin database).
Disable client connections on the production servers.
Copy the datafiles from the old servers to the new ones (http://docs.mongodb.org/manual/core/backups/#backup-with-file-copies).
Apply your previously saved replica set config and configuration.
Done.
You can use different approaches, such as adding a hidden secondary member to the replica set if you have a lot of data, so you can wait until it is up to date before stopping the production server. Basically, for the replica set you have many ways to handle a migration; with the single instance, instead, you don't have such features.
In a 3-node replica set, why, when 2 nodes are down, does the third become SECONDARY and not PRIMARY?
I want to have 2 mongods inside a data center and one outside, so that if the data center fails, the outside mongod becomes the Primary.
Is this possible without an arbiter?
Ok, found response:
http://tebros.com/2010/11/mongodb-arbiters-with-only-two-replicas/
What happened?! It turns out that when a mongod instance is isolated, it cannot vote for itself to be primary. This makes sense when you think about it. If a network link went down and separated your two replicas, you wouldn’t want them both to elect themselves as primary. So in my case, when rep1-1 noticed that it was isolated from the rest of the replica set, it made itself secondary and stopped accepting writes.
Whenever you end up with (cluster_participants/2) + 1 nodes down (assuming you have an odd number of participants), the cluster enters read-only mode. A candidate node needs a majority of all nodes to be elected as primary.
For example, if you have a 5-node cluster and 3 nodes go down, the remaining ones will stay secondary, because none of them is able to get 3 votes.
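The arithmetic behind this, as a small illustration (the numbers are just an example):
// a candidate needs a strict majority of ALL configured voting members,
// no matter how many of them are currently reachable
var members = 5;                              // total voting members
var majority = Math.floor(members / 2) + 1;   // 3
var reachable = 2;                            // members still up after 3 fail
print(reachable >= majority);                 // false -> nobody can be elected primary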
For more information: http://docs.mongodb.org/manual/core/replication-internals/#replica-set-election-internals
I'd like to use mongodb to distribute a cached database to some distributed worker nodes I'll be firing up in EC2 on demand. When a node goes up, a local copy of mongo should connect to a master copy of the database (say, mongomaster.mycompany.com) and pull down a fresh copy of the database. It should continue to replicate changes from the master until the node is shut down and released from the pool.
The requirements are that the master need not know about each individual slave being fired up, nor should the slave have any knowledge of other nodes outside the master (mongomaster.mycompany.com).
The slave should be read only, the master will be the only node accepting writes (and never from one of these ec2 nodes).
I've looked into replica sets, and this doesn't seem to be possible. I've done something similar to this before with a master/slave setup, but it was unreliable. The master/slave replication was prone to sudden catastrophic failure.
Regarding replica sets: while I don't imagine you could have a set member that is invisible to the primary (and the other nodes), due to the need for replication, you can tailor a particular node to come pretty close to what you want:
Set the newly-launched node to priority 0 (meaning it cannot become primary)
Set the newly-launched node to 'hidden'
More info on priority 0 and hidden nodes is available in the MongoDB documentation.
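A sketch of how such a member might be added (the host name and _id are hypothetical; run this against the primary):
cfg = rs.conf()
cfg.members.push({
    _id: 5,
    host: "ec2-worker-1.mycompany.com:27017",
    priority: 0,   // can never become primary
    hidden: true,  // invisible to clients connecting to the replica set
    votes: 0       // keeps a fleet of on-demand workers from affecting elections
})
rs.reconfig(cfg)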