I would like to run Mongo on a server and replicate it onto a laptop.
The laptop needs to be able to leave the network and still read/write, and once back on the network, sync these changes with the primary.
At the same time, the VM (primary) must remain accessible for reads and writes.
So when the two devices cannot talk to each other, each should make itself primary.
I have set up a very basic replica set: the primary on a VM and the secondary on the machine running the VM. All the examples I have seen recommend three servers for the replica set, but I only need two!
A couple of questions:
Is this possible with Mongo? If not, any suggestions are welcome!
When I turn off the network adapter on the VM (primary), the secondary doesn't seem to want to become the primary.
Is it possible to run two instances of Mongo, and then use the second instance as the third member of the replica set?
Any advice would be great, thanks.
MongoDB needs an odd number of voting members. In your case it should be one primary, one secondary and one arbiter. The arbiter is a server instance that holds no data and only takes part in elections, so that a new primary can be elected when the current primary goes down.
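As a sketch, a primary/secondary/arbiter set along those lines could be initiated from the mongo shell like this (hostnames and the set name are placeholders):

```javascript
// Run once, connected to the member that should become the first primary.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server.example:27017" },                     // data, primary candidate
    { _id: 1, host: "laptop.example:27017" },                     // data, secondary
    { _id: 2, host: "arbiter.example:27017", arbiterOnly: true }  // votes only, no data
  ]
});
```

An existing two-member set can also gain an arbiter later with `rs.addArb("arbiter.example:27017")`.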
The MongoDB docs state:
Do not run an arbiter on systems that also host the primary or the secondary members of the replica set.
However, I could not find any explanation for this. Is it to prevent the arbiter from going down together with a secondary or primary when a failure occurs?
Technically it is possible to run a setup like this, but you lose redundancy.
Let's say you have servers A, B and C, where B is an arbiter running on the same machine as A. If that machine goes down, you've lost your majority: the remaining member C holds only one of three votes and cannot be elected primary. So if the wrong server goes down, you have no redundancy.
Fortunately, arbiters don't store any data, so a small and cheap server instance is enough to run one.
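The majority rule behind this is simple arithmetic; a quick sketch (plain JavaScript, not mongo shell):

```javascript
// A candidate needs a strict majority of ALL voting members,
// not just of the members it can currently reach.
function majorityNeeded(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

function canElectPrimary(reachableMembers, votingMembers) {
  return reachableMembers >= majorityNeeded(votingMembers);
}

console.log(canElectPrimary(1, 2)); // false: lone survivor of a 2-member set
console.log(canElectPrimary(2, 3)); // true:  PSA set with one member down
```

This is why losing the machine that hosts both a data member and the arbiter leaves the last member unable to become primary.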
I am aware that mongodb has a master-slave architecture.
Therefore, I was thinking that the master would be the single point of failure in MongoDB, since it handles all the requests and forwards them to the slave nodes. However, when the master fails, a new master is elected from the slaves. So I need some clarification on where the single point of failure lies.
Does mongoDB have a single point of failure? Is it in the master node?
Thanks,
MongoDB can be set up in a way that there is no single point of failure (at least none specific to MongoDB).
When you set up replication as suggested (primary, secondary and an arbiter on a third server), the secondary will take over the role of the primary when the primary goes down. Keep in mind that this only works when the application knows about both the primary and the secondary (how to make it aware depends on the driver).
When you have a sharded cluster, the mongo router process (mongos) and the config servers become additional possible points of failure, but you can also set up redundant routers and config servers. To send clients to another mongos server when theirs goes down, you need a third-party load-balancing solution.
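One common way to make the application aware of both members (details vary by driver) is a seed-list connection string; hosts and names below are placeholders:

```javascript
// Seed-list connection string: the driver discovers all members of the
// set from the seeds and follows the primary across failovers.
//   mongodb://hostA.example:27017,hostB.example:27017/mydb?replicaSet=rs0
// From the mongo shell you can check who is primary at any time:
rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr); });
```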
For a proper production MongoDB setup with clustering, MongoDB Inc. suggests:
At least 2 mongos routers
Exactly 3 config servers
3 servers per shard (primary, secondary and arbiter), where the arbiters do not necessarily need dedicated servers and can share hardware with the routers, config servers, members of a different replica-set or app servers.
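As a sketch, assembling such a cluster from a mongos looks roughly like this (set names and hosts are placeholders):

```javascript
// From a mongos, register each replica set as a shard:
sh.addShard("rs0/rs0-a.example:27017,rs0-b.example:27017");
sh.addShard("rs1/rs1-a.example:27017,rs1-b.example:27017");
sh.status(); // shows shards, config servers and balancer state
```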
We have 3 mongo servers set up as replicas, one of which (the backup) can never become primary. Let's say mongo1 is primary and mongo[2..3] are secondaries. Randomly, mongo1 will become unreachable by mongo2 and mongo3, which results in mongo2 being elected primary. Mongo1 then sees this and steps down to secondary. Then mongo2 and mongo3 see mongo1 again after a few seconds or so, and it gets re-elected as primary.
Is there a reason why it would re-elect mongo1 so quickly? Both mongo1 and mongo2 have the same priority.
The problem with this is that it disconnects the mongo routers from each of our web servers, which take a while to rediscover which member is the primary and reconnect to it.
Also, should the mongodb routers be on the application servers or on separate servers? The MongoDB manual suggests putting one on each application server, but what are the benefits of doing it that way? What would be the benefits and drawbacks of having router servers between the application and mongo servers?
I should mention this is in AWS (ec2), if that makes a difference.
Edit:
Running mongo 2.4.6 as a sharded replica set. Sorry about that; I forgot to mention that part. The servers are under rather high load. The mongo instances are all in the same region and the same availability zone in EC2.
Why should you put mongos on the same instance as your application server?
Network hops / latency. Querying a local mongos for the authoritative shard is quicker than querying a remote one; both approaches then still need to fetch the data from the right shard.
On a side note: Other databases like Couchbase avoid the mongos / router component to avoid this overhead altogether.
I assume you have split your primary and secondaries across multiple availability zones? Maybe the network is a bit shaky, even though it shouldn't be. Unfortunately, there's not much you can do against failovers during a network split: failing over slowly would open bigger rollback windows, which you'd want to avoid, unless you want to disable automatic failover altogether.
And a node should only get re-elected if the new primary loses its majority. So in your case it looks like mongo2 and mongo3 cannot connect to mongo1, so mongo2 becomes the new primary. But soon afterwards mongo1 and mongo3 cannot connect to mongo2, and since they form a majority, they elect mongo1 to be the primary again.
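That sequence can be checked against the majority rule with a toy model (plain JavaScript, not the real election protocol):

```javascript
// Three voting members; a partition can elect a primary only if it
// holds a strict majority of the total votes.
const TOTAL_VOTES = 3;
const hasMajority = (group) => group.length > Math.floor(TOTAL_VOTES / 2);

// Phase 1: mongo2 and mongo3 cannot reach mongo1.
console.log(hasMajority([2, 3])); // true  -> mongo2 can become primary
// Phase 2: connectivity flips; mongo1 and mongo3 cannot reach mongo2.
console.log(hasMajority([1, 3])); // true  -> mongo1 can be elected again
// A lone member never wins:
console.log(hasMajority([1]));    // false
```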
We are using MongoDB in a production environment and now, due to some issues with the current servers, I'm going to change servers and start a new MongoDB instance.
We have a replica set and a single mongod instance (two separate MongoDB deployments for different purposes). First I will migrate the single mongod instance, and then the whole replica set, to the new server.
What I want to know is: how can I migrate both instances with no downtime? I don't want to shut down the server or stop write operations.
Thanks in advance.
So, first of all, you should never run MongoDB as a single instance in production. At a minimum you should have one primary, one secondary and one arbiter.
Second, even with a replica set you will always have a bit of write downtime when you switch primaries, as writes are not possible during the election process. From the docs:
IMPORTANT: Elections are essential for independent operation of a replica set; however, elections take time to complete. While an election is in process, the replica set has no primary and cannot accept writes. MongoDB avoids elections unless necessary.
Elections are going to occur when for example you bring down the primary to move it to a new server or virtual instance, or upgrade the database version (like going from 2.4 to 2.6).
You can keep downtime to a minimum with an existing replica set by setting the appropriate options to allow queries to run against secondaries. Again from the docs:
Maintaining availability during a failover. Use primaryPreferred if you want an application to read from the primary under normal circumstances, but to allow stale reads from secondaries in an emergency. This provides a "read-only mode" for your application during a failover.
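As a sketch, in the mongo shell this read preference can be applied per query (collection name and filter are placeholders):

```javascript
// Reads hit the primary while one exists; during a failover they fall
// back to a secondary and may be slightly stale.
db.orders.find({ status: "open" }).readPref("primaryPreferred");
```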
This takes care of reads at least. Writes are best dealt with by having your application retry failed writes, or queue them up.
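A minimal retry sketch (plain JavaScript; `flakyWrite` is a hypothetical stand-in for a real driver write):

```javascript
// Retry a write that may fail while a replica set election is in
// progress. writeFn is any function that throws when no primary is
// available.
function retryWrite(writeFn, attempts) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return writeFn();
    } catch (err) {
      lastErr = err; // in a real app, sleep / back off here
    }
  }
  throw lastErr;
}

// Fake write: fails twice (election in progress), then succeeds.
let calls = 0;
function flakyWrite() {
  calls += 1;
  if (calls < 3) throw new Error("not primary");
  return "ok";
}
console.log(retryWrite(flakyWrite, 5)); // prints "ok" on the 3rd attempt
```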
Regarding your standalone, the documented procedure for converting to a replica set is well tested and can be completed very quickly with minimal downtime:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
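The linked procedure boils down to restarting the standalone under a replica set name and initiating; roughly (hosts and the set name are placeholders):

```javascript
// After restarting the former standalone with:  mongod --replSet rs1 ...
// connect a shell to it and run:
rs.initiate();                   // creates a one-member replica set
rs.add("second.example:27017");  // then grow it member by member
rs.addArb("arb.example:27017");
```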
You cannot have zero downtime (the new mongod will run on a new IP, so clients will at least have to reconnect to it). But you can minimize downtime by building a geographically distributed replica set.
Please read:
http://docs.mongodb.org/manual/tutorial/deploy-geographically-distributed-replica-set/
Use the process given there, but please note:
Do not set priority 0 on the instances at New Location, so that they can become primary when the old ones at Old Location step down.
You still need to restart mongod in replica set mode at Old Location.
You need 3 instances, including an arbiter, at New Location if you want it to be a replica set.
When the complete data set is in sync with the instances at New Location, step down the instances at Old Location (one by one). Now everything will go to New Location, but the problem is that traffic is directed through a distant mongod.
So stop the mongod at Old Location and start a new one at New Location. Connect your applications to the New Location mongod.
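The member shuffle described above is done with `rs.add()` and `rs.stepDown()`; a rough sequence (hosts are placeholders):

```javascript
// On the current primary at Old Location:
rs.add("new1.example:27017");
rs.add("new2.example:27017");
rs.addArb("new-arb.example:27017");
// Once the new members report SECONDARY in rs.status() and are in sync:
rs.stepDown(120);                 // old primary yields for 120 seconds
rs.remove("old1.example:27017");  // then retire the old members one by one
```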
Note: I have not done this myself so far. I had planned it once, but then I ran into a problem (not with the hosting provider). In practice you may hit some issues.
A replica set is the feature provided by MongoDB to achieve high availability and automatic failover.
It is similar to a traditional master-slave configuration, but with the capability of automatic failover.
It is basically a group/cluster of mongod instances which communicate and replicate with each other to provide high availability and automatic failover.
Basically, a replica set can contain a minimum of 2 and a maximum of 12 mongod instances.
In a replica set the following types of servers exist; of them all, one server is always primary.
http://blog.ajduke.in/2013/05/31/setup-mongodb-replica-set-in-4-steps/
John's answer is right. By the way, in your case there is no way to avoid downtime entirely; you can only try to make it as short as possible.
You can prepare the new replica set and save its configuration.
Do the same for the single mongod instance: prepare a js file with its specific configuration (i.e. stuff living in the admin database).
Disable client connections on the production servers.
Copy the data files from the old servers to the new ones (http://docs.mongodb.org/manual/core/backups/#backup-with-file-copies).
Apply your previously saved replica set configuration.
Done.
You can use different approaches, such as adding a hidden secondary member to the replica set if you have a lot of data, so you can wait until it is up to date before stopping the production server. Basically, for the replica set you have many ways to handle a migration; with the single instance you don't have such features.
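As a sketch, a hidden member is added from the primary's shell like this (host is a placeholder; `_id` must be a member id not already used in rs.conf()):

```javascript
// The hidden member syncs data in the background, never becomes
// primary (priority: 0) and is invisible to client reads:
rs.add({ _id: 3, host: "newserver.example:27017", priority: 0, hidden: true });
// Once rs.status() shows it caught up, reconfigure it to a normal
// member and step down the old primary.
```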
My test system (due to lack of resources) has a two-member MongoDB replica set. There is no arbiter.
During some system changes, one of the servers was put out of action and will not be coming back. This server happened to host the primary mongo node, which left the only other member of the set as a secondary.
I know I should have had at least three nodes for the cluster (our prod setup does).
Is there a way I can make the primary that is now offline step down? I haven't been able to change any of the rs.conf() settings because the only working node is a secondary. Starting an arbiter doesn't seem to work because I cannot add it to the replica set while the primary is down.
Has anyone encountered this before and managed to resolve it?
To recap:
SERVER A (PRIMARY) - OFFLINE
SERVER B (SECONDARY) - ONLINE
A + B = REPLSET
Any help would be greatly appreciated.
The MongoDB website has documentation for what to do (in an emergency only) when you need to reconfigure a replica set while members are down. This sounds like the situation you are in.
Basically, if you're on version >= 2.0 and it's an emergency, you can add force: true to the replica set reconfiguration command.
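A sketch of that forced reconfiguration, run on the surviving secondary (hostname is a placeholder; check your own rs.conf() output for the right member index):

```javascript
cfg = rs.conf();
// Keep only the member that is still reachable (index 1 = server B here):
cfg.members = [cfg.members[1]];
rs.reconfig(cfg, { force: true });
// The surviving node can now become primary; afterwards restore
// redundancy, e.g. by adding an arbiter:
rs.addArb("arbiter.example:27017");
```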