Above is my MongoDB cross-region replication setup in AWS. In our project I've added the following connection string, mongodb://admin:ayaplus16072019#mongo1.mydatabase.db:27017,mongo2.mydatabase.db:27017,mongo3.mydatabase.db:27017,mongo4.mydatabase.db:27017,arbiter.mydatabase.db:27017/admin?replicaSet=rsadmin, which reads from all secondaries, including one in a different region. But now I don't want to use the secondary in the other region (Seoul) as a read replica, while still keeping it in sync with the other members. Can I remove mongo4.mydatabase.db:27017 from the connection string?
I don't want to use the secondary in the Seoul region as a read replica, while still keeping it in sync with the other members. Can I remove mongo4.mydatabase.db:27017 from the connection string?
Replica set members listed in the connection string are used as a seed list in order to connect and discover your replica set configuration. The seed list does not have to contain all members of the replica set, and does not prevent additional members from being discovered via the replica set config.
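So yes, you can drop mongo4 from the seed list. For example, the trimmed string would look like this (copied from the question with only the Seoul host removed; note the driver will still discover mongo4 from the replica set config unless it is made hidden, as described below):

```
mongodb://admin:ayaplus16072019#mongo1.mydatabase.db:27017,mongo2.mydatabase.db:27017,mongo3.mydatabase.db:27017,arbiter.mydatabase.db:27017/admin?replicaSet=rsadmin
```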
If you want your replica set member in Seoul to be hidden from client applications, you need to make it hidden and priority 0. The hidden option will ensure this replica set member is not discoverable, and priority 0 is required since hidden members are not eligible to become primary. It is still possible to connect to a hidden replica set member directly, if needed.
I would also consider making this hidden secondary non-voting and removing the arbiter, which would leave you with 3 voting members in Singapore. An arbiter is only needed when you would otherwise have an even number of voting members. If your secondary in Seoul is strictly for offsite backup or disaster recovery it does not need to participate in elections.
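A sketch of that reconfiguration in the mongo shell, assuming the Seoul member (mongo4) sits at index 3 of the members array (verify the index with rs.conf() first; it is an assumption here):

```javascript
// Run on the primary. Check the member's array position before changing it:
cfg = rs.conf()
// Assuming cfg.members[3] is mongo4.mydatabase.db:27017 (Seoul):
cfg.members[3].hidden = true
cfg.members[3].priority = 0   // required: hidden members cannot be electable
cfg.members[3].votes = 0      // optional: also make it non-voting
rs.reconfig(cfg)
```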
Related
We are using a 3-member MongoDB replica set.
I stopped the mongod service on the primary and on one secondary.
We converted the remaining replica set member to a standalone by changing its mongod.conf file:
we removed the replication settings and the authentication key.
New data was then inserted into the standalone mongod.
We then restarted the primary and the other two replica set members.
Now the new data is not being replicated to the primary and the other secondary.
Please suggest: is there a way to replicate the data from this replica set member?
I tried rs.syncFrom() but had no luck.
Any modifications in standalone mode are not written to the replica set oplog, so directly inserting or updating new data will introduce inconsistency in this replica set member. This data inconsistency is likely to cause replica set members to crash in future when an oplog change applies fine on another replica set member but cannot be applied to this member (or vice-versa).
The inconsistent secondary should be fully re-synced before rejoining the replica set. If you have some way of identifying the documents that have been inserted or updated, you could dump & restore those into the current primary before re-syncing.
If you need to take a majority of your replica set offline for some reason in future, you should reconfigure the remaining member(s) as a smaller replica set rather than writing to a former replica set member in standalone mode. Alternatively, you could drop the local database (in standalone mode) and convert this standalone to a new replica set.
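Reconfiguring the survivors as a smaller replica set might look roughly like this (a sketch; the member index is an assumption, and force: true should be used with care since it can cause rollbacks if the removed members come back):

```javascript
// On a surviving member, keep only the member(s) that will stay online:
cfg = rs.conf()
cfg.members = [cfg.members[0]]    // assuming index 0 is the surviving member
rs.reconfig(cfg, { force: true }) // force is needed without a primary
```

This keeps writes in the oplog, so the members you took offline can rejoin and catch up (or resync) later instead of diverging.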
I am trying to find an authoritative answer to the following question: Is a single member replica set a supported deployment setup?
While this question may seem weird or silly, my specific use case follows:
A team wants to upgrade from Mongo 2 to Mongo 4 and they foresee that transactions might be useful to them. They currently run a single mongod instance. MongoDB documentation leads them to believe that to use transactions they must activate replica sets and deploy at least 3 mongod instances. Interesting documentation bits are:
https://docs.mongodb.com/manual/core/transactions/#transactions-and-replica-sets
Multi-document transactions are available for replica sets only. Transactions for sharded clusters are scheduled for MongoDB 4.2 [1].
https://docs.mongodb.com/manual/core/replica-set-members/
The minimum recommended configuration for a replica set is a three member replica set with three data-bearing members: one primary and two secondary members. You may alternatively deploy a three member replica set with two data-bearing members: a primary, a secondary, and an arbiter, but replica sets with at least three data-bearing members offer better redundancy.
My point of view is that:
Replica sets aim to increase redundancy and availability through a typical primary - secondaries design
Replica set documentation focuses on what makes sense for its original purpose. It documents traditional and sane deployment setups for HA (>= 3 and an odd number of voters, separate machines, how to deal with multiple DCs, etc.)
The team is not interested in HA but must switch to replica sets in order to use TX (the alternative path being to forgo TX and deploy a single mongod)
According to the replica set documentation and my distributed systems background, I don't see why a single-member replica set would be an issue if you don't care about HA. With a single member, a primary can be elected, replication is a no-op, and the default and majority write concerns are just w: 1.
Am I right?
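For reference, a single-member replica set can be initiated like this (a sketch; the hostname, database, and collection names are illustrative, not from the question):

```javascript
// Start mongod with --replSet rs0, then in the mongo shell:
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "localhost:27017" }] })

// Multi-document transactions then work against this single-member set:
session = db.getMongo().startSession()
coll = session.getDatabase("test").getCollection("orders")
session.startTransaction()
coll.insertOne({ item: "a" })
coll.insertOne({ item: "b" })
session.commitTransaction()
```

With one member, w: "majority" is satisfied by that single node's acknowledgement, consistent with the reasoning above.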
I've got two servers with a mongo instance each.
On the first server I set the mongo instance as primary, and on the second mongo is secondary.
I don't have the possibility of getting another server to use as an arbiter.
How can I use MongoDB with just two servers?
If the primary fails, does the secondary automatically become primary?
Thanks!
How can I use MongoDB with just two servers?
If you really want to go down this road (which, may I add, is a very bad road), then you can set your primary to have no votes, in which case the only voting member would be the secondary in the event of a failover. However, this then causes another problem: in the event of a secondary failure you cannot have a primary elected (failure of any member will trigger an election).
So even though with 2 members you can account for one failure, you cannot account for both equally.
It is not good practice to have an even number of members in a replica set because it leads to election problems. In order to be elected, a node is required to get a majority of votes. If you have two members, a node needs two votes, which is impossible when at least one node is down. There are several options:
add a lightweight arbiter node on the first or second server to the replica set, so you would have three members in the replica set. It doesn't protect you from a network partition, but it is a bit better than just having a two-node replica set.
use the replica set in master-slave mode, i.e. without automatic recovery. You could achieve this by setting votes: 2 for the primary. If the primary goes down, you reconfigure the replica set and set votes: 2 for the secondary, which would then be elected as primary. This gives you an option for manual recovery.
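The manual failover in the second option would look roughly like this (a sketch; the member index is an assumption, and note that modern MongoDB versions restrict votes to 0 or 1, so this votes: 2 pattern only applies to older releases):

```javascript
// On the surviving secondary, after the primary has gone down:
cfg = rs.conf()
cfg.members[1].votes = 2          // assuming index 1 is the secondary
rs.reconfig(cfg, { force: true }) // force is needed: there is no primary
// The secondary now holds a majority of votes and can be elected primary.
```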
I've read quite a bit about MongoDB replica sets and how elections work on a failover. My question is: assuming the client uses readPreference set to primary only, is there any advantage to having an odd number of members versus having an even number of members plus an arbiter?
For example, with a 3-member replica set you can make all 3 members data-bearing replicas, or you can have only 2 replicas and an arbiter (which you can install on a smaller machine). The safety is basically the same: any one machine can go down and the replica set is still OK, but if two of them go down the replica set is in a stalemate (it cannot elect a new primary).
The only difference is that in the second case you can use a way smaller machine for the arbiter.
It's actually not true that three data holding nodes provide the same "safety" net as two data holding nodes plus an arbiter.
Consider these cases:
1) One of your nodes loses its disk and you need to fully resync it. If you had three data-holding nodes, you can resync off the other secondary instead of the primary (which reduces the load on the primary).
2) One of your nodes loses its disk and it takes you a while to locate a new one. While that secondary is down, you are running with ZERO safety net if you had two nodes and an arbiter, since you only have one node with data left; if anything happens to it, you are toast.
I have a replica set of three servers plus a backup server. The backup server is configured with
"hidden": true and "priority": 0.
My concern is that when only one of my secondaries and the backup server are up, the secondary does not change into a primary. This suggests the hidden server is not taking part in the election, but per the MongoDB documentation a hidden member cannot become primary yet does take part in elections. Is there some extra configuration required to make this work?
A replica set member cannot become primary if it is unable to connect to a majority of replica set members. If you have four members in a replica set, a majority is three, and if only two members are up this is clearly impossible. If you really want a configuration like this, you should add an arbiter to the replica set.
Generally speaking, you should avoid an even number of members in a replica set.
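The majority requirement above is simple integer arithmetic; a tiny helper (plain JavaScript, the function name is mine) makes the four-member case concrete:

```javascript
// Votes needed to elect a primary: a strict majority of voting members.
function majority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

// With 4 voting members, 3 votes are needed, so two surviving members
// cannot elect a primary. Adding an arbiter makes 5 voters, still needing
// 3 votes, so the two data nodes plus the arbiter can elect one.
console.log(majority(4)); // 3
console.log(majority(5)); // 3
```

This is why an even-member configuration gains no extra failure tolerance over the next-smaller odd configuration.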