We have a sharded cluster with 2 secondaries on each shard.
Due to a disk space problem, one of the secondaries got corrupted.
In order to add a new node to the existing shard, we removed the data directories on the problematic secondary data node.
Then we added the new data node to the existing replica set using rs.config.
We have around 1.2 TB of data.
I can see that the data folder size is increasing, which proves that it is synchronizing from the primary shard.
When I do rs.status(), the replica set member list shows that the new node is in the STARTUP2 state.
It also shows the optime as
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
The data node is able to see the primary node, as checked from "lastHeartbeatRecv".
We are using Amazon AWS.
Please advise whether there is a different method to add a new data node with a faster sync from the primary, as the data is 1.2 TB and we kicked off the sync process 7 days ago.
Copy a recent snapshot of the good secondary into the problematic secondary node after cleaning its data directories, then add this node to the replica set. Let the oplog be applied automatically so that the node catches up with the primary. This way the synchronization time is reduced, because the secondary only has to catch up on the backlog written since the snapshot of the good secondary node was taken.
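A minimal sketch of that approach on AWS, assuming the data directory sits on its own EBS volume (the snapshot ID, availability zone, paths and service names below are placeholders):

# On the problematic secondary: stop mongod and clear the corrupted data path
sudo service mongod stop
sudo rm -rf /data/db/*

# Create a volume from a recent EBS snapshot of the healthy secondary's data
# volume, then attach it to this instance and mount it at the data path
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# ... attach the new volume and mount it at /data/db ...

# Start mongod with the same --replSet name; the member then only has to replay
# the oplog written since the snapshot was taken, instead of doing a full initial sync
sudo service mongod start

Note that this only works while the primary's oplog still covers the period since the snapshot was taken; otherwise the seeded member will come up stale.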
I have a MongoDB (version 4.2) replica set with 3 nodes - primary, secondary, arbiter.
The primary occupies close to 250 GB of disk space; the oplog size is 15 GB.
The secondary was down for a few hours; I tried recovering it by restarting, and it went into RECOVERING forever.
I tried an initial sync by deleting the files on the data path; it took 15 hours, the data path size reached 140 GB, and then it failed.
I tried to copy the files from the primary and seed them to recover the secondary node.
I followed https://www.mongodb.com/docs/v4.2/tutorial/resync-replica-set-member/
This did not work (the member became stale again).
In the latest docs (5.0) they mention using a new member ID; does that apply to 4.2 as well?
Changing the member ID throws an error, as the IP and port are the same for the node I am trying to recover.
This method was also unsuccessful. I am planning to recover the node using a different data path and port, so that the primary might consider it a new node; then, once the secondary is up, I will change the port back to the one I want and restart. Will that work?
Please provide any other suggestions for recovering a replica node with large data (around 250 GB).
Shut down the primary.
Copy the data files from the primary node and place them in a new db path (other than the recovering node's db path).
Change the log path.
Start the mongod service on a different port (other than the one used by the recovering node).
Start the primary.
Add the node to the replica set using rs.add("IP:new port") on the primary.
This worked - I could see the secondary node coming up successfully.
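A minimal sketch of those steps (the host names, paths, port and the replica set name rs0 below are placeholders):

# 1. Cleanly shut down the primary
mongo --host primary.example.net --eval 'db.getSiblingDB("admin").shutdownServer()'

# 2. Copy the primary's data files into a NEW db path on the node being recovered
rsync -av primary.example.net:/data/db/ /data/db_new/

# 3. Start mongod on the recovering host with the new db path, a new log path
#    and a new port, keeping the same --replSet name
mongod --replSet rs0 --dbpath /data/db_new --logpath /var/log/mongodb/mongod_new.log --port 27018 --fork

# 4. Restart the primary, then from the primary's shell add the seeded member:
#    rs.add("recovering-host.example.net:27018")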
Consider the below diagram in MongoDB
I have two scenarios.
Scenario 1:
The router directs the write call to the master. It is written to the master, but then the master goes down before the write gets replicated to the slaves (I am using sync replication mode).
Will the router select one slave as master and also write the above request to both slaves?
Scenario 2:
The router directs the write call to the master. It is written to the master, but then the network link between it and one slave is broken (using sync replication mode).
Will the router select another slave (which is connected to all other nodes) as master and also write the above request to the slave?
Let's first use MongoDB terminology: Primary instead of master and Secondary instead of slave.
Scenario 1: Will the router select one slave as master and also write the above request to both slaves?
A secondary can become a primary. If the current primary becomes unavailable, the replica set holds an election to choose which of the secondaries becomes the new primary. See also Replica Set Elections.
In scenario 1, if the primary had accepted write operations that the secondaries had not successfully replicated before the primary stepped down, then a rollback will revert the write operations on a former primary when the node rejoins the replica set. See also RollBacks During Replica Set Failover.
You can run all voting members with journaling enabled and use the "majority" write concern to prevent rollbacks. See also Avoid Replica Set Rollbacks.
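For illustration, a write using the majority write concern from the mongo shell (the collection name and document are placeholders):

// The write is acknowledged only after a majority of voting members have
// applied it, so it cannot be rolled back by a later failover
db.orders.insert(
    { item: "abc", qty: 1 },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
)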
Scenario 2: Will the router select another slave (which is connected to all other nodes) as master and also write the above request to the slave?
There are two parts here. The first part is the replica set election: in this case, because the primary and one of the secondaries still form a majority, no election will be held. The primary will remain primary and will keep replicating to one of the secondaries.
The second part is the replication of data. Secondary members copy the oplog from their sync source and apply these operations in an asynchronous process. A secondary's sync source may change automatically as needed, based on changes in the ping time and in the state of other members' replication. See also Replica Set Data Synchronization.
In scenario 2, the secondary may change its sync source to the other secondary.
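To see which sync source each member is currently using, something like the following can be run in the mongo shell (the field name differs between server versions):

rs.status().members.forEach(function (m) {
    // "syncingTo" in older releases, "syncSourceHost" in newer ones
    print(m.name + " -> " + (m.syncSourceHost || m.syncingTo || "(no sync source)"));
});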
You may also find the following useful:
Replica Set High Availability
Replica Set Deployment Architectures
Replica Set Distributed Across Two or More Data Centers
MongoDB always shows me this error message when I insert any data into my collection.
I am trying to configure ElasticSearch with MongoDB, and this happened after I set up my replica set. I try to add something, but get no results.
The Mongo shell always shows me the same message:
WriteResult({"writeError":{"code":undefined,"errmsg":"not master"}})
This happens when you do not have a 3-node replica set: you start the replicas in a master-slave configuration, and then your master or the secondary goes down. Since there is no third node or arbiter to elect a new primary, the primary steps down from being master and goes into pure read-only mode. The only way to bring the replica set back up is to create a new server with the same replica set name and add the current stepped-down master to it as a secondary.
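A minimal sketch of that recovery, assuming a new server started with the same --replSet name (the set name rs0 and the host names are placeholders):

// On the new server, initiate a one-member replica set...
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "newserver.example.net:27017" } ] })
// ...then add the stepped-down former master back in as a secondary
rs.add("oldserver.example.net:27017")
// Both members should eventually show up as PRIMARY / SECONDARY
rs.status()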
I have a 3-member replica set in MongoDB which fell apart when re-configuring the host names of the server instances. I had to reconfigure the replica set; however, I am curious how MongoDB handles records that are not synced across all the members.
Case 1) There is a new record on the MongoDB server that I access to reconfigure the set.
Case 2) There is a new record on another MongoDB server that is added later to the replica set.
Each replica-set has one primary node and one or more secondary nodes.
All writes happen on the primary. The primary then sends these changes to the secondaries (the list of changes is referred to as "the oplog"). That means the primary is always the member with the most up-to-date data.
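For illustration, the most recent oplog entry can be inspected from the shell of any member (a read-only peek):

// The oplog is a capped collection in the "local" database; this prints the
// most recently written operation
db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()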
When the primary is suddenly unreachable, the replica-set is put into read-only mode and an election takes place to find a new primary. Usually the secondary which is most up-to-date is selected (more details on replica-set election). Any writes which were not propagated to that secondary yet are lost.
When the old primary goes back online, it re-joins the replica-set as a secondary. Its data gets synchronized to the state of the new primary. Any writes which only happened on the old primary which weren't propagated to the new primary before the crash are rolled back.
The rolled-back writes are backed up as BSON files in the rollback directory under the member's data path and can be re-added to the replica-set using bsondump and mongorestore. Details about this procedure can be found in the article Rollbacks During Replica Set Failover.
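A minimal sketch of re-applying such a rollback file (the file name, database, collection and host below are placeholders):

# Inspect the rolled-back documents
bsondump /data/db/rollback/mydb.mycoll.2023-01-01T00-00-00.0.bson

# After reviewing them, re-insert them into the current primary
mongorestore --host primary.example.net --db mydb --collection mycoll /data/db/rollback/mydb.mycoll.2023-01-01T00-00-00.0.bson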
I've been working with mongo for a few weeks, building my environment in dev. I started with a single node, then moved to a shard cluster, and now want to move to a replicated shard cluster. From what I read, a replicated shard cluster is the best of the best: scalability, durability, a performance increase, etc.
I've read most of the (very good) tutorials in their help. It seems their lessons advise going from single node, to replica set, to sharded replica set, which, unfortunately, is the opposite of the way I did it. I can't seem to find anything on upgrading a sharded cluster to a replicated shard cluster.
Here are 5 hosts that I have:
APPSERVER
CONFIGSERVER
SHARD1
SHARD2
SHARD3
I started each of the shard servers with:
mongod --shardsvr
Then I started the config server with:
mongod --configsvr
Then I started the mongos process on the APPSERVER with:
mongos --configdb CONFIGSERVER
Then in mongos, I added the shards, enabled sharding on my database, and defined a shardkey for a collection:
sh.addShard("SHARD1:27018");//again for 2 and 3
sh.enableSharding("beacon");
sh.shardCollection("beacon.alpha2", {"ip":1});
I want each of the shards replicated on each of the other two. (right?) Do I need to bring down the mongod processes on the shards and restart them with different CL parameters? What commands do I need to issue in my mongos shell to get it to replicate? Should I export all my data, take everything down, restart and reimport? Again, I see a lot of tutorials on how to create a replica set, but I don't really see anything on how to do a replica set given a sharded system to start with.
Thanks!
For each shard, you will need to restart the current member with the --replSet command line option and start two new members (or 1 new member and an arbiter) with the same option. You could add more members than that, but 3 is the smallest workable set. Then, from inside what will become the new primary (your current SHARD1, for example), you could do the following:
rs.initiate();
rs.add("newmember1:port")
rs.add("newmember2:port")
You would then need to check and make sure that sh.status() has been updated to reflect the new members of the replica set. In 2.2 this has become slightly easier, as it should be automatic; for prior versions it was necessary to manually save the shard information in the config database, as described in the sharded cluster documentation. If it has been automatically added, you will see the replica set listed in the sh.status() output, similar to the following:
{ "_id" : "shard1", "host" : "shard1/SHARD1:port,newmember1:port,newmember2:port" }
If this does not happen automatically you will need to follow the procedure outlined in the documentation link above, namely from inside mongos:
db.getSiblingDB("config").shards.save({_id:"<name>", host:"<rsName>/member1,member2,..."})
Following the above example it would look like:
db.getSiblingDB("config").shards.save({_id:"shard1", host:"shard1/SHARD1:port,newmember1:port,newmember2:port"})
You would need to do this procedure for each shard, and you should do them one at a time, so that you have all of SHARD1 running as a replica set before moving on to SHARD2. You should also be aware that each replica set will become read-only while the initial election takes place, so at the very least you should schedule this in a downtime or maintenance window. Ideally, test it first in a staging environment.
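As a quick sanity check after converting each shard (names are placeholders): confirm on a member of the converted shard that a primary has been elected, and confirm from mongos that the shard entry now lists the replica set:

// On a member of the converted shard:
rs.status().members.map(function (m) { return m.name + " : " + m.stateStr; })
// On mongos:
sh.status()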