I searched on Stack Overflow but couldn't find a solution for this.
I have 3 nodes, 1 primary and 2 secondaries: mongo1.com, mongo2.com, and mongo3.com.
Everything works well with the connection. When I shut down any one node, e.g. mongo1.com, my app keeps working fine. But when I shut down a second node, e.g. mongo3.com, the app stops working. As soon as I bring any one of them back up, the app works again.
In short, the app does not work with a single node. I'm looking for guidance on, or an explanation of, this behaviour.
I checked the status using rs.status(): two nodes show health: 0 with a message that the node is unreachable. The third node, which is still up, shows health: 1 and "infoMessage" : "could not find member to sync from".
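For reference, a quick sketch that prints each member's view, run in the mongo shell on the surviving node (field names are as reported by rs.status()):

    rs.status().members.forEach(function (m) {
        // name, health (0/1), state string, and any info message such as
        // "could not find member to sync from"
        print(m.name, m.health, m.stateStr, m.infoMessage || "");
    });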
I did some research and found that if 2 nodes are shut down, you can manually make the running node primary.
To have a primary in a 3-node replica set, at least 2 nodes must be operational: a node can only be elected (and remain) primary with votes from a strict majority of the members, and the majority of 3 is 2.
Related
I have a replica set with 3 nodes: a server titled dev 6, which is running Mongo 3.0.6, and dev 5, which hosts 2 mongod instances running 3.2. I'd like dev 6 to be the primary, so I've added the other 2 nodes to its initiated replica set; once I do that, it becomes primary and the other 2 nodes begin to sync from it. Here is a screenshot of how my setup looks when I bring down dev 6 and then bring it back up.
As intended, dev 6 is secondary, and so is dev 5:27018. What I'm wondering, though, is why dev 5:27018 says there's no one to sync from, while dev 5:27019 says it is syncing from dev 5:27018.
I'm now going to follow the Mongo instructions to make dev 6 the primary; here is the result.
Dev 6 is the primary, but what I'm trying to understand is why the dev 5 instances are not connecting to dev 6. Before any conclusions are jumped to: I can ping dev 5 from dev 6 and vice versa, and the /etc/hosts files contain each other's IP addresses.
EDIT: I'm basing the claim that the replica set members can't connect on the following message: "lastHeartbeatMessage" : "could not find member to sync from". This seems to be fixed if I run rs.reconfig() with the current config, or if I add or remove a replica set member.
Your replica set seems to be healthy in both cases. All secondaries have applied the last operation from the primary's operation log (the optime/optimeDate values are the same); moreover, lastHeartbeat is only slightly behind the dev 6 time. Regarding the lastHeartbeatMessage, refer to this JIRA issue, which says:
When a secondary chooses a source to sync from, it will choose a node whose
oplog is newer (not equal) than its own. So after startup, when all nodes
have the same data, the oplogs will be identical and the secondary cannot
choose a sync source. Once a write operation happens, the primary will have
a newer oplog, the secondary can successfully choose a target to sync from,
and the error message will disappear.
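In other words, any single write on the primary gives the secondaries a strictly newer oplog entry to pick a sync source from, and the message should disappear. For example (database and collection names here are arbitrary):

    // run against the primary; the dummy write advances its oplog
    db.getSiblingDB("test").sync_probe.insert({ ts: new Date() })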
The error "could not find member to sync from" I usually associate with replica set members not being able to talk to one another. Either because of firewall or credential issues.
I know that you can ping the servers but have you tried connecting to the primary mongo instance from one of the secondaries using the mongoclient?
mongo vpc-dev-app-06:27017
with appropriate user credentials if necessary.
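If authentication is enabled, the direct connection would look something like this (user, password, and authentication database are placeholders):

    mongo vpc-dev-app-06:27017 -u <user> -p <password> --authenticationDatabase admin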
Has anything possibly changed in the mongod.conf as part of the upgrade?
I am trying to set up replication in MongoDB on AWS.
I have created two Ubuntu instances, SSHed into them, and initiated replication.
However, when I try to add the second node using rs.add(), it gives the following error:
"
Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded:"
I am following this link to do this:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
The link suggests that I have to create three instances first, but I have created just 2 instances. Please tell me if that is the reason for the error.
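For reference, the tutorial's sequence boils down to something like this in the mongo shell of the first instance (host name is a placeholder). Note that the quorum check in the error requires the node being added to actually respond, so it must be reachable from the first instance on port 27017 (on AWS, check the security groups):

    // on the first instance, started with --replSet rs0
    rs.initiate()
    // the second instance must be reachable from here on its MongoDB port
    rs.add("ec2-xx-xx-xx-xx.compute.amazonaws.com:27017")
    rs.status()   // both members should now be listed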
We have the following multi-data-center scenario:
Node1 --- Node3
  |         |
  |         |
  |         |
Node2     Node4
Node1 and Node3 form a (sort of) replica set (for high availability).
Node2/Node4 are priority 0 members (they should never become primaries; they are solely for reads).
Caveat: what is the best way to design such a setup, given that Node2 and Node4 are not accessible to one another (because of the way we configured our VPN/firewalls), essentially ruling out any heartbeat between Node2 and Node4?
Thanks Much
Here's what I have in mind:
Don't keep an even number of voting members in a set. So you need either another arbiter, or to set one of node2/node4 as a non-voting member.
I'm using the C# driver, so I'm not sure whether you are using the same technology to build your application. Anyway, it turns out the C# driver obtains the complete list of available servers from the seeds (the servers you provide in the connection string) and tries to load-balance requests across all of them. In your situation, I guess you would have application servers running in each data center. However, you probably don't want, for example, node1 accepting connections from a different data center; that would significantly slow down the application. So you need some further settings:
Set node 2/4 to hidden nodes.
For applications running in the same data center as node 2/4, don't set the replicaSet parameter in the connection string, but do set readPreference=secondary. If you need to write, you'll have to use another connection string that points to the primary node. A config sketch follows below.
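Here is a sketch of a replica set config combining these suggestions (host names are placeholders; hidden members must have priority 0, and leaving node4 without a vote keeps the number of voters odd, as suggested above):

    cfg = {
        _id: "rs0",
        members: [
            { _id: 0, host: "node1.example.com:27017" },                              // DC A, electable
            { _id: 1, host: "node2.example.com:27017", priority: 0, hidden: true },   // DC A, read-only, still votes
            { _id: 2, host: "node3.example.com:27017" },                              // DC B, electable
            { _id: 3, host: "node4.example.com:27017", priority: 0, hidden: true, votes: 0 }  // DC B, read-only, non-voting
        ]
    }
    rs.initiate(cfg)

An application sitting next to node2 would then connect directly, without the replicaSet parameter, e.g. mongodb://node2.example.com:27017/?readPreference=secondary, and use a separate replica-set-aware connection string for writes.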
If you also make the votes of node2 and node4 0, then in a failover it should act as though node1 and node3 are the only eligible members. If you set them to hidden, you have to connect to them explicitly with a direct connection; MongoDB drivers will intentionally avoid them otherwise.
Other than that, node2 and node4 have direct access to whichever node becomes primary, so I see no other problem.
In a 3-node replica set, why, when 2 nodes are down, does the third become SECONDARY and not PRIMARY?
I want to have 2 mongods inside a data center and one outside, so that if the data center fails, the third (outside) mongod becomes the primary.
Is that possible without an arbiter?
OK, I found the answer:
http://tebros.com/2010/11/mongodb-arbiters-with-only-two-replicas/
What happened?! It turns out that when a mongod instance is isolated, it cannot vote for itself to be primary. This makes sense when you think about it: if a network link went down and separated your two replicas, you wouldn't want them both to elect themselves as primary. So in my case, when rep1-1 noticed that it was isolated from the rest of the replica set, it made itself secondary and stopped accepting writes.
Whenever you end up with floor(cluster_participants / 2) + 1 nodes down (assuming an odd number of participants), the cluster enters read-only mode: a candidate node needs the votes of a majority of all nodes to be elected primary.
For example, if you have a 5-node cluster and 3 nodes blow away, the others will stay secondary, because none of them is able to get 3 votes.
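Written out, the rule from above is:

    majority = floor(N / 2) + 1   // N = number of voting members
    // the question's 3-node set: N = 3 -> majority = 2
    // a lone surviving node has only 1 vote, so it stays SECONDARY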
For more information: http://docs.mongodb.org/manual/core/replication-internals/#replica-set-election-internals
My test system (due to lack of resources) has a two-node MongoDB replica set. There is no arbiter.
During some system changes, one of the servers was put out of action and will not be coming back. This server happened to host the primary Mongo node, which left the only other member of the set as a secondary.
I know I should have had at least three nodes for the cluster (our prod setup does).
Is there a way I can make the now-offline primary step down? I haven't been able to change any of the rs.conf() settings because the only working node is a secondary. Starting an arbiter doesn't seem to work either, because I cannot add it to the replica set while the primary is down.
Has anyone encountered this before and managed to resolve it?
To recap:
SERVER A (PRIMARY) - OFFLINE
SERVER B (SECONDARY) - ONLINE
A + B = REPLSET
Any help would be greatly appreciated.
The MongoDB website has documentation for what to do (in an emergency only) when you need to reconfigure a replica set while members are down. This sounds like the situation you are in.
Basically, if you're on version >= 2.0, and it's an emergency, you can add force: true to the replica set configuration command.
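A minimal sketch of that emergency reconfig, run from the mongo shell on the surviving SERVER B (this assumes the dead SERVER A is members[0] in your current config; check the indexes in your own rs.conf() output first):

    cfg = rs.conf()
    cfg.members = [cfg.members[1]]       // keep only SERVER B, drop the dead SERVER A
    rs.reconfig(cfg, { force: true })    // force is needed because there is no primary

Once the forced reconfig goes through, SERVER B is the only voting member left and should elect itself primary; after that you can rs.add() an arbiter or a replacement member as usual.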