I am trying to set up replication in MongoDB on AWS.
I have created two Ubuntu instances, SSHed into them, and initiated replication.
However, when I try to add the second node using rs.add(), it gives the following error:
"
Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded:"
I am following this link:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
The link suggests that I have to create three instances first, but I have only created two. Please tell me if that is the reason for the error.
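If you do create a third instance, adding the remaining members from the mongo shell on the first node would look roughly like this (the hostnames are placeholders; the third member could be either a full data-bearing node added with rs.add() or an arbiter):
rs.add("ubuntu-node-2:27017")
rs.addArb("ubuntu-node-3:27017")
With only two members, a single unreachable node leaves the set without a majority, so the quorum check can fail.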
I searched on Stack Overflow but couldn't find a solution for this.
I have 3 nodes, 1 primary and 2 secondaries: mongo1.com, mongo2.com and mongo3.com.
Everything is working well with the connection. When I shut down any one node, e.g. mongo1.com, my app keeps working fine. When I then shut down a second node, e.g. mongo3.com, the app stops working. As soon as I bring any one node back up, the app works again.
In short, the app does not work with a single node. I am looking for guidance on, or an explanation of, this behaviour.
I checked the status using rs.status(); two nodes show health: 0 and the message "unreachable node".
The third node, which is active, shows health: 1 and "infoMessage" : "could not find member to sync from".
I did some research and found that if 2 nodes are shut down, you can manually make the remaining node primary.
To have a primary in a 3-node replica set, at least 2 nodes must be operational; with only 1 node up there is no majority, so the surviving node stays secondary and the application cannot write.
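If you really need the single surviving node to accept writes during such an outage, one emergency option is to force a reconfiguration from that node so it becomes the only member (and therefore primary); the member index below is a placeholder for whichever member survived:
cfg = rs.conf()
cfg.members = [cfg.members[0]]   // keep only the surviving member
rs.reconfig(cfg, { force: true })
This should be treated as a last resort rather than normal operation, since the removed members have to be added back afterwards.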
I am very new to SolrCloud and am now integrating SolrCloud with an external ZooKeeper.
I want to run 4 Solr cores on 4 different servers, managed by ZooKeeper.
So I ran the Solr nodes on 4 servers with different IPs and different ports (8983, 8984, 8985, 8986) and bound them to ZooKeeper, and that is okay.
But when I create a collection with the following command on one of the nodes:
/opt/solr/bin/solr create -c articles -s 2 -rf 2
I get an error because of the remote servers. But when I create 4 nodes on the same IP with different ports, it is okay. Is there any command or any way to create a collection across 4 remote Solr cores in SolrCloud mode?
First of all, check whether the cluster is configured correctly. Go to localhost:8983/solr/admin/collections?action=CLUSTERSTATUS and check that all 4 nodes are alive.
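The same check can also be run from a shell (assuming the default port on the node you query):
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"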
Then, have you tried to create the collection using the admin page?
Go to http://localhost:8983/solr and then go to Collections.
Click the Add Collection button.
Name your collection and select a config name from the drop-down list (you should have uploaded it to ZooKeeper first).
Enter 2 as the number of shards, go to Advanced and set max shards per node to 2.
Create the collection and check that it is okay by selecting Cloud in the menu.
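Equivalently, the same collection can be created from the command line through the Collections API; the config name below is a placeholder for whatever you uploaded to ZooKeeper:
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=articles&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=<your_config_name>"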
Let me know if this works.
I have a replica set with 3 nodes. I have a server called dev 6 which is running Mongo 3.0.6, and dev 5 which has 2 mongods on it running 3.2. I'd like dev 6 to be the primary, so I've added the other 2 nodes to its initiated replica set; once I do that, it becomes primary and the other 2 nodes begin to sync from it. Here is a screenshot of how my setup looks when I bring dev 6 down and then back up.
As intended, dev 6 is secondary, and so is dev 5:27018. What I'm wondering, though, is why dev 5 is saying there's no one to sync from, while dev 5:27019 is saying that it is syncing from dev 5:27018.
I'm now going to follow the Mongo instructions to make dev 6 the primary; here is the result.
Dev 6 is the primary, but what I'm trying to understand is why the other dev 5 instances are not connecting with dev 6. Before any conclusions are jumped to: I am able to ping dev 5 from dev 6 and vice versa, and the /etc/hosts files contain the IP addresses for one another.
EDIT: I'm basing the claim that the replica set is unable to connect on the following message: "lastHeartbeatMessage" : "could not find member to sync from". This seems to be fixed if I run rs.config(//current cfg) or if I add or remove a replica set member.
Your replica set seems to be healthy in both cases. All secondaries have applied the last operation from the primary's operation log (optime/optimeDate are the same); moreover, lastHeartbeat is only slightly behind the dev 6 time. In regard to the lastHeartbeatMessage, refer to this JIRA issue, which says:
When a secondary chooses a source to sync from, it picks a node whose oplog is newer (not merely equal) than its own. So right after startup, when all nodes have the same data, the oplogs are identical and a secondary cannot choose a sync source. But after a write operation happens, the primary has a newer oplog, the secondaries can successfully choose a target to sync from, and the error message disappears.
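In other words, a single write on the primary should be enough to clear the message. A throwaway write like the following (the collection name is just an example) can be used to verify that:
use test
db.sync_check.insert({ ts: new Date() })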
The error "could not find member to sync from" I usually associate with replica set members not being able to talk to one another. Either because of firewall or credential issues.
I know that you can ping the servers but have you tried connecting to the primary mongo instance from one of the secondaries using the mongoclient?
mongo vpc-dev-app-06:27017
with appropriate user credentials if necessary.
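If authentication is enabled, a connection attempt with credentials might look like this (the user and database names are placeholders):
mongo vpc-dev-app-06:27017/admin -u <user> -p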
Has anything possibly changed in the mongod.conf as part of the upgrade?
I'd like to programmatically set up a ZooKeeper cluster. My goal is to use machines running CoreOS and dynamically deploy three nodes as Docker containers, configured as one ZooKeeper cluster.
Besides the common setup in the manual (/zookeeperReconfig.html), which shows how to add nodes to an existing three-node cluster, I found a conversation describing how to do this from the beginning, when there are no running nodes in an existing cluster. Unfortunately, that set of steps does not work for me. I'm talking about http://mail-archives.apache.org/mod_mbox/zookeeper-user/201408.mbox/%3CCAPSuxQipZFH2582bEMid2tCVBFk%3DC31hwCYjbFgSwChBF%2BZQVQ%40mail.gmail.com%3E
Here is a list of steps I did:
1. Run the first node with standaloneEnabled=false and the following as the only entry in zoo.cfg.dynamic:
server.1=localhost:2381:2281:participant;0.0.0.0:2181
2. Run the second node with the following dynamic config:
server.1=localhost:2381:2281:participant;0.0.0.0:2181
server.2=localhost:2382:2282:observer;0.0.0.0:2182
Note that there is no difference in the resulting behavior when I change "observer" to "participant" for the second node.
3. Now I have two running instances. I can use ./zkCli.sh to log into the first node. When I try to add the second node using the following command:
reconfig -add server.2=localhost:2382:2282:participant;0.0.0.0:2182
... it fails with:
KeeperErrorCode = NewConfigNoQuorum for
However, after some research I found a workaround. But it is tricky, and I don't think it is the correct solution.
What works for me: I can do step #3 on the first node again, but now with "observer". This command makes even the first node aware of the second node. When I type 'config' into the zkCli console, it seems to be working. The next step is to log into the second node using zkCli and then execute these commands:
reconfig -remove 2   <- the next step does not work without this
reconfig -add server.2=localhost:2382:2282:participant;0.0.0.0:2182
Well, now I have a working cluster with two nodes. Finally, it is interesting that I can now add the third node using the regular scenario mentioned in the first paragraph.
Does someone have an idea what I'm doing wrong?
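For comparison, this is the kind of initial dynamic config I would have expected to avoid the NewConfigNoQuorum error, assuming both nodes are started from the beginning with the same full participant list in their zoo.cfg.dynamic (ports as in my setup above):
server.1=localhost:2381:2281:participant;0.0.0.0:2181
server.2=localhost:2382:2282:participant;0.0.0.0:2182
With both servers listed at startup, the two nodes should be able to form a quorum on their own, and reconfig would only be needed for later additions.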
Here is a newbie trying to play around with MongoDB. I am trying to demonstrate scaling in my class, meaning I need to show that I have 2 instances of MongoDB up and running, replicate them, and set one as primary and the other as secondary.
Can any of you suggest a simple way to demonstrate that if the primary/master fails, the slave/secondary comes up as the new primary?
Please keep it as simple as possible, as I am teaching fairly new beginners to MongoDB.
MongoDB replica sets are not master/slave. In order to achieve automatic failover you need to have a majority of nodes in the replica set able to elect a new primary. The minimum number of nodes in your replica set should be 3, which can either be 3 data-bearing nodes or 2 data-bearing nodes and an arbiter, which is a node that votes in elections.
A demo using replication alone is more about failover and redundancy than scaling (which is better demonstrated with sharding).
If you want a very simple (and non-production) way to stand up a replica set or sharded cluster in a development environment, I would suggest using the mlaunch script which is part of mtools.
For example, to create a 3-node replica set with an arbiter:
mlaunch --replicaset --nodes 2 --arbiter
To create a sharded cluster with 3 shards backed by a replica set (plus mongos and config server):
mlaunch --replicaset --sharded 3
As mentioned in the other comments here, the free MMS Monitoring service is a good way to visualise activity in your MongoDB deployment, and you can use db.shutdownServer() to shut down specific nodes and see the outcome.
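For reference, db.shutdownServer() has to be run against the admin database of the node you want to stop, e.g. from a mongo shell connected to that node:
use admin
db.shutdownServer()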
The easiest way would be to set up the MongoDB Monitoring Service. Stop the mongod process on one node and watch the other take over. But use replica sets rather than master/slave replication, as they are the recommended approach.
Actually, it is pretty easy:
Set up a replica set with 2 "normal" mongods and an arbiter
Connect to both of the normal mongods using the mongo shell.
Show the output of rs.status(). (Note the "self" field.)
Shut down the current primary
Show the output of rs.status() again and again, until the former secondary is elected primary
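A minimal initiation for the first step might look like this in the mongo shell (the hostnames and ports are placeholders for wherever the three mongods run):
rs.initiate({
  _id: "demo",
  members: [
    { _id: 0, host: "host1:27017" },
    { _id: 1, host: "host2:27017" },
    { _id: 2, host: "host3:27017", arbiterOnly: true }
  ]
})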
Another option would be to write a simple Java app which uses the driver, put it in an infinite loop which writes one entry every second, and print out the number of documents in the database. Catch exceptions and report that a problem occurred. Start the replica set, then start your application. Shut down the primary while the program is running. During the election, exceptions may be thrown. As soon as the former secondary is elected primary, the document count should start to rise again.
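A rough sketch of the same idea, written in the mongo shell instead of a Java app (the collection name is arbitrary, and the shell should be started with the replica set address, e.g. mongo --host demo/host1:27017,host2:27017, so it follows the new primary):
while (true) {
    try {
        db.demo.insert({ ts: new Date() });
        print("documents: " + db.demo.count());
    } catch (e) {
        print("problem occurred: " + e);
    }
    sleep(1000);
}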