I have a simple replica set configured as follows:
mongo1 (primary)
mongo2 (secondary)
mongo3 (arbiter)
It functioned correctly for around a month and then we started seeing intermittent exceptions along the lines of:
Moped::Errors::ReplicaSetReconfigured: The operation: #<Moped::Protocol::Command
#length=179 #request_id=1400 #response...>{:order=>"SwimSet"}, :update=>{"$inc"=>
{:next=>1}}, :new=>true, :upsert=>true} #fields=nil> failed with error "not master"
The key bit being "failed with error not master". This happens sporadically when trying to write to a collection, and not during or immediately after a failover. Shutting the secondary down but leaving the arbiter running resolves the error, but leaves us without any redundancy.
What we've tried:
Rebuilding the secondary and re-adding it to the cluster
Failing over to the newly built node, then rebuilding the old primary
Upgrading to Mongo 2.6.4
Current Versions:
Mongo Server: 2.6.4
Mongoid: 3.1.6
Moped: 1.5.2
Any suggestions would be very much appreciated, as we've been battling with this on and off for nearly a month now.
Related
We've set up a sharded cluster with a replica set for each shard in our production environment. Last night we encountered an issue while issuing an index build command through the mongo shell, connecting via the mongos instance rather than directly to a specific shard.
The issue is: once the index build starts, the number of connections from mongos to this shard increases rapidly, and 'too many connections' errors show up in the primary shard's log file very soon.
Below is a summary of the primary shard's log:
At the very beginning of the index build:
Then, very soon, the connection count reached 10000:
Connection limit exceeded
From the logs of the three mongos instances, all the connections are initiated by mongos. We googled and found a related issue: https://jira.mongodb.org/browse/SERVER-28822
But it doesn't describe the trigger conditions. At the same time, I tried to reproduce the issue in our test environment, but it did not occur again. So, please help.
Here is the configuration for mongos:
mongos' configuration
And here is the configuration for the primary shard:
primary shard's configuration
Found the answer.
It was because the index creation issued by the mongorestore command ran in the foreground, not the background. I misunderstood how mongorestore builds indexes and had not checked the metadata file for the collection's schema.
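One way to avoid this on the next restore is to rewrite the dump's metadata so the indexes are flagged as background builds before running mongorestore. A minimal sketch in Python, assuming the `*.metadata.json` layout that mongodump produces (the example document below is illustrative):

```python
import json

def make_indexes_background(metadata_json):
    """Flag every non-_id index in a mongodump *.metadata.json document
    as a background build, so that mongorestore's index creation does not
    block the primary (the _id index cannot be built in the background)."""
    meta = json.loads(metadata_json)
    for idx in meta.get("indexes", []):
        if idx.get("name") != "_id_":
            idx["background"] = True
    return json.dumps(meta)

# Illustrative metadata document in the shape mongodump emits
raw = json.dumps({
    "options": {},
    "indexes": [
        {"v": 1, "key": {"_id": 1}, "name": "_id_", "ns": "shop.orders"},
        {"v": 1, "key": {"order": 1}, "name": "order_1", "ns": "shop.orders"},
    ],
})
fixed = json.loads(make_indexes_background(raw))
print(fixed["indexes"][1]["background"])  # prints: True
```

Alternatively, mongorestore can skip index restoration entirely so you can build the indexes yourself afterwards with `background: true`.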
I am trying to connect, via the mongo shell, to a new MongoDB replica set I created on 3 VMs that were previously part of another replica set. I cleaned up everything on all the VMs and reinstalled MongoDB after purging everything, but the mongo shell still throws an error pointing to the old VMs:
2016-07-07T17:41:35.363-0700 E - [mongosMain] error upgrading config database to v6 :: caused by :: could not write initial changelog entry for upgrade :: caused by :: failed to write to changelog:
mongos specified a different config database string : stored : old-mongodb-host1:27018,old-mongodb-host2:27018,old-mongodb-host3-h5:27018 vs given : new-mongodb-host1:27018,new-mongodb-host1:27018,new-mongodb-host1:27018
Where is this information stored? I have also restarted all the VMs many times, and it does not help. Any help would be appreciated. Ubuntu version: 12.04.4, kernel: 3.11.0-15-generic. The replica set itself comes up fine.
I have a MongoDB replica set running in a Docker container (mongo:3.0.11) in an AWS VPC (in this specific case just one node, the primary).
This server is shut down every night and started again the next morning.
After a few months of running seamlessly, I've started seeing errors in the past few weeks: once or twice a week, the mongo startup fails.
rs.status() returns stateStr: REMOVED
and as error message: errmsg : "Our replica set config is invalid or we are not a member of it"
Looking at the mongo logs I have:
2016-06-07T12:01:48.724+0000 W NETWORK [ReplicationExecutor] getaddrinfo("database.my_vpc_dns.net") failed: Name or service not known
When this error happens, a simple restart of the Docker container fixes it, but I'm struggling to understand what causes it to happen only occasionally.
The member probably fails to resolve its configured hostname during the restart: if DNS is not yet available when mongod starts, the node cannot match itself against the replica set configuration, and so reports itself as REMOVED.
What you can do is point to the machine directly through a stable address instead of relying on the VPC DNS name, and check the currently resolved configuration with db.isMaster() on the primary, without restarting.
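For example, one hedged workaround is to pin the replica set hostname inside the container so startup does not depend on VPC DNS being up; the hostname below is taken from the log above, the IP address is illustrative:

```
# /etc/hosts inside the container — maps the replica set member's
# hostname to a fixed address so mongod can resolve it at startup
10.0.0.5    database.my_vpc_dns.net
```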
I'm running MongoDB 2.4.5 and recently I've started digging into Replica Set to get some kind of redundancy.
I started the same mongo instance with the --replSet parameter and also added an arbiter to the running replica set. What happened was that writes to mongo slowed down significantly (from 15 ms to 30-60 ms, sometimes even around 300 ms). As soon as I restarted it in non-replica-set mode, performance went back to normal.
I was using MongoDB version 2.6.6 on Google Compute Engine and used the click to deploy method.
rs0:SECONDARY> db.createUser({user:"admin", pwd:"secret_password", roles:[{role:"root", db:"admin"}]})
2015-07-13T15:02:28.434+0000 Error: couldn't add user: not master at src/mongo/shell/db.js:1004
rs0:SECONDARY> use admin
switched to db admin
rs0:SECONDARY> db.createUser({user:"admin", pwd:"secret_password", roles:["root"]})
2015-07-13T15:13:28.591+0000 Error: couldn't add user: not master at src/mongo/shell/db.js:1004
I had a similar problem with mongo 3.2:
Error: couldn't add user: not master :
When trying to create a new user, with root role.
I was using only a local copy of mongo.
In my mongod.conf file I had the following uncommented:
replication:
replSetName: <node name>
Commenting that out and restarting fixed the problem. I guess mongo thought it was part of a replica set and was confused about which node was the master.
Edit:
I've also found that if you ARE trying to set up a replica set and you get the above error, then run:
rs.initiate()
This will initiate a replica set and make the current node the PRIMARY.
Exit, and then log back in and you should see:
PRIMARY>
Now create users as needed.
I ran into this error when scripting replica set creation.
The solution was to add a delay between rs.initiate() and db.createUser().
Replica set creation is seemingly done in the background, and it takes time for the primary node to actually become primary. In interactive use this doesn't cause a problem, because there is a natural delay while typing the next command, but when scripting the interactions, the delay may need to be forced.
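Rather than sleeping for a fixed time, the delay can be forced by polling until the node reports itself primary. A sketch of that loop in Python; `is_master` here is a stand-in for whatever check your driver exposes (e.g. running db.isMaster() against the node after rs.initiate()), so the names are illustrative:

```python
import time

def wait_for_primary(is_master, timeout=30.0, interval=0.5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll until is_master() reports True (the node has become primary),
    or give up after `timeout` seconds."""
    deadline = clock() + timeout
    while True:
        if is_master():
            return True
        if clock() + interval > deadline:
            return False
        sleep(interval)

# Stub standing in for a real driver call: "becomes primary" on the 3rd poll
polls = {"n": 0}
def fake_is_master():
    polls["n"] += 1
    return polls["n"] >= 3

print(wait_for_primary(fake_is_master, timeout=5.0, interval=0.0))  # prints: True
```

Only once this returns True is it safe for the script to call db.createUser().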
MongoDB will be deployed in a cluster of Compute Engine instances (also known as a MongoDB replica set). Each instance will use a boot disk and separate disk for database files.
Primary and master nodes are the nodes that can accept writes. MongoDB's replication is "single-master": only one node can accept write operations at a time.
Secondary and slave nodes are read-only nodes that replicate from the primary.
Your error message indicates you are trying to add the user on a secondary. Try adding the user on the primary instead.
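If you are not sure which member is the primary, db.isMaster() will tell you from any member of the set; the hostname in this sketch is illustrative:

```
rs0:SECONDARY> db.isMaster().primary
mongo1.example.net:27017
```

Reconnect the shell to that host and port, then run db.createUser() there.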
I ran into this issue when I thought I was running mongo 3.4, but it was actually mongo 3.6. Uninstalling 3.6 and installing 3.4 fixed my issue.