MongoDB 3.0.4 - Issue connecting from Mongo Shell - mongodb

I am trying to connect, from the mongo shell on another VM, to a new MongoDB replica set that I created on 3 VMs which were previously part of a different replica set. I cleaned up everything on all the VMs and reinstalled MongoDB after purging everything, but the mongo shell still throws an error pointing to the old VMs:
2016-07-07T17:41:35.363-0700 E - [mongosMain] error upgrading config database to v6 :: caused by :: could not write initial changelog entry for upgrade :: caused by :: failed to write to changelog:
mongos specified a different config database string : stored : old-mongodb-host1:27018,old-mongodb-host2:27018,old-mongodb-host3-h5:27018 vs given : new-mongodb-host1:27018,new-mongodb-host1:27018,new-mongodb-host1:27018
Where is this information stored? I have also restarted all the VMs many times and it still does not work. Any help would be appreciated. Ubuntu version: 12.04.4, kernel: 3.11.0-15-generic. The replica set itself comes up fine.
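One place this kind of sharding metadata normally lives is the config database on the config servers, so a look there from a mongo shell connected to one of them may show which string is actually stored (the collection names below are the standard ones):
use config
db.shards.find().pretty()                       // shard entries the config servers currently hold
db.changelog.find().sort({time: -1}).limit(5)   // most recent sharding changelog entries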

Related

The system cannot find the file specified in MongoDB

I have successfully installed MongoDB 3.4 on my Windows 7 machine and set the PATH variable as well, but when I try to start the MongoDB server from Services, it fails with the following error message:
Windows could not start the MongoDB Service on Local Computer
The system cannot find the file specified
How can I fix this problem?

How to make a MongoDB replica set member the primary (master)

Recently, because of an unknown issue, our MongoDB instance hosted on a GCP compute VM stopped, and we were unable to restart it because it kept reporting that MongoDB.service could not be found. So we reinstalled MongoDB after taking a backup of all the .wt files in the dbpath. Once we had reinstalled MongoDB we copied the files back, but we cannot see the data in the DB. We tried the --repair flag, but still no use.
Is there a way we can get this working?
The other thing is that we have a VM snapshot from the day before the crash. There we can see the data in MongoDB, but only if we run rs.slaveOk(), so I think we can't use that node as the primary DB. Is there a way we can use it as the primary?
I'm relatively new to the concept of replica sets and master/slave; any suggestions and questions are welcome.
Thanks
If you can see the data from the snapshot with rs.slaveOk(), it is easy to recover: you can reconfigure that member as a standalone PRIMARY with these steps:
Get the current config:
cfg = rs.conf()
printjson(cfg)
In the temporary variable cfg, keep only the available member (in this example it is the first member in the config, with _id: 0):
cfg.members = [cfg.members[0]]
Reconfigure the replica set with only the available member:
rs.reconfig(cfg, {force : true})
(Don't forget the {force: true} option, since it is the only way to reconfigure from a SECONDARY.)
If all is well with this member and it is successfully elected PRIMARY, you can add other new members with rs.add() ...
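For example (the hostname here is just a placeholder):
rs.add("new-member-host:27017")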

Read concern level of majority error while upgrading Sharded Cluster from 3.2 to 3.4

I have upgraded a MongoDB sharded cluster with two replica sets from 3.2 to 3.4. The current storage engine is MMAPv1. After successfully upgrading all the secondaries, primaries, config servers and mongos instances to 3.4, I run the config server with the following command:
sudo mongod --configsvr
I keep getting the following error:
SHARDING [shard registry reload] Periodic reload of shard registry failed :: caused by :: 115 could not get updated shard list from config server due to Current storage engine does not support majority readConcerns; will retry after 30s
I am also unable to connect mongos to the config server. When I try to connect using the following command:
sudo mongos --configdb [ip-of-my-config-server]:27019
it gives me the following error:
BadValue: configdb supports only replica set connection string
I suppose mongos is unable to connect to the config server because of the majority read concern error on the config server.
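For reference, in 3.4 the --configdb option expects a replica set connection string, i.e. the config server replica set name followed by its members, along these lines (configReplSet is only a placeholder name):
sudo mongos --configdb configReplSet/[ip-of-my-config-server]:27019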
The MongoDB manual says:
"When reading from the replica set config servers, MongoDB 3.4 uses a Read Concern level of "majority"."
And to use a read concern level of "majority", WiredTiger must be used as the storage engine.
So it seems I have to switch to the WiredTiger storage engine to make this work. But when I was about to switch a secondary replica set member to WiredTiger, I found this in the manual:
"This procedure completely removes a secondary replica set member’s data"
So I am stuck halfway. The situation is:
The config server is giving an error regarding majority read concern.
I have to switch to WiredTiger to get rid of it.
Switching to WiredTiger will remove data from secondary members.
Data will not be replicated back to the secondary members during the switch to WiredTiger because of the config server error, so eventually I will end up losing all the data (please correct me if I am wrong).
My questions are:
Can I make MongoDB 3.4 use a read concern level of "local" when reading from the replica set config servers?
How can I switch to the WiredTiger engine without losing data in my scenario?
You could migrate each node in the replica set as if it were a standalone, by using mongodump to back up the data, restarting with WiredTiger and a blank data directory, and then using mongorestore to populate the new database.
This isn't normally recommended for replica set nodes, but only because it's easier to just wipe the data on a secondary and let it resync from the other nodes. Doing it this way will work just fine, but it involves a little more fiddly work with the dump and restore tools.
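A rough sketch of that per-node dump and restore, with placeholder paths, the default port, and auth/replica-set details left out:
# 1. dump all data while the node is still running on MMAPv1
mongodump --host localhost --port 27017 --out /backup/mmapv1-dump
# 2. stop the old mongod, then start a fresh one on WiredTiger with an empty dbpath
mongod --storageEngine wiredTiger --dbpath /data/db-wt --port 27017 --fork --logpath /var/log/mongodb/wt.log
# 3. load the dump into the new WiredTiger instance
mongorestore --host localhost --port 27017 /backup/mmapv1-dump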

Can't authenticate against Mongodb 3.0.1 member added to a 2.6.8 replica set

We have a replica set running 2.6.8, and I'm trying to add a member running 3.0.1 with the WiredTiger engine. I'm attempting to do a rolling update of the replica set to 3.0.1 by replacing one member at a time. The data seems to have replicated, but I am unable to authenticate using the mongo shell.
MongoDB shell version: 3.0.1
connecting to: test
rs:SECONDARY> use admin
switched to db admin
rs:SECONDARY> db.auth("admin", "password")
Error: 18 Authentication failed.
0
The log is also full of the following:
Failed to authenticate admin#admin with mechanism MONGODB-CR: AuthenticationFailed UserNotFound Could not find user admin#admin
Failed to authenticate user#collection with mechanism MONGODB-CR: AuthenticationFailed UserNotFound Could not find user user#collection
I'm not sure if the users didn't replicate along with the data, or if this is something to do with the authentication mechanism changing in 3.0.
I'm only seeing this issue on our production replica set. I first tried it on our testing replica set and had no issues. I've tried it multiple times in production, each time launching a new, clean AWS instance, and each time I have the same issue. The only differences between production and testing are the IPs and the amount of data: production has >2TB of data, while testing has <1GB. MongoDB is running on Amazon Linux, using packages from the http://repo.mongodb.org/yum/redhat/6/mongodb-org/3.0/x86_64/ yum repository.
The problem was that the authorization schema was still using the version 2.4 schema, even though all members of the replica set were running 2.6.
MongoDB before 3.0.2 doesn't check this before syncing, but 3.0.2 changed this to give an error before trying to sync.
Run db.getSiblingDB("admin").runCommand({authSchemaUpgrade: 1 }); on the primary of the replica set before adding the version 3 member.
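If it helps, the cluster's current auth schema document can be checked on the primary before and after the upgrade (if it comes back null, the deployment is most likely still on the pre-2.6 schema):
db.getSiblingDB("admin").system.version.findOne({_id: "authSchema"})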

Mongo on Linux, from Java app, throwing exception "can't find a master"

I was running MongoDB on Windows before, and my Java app was connecting perfectly. Now I have switched MongoDB to Linux and started it simply with "./mongod". But whenever I try to connect to Mongo, I get the following exception:
Caused by: com.mongodb.MongoException: can't find a master
at com.mongodb.DBTCPConnector.checkMaster(DBTCPConnector.java:434)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:209)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:305)
at com.mongodb.DBCollection.findOne(DBCollection.java:647)
at com.mongodb.DBCollection.findOne(DBCollection.java:626)
at com.mongodb.DBApiLayer$MyCollection.createIndex(DBApiLayer.java:364)
at com.mongodb.DBCollection.createIndex(DBCollection.java:436)
at com.mongodb.DBCollection.ensureIndex(DBCollection.java:515)
at com.google.code.morphia.DatastoreImpl.ensureIndex(DatastoreImpl.java:245)
at com.google.code.morphia.DatastoreImpl.ensureIndexes(DatastoreImpl.java:310)
at com.google.code.morphia.DatastoreImpl.ensureIndexes(DatastoreImpl.java:279)
at com.google.code.morphia.DatastoreImpl.ensureIndexes(DatastoreImpl.java:340)
at com.google.code.morphia.DatastoreImpl.ensureIndexes(DatastoreImpl.java:333)
That's not a replica-set master/single-server problem (as I understand it, you are using UMongo).
Before connecting, try changing the server setting from "localhost:27017" to "127.0.0.1:27017".
Sounds like your configs differ between the Linux and Windows mongo servers. Ensure your Linux server has joined the replica set properly and is not firewalled off from the other servers. All the documentation is here: http://www.mongodb.org/display/DOCS/Replica+Sets
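As a quick sanity check (host and port are placeholders for whatever the Java app is configured with), you can ask the server directly whether it currently sees itself as master:
mongo --host 127.0.0.1 --port 27017 --eval 'printjson(db.isMaster())'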