Error while creating Replica set - MongoDB

I am trying to create a replica set but am unable to proceed.
Script to create 3 mongod instances:
sudo mkdir -p /data/rs1 /data/rs2 /data/rs3
sudo mongod --replSet rs1 --logpath "1.log" --dbpath /data/rs1 --port 27017 --fork
sudo mongod --replSet rs2 --logpath "2.log" --dbpath /data/rs2 --port 27018 --fork
sudo mongod --replSet rs3 --logpath "3.log" --dbpath /data/rs3 --port 27019 --fork
This executes successfully, but after this I try to provide rs1 with information about rs2 and rs3 via the script below:
init_replica.js:
config = {
_id:"rs1",members:[
{_id:0,host:"grit-lenevo-pc:27017",priority:0,slaveDelay:5},
{_id:1,host:"grit-lenevo-pc:27018"},
{_id:2,host:"grit-lenevo-pc:27019"}]
}
rs.initiate(config)
rs.status()
Now when I try to run:
mongo --port 27018 < init_replica.js
I am getting:
MongoDB shell version: 3.2.8
connecting to: 127.0.0.1:27018/test
{
"_id" : "rs1",
"members" : [
{
"_id" : 0,
"host" : "grit-lenevo-pc:27017",
"priority" : 0,
"slaveDelay" : 5
},
{
"_id" : 1,
"host" : "grit-lenevo-pc:27018"
},
{
"_id" : 2,
"host" : "grit-lenevo-pc:27019"
}
]
}
{
"ok" : 0,
"errmsg" : "Attempting to initiate a replica set with name rs1, but command line reports rs2; rejecting",
"code" : 93
}
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94
}
bye
Note: The same command works fine if I try the below command:
mongo --port 27017 < init_replica.js
Following tutorial: M101 MongoDB for Java Developers

The error message says it right there:
"Attempting to initiate a replica set with name rs1, but command line reports rs2; rejecting"
You should supply all members with the same replica set name as the seed (rs1). For the second member:
sudo mongod --replSet rs1 ...
and not
sudo mongod --replSet rs2 ...
The same principle goes for the third member.
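For example, the startup script from the question would become something like this (a sketch that reuses the question's log paths, dbpaths, and ports):
sudo mkdir -p /data/rs1 /data/rs2 /data/rs3
# every member is started with the same replica set name, rs1
sudo mongod --replSet rs1 --logpath "1.log" --dbpath /data/rs1 --port 27017 --fork
sudo mongod --replSet rs1 --logpath "2.log" --dbpath /data/rs2 --port 27018 --fork
sudo mongod --replSet rs1 --logpath "3.log" --dbpath /data/rs3 --port 27019 --fork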

I had a similar name mismatch issue, but it was a bit more subtle.
In my mongo.conf I used "rs0" (quoted) for the RS name, and then ran rs.initiate({_id : "rs0"...}), which failed with
"Attempting to initiate a replica set with name rs0, but command line reports \"rs0\"; rejecting"
It took a while to notice the extra quotes; don't use them in the RS name in mongo.conf.
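For reference, the relevant setting written without quotes might look like this (a minimal sketch, assuming the YAML config file format):
replication:
  replSetName: rs0   # the replica set name, with no surrounding quote characters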

Related

Unable to initiate replica set while creating config server to deploy sharded cluster

I am trying to deploy a sharded cluster in MongoDB on a Mac. I am following this page. To create a config server for the sharded cluster I did the following steps:
mkdir -p /data/config/config-a /data/config/config-b /data/config/config-c
mongod --logpath "cfg-a.log" --dbpath /data/config/config-a --port 57040 --fork --configsvr --smallfiles
mongod --logpath "cfg-b.log" --dbpath /data/config/config-b --port 57041 --fork --configsvr --smallfiles
mongod --logpath "cfg-c.log" --dbpath /data/config/config-c --port 57042 --fork --configsvr --smallfiles
and after this I tried initiating the replica set, as follows
$ mongo --port 57040
> config = { _id : "cs", members : [{ _id:0, host:"localhost:57040"}, { _id:1, host: "localhost:57041"}, { _id:2, host:"localhost:57042"}]};
{
"_id" : "cs",
"members" : [
{
"_id" : 0,
"host" : "localhost:57040"
},
{
"_id" : 1,
"host" : "localhost:57041"
},
{
"_id" : 2,
"host" : "localhost:57042"
}
]
> rs.initiate(config)
{
"ok" : 0,
"errmsg" : "This node was not started with the replSet option",
"code" : 76,
"codeName" : "NoReplicationEnabled"
}
Why am I getting this error? I did not get any error while initiating the other replica sets, but I am getting an error in this one. Could someone help me with this?
The guide you are following is outdated. The MongoDB documentation has a tutorial that configures a sharded cluster. It gives both the --replSet and --configsvr arguments:
mongod --configsvr --replSet <replica set name> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
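As a sketch of how the question's three config servers might be started and initiated following that tutorial (reusing the question's ports, dbpaths, and the set name cs; --smallfiles is left out and configsvr: true is included in the replica set config):
mongod --configsvr --replSet cs --logpath "cfg-a.log" --dbpath /data/config/config-a --port 57040 --fork
mongod --configsvr --replSet cs --logpath "cfg-b.log" --dbpath /data/config/config-b --port 57041 --fork
mongod --configsvr --replSet cs --logpath "cfg-c.log" --dbpath /data/config/config-c --port 57042 --fork
mongo --port 57040
rs.initiate({ _id: "cs", configsvr: true, members: [
  { _id: 0, host: "localhost:57040" },
  { _id: 1, host: "localhost:57041" },
  { _id: 2, host: "localhost:57042" }
]})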

MongoDB error when sharding a collection: ns not found

I've set up a simple server configuration for the purpose of testing sharding functionality, and I get the error mentioned in the title.
My configuration is pretty simple: one config server, one shard server and one mongos (on 127.0.0.1:27019, 127.0.0.1:27018 and 127.0.0.1:27017 respectively).
Everything seems to work well until I try to shard a collection; the command gives me the following:
sh.shardCollection("test.test", { "test" : 1 } )
{
"ok" : 0,
"errmsg" : "ns not found",
"code" : 26,
"codeName" : "NamespaceNotFound",
"operationTime" : Timestamp(1590244259, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1590244259, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
The config server and shard server outputs show no errors:
2020-05-23T10:39:46.629-0400 I SHARDING [conn11] about to log metadata event into changelog: { _id: "florent-Nitro-AN515-53:27018-2020-05-23T10:39:46.629-0400-5ec935b2bec982e313743b1a", server: "florent-Nitro-AN515-53:27018", shard: "rs0", clientAddr: "127.0.0.1:58242", time: new Date(1590244786629), what: "shardCollection.start", ns: "test.test", details: { shardKey: { test: 1.0 }, collection: "test.test", uuid: UUID("152add6f-e56b-40c4-954c-378920eceede"), empty: false, fromMapReduce: false, primary: "rs0:rs0/127.0.0.1:27018", numChunks: 1 } }
2020-05-23T10:39:46.620-0400 I SHARDING [conn25] distributed lock 'test' acquired for 'shardCollection', ts : 5ec935b235505bcc59eb60c5
2020-05-23T10:39:46.622-0400 I SHARDING [conn25] distributed lock 'test.test' acquired for 'shardCollection', ts : 5ec935b235505bcc59eb60c7
2020-05-23T10:39:46.637-0400 I SHARDING [conn25] distributed lock with ts: 5ec935b235505bcc59eb60c7' unlocked.
2020-05-23T10:39:46.640-0400 I SHARDING [conn25] distributed lock with ts: 5ec935b235505bcc59eb60c5' unlocked.
Of course the collection exists on the primary shard:
rs0:PRIMARY> db.test.stats()
{
"ns" : "test.test",
"size" : 216,
"count" : 6,
"avgObjSize" : 36,
"storageSize" : 36864,
"capped" : false,
...
}
I have no idea what could be wrong here; I'd much appreciate any help :)
EDIT:
Here is the detail of the steps I follow to run the servers; I probably misunderstand something:
Config server:
sudo mongod --configsvr --replSet rs0 --port 27019 --dbpath /srv/mongodb/cfg
mongo --port 27019
Then in mongo shell
rs.initiate(
{
_id: "rs0",
configsvr: true,
members: [
{ _id : 0, host : "127.0.0.1:27019" }
]
}
)
Shard server:
sudo mongod --shardsvr --replSet rs0 --dbpath /srv/mongodb/shrd1/ --port 27018
mongo --port 27018
Then in shell:
rs.initiate(
{
_id: "rs0",
members: [
{ _id : 0, host : "127.0.0.1:27018" }
]
}
)
db.test.createIndex({test:1})
Router:
sudo mongos --configdb rs0/127.0.0.1:27019
mongo
Then in shell:
sh.addShard('127.0.0.1:27018')
sh.enableSharding('test')
sh.shardCollection('test.test', {test:1})
That error sometimes happens when some routers have an out-of-date view of which databases/collections exist in the sharded cluster.
Try running flushRouterConfig (https://docs.mongodb.com/manual/reference/command/flushRouterConfig/) on each mongos (i.e. connect to each mongos by itself, one at a time, and run this command on it).
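For example, connected to one of the mongos instances (a minimal sketch):
mongo --port 27017
// run against the admin database of this mongos
db.adminCommand({ flushRouterConfig: 1 })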
I just misunderstood one basic concept: config servers and shard servers are distinct and independent MongoDB instances, so each must be part of a distinct replica set.
So replacing
sudo mongod --configsvr --replSet rs0 --port 27019 --dbpath /srv/mongodb/cfg
with
sudo mongod --configsvr --replSet rs0Config --port 27019 --dbpath /srv/mongodb/cfg
makes the configuration work.
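Put together, a sketch of the three start commands with distinct replica set names (the _id in the config server's rs.initiate and the mongos --configdb string would change to match):
sudo mongod --configsvr --replSet rs0Config --port 27019 --dbpath /srv/mongodb/cfg
sudo mongod --shardsvr --replSet rs0 --dbpath /srv/mongodb/shrd1/ --port 27018
sudo mongos --configdb rs0Config/127.0.0.1:27019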

MongoDB second replica set: all members are secondaries

I have created one replica set with the following commands:
mongod --replSet s0 --dbpath E:/mongo/data/shard0/rs0 --port 37017 --shardsvr
mongod --replSet s0 --dbpath E:/mongo/data/shard0/rs1 --port 37018 --shardsvr
mongod --replSet s0 --dbpath E:/mongo/data/shard0/rs2 --port 37019 --shardsvr
mongo --port 37017
config = { _id: "s0", members:[
{ _id : 0, host : "localhost:37017" },
{ _id : 1, host : "localhost:37018" },
{ _id : 2, host : "localhost:37019" }]};
rs.initiate(config)
When I checked rs.status(), it showed 1 primary and 2 secondaries.
Second replica set:
Now I tried to create a second replica set with the following commands:
mongod --replSet s1 --dbpath E:/mongo/data/shard1/rs0 --port 47017 --shardsvr
mongod --replSet s1 --dbpath E:/mongo/data/shard1/rs1 --port 47018 --shardsvr
mongod --replSet s1 --dbpath E:/mongo/data/shard1/rs2 --port 47019 --shardsvr
mongo --port 47017
config = { _id: "s1", members:[
{ _id : 0, host : "localhost:47017" },
{ _id : 1, host : "localhost:47018" },
{ _id : 2, host : "localhost:47019" }]};
rs.initiate(config)
When I checked rs.status() on the second replica set, it shows that all members are secondaries.
Why is there no primary on the second replica set?
Can anyone help me? Thank you.

Authentication failed from httpinterface and Robomongo

edit:
Ah, bad news: Robomongo 0.8.x doesn't support SCRAM-SHA-1:
https://github.com/paralect/robomongo/issues/766. The good news is that they're working hard on v0.9, which promises support for it.
Also, the HTTP interface in Mongo 3.0 doesn't work with SCRAM-SHA-1 user documents, because "(it) is generally considered insecure":
https://jira.mongodb.org/browse/SERVER-17527
I've just set up a Mongo 3.0 replica set, enabled authentication, and created a userAdminAnyDatabase admin and a normal readWrite user.
./mongod --dbpath=/usr/local/mongo/mongodb/data/data1 --logpath=/usr/local/mongo/mongodb/logs/log1/mongodb.log --port 27017 --replSet jv_mongo --smallfiles --fork --rest --httpinterface --keyFile /usr/local/mongo/mongodb/key/mongodb.pem
./mongod --dbpath=/usr/local/mongo/mongodb/data/data2 --logpath=/usr/local/mongo/mongodb/logs/log2/mongodb.log --port 27018 --replSet jv_mongo --smallfiles --fork --rest --httpinterface --keyFile /usr/local/mongo/mongodb/key/mongodb.pem
./mongod --dbpath=/usr/local/mongo/mongodb/data/data3 --logpath=/usr/local/mongo/mongodb/logs/log3/mongodb.log --port 27019 --replSet jv_mongo --smallfiles --fork --rest --httpinterface --keyFile /usr/local/mongo/mongodb/key/mongodb.pem
jv_mongo:PRIMARY> use admin
switched to db admin
jv_mongo:PRIMARY> db.getUser("mongoAdmin");
{
"_id" : "admin.mongoAdmin",
"user" : "mongoAdmin",
"db" : "admin",
"roles" : [
{
"role" : "userAdminAnyDatabase",
"db" : "admin"
}
]
}
jv_mongo:PRIMARY> use comment
switched to db comment
jv_mongo:PRIMARY> db.getUser("comment");
{
"_id" : "comment.comment",
"user" : "comment",
"db" : "comment",
"roles" : [
{
"role" : "readWrite",
"db" : "comment"
}
]
}
And I can access the shell without any problem:
./mongo --port 27017 -u mongoAdmin -p PASSWORD --authenticationDatabase admin
./mongo --port 27017 -u comment -p PASSWORD --authenticationDatabase comment
jv_mongo:PRIMARY> db.user_login.find();
{ "_id" : ObjectId("5506a9de41e1073435ff06b3"), "id" : NumberLong(2), "user_id" : 9527, "login_time" : ISODate("2015-03-16T10:01:02.378Z"), "login_ip" : "127.0.0.1" }
{ "_id" : ObjectId("5506a9de41e1073435ff06b4"), "id" : NumberLong(3), "user_id" : 9538, "login_time" : ISODate("2015-03-16T10:01:02.380Z"), "login_ip" : "127.0.0.1" }
{ "_id" : ObjectId("5506a9de41e1073435ff06b5"), "id" : NumberLong(4), "user_id" : 9549, "login_time" : ISODate("2015-03-16T10:01:02.382Z"), "login_ip" : "127.0.0.1" }
I also successfully accessed Mongo via the Java driver.
But I receive an authentication failure when trying Robomongo or 192.168.106.152:28017.
I'm not very familiar with Mongo or Mongo 3.0; maybe I'm missing some key configuration?
Use MongoChef; it works with MongoDB 3.0+.

uncaught exception: error: { "$err" : "not master and slaveOk=false", "code" : 13435 } [duplicate]

This question already has answers here:
mongodb, replicates and error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
(9 answers)
Closed 8 years ago.
After running db.getMongo().setSlaveOk(); the error below still comes up while I am accessing collections from the secondary node. As per my understanding, I should be able to see the data from the secondary node.
uncaught exception: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
I have used the below commands to create the replica set:
mkdir D:\data\repdb\One
mkdir D:\data\repdb\The
mkdir D:\data\repdb\Two
mongod --port 27017 --dbpath D:\data\repdb\One --replSet rs0 --smallfiles --oplogSize 128
mongod --port 27018 --dbpath D:\data\repdb\The --replSet rs0 --smallfiles --oplogSize 128
mongod --port 27019 --dbpath D:\data\repdb\Two --replSet rs0 --smallfiles --oplogSize 128
mongo --port 27017
rsconf = { "_id" : "rs0", "version" : 1, "members" : [ { "_id" : 1, "host" : "localhost:27017" } ] }
rs.initiate( rsconf )
rs.conf()
rs.add("localhost:27018") rs.add("localhost:27019")
Did you try rs.slaveOk() on the secondary member?
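For example (a minimal sketch; it assumes the member on port 27018 is currently a secondary, and mycollection is a placeholder name):
mongo --port 27018
rs.slaveOk()   // allow reads on this secondary for the current shell session
db.mycollection.find()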