Error setting up sharded MongoDB cluster on localhost - mongodb

While registering shards with our mongos query router we get an error.
We start mongo --port 27011 --host localhost
and then running sh.addShard("localhost:27012") gives the error:
{
"ok" : 0,
"errmsg" : "no such command: 'addShard', bad cmd: '{ addShard: \"localhost:27012\" }'",
"code" : 59,
"codeName" : "CommandNotFound"
}
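For what it's worth, sh.addShard() is only recognized by a mongos query router; a plain mongod rejects it with CommandNotFound exactly as above. A minimal sketch, assuming the mongos is the process listening on port 27017 (that port is an assumption, not taken from the question):
mongo --port 27017
sh.addShard("localhost:27012")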

Related

MongoDB giving error "not master" with replication not initialized

Installed a fresh instance of MongoDB 3.6 on Ubuntu 16.x.
While creating a user, mongo gives the error Error: couldn't add user: not master.
show dbs also fails:
{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { listDatabases: 1.0, lsid: { id: UUID(\"f9e0590a4-b20a-b21b9eecf627\") }, $db: \"admin\" }",
"code" : 13,
"codeName" : "Unauthorized"
}
Replication is not enabled
{
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
}
Why is this mongo instance behaving like a slave when it is not configured to do so?
I have added hosts entries for the other slave instances but have not initialised them yet.
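As a sketch of the usual next step (assuming the mongod was started with a replica set name, rs0 here, which is an assumption): initiating the set makes the node elect itself primary and clears the not master errors.
mongo --port 27017
rs.initiate( { _id : "rs0", members : [ { _id : 0, host : "localhost:27017" } ] } )
The Unauthorized error on show dbs may be a separate access-control issue and is not necessarily fixed by this.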

MongoDB error when sharding a collection: ns not found

I've set up a simple server configuration for testing sharding functionality and I get the error above.
My configuration is pretty simple: one config server, one shard server and one mongos (respectively on 127.0.0.1:27019, 127.0.0.1:27018 and 127.0.0.1:27017).
Everything seems to work well until I try to shard a collection; the command gives me the following:
sh.shardCollection("test.test", { "test" : 1 } )
{
"ok" : 0,
"errmsg" : "ns not found",
"code" : 26,
"codeName" : "NamespaceNotFound",
"operationTime" : Timestamp(1590244259, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1590244259, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
The config server and shard server outputs show no errors:
2020-05-23T10:39:46.629-0400 I SHARDING [conn11] about to log metadata event into changelog: { _id: "florent-Nitro-AN515-53:27018-2020-05-23T10:39:46.629-0400-5ec935b2bec982e313743b1a", server: "florent-Nitro-AN515-53:27018", shard: "rs0", clientAddr: "127.0.0.1:58242", time: new Date(1590244786629), what: "shardCollection.start", ns: "test.test", details: { shardKey: { test: 1.0 }, collection: "test.test", uuid: UUID("152add6f-e56b-40c4-954c-378920eceede"), empty: false, fromMapReduce: false, primary: "rs0:rs0/127.0.0.1:27018", numChunks: 1 } }
2020-05-23T10:39:46.620-0400 I SHARDING [conn25] distributed lock 'test' acquired for 'shardCollection', ts : 5ec935b235505bcc59eb60c5
2020-05-23T10:39:46.622-0400 I SHARDING [conn25] distributed lock 'test.test' acquired for 'shardCollection', ts : 5ec935b235505bcc59eb60c7
2020-05-23T10:39:46.637-0400 I SHARDING [conn25] distributed lock with ts: 5ec935b235505bcc59eb60c7' unlocked.
2020-05-23T10:39:46.640-0400 I SHARDING [conn25] distributed lock with ts: 5ec935b235505bcc59eb60c5' unlocked.
Of course the collection exists on the primary shard:
rs0:PRIMARY> db.test.stats()
{
"ns" : "test.test",
"size" : 216,
"count" : 6,
"avgObjSize" : 36,
"storageSize" : 36864,
"capped" : false,
...
}
I have no idea what could be wrong here, I'd much appreciate any help :)
EDIT:
Here are the details of the steps I follow to run the servers; I probably misunderstand something:
Config server:
sudo mongod --configsvr --replSet rs0 --port 27019 --dbpath /srv/mongodb/cfg
mongo --port 27019
Then in mongo shell
rs.initiate(
{
_id: "rs0",
configsvr: true,
members: [
{ _id : 0, host : "127.0.0.1:27019" }
]
}
)
Shard server:
sudo mongod --shardsvr --replSet rs0 --dbpath /srv/mongodb/shrd1/ --port 27018
mongo --port 27018
Then in shell:
rs.initiate(
{
_id: "rs0",
members: [
{ _id : 0, host : "127.0.0.1:27018" }
]
}
)
db.test.createIndex({test:1})
Router:
sudo mongos --configdb rs0/127.0.0.1:27019
mongo
Then in shell:
sh.addShard('127.0.0.1:27018')
sh.enableSharding('test')
sh.shardCollection('test.test', {test:1})
That error sometimes happens when some routers have an out-of-date idea of which databases/collections exist in the sharded cluster.
Try running https://docs.mongodb.com/manual/reference/command/flushRouterConfig/ on each mongos (i.e. connect to each mongos sequentially by itself and run this command on it).
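For example, a minimal sketch assuming a single mongos on the default port 27017:
mongo --port 27017
Then in the shell:
db.adminCommand( { flushRouterConfig: 1 } )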
I just misunderstood one basic concept: config servers and shard servers are distinct and independent mongodb instances, so each must be part of a distinct replica set.
So replacing
sudo mongod --configsvr --replSet rs0 --port 27019 --dbpath /srv/mongodb/cfg
with
sudo mongod --configsvr --replSet rs0Config --port 27019 --dbpath /srv/mongodb/cfg
makes the configuration work.
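The replica set name passed to rs.initiate() on the config server, and the one given to mongos, then have to match the new --replSet value; a sketch of the adjusted steps (rs0Config is simply the name chosen above):
mongo --port 27019
rs.initiate(
{
_id: "rs0Config",
configsvr: true,
members: [
{ _id : 0, host : "127.0.0.1:27019" }
]
}
)
and the router is started with
sudo mongos --configdb rs0Config/127.0.0.1:27019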

Making a replica in MongoDB

I am trying to create a replica set in MongoDB. I used the following commands and got the outputs below.
cfg = { _id : "mySet", members : [ { _id : 0, host : "localhost:27017" } ] }
rs.initiate(cfg)
Output:
{
"ok" : 0,
"errmsg" : "Attempting to initiate a replica set with name mySet, but command line reports 0; rejecting",
"code" : 93,
"codeName" : "InvalidReplicaSetConfig"
}
I get the following output:
2017-09-22T14:22:22.093+0530 E QUERY [thread1] ReferenceError: conf is not defined :
#(shell):1:1
When I run show dbs and rs.config(), I get the following errors:
show dbs
2017-09-22T14:23:56.234+0530 E QUERY [thread1] Error: listDatabases
failed:{
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk"
} :
_getErrorWithCode#src/mongo/shell/utils.js:25:13 .
Mongo.prototype.getDBs#src/mongo/shell/mongo.js:62:1
shellHelper.show#src/mongo/shell/utils.js:769:19
shellHelper#src/mongo/shell/utils.js:659:15 .
#(shellhelp2):1:1
> rs.config()
2017-09-22T14:24:23.718+0530 E QUERY [thread1] Error: Could not
retrieve replica set config: {
"info" : "run rs.initiate(...) if not yet done for the set",
"ok" : 0,
"errmsg" : "no replset config has been received",
"code" : 94,
"codeName" : "NotYetInitialized"
} :
rs.conf#src/mongo/shell/utils.js:1276:11
#(shell):1:1
Delete all replication and oplog data:
use local
db.dropDatabase()
Restart the mongod, then re-initiate the replica set:
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.100:15000'}]
}
rs.initiate(config);
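As a side note, the InvalidReplicaSetConfig message above usually means the _id in the config passed to rs.initiate() does not match the replica set name the mongod was started with; the same name has to appear in both places. A minimal sketch, assuming a default data path:
mongod --replSet mySet --port 27017 --dbpath /data/db
Then in the shell:
rs.initiate( { _id : "mySet", members : [ { _id : 0, host : "localhost:27017" } ] } )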

MongoDB - localhost Not authorized on admin to execute command

I'm trying to initialize a config replica set on a single member. If I run
mongo --host localhost:27017
it works, but as soon as I run the command to initialize the replica set,
rs.initiate({_id : "c_rs1", configsvr: true, members: [{ _id: 1, host : "10.0.0.2:27017"}]})
I get this error
{
"ok" : 0,
"errmsg" : "not authorized on admin to execute command { replSetInitiate: { _id: \"c_rs1\", configsvr: true, members: [ { _id: 1.0, host: \"10.0.0.2:27017\" } ] } }",
"code" : 13,
"codeName" : "Unauthorized"
}
However, if I run this instead
mongo --port 27017
I get this when I run the exact same command:
{ ok : 1 }
This is a problem when I have to run the script to initialize a replica set on a different server because I cannot simply define the host and assume there's a config mongod instance running on the right port on the same server. Any ideas on why the --host option isn't working at the moment?
FYI: security.authentication = disabled in the config file for the mongod

replica set in MongoDB using docker, primary has error and stops being primary when another member is added to the set

I have two Docker containers, each running a mongo instance; they were initialized like this:
docker run --name mongodb-shard-1-node-1 -d -v mongodb-shard-1-node-1:/data/db -p 27031:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I do docker inspect mongodb-shard-1-node-1 it shows the IP 172.17.0.2.
docker run --name mongodb-shard-1-node-2 -d -v mongodb-shard-1-node-2:/data/db -p 27020:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I do docker inspect mongodb-shard-1-node-2 it shows the IP 172.17.0.4.
So I proceed to access mongodb-shard-1-node-1 using docker exec -it mongodb-shard-1-node-1 mongo and initialize it as the primary member like this:
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "ee3c41ef76b2:27017",
"ok" : 1
}
Then I proceed to add mongodb-shard-1-node-2 to this set as a secondary member; at first it looks like it worked:
rs0:PRIMARY> rs.add("172.17.0.4:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
"set" : "rs0",
"date" : ISODate("2016-05-20T01:04:02.095Z"),
"myState" : 1,
"term" : NumberLong(1),
"members" : [ervalMillis" : NumberLong(2000),
{ "_id" : 0,
"name" : "ee3c41ef76b2:27017",
"state" : 1,,
"uptime" : 27,PRIMARY",
"optime""ts" : Timestamp(1463706237, 1),
}, "t" : NumberLong(1)
"infoMessage" : "could not find member to sync from",
"electionDate" : ISODate("2016-05-20T01:03:43Z"),
"self" : truen" : 2,
{,
"name" : "172.17.0.4:27017",
"state" : 0,,
"uptime" : 4,"STARTUP",
"optime""ts" : Timestamp(0, 0),
}, "t" : NumberLong(-1)
"lastHeartbeat" : ISODate("2016-05-20T01:04:01.187Z"),
"pingMs" : NumberLong(0),Date("1970-01-01T00:00:00Z"),
} "configVersion" : -2
"ok" : 1
}
But right away it fails for some reason and I have no idea why; here's what I get:
rs0:PRIMARY> rs.status()
2016-05-20T01:04:18.007+0000 E QUERY [thread1] Error: error doing query:
failed: network error while attempting to run command 'replSetGetStatus' on host '127.0.0.1:27017' :
DB.prototype.runCommand#src/mongo/shell/db.js:135:1
DB.prototype.adminCommand#src/mongo/shell/db.js:153:16
rs.status#src/mongo/shell/utils.js:1090:12
#(shell):1:1
2016-05-20T01:04:18.012+0000 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-05-20T01:04:18.018+0000 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok
rs0:SECONDARY
What is wrong? How do I fix it?
Edit: just to clarify, I had already tried the connections between the containers by doing what this part of the documentation says: Test Connections Between all Members.
Had my question answered here:
https://dba.stackexchange.com/a/139145/91866
I'm gonna quote the whole answer:
Your primary is trying to auto-configure itself as ee3c41ef76b2:27017 and that then resolves to the loopback (127.0.0.1), which is then not responding on the container as it expects. Depending on what the second container does to resolve ee3c41ef76b2, and especially if it does not resolve to 172.17.0.2, it will probably not be able to talk to the primary either.
Assuming you are correct about the connectivity (and you have verified that the instances are listening on the IP and not just the loopback), then you need to override the automatic detection and be explicit when you are calling rs.initiate(), something like this:
rs.initiate(
{
_id: "rs0",
version: 1,
members: [
{ _id: 0, host : "172.17.0.2:27017" },
{ _id: 1, host : "172.17.0.4:27017" }
]
}
)
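One way to run that from the Docker host, as a sketch (it reuses the container name from the question together with the mongo shell's --eval flag, and assumes the member has not been initiated yet, since rs.initiate() only works on an uninitialized node):
docker exec -it mongodb-shard-1-node-1 mongo --eval 'rs.initiate({ _id: "rs0", version: 1, members: [ { _id: 0, host: "172.17.0.2:27017" }, { _id: 1, host: "172.17.0.4:27017" } ] })'
Afterwards rs.status() should list both members, with the second one moving out of STARTUP once it finishes its initial sync.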