How to update _id in MongoDB Replica Set configuration? - mongodb

I had 5 mongod members in a replica set, and then I deleted 3 of them.
How can I change the "_id" of the remaining members to the values "0", "1" and "2"?
rs.conf()
{
    "_id" : "rs0",
    "version" : 151261,
    "members" : [
        {
            "_id" : 3,
            "host" : "mongodb3:27017"
        },
        {
            "_id" : 4,
            "host" : "mongodb4:27017"
        },
        {
            "_id" : 5,
            "host" : "ok:27017",
            "arbiterOnly" : true
        }
    ]
}

Directly editing the replica set configuration is not an elegant approach. Instead, use the rs.remove(hostname) command to remove a member from the replica set; that way you don't have to bring down the primary during reconfiguration, and ascending values are assigned to the "_id" field automatically.
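For example, a minimal sketch of that approach from the primary's shell, using the hostnames from the rs.conf() output above:
rs.remove("mongodb4:27017")   // remove the member; no need to step down the primary
rs.conf()                     // check the remaining members and their "_id" values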

Try dropping the slaves collection as described here: http://docs.mongodb.org/manual/tutorial/troubleshoot-replica-sets/#duplicate-key-error-on-local-slaves
The master will recreate the collection the next time it is required.
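A minimal sketch of that step, run against the local database (this applies to older MongoDB releases that still maintain a local.slaves collection):
use local
db.slaves.drop()   // the primary recreates local.slaves the next time it needs it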

You could try this in the Mongo console:
conf = rs.conf()
conf.members[0]._id = 0
conf.members[1]._id = 1
conf.members[2]._id = 2
rs.reconfig(conf)

Related

Is there a primary shard in DBs in which sh.enableSharding() has not been yet executed?

A MongoDB sharded cluster uses a "primary shard" to hold collection data in databases where sharding has been enabled (with sh.enableSharding()) but the collection itself has not yet been sharded (with sh.shardCollection()). The mongos process chooses the primary shard automatically, unless the user specifies it explicitly as a parameter of sh.enableSharding().
However, what happens in databases where sh.enableSharding() has not been executed yet? Is there some "global primary" for these cases? How can I know which one it is? sh.status() doesn't show information about it...
I'm using MongoDB 4.2 version.
Thanks!
The documentation says:
The mongos selects the primary shard when creating a new database by picking the shard in the cluster that has the least amount of data.
If enableSharding is called on a database which already exists, the above quote would define the location of the database prior to sharding being enabled on it.
sh.status() shows where the database is stored:
MongoDB Enterprise mongos> use foo
switched to db foo
MongoDB Enterprise mongos> db.foo.insert({a:1})
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5eade78756d7ba8d40fc4317")
  }
  shards:
        { "_id" : "shard01", "host" : "shard01/localhost:14442,localhost:14443", "state" : 1 }
        { "_id" : "shard02", "host" : "shard02/localhost:14444,localhost:14445", "state" : 1 }
  active mongoses:
        "4.3.6" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
        { "_id" : "foo", "primary" : "shard02", "partitioned" : false, "version" : { "uuid" : UUID("ff618243-f4b9-4607-8f79-3075d14d737d"), "lastMod" : 1 } }
        { "_id" : "test", "primary" : "shard01", "partitioned" : false, "version" : { "uuid" : UUID("4d76cf84-4697-4e8c-82f8-a0cfad87be80"), "lastMod" : 1 } }
foo is not partitioned and is stored on shard02.
If enableSharding is called on a database which doesn't yet exist, the database is created and, if a primary shard is specified, the specified shard is used as the primary shard. Test code here.
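As a rough sketch (the database name bar and the shard name shard01 are placeholders, and the primaryShard option assumes a sufficiently recent mongos, roughly 4.2.2+):
// config.databases records the primary shard of every database in the cluster,
// including databases where sh.enableSharding() has never been run.
use config
db.databases.find({}, { _id: 1, primary: 1, partitioned: 1 })

// Enable sharding on a new database and pin its primary shard explicitly.
db.adminCommand({ enableSharding: "bar", primaryShard: "shard01" })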

MongoDB Replication Issue - when adding one more node, the primary stops responding

I am using MongoDB replication.
Here is the output of rs.conf():
firstset:PRIMARY> rs.conf();
{
    "_id" : "firstset",
    "version" : 43,
    "members" : [
        {
            "_id" : 7,
            "host" : "primaryip:10002"
        },
        {
            "_id" : 10,
            "host" : "arbiterip:10009",
            "votes" : 2,
            "arbiterOnly" : true
        },
        {
            "_id" : 12,
            "host" : "secondaryip:10006"
        }
    ]
}
Now I want to add another secondary instance, so I just started another mongod process on port 10004 and ran the command
rs.add("secondaryip:10004");
I got the output
{ "ok" : 1 }
and the state of the newly attached instance was
"stateStr" : "STARTUP2",
but at the same time my application was not able to connect to the primary instance. Why?
Please help me solve this issue.
This was a bug in MongoDB. It was resolved by the MongoDB team in version 2.6.2.

exception: hosts cannot switch between localhost and hostname

I created a replica set.
I added localhost to the set at the beginning, but when I try to edit the member to use the actual hostname, I get the error "exception: hosts cannot switch between localhost and hostname".
I need to get rid of localhost:27017 because otherwise it doesn't let me add any other member by hostname (i.e. a non-localhost address).
my-rs0:PRIMARY> cfg=rs.conf();
{
    "_id" : "my-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        }
    ]
}
my-rs0:PRIMARY> cfg.members[0].host="my-server04:27017"
my-rs0:PRIMARY> cfg
{
    "_id" : "my-rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "my-server04:27017"
        }
    ]
}
Using rs.reconfig(cfg):
my-rs0:PRIMARY> rs.reconfig(cfg);
{
    "errmsg" : "exception: hosts cannot switch between localhost and hostname",
    "code" : 13645,
    "ok" : 0
}
No luck with rs.add("my-server04:27017") or rs.remove("localhost:27017") either.
my-rs0:PRIMARY> rs.add("my-server04:27017");
{
    "errmsg" : "exception: can't use localhost in repl set member names except when using it for all members",
    "code" : 13393,
    "ok" : 0
}
I have tried all the reconfiguration methods mentioned here: Replica Set Reconfig steps.
But none of them fixes the issue above. I've already spent hours on this and I am really frustrated.
I had the same problem and I fixed it without dropping any database. I just edited the host field of the member in the local.system.replset collection to match the local IP and then restarted mongod. Everything worked perfectly.
It looks like you'll need to scrap your replicaset and start over.
I believe that when you initiated your Replica Set, you explicitly passed it a config document that references your MongoDB instance using localhost.
As I was investigating this, I brought up a replica set. When I initiated the replica set using rs.initiate() (without passing a config document), it used the hostname by default.
rs.initiate()
rs.conf()
{
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "MY-HOSTNAME:28001"
        }
    ]
}
This post describes the need to completely clear out your database files to create a fresh replica set.
Once I did this, I initiated a new replica set by passing a configuration document:
cfg = {
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28001"
        }
    ]
}
rs.initiate(cfg)
rs.conf()
{
    "_id" : "stack1",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28001"
        }
    ]
}
Long story short, you'll need to delete all of the files in your --dbpath directory and re-create the replica set, without explicitly specifying "localhost" as your hostname.
I did it according to the docs:
Restarted MongoDB on another port (e.g. 37017) to prevent user connections to it.
Then started a shell on it:
$ mongo --port 37017
Then updated the configuration:
use local
cfg = db.system.replset.findOne( { "_id": "my-rs0" } )
cfg.members[0].host = "my-server04:27017"
db.system.replset.update( { "_id": "my-rs0" } , cfg )
Then restarted MongoDB on the original port.

MongoDB replica set host name change error

I have a MongoDB replica set on Ubuntu. In the replica set, the hosts are defined as localhost, as you can see:
{
    "_id" : "myrep",
    "version" : 4,
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:27017"
        },
        {
            "_id" : 2,
            "host" : "localhost:27018"
        },
        {
            "_id" : 1,
            "host" : "localhost:27019",
            "priority" : 0
        }
    ]
}
I want to change the host addresses to the server's real IP, but when I run rs.reconfig I get this error:
{
    "assertion" : "hosts cannot switch between localhost and hostname",
    "assertionCode" : 13645,
    "errmsg" : "db assertion failure",
    "ok" : 0
}
How can I solve it?
Thank you.
There is a cleaner way to do this:
use local
cfg = db.system.replset.findOne({ _id: "replicaSetName" })   // load the stored replica set config
cfg.members[0].host = "newHost:27017"                        // point the member at the new host
db.system.replset.update({ _id: "replicaSetName" }, cfg)     // write the modified config back
Then restart mongod.
The only way I found to change host names is to recreate the replica set. To do it right, the db directories need to be cleaned. Then starting all servers in replication mode and creating a new replica set with the new host names fixed it.
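A minimal sketch of that last step, assuming clean data directories and mongod restarted with --replSet on every node (the 192.0.2.x addresses stand in for your servers' real IPs):
rs.initiate({
    _id: "myrep",
    members: [
        { _id: 0, host: "192.0.2.10:27017" },
        { _id: 1, host: "192.0.2.11:27018" },
        { _id: 2, host: "192.0.2.12:27019", priority: 0 }
    ]
})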

How to Verify Sharding?

I am trying to shard MongoDB. I am done with the sharding configuration, but I am not sure how to verify that sharding is functional.
How do I check whether my data is getting sharded? Is there a query to verify/validate the shards?
You can also execute a simple command on your mongos router:
> use admin
> db.printShardingStatus();
which should output information about your shards, your sharded databases and your sharded collections, as shown in the MongoDB documentation:
sharding version: { "_id" : 1, "version" : 2 }
shards:
      { "_id" : ObjectId("4bd9ae3e0a2e26420e556876"), "host" : "localhost:30001" }
      { "_id" : ObjectId("4bd9ae420a2e26420e556877"), "host" : "localhost:30002" }
      { "_id" : ObjectId("4bd9ae460a2e26420e556878"), "host" : "localhost:30003" }
databases:
      { "name" : "admin", "partitioned" : false, "primary" : "localhost:20001", "_id" : ObjectId("4bd9add2c0302e394c6844b6") }
              my chunks
      { "name" : "foo", "partitioned" : true, "primary" : "localhost:30002", "sharded" : { "foo.foo" : { "key" : { "_id" : 1 }, "unique" : false } }, "_id" : ObjectId("4bd9ae60c0302e394c6844b7") }
              my chunks
                      foo.foo { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : localhost:30002 { "t" : 1272557259000, "i" : 1 }
MongoDB has detailed documentation on sharding here:
http://www.mongodb.org/display/DOCS/Sharding+Introduction
To answer your question (I think), see the portion on the config servers:
Each config server has a complete copy of all chunk information. A two-phase commit is used to ensure the consistency of the configuration data among the config servers.
Basically, it is the config servers' job to make sure everything gets sharded correctly.
Also, there are commands you can run:
db.runCommand( { listshards : 1 } );
There is lots of help in the presentations below too:
http://www.slideshare.net/mongodb/mongodb-sharding-internals
http://www.10gen.com/video/mongosv2010/sharding
If you just want to check whether you are connected to a sharded cluster or not:
db.isMaster() can be used to detect that you are connected to a sharding router (mongos).
If db.isMaster().msg is "isdbgrid", you are connected to a sharded instance.
db.isMaster() can be run without authentication.
For checking the details of the shards, sh.status() also works; it has the same output as db.printShardingStatus().
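For example, a minimal sketch of that check in the mongo shell:
// "isdbgrid" in the isMaster response means the connection goes through a mongos.
var hello = db.isMaster();
if (hello.msg === "isdbgrid") {
    print("Connected to a mongos: this is a sharded cluster");
} else {
    print("Not connected through a mongos");
}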