MongoDB sharding IP Changes

Recently we went live with MongoDB sharding, and it is working fine on the production server. However, we configured the public IP addresses instead of the internal ones, so we now have to change the sharding configuration to use the internal IPs.
Please clarify whether this is possible. If it is, please share your input.
Public IP example:
conf = {_id : "data1",members : [{_id : 0, host : "10.17.18.01:10001", votes : 2},{_id : 1, host : "10.17.19.02:10002", votes : 1},{_id:2, host: "10.17.19.03:10003", votes : 3, arbiterOnly: true}]}
Internal IP example:
conf = {_id : "data1",members : [{_id : 0, host : "20.17.18.01:10001", votes : 2},{_id : 1, host : "20.17.19.02:10002", votes : 1},{_id:2, host: "20.17.19.03:10003", votes : 3, arbiterOnly: true}]}
Will this work? Please suggest.
Regards,
Kumaran

You said you're trying to update the IPs in the sharding system, but the config documents you provided as an example look like a replica set configuration. If it's actually your replica set configuration you want to update, you should just be able to remove the entry for the old IP address from the replica set configuration, then add the node back in with the new IP. See http://www.mongodb.org/display/DOCS/Replica+Set+Configuration and http://www.mongodb.org/display/DOCS/Reconfiguring+when+Members+are+Up for more details.
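For example, a minimal sketch of that remove-and-re-add approach, run on the current primary and using the first member from your example (if a member has non-default options such as votes or arbiterOnly, pass a member document to rs.add() instead of a plain string):
// remove the member listed under its old address, then add it back under the new one
rs.remove("10.17.18.01:10001")
rs.add("20.17.18.01:10001")
rs.conf()   // verify the member list now shows the new host
Repeat the same for each member whose address changed.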
If it's actually the sharding configuration you want to update, it will be a bit more complicated.

Throwing in an answer, even though this is a dated question, for anyone else who might view this.
I would recommend using host names / host entries on your servers to handle local and external IPs. However, to update the hosts in your case, you would have to change the replica set config.
Log into the Primary in the replica set then do the following:
> cfg = rs.conf()
> cfg.members[0].host = "[new host1]:[port]"
> cfg.members[1].host = "[new host2]:[port]"
> cfg.members[2].host = "[new host3]:[port]"
cfg.members is a zero-indexed array; you can repeat that pattern for however many replicas you have.
> rs.reconfig( cfg )
From there, you would want to re-add your shards with the newly specified hosts.
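A compact equivalent of the member-by-member edits above, for however many members you have (a sketch only; the new host list is a placeholder you would fill in yourself):
var newHosts = ["host1.internal:10001", "host2.internal:10002", "host3.internal:10003"];  // placeholders
var cfg = rs.conf();
for (var i = 0; i < cfg.members.length; i++) {
    cfg.members[i].host = newHosts[i];   // rewrite each member's host entry
}
rs.reconfig(cfg);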

From inside mongos, switch to the config database and use the following command to update the IPs of the shard servers:
use config
db.shards.update({_id: "<shard name>"}, {$set: {"host": "newIP:27018"}})
Example:
db.shards.update({_id: "shard000"} , {$set: {"host" : "172.31.1.1:27018"}})
172.31.1.1 is the private IP address of your shard server on the private network.
Avoid using a dynamic IP address.
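Once mongos has been restarted, you can confirm the change from the mongos shell; a quick check, assuming the same shard name as in the example:
use config
db.shards.find()                               // list all shards with their current host strings
db.shards.findOne({_id: "shard000"}).host      // should now show 172.31.1.1:27018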

If you want to make any modification to the shard configuration, you should use the config database:
use config
db.shards.update( { _id : "<shard name>" } , { $set : { ... } } )
Please make sure that you restart your config server and mongos after making this change.

Related

Form a new replica set with removed members

How do I configure removed members of a replica set to form a new replica set?
I have a replica set with 4 mongod instances.
Output of rs.config()
{
    "_id" : "rs0",
    "members" : [
        { "_id" : 0, "host" : "localhost:27031" },
        { "_id" : 1, "host" : "localhost:27032" },
        { "_id" : 2, "host" : "localhost:27033" },
        { "_id" : 3, "host" : "localhost:27034" }
    ],
    "settings" : {
        "replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
    }
}
I removed 2 instances from the replica set
rs.remove("localhost:27033")
rs.remove("localhost:27034")
Now my requirement is to form a new replica set with these 2 removed members. What is the best way for that?
My current solution:
Connect to a removed member:
mongo --port 27033
and execute:
conf = {
    "_id" : "rs0",
    "members" : [
        { "_id" : 2, "host" : "localhost:27033" },
        { "_id" : 3, "host" : "localhost:27034" }
    ],
    "settings" : {
        "replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
    }
}
and then
rs.reconfig(conf, {force:true})
Outcome
This solution worked fine in practice.
The removed members formed a replica set: one became primary and the other became secondary, and data was replicated between them.
This replica set also appears to be isolated from the initial replica set from which the members were removed.
Concerns
1) I had to use forced reconfiguration. Not sure about the consequences.
"errmsg" : "replSetReconfig should only be run on PRIMARY, but my state is REMOVED; use the \"force\" argument to override",
2) Is the new replica set actually a new one? In rs.config(), the replicaSetId is the same as the old one:
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
I also had to use the same values for the members' _id fields as in the old replica set's config:
"errmsg" : "New and old configurations both have members with host of localhost:27034 but in the new configuration the _id field is 1 and in the old configuration it is 3 for replica set rs0",
Is this solution good?
Is there any better solution?
Note: I need to retain the data from the old replica set (the data present at the time of removal) in the new replica set.
As you have suspected, the procedure did not create a new replica set. Rather, it's a continuation of the old replica set, even though superficially they look different.
There is actually a procedure in the MongoDB documentation to do what you want: Restore a Replica Set from MongoDB Backups. The difference being, you're not restoring from a backup. Rather, you're using one of the removed secondaries to seed a new replica set.
Hence you need to modify the first step in the procedure mentioned in the link above. The rest of the procedure would still be the same:
Restart the removed secondary as a standalone (without the --replSet parameter) and connect to it using the mongo shell.
Drop the local database in the standalone node:
use local
db.dropDatabase()
Restart the ex-secondary, this time with the --replSet parameter (with a new replica set name)
Connect to it using the mongo shell.
rs.initiate() the new set.
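Put together, the shell side of that procedure looks roughly like this (the new set name rs1 is a placeholder; the mongod restarts in between are indicated only as comments):
// 1. mongod restarted WITHOUT --replSet; connect with: mongo --port 27033
use local
db.dropDatabase()          // wipe the old replica set config and oplog
// 2. mongod restarted WITH --replSet rs1 (the new set name); reconnect, then:
rs.initiate({ _id: "rs1", members: [ { _id: 0, host: "localhost:27033" } ] })
rs.conf()                  // replicaSetId should now differ from the old set's
// 3. add the other removed member (typically with a clean dbpath so it initial-syncs from this seed)
rs.add("localhost:27034")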
After this, the new set should have a different replicaSetId compared to the old set. In my quick test of the procedure above, this is the result I see:
Old set:
> rs.conf()
...
"replicaSetId": ObjectId("5cf45d72a1c6c4de948ff5d8")
...
New set:
> rs.conf()
...
"replicaSetId": ObjectId("5cf45d000dda9e1025d6c65e")
...
As with any major deployment changes like this, please ensure that you have a backup, and thoroughly test the procedures before doing it on a production system.

MongoDB: Understanding MongoDB Internal Details

We are using MongoDB for our application.
Currently, when I issue the db.isMaster() command in our mongo shell, it displays the information below (this setup is currently associated with our development box):
PRIMARY> db.isMaster()
{
    "setName" : "dev",
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [
        "10.11.13.111:27017",
        "10.11.13.111:27018"
    ],
    "arbiters" : [
        "10.11.13.111:27019"
    ],
    "primary" : "10.11.13.111:27017",
    "me" : "10.11.13.111:27017",
    "maxBsonObjectSize" : 16777216,
    "ok" : 1
}
Please let me know what the above information means.
1. Does it mean that it has 1 primary and two secondary slaves? (One arbiter is also present in the list.)
2. How can I know whether slaveOk is set to true or false?
Thanks in advance.
setName is the name of your replica set.
ismaster indicates whether the node you are connected to is a master or a slave.
secondary is the opposite of ismaster.
hosts are the host:port pairs of the nodes storing data in the replica set.
arbiters are the host:port pairs of the arbiters. These nodes cannot store data, but their votes are used in the master election process.
primary indicates which node is the primary.
me indicates the node you are connected to.
maxBsonObjectSize - 16MB for now. Just a very global constant.
ok - kind of a return code.
All can be found here. And regarding your questions:
No, you have two data-bearing nodes: one primary (10.11.13.111:27017) and one slave (10.11.13.111:27018), plus one arbiter (10.11.13.111:27019).
Check this. It is a cursor-level operation.
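To check the slaveOk setting from the legacy mongo shell, a small sketch (these helpers apply to the current shell connection):
db.getMongo().getSlaveOk()        // returns true or false for this connection
rs.slaveOk()                      // shorthand to allow reads on a secondary
db.getMongo().setSlaveOk(false)   // turn it back off for this connection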

Starting over with replica configuration in mongodb

I made a mistake when configuring replica sets in MongoDB. I think what I did wrong is that I ran rs.initiate() on both nodes, which confused them in some way. I'm not sure.
Now all I want to do is start over, but I couldn't find a way to de-initialize a node. So I followed the advice to delete the local* db files, thus resetting the configuration. I did that, and now nothing works.
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "0.0.0.0:27017",
    "errmsg" : "couldn't initiate : can't find self in the replset config",
    "ok" : 0
}
> rs.conf()
null
I tried to remove and reinstall the package (I'm doing this on Ubuntu servers), which just meant that my mongodb.conf disappeared and my init script stopped working. This is of course easy enough to solve.
So how do I start over?
Note: I did look at this answer, but since rs.conf() doesn't work this doesn't work either.
You'll also get this error if your machine's hostname doesn't map back to 127.0.0.1. Update your /etc/hosts and/or your /etc/hostname, and rs.initiate() with no config should work.
If you force a reconfig with a config that you have generated, does that resolve the issue?
You could do this with something similar to the following from the mongo shell:
> cfg = {
... "_id" : "rs0",
... "version" : 1,
... "members" : [
... {
... "_id" : 0,
... "host" : "0.0.0.0:27017"
... }
... ]
... }
>rs.reconfig(cfg, {force:true})
You may need to tune the cfg variable to use your hostname and port, as the "can't find self in new replset config" error will be returned to the shell if the replica set can't find the node it is running on in the config.
If you just comment out bind_ip in /etc/mongod.conf, this will achieve the correct result so that you can reissue an rs.initiate() command to set up or reconfigure the replica set.

Need list of config servers MongoDB

I need to grab (within the C# driver for MongoDB) a list of all the config servers connected to my instance of Mongo-s. Or, failing that, I would settle for a way to grab ALL the servers and a way to go through them one by one telling which are configsvr and which are something else. I was thinking of the getShardMap command, but I still have no idea how to look at a server (programmatically) and decide if it's a configsvr or not.
Thanks.
mongos> db.runCommand("getShardMap")
{
    "map" : {
        "node2:27021" : "node2:27021",
        "node3:27021" : "node3:27021",
        "node4:27021" : "node4:27021",
        "config" : "node2:27019,node3:27019,node4:27019",
        "shard0000" : "node2:27021",
        "shard0001" : "node3:27021",
        "shard0002" : "node4:27021"
    },
    "ok" : 1
}
The getShardMap command gives the config string that was passed to the mongos server. You can parse that string to get the list of config servers.
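For example, from the mongo shell the config servers can be pulled out of that result like this (a sketch based on the output shown above; the C# driver equivalent would run the same command and split the same string):
var shardMap = db.adminCommand({ getShardMap: 1 });
var configServers = shardMap.map["config"].split(",");   // ["node2:27019", "node3:27019", "node4:27019"]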
The only way I can think of to get this info is to run the getCmdLineOpts command on a mongos and look at the --configdb argument it was passed. I'm not sure how you run admin commands in the C# driver, but I'd imagine it's something like:
db.RunCommand("getCmdLineOpts");

How to modify replica set config?

I have a two-node mongo cluster running with this replica set config.
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.100:15000'}]
}
I have to move both of these nodes onto new servers. I have copied everything from the old servers to the new ones, but I'm running into issues while reconfiguring the replica config due to the IP change on the 2nd node.
I have tried this.
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.200:15000'}]
}
rs.reconfig(config)
{
    "startupStatus" : 1,
    "errmsg" : "loading local.system.replset config (LOADINGCONFIG)",
    "ok" : 0
}
It shows the above message, but the change does not happen.
I also tried changing the replica set name while pointing to the same data dirs.
I am getting the following error:
rs.initiate()
{
    "errmsg" : "local.oplog.rs is not empty on the initiating member. cannot initiate.",
    "ok" : 0
}
What are the right steps to change the IP while keeping the data on the 2nd node, or do I need to recreate/resync the 2nd node?
Well, I had the same problem.
I had to delete the replication config and the oplog:
use local
db.dropDatabase()
Restart your mongod with the new set name, then run:
config = {_id: "repl1", members:[
{_id: 0, host: 'localhost:15000'},
{_id: 1, host: '192.168.2.100:15000'}]
}
rs.initiate(config)
I hope this works for you too.
You can use the force option when reconfiguring the replica set:
rs.reconfig(config, {force: true})
Note that, as Adam already suggested in the comments, you should have at least 3 nodes: 2 full nodes and 1 arbiter (minimum supported configuration) or 3 full nodes (minimum recommended configuration) so that a primary node can be elected.
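For reference, a minimal-supported-configuration sketch along those lines (the third host is a placeholder) would be:
config = {_id: "repl1", members:[
    {_id: 0, host: 'localhost:15000'},
    {_id: 1, host: '192.168.2.200:15000'},
    {_id: 2, host: '192.168.2.201:15000', arbiterOnly: true}]   // arbiter votes in elections but holds no data
}
rs.initiate(config)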
I realise this is an old post, but I discovered I was getting this exact same error when trying to change the port used by secondaries in my replica set.
In my case, I needed to stop the secondary whose config I was changing, and bring it up on its new address and port BEFORE applying the changed config on the Primary.
This is in the mongo documentation, but the order in which I had to bring things up and down was something I'd misread on the first pass, so for clarity I've repeated that here:
Shut down the secondary member of the replica set you are moving.
Bring that secondary back up at its new address.
Make the configuration change as detailed in the original post above (a shell sketch follows below).
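In mongo shell terms, that ordering looks roughly like this (member index 1 and the address are placeholders for whichever node you are moving):
// after the secondary has been shut down and restarted at its new address,
// run the following on the PRIMARY:
cfg = rs.conf()
cfg.members[1].host = "192.168.2.200:15000"   // the address the secondary now listens on
rs.reconfig(cfg)
rs.status()   // the moved member should return to SECONDARY state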
You can use rs.reconfig() for this. First retrieve the current configuration with rs.conf(), modify the configuration document as needed, and then pass the modified document to rs.reconfig().
More info in the docs.