We are using MongoDB for our application.
Currently, when I issue the db.isMaster() command in our mongo shell,
it displays the information below (this setup is currently associated with our development box):
PRIMARY> db.isMaster()
{
"setName" : "dev",
"ismaster" : true,
"secondary" : false,
"hosts" : [
"10.11.13.111:27017",
"10.11.13.111:27018"
],
"arbiters" : [
"10.11.13.111:27019"
],
"primary" : "10.11.13.111:27017",
"me" : "10.11.13.111:27017",
"maxBsonObjectSize" : 16777216,
"ok" : 1
}
Please let me know what the above information means.
1. Does it mean that it has one primary and two secondary slaves? (One arbiter is also present in the list.)
2. How can I know whether slaveOk is set to true or false?
Thanks in advance.
setName is the name of your replica set.
ismaster indicates whether the node you are connected to is the master (primary) or a slave (secondary).
secondary is the opposite of ismaster.
hosts are the host:port pairs of the data-bearing nodes in the replica set.
arbiters are the host:port pairs of arbiters. These nodes cannot store data, but their votes are used in the master election process.
primary indicates which node is the primary.
me - indicates the node you are connected to.
maxBsonObjectSize - 16 MB for now; just a global constant.
ok - essentially a return code.
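Since db.isMaster() just returns a document, you can also read the individual fields directly in the shell; a minimal sketch:
var hello = db.isMaster();
print("connected to: " + hello.me);           // the node this shell is talking to
print("is it primary? " + hello.ismaster);    // true on the primary, false elsewhere
print("data-bearing hosts: " + hello.hosts);  // the hosts array from the output above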
All can be found here. And regarding your questions:
No, you have two nodes. One primary (10.11.13.111:27017) and one slave (10.11.13.111:27018)
Check this. It is a cursor operation.
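As for checking slaveOk itself: in the legacy mongo shell the flag lives on the connection object, so something like this should work (a minimal sketch):
db.getMongo().getSlaveOk()      // returns true or false for the current connection
db.getMongo().setSlaveOk(true)  // enable reads from secondaries; rs.slaveOk() is the shorthand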
How to configure removed members of a replica set to form new replica set?
I have a replica set with 4 mongod instances
Output of rs.config()
{
"_id" : "rs0",
"members" : [
{
"_id" : 0,
"host" : "localhost:27031"
},
{
"_id" : 1,
"host" : "localhost:27032"
},
{
"_id" : 2,
"host" : "localhost:27033"
},
{
"_id" : 3,
"host" : "localhost:27034"
}
],
"settings" : {
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
}
}
I removed 2 instances from the replica set
rs.remove("localhost:27033")
rs.remove("localhost:27034")
Now my requirement is to form a new replica set with these 2 removed members. What is the best way for that?
My current solution
connect to removed member
mongo --port 27033
and execute
conf = {
"_id" : "rs0",
"members" : [
{
"_id" : 2,
"host" : "localhost:27033"
},
{
"_id" : 3,
"host" : "localhost:27034"
}
],
"settings" : {
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
}
}
and then
rs.reconfig(conf, {force:true})
Outcome
This solution worked fine practically.
The removed members formed a replicaset, one of them became primary and other became secondary. Data was replicated among them.
And this replica set seems to be isolated from the initial replica set from which the members were removed.
Concerns
1) I had to use forced reconfiguration. Not sure about the consequences.
"errmsg" : "replSetReconfig should only be run on PRIMARY, but my state is REMOVED; use the \"force\" argument to override",
2) Is the new replica set actually a new one? In the rs.config()
the replicaSetId is the same as the old one.
"replicaSetId" : ObjectId("5cf22332f5b9d21b01b9b6b2")
I had to use the same values for the members' _id as in the config of the old replica set
"errmsg" : "New and old configurations both have members with host of localhost:27034 but in the new configuration the _id field is 1 and in the old configuration it is 3 for replica set rs0",
Is this solution good?
Is there any better solution?
Note: I need to retain the data from the old replica set (the data present at the time of removal) in the new replica set.
As you have suspected, the procedure did not create a new replica set. Rather, it's a continuation of the old replica set, even though superficially they look different.
There is actually a procedure in the MongoDB documentation to do what you want: Restore a Replica Set from MongoDB Backups. The difference being, you're not restoring from a backup. Rather, you're using one of the removed secondaries to seed a new replica set.
Hence you need to modify the first step in the procedure mentioned in the link above. The rest of the procedure would still be the same (a combined sketch of these steps follows the list):
Restart the removed secondary as a standalone (without the --replSet parameter) and connect to it using the mongo shell.
Drop the local database in the standalone node:
use local
db.dropDatabase()
Restart the ex-secondary, this time with the --replSet parameter (with a new replica set name)
Connect to it using the mongo shell.
rs.initiate() the new set.
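Put together, the adjusted procedure looks roughly like this (a minimal sketch, assuming the ex-secondary from the question runs on port 27033 and the new set is to be named "rs1"; the mongod restarts are shown as comments):
// 1. restart the removed member as a standalone (no --replSet), e.g.:
//      mongod --port 27033 --dbpath <its dbpath>
// 2. connect with the mongo shell and drop the local database:
use local
db.dropDatabase()
// 3. restart it with the new replica set name, e.g.:
//      mongod --port 27033 --dbpath <its dbpath> --replSet rs1
// 4. reconnect with the mongo shell and initiate the new set:
rs.initiate()
// 5. once it becomes PRIMARY, add the other removed member
//    (started with --replSet rs1 and either a cleaned local database or an empty data directory):
rs.add("localhost:27034")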
After this, the new set should have a different replicaSetId compared to the old set. In my quick test of the procedure above, this is the result I see:
Old set:
> rs.conf()
...
"replicaSetId": ObjectId("5cf45d72a1c6c4de948ff5d8")
...
New set:
> rs.conf()
...
"replicaSetId": ObjectId("5cf45d000dda9e1025d6c65e")
...
As with any major deployment changes like this, please ensure that you have a backup, and thoroughly test the procedures before doing it on a production system.
On mongo 3.2.17 I have the following output when running rs.initiate(). I need "ok" equal to 1. I don't know how to modify the configuration. Any suggestion?
{
"info2" : "no configuration specified. Using a default configuration for
the set",
"me" : "vpsxxxxxx:27017",
"info" : "try querying local.system.replset to see current
configuration",
"ok" : 0,
"errmsg" : "already initialized",
"code" : 23
}
You are getting this error because you have already initialized replication on your machine. This would work on a fresh instance. In your case, try using reconfig instead of initiate:
rs.reconfig(config, {force: true})
You can use the force option when reconfiguring the replica set. Make sure you have at least 3 nodes: 2 full nodes and 1 arbiter (minimum supported configuration) or 3 full nodes (minimum recommended configuration) so that a primary node can be elected.
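A minimal sketch of what that could look like, reading the stored configuration that the error's "info" field points at (the member list shown is hypothetical and must match your own setup):
// read the configuration already stored on this node
var cfg = db.getSiblingDB("local").system.replset.findOne();
// adjust it as needed, e.g. fix the member list (hypothetical members shown)
cfg.members = [ { _id : 0, host : "vpsxxxxxx:27017" } ];
// force the reconfiguration, since this node cannot reach a PRIMARY
rs.reconfig(cfg, { force : true });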
A MongoDB shard needs to know about the members of a replica set. Is the member list discovery dynamic? I mean, if we add a node to an existing replica set which is already configured as a shard on the config servers, does the shard update automatically, or do we have to manually update the shard configuration with any new member added to the replica set?
In older versions of MongoDB, prior to 2.0.3, all of the replica set members needed to be specified when adding a shard. Since a shard now only needs to know one of the members of the replica set when it is added, it seems fair to conclude that all tracking of replica set members is delegated to the replica set itself.
Probably the best way to be sure is to fire up a test scenario on your own machine, but there is nothing to suggest that any additional sharding configuration should be required.
And as a bit of an update, since I had nothing to do over lunch :) I just spun up a load of instances as mapped out in the listed tutorial:
http://docs.mongodb.org/manual/tutorial/add-shards-to-shard-cluster/
One difference being the use of sh.addShard with only the first member of each replica set, rather than the all-members syntax shown in the docs.
Once the shards were up, I just added two more replica set nodes to the firstset:
http://docs.mongodb.org/manual/tutorial/expand-replica-set/
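In mongo shell terms, the test amounted to roughly the following (a minimal sketch; the localhost ports match the firstset/secondset listing shown below):
// on mongos: add each shard by naming just one member of its replica set
sh.addShard("firstset/localhost:10001")
sh.addShard("secondset/localhost:20001")
// later, on the PRIMARY of firstset: add the two extra nodes to the replica set
rs.add("localhost:10004")
rs.add("localhost:10005")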
Without doing anything else, let's see the status from mongos:
mongos> db.printShardingStatus()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 4,
"minCompatibleVersion" : 4,
"currentVersion" : 5,
"clusterId" : ObjectId("52f2f77a538f784f4413e6b9")
}
shards:
{ "_id" : "firstset",
"host" :"firstset/localhost:10001,localhost:10002,localhost:10003,localhost:10004,localhost:10005" }
{ "_id" : "secondset", "host" : "secondset/localhost:20001,localhost:20002,localhost:20003" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test", "partitioned" : true, "primary" : "firstset" }
test.test_collection
shard key: { "number" : 1 }
chunks:
secondset 23
firstset 191
So the shard is still moving chunks, and the new nodes had just finished initializing as I was typing.
And that's all there is to adding additional nodes to a replica set on a shard. Most of this was done during a 1-million-document insert.
I want to set
mongoClient.setWriteConcern(WriteConcern.REPLICAS_SAFE);
only if a replica set is present.
But in a sharded environment, when I do:
mongoClient.getReplicaSetStatus();
It returns null even though I have a replica set.
To the MongoClient I am passing the mongos IP.
Most MongoDB drivers, in particular the Java driver which you are using, will throw an exception if you try to set the REPLICA_ACKNOWLEDGED write concern when it's not possible to get an acknowledgement from two or more nodes.
From the docs:
WriteConcern.REPLICA_ACKNOWLEDGED Tries to write to two separate nodes. [...] will throw an exception if two writes are not possible.
See the following for more details:
http://docs.mongodb.org/manual/reference/write-concern/
http://docs.mongodb.org/ecosystem/drivers/java-replica-set-semantics/
In my testing with the mongo shell, if you provide the REPLICA_ACKNOWLEDGED (formerly called REPLICAS_SAFE) concern to the 'getlasterror' command, you will get an error when you are not communicating with a replica set. When talking to a mongos process, the error will be:
{
"singleShard" : "localhost:30001",
"n" : 0,
"connectionId" : 3,
"wnote" : "no replication has been enabled, so w=2.0 won't work",
"err" : "norepl",
"ok" : 1
}
It is not the case that the client will hang forever without wtimeout being specified; that would only happen if there is a replica set but two nodes are not available for writes indefinitely.
Note that using "majority" as w value for write concern will work correctly through mongos - note the difference in writeConcern responses:
mongos> db.coll.insert({}); db.runCommand({getlasterror:1,w:"majority"})
{
"singleShard" : "localhost:30001",
"n" : 0,
"connectionId" : 3,
"err" : null,
"ok" : 1
}
First verify that your replica set has a PRIMARY using the mongo shell command rs.status()
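For example, a quick way to see each member's state from the shell (a minimal sketch):
rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr); })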
Then if that worked, verify that you are connecting to the database correctly:
MongoClient mongoClient = new MongoClient( "hostname" , 27017 );
If both of those are true, then there should be no reason for mongoClient.getReplicaSetStatus() to return null; it should return a ReplicaSetStatus object.
Recently we went live with MongoDB sharding, and it is working fine on the production server. But we configured the public IP addresses instead of the internal IPs, so we have to change to the internal IPs in the MongoDB sharding setup.
Please clarify whether this is possible or not. If it is possible, please share your input.
public ip example:
conf = {_id : "data1",members : [{_id : 0, host : "10.17.18.01:10001", votes : 2},{_id : 1, host : "10.17.19.02:10002", votes : 1},{_id:2, host: "10.17.19.03:10003", votes : 3, arbiterOnly: true}]}
internal ip example:
conf = {_id : "data1",members : [{_id : 0, host : "20.17.18.01:10001", votes : 2},{_id : 1, host : "20.17.19.02:10002", votes : 1},{_id:2, host: "20.17.19.03:10003", votes : 3, arbiterOnly: true}]}
Will it work? Please suggest.
Regards,
Kumaran
You said you're trying to update the IPs in the sharding system, but the config documents you provided as an example look like a replica set configuration. If it's actually your replica set configuration you want to update, you should just be able to remove the entry for the old IP address from the replica set configuration, then add the node back in with the new IP. See http://www.mongodb.org/display/DOCS/Replica+Set+Configuration and http://www.mongodb.org/display/DOCS/Reconfiguring+when+Members+are+Up for more details.
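For one member, using the addresses from your example configs, that remove-and-re-add would look roughly like this (a minimal sketch; note that the current primary cannot be removed this way, so handle the secondaries first):
// run these on the current PRIMARY of the replica set
rs.remove("10.17.18.01:10001")    // remove the member listed under its old address
rs.add("20.17.18.01:10001")       // add it back under the new address
// rs.add() also accepts a member document if you need to keep options such as votes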
If it's actually the sharding configuration you want to update, it will be a bit more complicated.
Throwing in an answer, even though this is a dated question, for anyone else who might view this.
I would recommend using host names / host entries on your servers to handle local and external IPs. However, to update the hosts in your case, you would have to change the replica set config.
Log into the Primary in the replica set then do the following:
> cfg = rs.conf()
> cfg.members[0].host = "[new host1]:[port]"
> cfg.members[1].host = "[new host2]:[port]"
> cfg.members[2].host = "[new host3]:[port]"
cfg.members is a zero-indexed array; you can repeat that for however many replicas you have.
> rs.reconfig( cfg )
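The same reconfiguration expressed as a loop, if every member moves to a new address (a minimal sketch; newHosts is a hypothetical array you would fill in yourself):
var newHosts = [ "host1:27017", "host2:27017", "host3:27017" ];   // hypothetical new addresses
var cfg = rs.conf();
cfg.members.forEach(function (m, i) { m.host = newHosts[i]; });
rs.reconfig(cfg);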
From there, you would want to re-add your shards with the newly specified hosts.
From inside mongos, simply use the following command to update the IPs of the shard servers:
db.shards.update({_id: <<"shard name">>} , {$set: {"host" : "newIP:27018"}})
Example:
db.shards.update({_id: "shard000"} , {$set: {"host" : "172.31.1.1:27018"}})
172.31.1.1 is the private IP address of your shard server in the private network.
Avoid using a dynamic IP address.
If you want to make any modification to the shard configuration, then from inside mongos you should first switch to the config database:
use config
and then run:
db.shards.update( { _id : "<shard name>" } , { $set : { ... } } )
Please make sure that you restart your config server and mongos after making this change.