I need to grab, from within the C# driver for MongoDB, a list of all the config servers connected to my mongos instance. Or, failing that, I would settle for a way to grab ALL the servers and go through them one by one, telling which are config servers and which are something else. I was thinking of the getShardMap command, but I still have no idea how to look at a server programmatically and decide whether it's a config server or not.
Thanks.
mongos> db.runCommand("getShardMap")
{
    "map" : {
        "node2:27021" : "node2:27021",
        "node3:27021" : "node3:27021",
        "node4:27021" : "node4:27021",
        "config" : "node2:27019,node3:27019,node4:27019",
        "shard0000" : "node2:27021",
        "shard0001" : "node3:27021",
        "shard0002" : "node4:27021"
    },
    "ok" : 1
}
The getShardMap command returns the config server string that was passed to the mongos server. You can parse that string to get the list of config servers.
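For example, with the 2.x C# driver that could look like the following sketch (the connection string is a placeholder for your mongos address):

using MongoDB.Bson;
using MongoDB.Driver;

// Connect to a mongos (placeholder address)
var client = new MongoClient("mongodb://node2:27017");
var admin = client.GetDatabase("admin");

// Run getShardMap as an admin command
var shardMap = admin.RunCommand<BsonDocument>(new BsonDocument("getShardMap", 1));

// The "config" entry holds the comma-separated config server string
var configString = shardMap["map"]["config"].AsString;  // "node2:27019,node3:27019,node4:27019"
string[] configServers = configString.Split(',');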
The only way I can think of to get this info is to run the getCmdLineOpts command on a mongos and look at the --configdb argument it was passed. I'm not sure how you run admin commands in the C# driver, but I'd imagine it's something like:
db.RunCommand("getCmdLineOpts");
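If it helps, with the 2.x C# driver that guess would look something like the sketch below. Note that where the config server string lands is version-dependent: 2.4-era servers report it as parsed.configdb, while newer ones nest it under parsed.sharding.configDB.

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://node2:27017");  // placeholder mongos address
var admin = client.GetDatabase("admin");

var opts = admin.RunCommand<BsonDocument>(new BsonDocument("getCmdLineOpts", 1));
var parsed = opts["parsed"].AsBsonDocument;

// 2.4-era mongos reports "configdb" at the top level; newer versions nest it
var configDb = parsed.Contains("configdb")
    ? parsed["configdb"].AsString
    : parsed["sharding"]["configDB"].AsString;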
I want to set
mongoClient.setWriteConcern(WriteConcern.REPLICAS_SAFE);
only if a replica set is present.
But in a sharded environment, when I do:
mongoClient.getReplicaSetStatus();
it returns null even though I have a replica set.
I am passing the mongos IP to the mongo client.
Most MongoDB drivers, in particular the Java driver you are using, will throw an exception if you try to set the REPLICA_ACKNOWLEDGED write concern when it's not possible to get an acknowledgement from two or more nodes.
From the docs:
WriteConcern.REPLICA_ACKNOWLEDGED Tries to write to two separate nodes. [...] will throw an exception if two writes are not possible.
See the following for more details:
http://docs.mongodb.org/manual/reference/write-concern/
http://docs.mongodb.org/ecosystem/drivers/java-replica-set-semantics/
In my testing with the mongo shell, if you pass the REPLICA_ACKNOWLEDGED (formerly called REPLICAS_SAFE) concern to the getlasterror command, you will get an error when you are not communicating with a replica set. When talking to a mongos process, the error will be:
{
    "singleShard" : "localhost:30001",
    "n" : 0,
    "connectionId" : 3,
    "wnote" : "no replication has been enabled, so w=2.0 won't work",
    "err" : "norepl",
    "ok" : 1
}
It is not the case that the client will hang forever when wtimeout is not specified; that would only happen if there is a replica set but two nodes are indefinitely unavailable for writes.
Note that using "majority" as the w value for the write concern will work correctly through mongos - note the difference in the writeConcern responses:
mongos> db.coll.insert({}); db.runCommand({getlasterror:1,w:"majority"})
{
    "singleShard" : "localhost:30001",
    "n" : 0,
    "connectionId" : 3,
    "err" : null,
    "ok" : 1
}
First, verify that your replica set has a PRIMARY using the mongo shell command rs.status().
Then, if that worked, verify that you are connecting to the database correctly:
MongoClient mongoClient = new MongoClient( "hostname" , 27017 );
If both of those are true, then there should be no reason mongoClient.getReplicaSetStatus() should return null. It should be returning a ReplicaSetStatus object.
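If you want to detect the topology at runtime before choosing a write concern, one driver-agnostic option is the isMaster command: replica set members report a setName field, while mongos identifies itself with msg: "isdbgrid". A minimal sketch (shown here with the C# 2.x driver and a placeholder address; the Java driver can issue the same command):

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://hostname:27017");  // placeholder address
var admin = client.GetDatabase("admin");

var isMaster = admin.RunCommand<BsonDocument>(new BsonDocument("isMaster", 1));
bool isReplicaSet = isMaster.Contains("setName");                    // only replica set members report setName
bool isMongos = isMaster.GetValue("msg", "").AsString == "isdbgrid"; // mongos answers with msg: "isdbgrid"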
I am using MongoDB 2.4.3 and following this tutorial:
http://docs.mongodb.org/manual/tutorial/deploy-replica-set/
But when adding the other members to the replica set, I get the following error:
root@vm3:~# mongo
MongoDB shell version: 2.4.3
connecting to: test
rs1:PRIMARY> rs.add("vm1")
{
    "errmsg" : "exception: set name does not match the set name host vm1:27017 expects",
    "code" : 13145,
    "ok" : 0
}
rs1:PRIMARY> rs.add("vm4")
{
    "errmsg" : "exception: set name does not match the set name host vm4:27017 expects",
    "code" : 13145,
    "ok" : 0
}
vm1, vm3 and vm4 know each other because I configured their /etc/hosts files correctly.
Any idea? I don't understand what this error message means!
After restarting all the VMs, it works now.
root@vm3:~# mongo
MongoDB shell version: 2.4.3
connecting to: test
rs1:PRIMARY> rs.add("vm4")
{ "ok" : 1 }
rs1:PRIMARY> rs.add("vm1")
{ "ok" : 1 }
In my case, just restarting the virtual machines fixed everything.
If you are re-installing a MongoDB instance, the replSet configuration may still be living in your data files on the drive. I had the same problem setting up a new replica set. The problem came from changing the replica set name after bringing up instances with an older replSet name. I deleted the data files, ran my install scripts again, and it worked just fine.
I made a mistake when configuring replica sets in MongoDB. I think what I did wrong is that I ran rs.initiate() on both nodes, which confused them in some way. I'm not sure.
Now all I want to do is start over, but I couldn't find a way to de-initialize a node. So I followed the advice to delete the local.* db files, thus resetting the configuration. I did that, and now nothing works.
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "0.0.0.0:27017",
    "errmsg" : "couldn't initiate : can't find self in the replset config",
    "ok" : 0
}
> rs.conf()
null
I tried removing and reinstalling the package (I'm doing this on Ubuntu servers), which just meant that my mongodb.conf disappeared and my init script stopped working. This is of course easy enough to solve.
So how do I start over?
Note: I did look at this answer, but since rs.conf() doesn't work this doesn't work either.
You'll also get this error if your machine's hostname doesn't map back to 127.0.0.1. Update your /etc/hosts and/or your /etc/hostname, and rs.initiate() with no config should work.
If you force a reconfig with a config that you have generated, does it resolve the issue?
You could do this with something similar to the following from the mongo shell:
> cfg = {
... "_id" : "rs0",
... "version" : 1,
... "members" : [
... {
... "_id" : 0,
... "host" : "0.0.0.0:27017"
... }
... ]
... }
> rs.reconfig(cfg, {force:true})
You may need to tune the cfg variable to have your hostname and port, as the "can't find self in new replset config" error will be returned to the shell if the replica set can't find the node it is running on in the config.
If you just comment out bind_ip in /etc/mongod.conf, this will achieve the correct result, so that you can reissue an rs.initiate() command to set up or reconfigure the replica set.
If I have a mongo instance running, how can I check what port numbers it is listening on from the shell? I thought that db.serverStatus() would do it, but I don't see it. I see this:
"connections" : {
    "current" : 3,
    "available" : 816
}
You can do this from the Operating System shell by running:
sudo lsof -iTCP -sTCP:LISTEN | grep mongo
From the system shell you can use lsof (see Derick's answer below) or netstat -an to view what a process is actually doing. However, assuming you only have access to the mongo shell (which your question title implies), you can run the serverCmdLineOpts() command. That output will give you all the arguments passed on the command line (argv) and the ones from the config file (parsed), and you can infer the ports mongod is listening on from that information. Here's an example:
db.serverCmdLineOpts()
{
    "argv" : [
        "./mongod",
        "-replSet",
        "test",
        "--rest",
        "--dbpath",
        "/data/test/r1",
        "--port",
        "30001"
    ],
    "parsed" : {
        "dbpath" : "/data/test/r1",
        "port" : 30001,
        "replSet" : "test",
        "rest" : true
    },
    "ok" : 1
}
If you have not passed specific port options like the ones above, then mongod will be listening on 27017 and 28017 (the HTTP console) by default. Note: there are a couple of other arguments that can alter ports without being explicit; see here:
https://docs.mongodb.org/manual/reference/configuration-options/#sharding.clusterRole
Try this:
db.runCommand({whatsmyuri : 1})
It will display the IP address and port of your current connection, as seen by the server.
MongoDB only listens on one port by default (27017). If the --rest interface is active, port 28017 (27017 + 1000) will also be open, serving web requests for details.
MongoDB supports a getParameter command, but that only works if you're already connected to the database (at which point you already know the port).
You can try, from the mongo shell:
db.getMongo()
Use this command to test that the mongo shell has a connection to the proper database instance. It prints:
connection to <IP>:<PORT>
db.collection.explain()
For unsharded collections, explain returns the following serverInfo information for the MongoDB instance:
"serverInfo" : {
    "host" : <string>,
    "port" : <int>,
    "version" : <string>,
    "gitVersion" : <string>
}
See also the Default MongoDB Port reference in the MongoDB documentation.
Recently we made the MongoDB sharding setup live, and it is working fine in the production server. But we configured the public IP addresses instead of the internal IPs, so we have to change to the internal IPs in the MongoDB sharding setup.
Please clarify whether this is possible or not. If possible, please share your input.
Public IP example:
conf = {_id : "data1",members : [{_id : 0, host : "10.17.18.01:10001", votes : 2},{_id : 1, host : "10.17.19.02:10002", votes : 1},{_id:2, host: "10.17.19.03:10003", votes : 3, arbiterOnly: true}]}
Internal IP example:
conf = {_id : "data1",members : [{_id : 0, host : "20.17.18.01:10001", votes : 2},{_id : 1, host : "20.17.19.02:10002", votes : 1},{_id:2, host: "20.17.19.03:10003", votes : 3, arbiterOnly: true}]}
Will it work? Please suggest.
Regards,
Kumaran
You said you're trying to update the IPs in the sharding system, but the config documents you provided as an example look like a replica set configuration. If it's actually your replica set configuration you want to update, you should just be able to remove the entry for the old IP address from the replica set configuration, then add the node back in with the new IP. See http://www.mongodb.org/display/DOCS/Replica+Set+Configuration and http://www.mongodb.org/display/DOCS/Reconfiguring+when+Members+are+Up for more details.
If it's actually the sharding configuration you want to update, it will be a bit more complicated.
Throwing in an answer, even though this is a dated question, for anyone else who might view this.
I would recommend using host names / hosts file entries on your servers to handle local and external IPs. However, to update the hosts in your case, you will have to change the replica set config.
Log into the primary of the replica set, then do the following:
> cfg = rs.conf()
> cfg.members[0].host = "[new host1]:[port]"
> cfg.members[1].host = "[new host2]:[port]"
> cfg.members[2].host = "[new host3]:[port]"
cfg.members is a zero-indexed array; repeat this for however many members you have.
> rs.reconfig( cfg )
From there, you would want to re-add your shards with the newly specified hosts.
From inside mongos, switch to the config database and use the following command to update the IPs of the shard servers:
db.shards.update({_id: <<"shard name">>} , {$set: {"host" : "newIP:27018"}})
Example:
db.shards.update({_id: "shard000"} , {$set: {"host" : "172.31.1.1:27018"}})
172.31.1.1 is the private IP address of your shard server in the private network.
Avoid using a dynamic IP address.
If you want to make any modification in the shard configuration, you should use the config database:
use config
db.shards.update( { _id : <shard name> } , { $set : { ... } } )
Please make sure that you restart your config server and mongos after making this change.
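For completeness, the same update can be issued from the C# driver. A sketch assuming the 2.x driver, a connection to a mongos, and placeholder shard name and IP:

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://mongos-host:27017");  // placeholder mongos address
var configDb = client.GetDatabase("config");
var shards = configDb.GetCollection<BsonDocument>("shards");

// Point the shard entry at the new host (placeholder values)
shards.UpdateOne(
    Builders<BsonDocument>.Filter.Eq("_id", "shard0000"),
    Builders<BsonDocument>.Update.Set("host", "172.31.1.1:27018"));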