Check the current number of connections to MongoDB - mongodb

What is the command to get the number of clients connected to a particular MongoDB server?

Connect to the admin database and run db.serverStatus():
> var status = db.serverStatus()
> status.connections
{"current" : 21, "available" : 15979}
>
You can get it directly by querying
db.serverStatus().connections
To understand what MongoDB's db.serverStatus().connections response means, see the serverStatus documentation.
connections
"connections" : {
"current" : <num>,
"available" : <num>,
"totalCreated" : NumberLong(<num>)
},
connections
A document that reports on the status of the connections. Use these values to assess the current load and capacity requirements of the server.
connections.current
The number of incoming connections from clients to the database server. This number includes the current shell session. Consider the value of connections.available to add more context to this datum.
The value will include all incoming connections including any shell connections or connections from other servers, such as replica set members or mongos instances.
connections.available
The number of unused incoming connections available. Consider this value in combination with the value of connections.current to understand the connection load on the database, and see the UNIX ulimit Settings document for more information about system thresholds on available connections.
connections.totalCreated
Count of all incoming connections created to the server. This number includes connections that have since closed.
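For example, here is a minimal sketch (assuming a mongo/mongosh shell session) that uses only the fields described above to report how close the server is to its incoming connection limit:
// Minimal sketch: report connection utilisation from db.serverStatus().connections
var conn = db.serverStatus().connections;
var limit = conn.current + conn.available;  // "available" counts the unused incoming slots
print("current: " + conn.current +
      " / limit: " + limit +
      " (" + (100 * conn.current / limit).toFixed(1) + "% used)" +
      ", totalCreated: " + conn.totalCreated);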

Connection Count by ClientIP, with Total
We use this to view the number of connections by IP address, along with a total connection count. It was really helpful in debugging an issue... just get there before you hit max connections!
For Mongo Shell:
db.currentOp(true).inprog.reduce((accumulator, connection) => { ipaddress = connection.client ? connection.client.split(":")[0] : "Internal"; accumulator[ipaddress] = (accumulator[ipaddress] || 0) + 1; accumulator["TOTAL_CONNECTION_COUNT"]++; return accumulator; }, { TOTAL_CONNECTION_COUNT: 0 })
Formatted:
db.currentOp(true).inprog.reduce(
  (accumulator, connection) => {
    ipaddress = connection.client ? connection.client.split(":")[0] : "Internal";
    accumulator[ipaddress] = (accumulator[ipaddress] || 0) + 1;
    accumulator["TOTAL_CONNECTION_COUNT"]++;
    return accumulator;
  },
  { TOTAL_CONNECTION_COUNT: 0 }
)
Example return:
{
"TOTAL_CONNECTION_COUNT" : 331,
"192.168.253.72" : 8,
"192.168.254.42" : 17,
"127.0.0.1" : 3,
"192.168.248.66" : 2,
"11.178.12.244" : 2,
"Internal" : 41,
"3.100.12.33" : 86,
"11.148.23.34" : 168,
"81.127.34.11" : 1,
"84.147.25.17" : 3
}
(the 192.x.x.x addresses are Atlas internal monitoring)
"Internal" are internal processes that don't have an external client. You can view a list of these with this:
db.currentOp(true).inprog.filter(connection => !connection.client).map(connection => connection.desc);

db.serverStatus() gives the number of connections opened and available, but it does not show which clients the connections come from. For more info you can use the command sudo lsof | grep mongod | grep TCP. I needed it when I set up replication and the primary node had many more client connections than the secondary.
$ sudo lsof | grep mongod | grep TCP
mongod 5733 Al 6u IPv4 0x08761278 0t0 TCP *:28017 (LISTEN)
mongod 5733 Al 7u IPv4 0x07c7eb98 0t0 TCP *:27017 (LISTEN)
mongod 5733 Al 9u IPv4 0x08761688 0t0 TCP 192.168.1.103:27017->192.168.1.103:64752 (ESTABLISHED)
mongod 5733 Al 12u IPv4 0x08761a98 0t0 TCP 192.168.1.103:27017->192.168.1.103:64754 (ESTABLISHED)
mongod 5733 Al 13u IPv4 0x095fa748 0t0 TCP 192.168.1.103:27017->192.168.1.103:64770 (ESTABLISHED)
mongod 5733 Al 14u IPv4 0x095f86c8 0t0 TCP 192.168.1.103:27017->192.168.1.103:64775 (ESTABLISHED)
mongod 5733 Al 17u IPv4 0x08764748 0t0 TCP 192.168.1.103:27017->192.168.1.103:64777 (ESTABLISHED)
This shows that I currently have five connections open to the MongoDB port (27017) on my computer. In my case I'm connecting to MongoDB from a Scalatra server, and I'm using the MongoDB Casbah driver, but you'll see the same lsof TCP connections regardless of the client used (as long as they're connecting using TCP/IP).

You can just use
db.serverStatus().connections
Also, this function can help you spot the IP addresses connected to your MongoDB:
db.currentOp(true).inprog.forEach(function(x) { print(x.client) })

I used the following command to see all connections to the mongo database:
netstat -anp --tcp --udp | grep mongo
It shows every TCP connection to mongod in more detail:
tcp 0 0 10.26.2.185:27017 10.26.2.1:2715 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.1:1702 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.185:39506 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.185:40021 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.185:39509 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.184:46062 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.184:46073 ESTABLISHED 1442/./mongod
tcp 0 0 10.26.2.185:27017 10.26.2.184:46074 ESTABLISHED 1442/./mongod

In OS X, to see the connections directly on the network interface, just do:
$ lsof -n -i4TCP:27017
mongod 2191 inanc 7u IPv4 0xab6d9f844e21142f 0t0 TCP 127.0.0.1:27017 (LISTEN)
mongod 2191 inanc 33u IPv4 0xab6d9f84604cd757 0t0 TCP 127.0.0.1:27017->127.0.0.1:56078 (ESTABLISHED)
stores.te 18704 inanc 6u IPv4 0xab6d9f84604d404f 0t0 TCP 127.0.0.1:56078->127.0.0.1:27017 (ESTABLISHED)
No need to use grep etc., just use lsof's arguments.
To see the connections from MongoDB's CLI, see @milan's answer (which I just edited).

You can also get some more details on the connections with:
db.currentOp(true)
Taken from: https://jira.mongodb.org/browse/SERVER-5085
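For instance, a small sketch that narrows the db.currentOp(true) output to operations belonging to external client connections (it simply assumes that entries without a client field are internal):
// Minimal sketch: keep only operations that have an external client address
db.currentOp(true).inprog
  .filter(function (op) { return op.client; })   // internal threads have no "client" field
  .forEach(function (op) {
    printjsononeline({ client: op.client, active: op.active, desc: op.desc });
  });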

Sorry to revive an old post, but there are more options now than there used to be.
db.getSiblingDB("admin").aggregate( [
{ $currentOp: { allUsers: true, idleConnections: true, idleSessions: true } }
,{$project:{
"_id":0
,client:{$arrayElemAt:[ {$split:["$client",":"]}, 0 ] }
,curr_active:{$cond:[{$eq:["$active",true]},1,0]}
,curr_inactive:{$cond:[{$eq:["$active",false]},1,0]}
}
}
,{$match:{client:{$ne: null}}}
,{$group:{_id:"$client",curr_active:{$sum:"$curr_active"},curr_inactive:{$sum:"$curr_inactive"},total:{$sum:1}}}
,{$sort:{total:-1}}
] )
Output example:
{ "_id" : "xxx.xxx.xxx.78", "curr_active" : 0, "curr_inactive" : 1428, "total" : 1428 }
{ "_id" : "xxx.xxx.xxx.76", "curr_active" : 0, "curr_inactive" : 1428, "total" : 1428 }
{ "_id" : "xxx.xxx.xxx.73", "curr_active" : 0, "curr_inactive" : 1428, "total" : 1428 }
{ "_id" : "xxx.xxx.xxx.77", "curr_active" : 0, "curr_inactive" : 1428, "total" : 1428 }
{ "_id" : "xxx.xxx.xxx.74", "curr_active" : 0, "curr_inactive" : 1428, "total" : 1428 }
{ "_id" : "xxx.xxx.xxx.75", "curr_active" : 0, "curr_inactive" : 1428, "total" : 1428 }
{ "_id" : "xxx.xxx.xxx.58", "curr_active" : 0, "curr_inactive" : 510, "total" : 510 }
{ "_id" : "xxx.xxx.xxx.57", "curr_active" : 0, "curr_inactive" : 459, "total" : 459 }
{ "_id" : "xxx.xxx.xxx.55", "curr_active" : 0, "curr_inactive" : 459, "total" : 459 }
{ "_id" : "xxx.xxx.xxx.56", "curr_active" : 0, "curr_inactive" : 408, "total" : 408 }
{ "_id" : "xxx.xxx.xxx.47", "curr_active" : 1, "curr_inactive" : 11, "total" : 12 }
{ "_id" : "xxx.xxx.xxx.48", "curr_active" : 1, "curr_inactive" : 7, "total" : 8 }
{ "_id" : "xxx.xxx.xxx.51", "curr_active" : 0, "curr_inactive" : 8, "total" : 8 }
{ "_id" : "xxx.xxx.xxx.46", "curr_active" : 0, "curr_inactive" : 8, "total" : 8 }
{ "_id" : "xxx.xxx.xxx.52", "curr_active" : 0, "curr_inactive" : 6, "total" : 6 }
{ "_id" : "127.0.0.1", "curr_active" : 1, "curr_inactive" : 0, "total" : 1 }
{ "_id" : "xxx.xxx.xxx.3", "curr_active" : 0, "curr_inactive" : 1, "total" : 1 }

Connect to MongoDB using the mongo shell and run the following command.
db.serverStatus().connections
e.g.:
mongo> db.serverStatus().connections
{ "current" : 3, "available" : 816, "totalCreated" : NumberLong(1270) }

db.runCommand( { "connPoolStats" : 1 } )
{
"numClientConnections" : 0,
"numAScopedConnections" : 0,
"totalInUse" : 0,
"totalAvailable" : 0,
"totalCreated" : 0,
"hosts" : {
},
"replicaSets" : {
},
"ok" : 1
}
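Note that connPoolStats describes the server's outgoing connection pools (e.g. from a mongos or replica set member to other servers), not incoming client connections. Here is a small sketch to summarize those pools per target host; it assumes the documented hosts/inUse/available/created fields are present on your server version:
// Sketch: summarize outgoing connection pools reported by connPoolStats
var stats = db.runCommand({ connPoolStats: 1 });
printjson({
  totalInUse: stats.totalInUse,
  totalAvailable: stats.totalAvailable,
  totalCreated: stats.totalCreated
});
Object.keys(stats.hosts || {}).forEach(function (host) {
  var h = stats.hosts[host];
  printjsononeline({ host: host, inUse: h.inUse, available: h.available, created: h.created });
});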

Connect to your mongodb instance from your local system:
sudo mongo "mongodb://MONGO_HOST_IP:27017" --authenticationDatabase admin
It will let you know all connected clients and their details:
db.currentOp(true)

Alternatively, you can check connection status by logging into MongoDB Atlas and navigating to your cluster.

Related

Mongos instance can't communicate with the database

So I have a sharded cluster with 2 config servers, 2 shards each with 2 replicas and 2 mongos instances, everything running on different VMs.
However, after configuring all of it, I finally tried to interact with the (still empty) database with a simple show dbs query from the mongos instance, but it threw the following error (after hanging for about a minute):
uncaught exception: Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "Could not find host matching read preference { mode: \"primary\" } for set rep",
"code" : 133,
"codeName" : "FailedToSatisfyReadPreference",
"operationTime" : Timestamp(1648722327, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1648722327, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
Everything seems to be well configured and when I do sh.status() from the mongos instance it identifies the shards and replicas as such:
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("62421dd6b5f9640f309faca0")
}
shards:
{ "_id" : "rep", "host" : "rep/192.168.86.136:26000,192.168.86.141:26001", "state" : 1 }
{ "_id" : "repb", "host" : "repb/192.168.86.142:26002,192.168.86.143:26003", "state" : 1 }
active mongoses:
"4.4.8" : 2
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 5
Last reported error: Empty host component parsing HostAndPort from ""
Time of Reported error: Thu Mar 31 2022 11:06:39 GMT+0100 (WEST)
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
rep 919
repb 105
too many chunks to print, use verbose if you want to force print
{ "_id" : "testdb", "primary" : "rep", "partitioned" : false, "version" : { "uuid" : UUID("2e584dcd-25ea-4ba4-805c-b40928e26511"), "lastMod" : 1 } }
This might be a firewall issue.
Every node in your cluster must be able to reach every other node on the corresponding port. See
Simple HTTP/TCP health check for MongoDB
Try this script to check each member of each replica set:
const MONGO_PASSWORD = '*******'
const AUTH_SOURCE = 'admin'
const user = db.runCommand({ connectionStatus: 1 }).authInfo.authenticatedUsers.shift().user;
const map = db.adminCommand("getShardMap").map;
for (let rs of Object.keys(map)) {
let uri = map[rs].split("/");
let connectionString = `mongodb://${user}:${MONGO_PASSWORD}@${uri[1]}/admin?replicaSet=${uri[0]}&authSource=${AUTH_SOURCE}`;
let replicaSet = Mongo(connectionString).getDB("admin");
for (let member of replicaSet.adminCommand({ replSetGetStatus: 1 }).members) {
if (!replicaSet.hello().hosts.includes(member.name)) continue;
printjsononeline({ replicaSet: rs, host: member.name, stateStr: member.stateStr, health: member.health });
if (member.health != 1 || !Array("PRIMARY", "SECONDARY").includes(member.stateStr))
print(`Member state of ${member.name} is '${member.stateStr}'`);
}
}
It turns out I had configured the replica set wrongly, so all I had to do was recreate the volumes of all the VMs and configure everything again from scratch. Now it works as it should.

Sharded mongodb cluster uses only one shard

I'm trying to create a sharded mongodb cluster, I have 3 VM to use and I was trying to get the following setup:
VM1) Node.js Application + mongos instance
VM2) CSRS PRIMARY + Shard
VM3) CSRS SECONDARY + Shard
I'm not interested in replication, but I need sharding because I will have to query over a large number of results.
To achieve what stated above I have done the following:
On both VM2 and VM3
mongod --configsvr --replSet configM4C --bind_ip <ipVM2/3> --port 27040
mongod --shardsvr --replSet shardM4C --bind_ip <ipVM2/3> --port 27041
Then, I connected to configM4C on VM2 (mongo --host --port 27040) and ran:
rs.initiate({
  _id: "configM4C",
  configsvr: true,
  members: [
    { _id: 0, host: "<ipVM2>:27040" },
    { _id: 1, host: "<ipVM3>:27040" }
  ]
})
Then, I connected to shardM4C on VM2 (mongo --host --port 27041) and ran:
rs.initiate({
  _id: "shardM4C",
  members: [
    { _id: 0, host: "<ip1>:27041" },
    { _id: 1, host: "<ip2>:27041" }
  ]
})
Started mongos on VM1
mongos --configdb configM4C/<ipVM2>:27040,<ipVM3>:27040 --port 27042
Connected to mongos on VM1 (mongo --port 27042):
sh.addShard("<ipVM2>:27041")
sh.addShard("<ipVM3>:27041")
sh.enableSharding("pings")
sh.shardCollection("pings.pings", {provider:1, from_zone: 1, to_zone: 1})
Anyway at this point if I run db.pings.getShardDistribution() I get:
Shard shardM4C at shardM4C/<ipVM3>:27041,<ipVM2>:27041
data : 19.47MiB docs : 102876 chunks : 1
estimated data per chunk : 19.47MiB
estimated docs per chunk : 102876
Totals
data : 19.47MiB docs : 102876 chunks : 1
Shard shardM4C contains 100% data, 100% docs in cluster, avg obj size on shard : 198B
sh.status():
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5b58a6b4986cc2d94694a3f9")
}
shards:
{ "_id" : "shardM4C", "host" : "shardM4C/10.0.0.4:27041,10.0.0.7:27041", "state" : 1 }
active mongoses:
"3.6.3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 5
Last reported error: Could not find host matching read preference { mode: "primary" } for set shardM4C
Time of Reported error: Wed Jul 25 2018 21:44:24 GMT+0000 (UTC)
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shardM4C 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shardM4C Timestamp(1, 0)
{ "_id" : "pings", "primary" : "shardM4C", "partitioned" : true }
pings.pings
shard key: { "provider" : 1 }
unique: false
balancing: true
chunks:
shardM4C 1
{ "provider" : { "$minKey" : 1 } } -->> { "provider" : { "$maxKey" : 1 } } on : shardM4C Timestamp(1, 0)
{ "_id" : "test", "primary" : "shardM4C", "partitioned" : false }
And if I run a query with executionStats I get:
...
"queryPlanner" : {
"mongosPlannerVersion" : 1,
"winningPlan" : {
"stage" : "SINGLE_SHARD",
"shards" : [
{
"shardName" : "shardM4C",
"connectionString" : "shardM4C/<ipVM3>:27041,<ipVM2>:27041",
"serverInfo" : {
"host" : "dpiscaglia-2",
"port" : 27041,
"version" : "3.6.3",
"gitVersion" : "9586e557d54ef70f9ca4b43c26892cd55257e1a5"
},
...
Both results make me think that the query is actually running on only one shard, and that the second shard is used only for replication and as a backup (which is not what I want). Am I correct?
If so, could you please help me understand what I did wrong?
Thank you very much in advance for your help.
Have a good day.

Error when trying to connect to a replica set in Mongo

When I try to connect to a mongo replica set in AWS I get this error:
slavenode:27017: [Errno -2] Name or service not
known,ip-XXX-XX-XX-XX:27017: [Errno -2] Name or service not known
(where XXX-XX.. corresponds to my actual ip address)
The code to connect is shown below:
client = MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/myFirstDB?replicaSet=rs0")
db = client.myFirstDB

try:
    db.command("serverStatus")
except Exception as e:
    print(e)
else:
    print("You are connected!")

client.close()
(where Master-PublicIP and Slave-PublicIP are the actual IPv4 public IPs from the AWS console)
I already have a replica set, and its configuration is:
rs0:PRIMARY> rs.conf()
{
"_id" : "rs0",
"version" : 2,
"members" : [
{
"_id" : 0,
"host" : "ip-XXX-XX-XX-XXX:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
},
{
"_id" : 1,
"host" : "SlaveNode:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatTimeoutSecs" : 10,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
}
}
}
I have created /data/db on the PRIMARY and /data/db1 on the SECONDARY, and I have set the proper permissions with sudo chmod -R 755 /data/db.
My MongoDB version is 3.0.15. Does anyone know what is going wrong?
Thanks in advance.
Have you tried removing myFirstDB from within the MongoClient()?
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0")
Because your next line then specifies which db you want to use
db = client.myFirstDB
Or I think you can specify the db by putting a dot after the closing parenthesis of MongoClient():
MongoClient("mongodb://Master-PublicIP:27017,Slave-PublicIP:27017/?replicaSet=rs0").myFirstDB
I managed to solve the problem. As @N3i1 suggested in the comments, I used the Public DNS (IPv4). There was an issue with the hosts I had declared in /etc/hosts.
In that file I had defined the IPs of the master/slaves with some names. For some reason this didn't work, so I deleted them and then reconfigured the replica set.
On the PRIMARY, in the mongo shell, I did:
cfg = {"_id" : "rs0", "members" : [{"_id" : 0,"host" : "Public DNS (IPv4):27017"},{"_id" : 1,"host" : "Public DNS (IPv4):27017"}]}
rs.reconfig(cfg,{force: true});
Then I connected to the replica set from Python with:
MongoClient("mongodb://Public DNS (IPv4):27017,Public DNS (IPv4):27017/?replicaSet=rs0")
Of course, replace the Public DNS (IPv4) addresses with yours.

replica set in MongoDB using docker, primary has error and stops being primary when another member is added to the set

I have two docker containers running a mongo instance each, they were initialized like this:
docker run --name mongodb-shard-1-node-1 -d -v mongodb-shard-1-node-1:/data/db -p 27031:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I do docker inspect mongodb-shard-1-node-1, it shows the IP 172.17.0.2.
docker run --name mongodb-shard-1-node-2 -d -v mongodb-shard-1-node-2:/data/db -p 27020:27017 mongo --replSet rs0 --smallfiles --oplogSize 128
When I do docker inspect mongodb-shard-1-node-2, it shows the IP 172.17.0.4.
So I access mongodb-shard-1-node-1 using docker exec -it mongodb-shard-1-node-1 mongo and initialize it as the primary member like this:
rs.initiate()
{
"info2" : "no configuration specified. Using a default configuration for the set",
"me" : "ee3c41ef76b2:27017",
"ok" : 1
}
Then I proceed to add mongodb-shard-1-node-2 as a secondary member; at first it looks like it worked:
rs0:PRIMARY> rs.add("172.17.0.4:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2016-05-20T01:04:02.095Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "members" : [
        {
            "_id" : 0,
            "name" : "ee3c41ef76b2:27017",
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 27,
            "optime" : {
                "ts" : Timestamp(1463706237, 1),
                "t" : NumberLong(1)
            },
            "infoMessage" : "could not find member to sync from",
            "electionDate" : ISODate("2016-05-20T01:03:43Z"),
            "configVersion" : 2,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "172.17.0.4:27017",
            "state" : 0,
            "stateStr" : "STARTUP",
            "uptime" : 4,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "lastHeartbeat" : ISODate("2016-05-20T01:04:01.187Z"),
            "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : -2
        }
    ],
    "ok" : 1
}
But right away it fails for some reason and I have no idea why. Here's what I get:
rs0:PRIMARY> rs.status()
2016-05-20T01:04:18.007+0000 E QUERY [thread1] Error: error doing query:
failed: network error while attempting to run command 'replSetGetStatus' on host '127.0.0.1:27017' :
DB.prototype.runCommand#src/mongo/shell/db.js:135:1
DB.prototype.adminCommand#src/mongo/shell/db.js:153:16
rs.status#src/mongo/shell/utils.js:1090:12
#(shell):1:1
2016-05-20T01:04:18.012+0000 I NETWORK [thread1] trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-05-20T01:04:18.018+0000 I NETWORK [thread1] reconnect 127.0.0.1:27017 (127.0.0.1) ok
rs0:SECONDARY
What is wrong? How do I fix it?
Edit: just to clarify, I had already tested the connections between the containers by doing what this part of the documentation says: Test Connections Between all Members.
Had my question answered here:
https://dba.stackexchange.com/a/139145/91866
I'm gonna quote the whole answer:
Your primary is trying to auto-configure itself as ee3c41ef76b2:27017 and that then resolves to the loopback (127.0.0.1), which is then not responding on the container as it expects. Depending on what the second container does to resolve ee3c41ef76b2, and especially if it does not resolve to 172.17.0.2, it will probably not be able to talk to the primary either.
Assuming you are correct about the connectivity (and you have verified that the instances are listening on the IP and not just the loopback), then you need to override the automatic detection and be explicit when you are calling rs.initiate(), something like this:
rs.initiate(
  {
    _id: "rs0",
    version: 1,
    members: [
      { _id: 0, host : "172.17.0.2:27017" },
      { _id: 1, host : "172.17.0.4:27017" }
    ]
  }
)
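If the set has already been initiated with the auto-detected container hostname, as in the question, a possible follow-up (my own sketch, not part of the quoted answer, reusing the 172.17.0.x addresses from the question) is to rewrite the existing config with explicit IPs and reconfigure:
// Sketch: replace the auto-detected hostname with explicit container IPs on an already-initiated set
cfg = rs.conf();
cfg.members[0].host = "172.17.0.2:27017";
cfg.members[1].host = "172.17.0.4:27017";
rs.reconfig(cfg, { force: true });  // force is only needed if the set no longer has a reachable primary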

mongodb replicaset host name change error

I have a mongodb replica set on Ubuntu. In the replica set, the hosts are defined as localhost, as you can see:
{
"_id" : "myrep",
"version" : 4,
"members" : [
{
"_id" : 0,
"host" : "localhost:27017"
},
{
"_id" : 2,
"host" : "localhost:27018"
},
{
"_id" : 1,
"host" : "localhost:27019",
"priority" : 0
}
]
}
I want to change the host addresses to the real IP of the server, but when I run rs.reconfig, I get this error:
{
"assertion" : "hosts cannot switch between localhost and hostname",
"assertionCode" : 13645,
"errmsg" : "db assertion failure",
"ok" : 0
}
How can I solve it?
Thank you.
There is a cleaner way to do this:
use local
cfg = db.system.replset.findOne({_id:"replicaSetName"})
cfg.members[0].host="newHost:27017"
db.system.replset.update({_id:"replicaSetName"},cfg)
then restart mongo
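After the restart, an optional sanity check (my own addition, not part of the original answer) to confirm the new host names took effect:
// Verify the reconfigured host names and the state of each member
rs.conf().members.map(function (m) { return m.host; });
rs.status().members.forEach(function (m) {
  printjsononeline({ name: m.name, stateStr: m.stateStr, health: m.health });
});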
The only way I found to change the host names was to recreate the replica set. To do it right, the db directories need to be cleaned; then, after starting all servers in replication mode, creating a new replica set with the new host names fixed it.