We have a 6-node cluster: three hidden nodes, and two nodes set to a 24-hour delay.
ecset01:PRIMARY> cfg.members[5].slaveDelay = 0
0
ecset01:PRIMARY> rs.reconfig(cfg)
Mon Jan 12 11:30:15.802 DBClientCursor::init call() failed
Mon Jan 12 11:30:15.804 trying reconnect to 127.0.0.1:27017
Mon Jan 12 11:30:15.804 reconnect 127.0.0.1:27017 ok
Mon Jan 12 11:30:16.007 DBClientCursor::init call() failed
Mon Jan 12 11:30:16.008 JavaScript execution failed: Error: DBClientBase::findN: transport error: 127.0.0.1:27017 ns: admin.$cmd query: { authenticate: 1, nonce: "fe555b6fcb676ba7", user: "admin", key: "a2d59cbc51cf8c61b4cb45b7f4f8db80" } at src/mongo/shell/query.js:L78
>
Mon Jan 12 11:30:20.139 trying reconnect to 127.0.0.1:27017
Mon Jan 12 11:30:20.139 reconnect 127.0.0.1:27017 ok
ecset01:SECONDARY>
I would like to know how to change slaveDelay to 0 without impacting the primary.
You cannot. Reconfiguring the replica set can cause the primary to step down, which triggers a new election. The election will be brief if the replica set is healthy and you are only changing the delay, but you should still make configuration changes to your replica set during a maintenance window.
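If the only change is clearing the delay, the whole operation can be scripted so the window in which a re-election might happen stays as short as possible. A minimal sketch using mongo --eval, assuming the delayed member is still at index 5 as in the transcript above (add your usual authentication options; host and index are placeholders for your deployment):
# check replica set health first; only reconfigure when every member is healthy
mongo --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr, m.health); })'
# clear the delay on member 5 and push the new configuration in one step
mongo --quiet --eval 'var cfg = rs.conf(); cfg.members[5].slaveDelay = 0; printjson(rs.reconfig(cfg));'
As in the transcript above, the shell may briefly lose its connection while the new configuration propagates; that by itself is not a sign that anything went wrong.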
Related
Setup: replica set with 5 nodes, version 3.4.5.
I am trying to switch the PRIMARY with rs.stepDown(60, 30) but consistently getting this error:
rs0:PRIMARY> rs.stepDown(60, 30)
{
"ok" : 0,
"errmsg" : "No electable secondaries caught up as of 2017-07-11T00:21:11.205+0000. Please use {force: true} to force node to step down.",
"code" : 50,
"codeName" : "ExceededTimeLimit"
}
However, rs.printSlaveReplicationInfo() running in a parallel terminal confirms that all replicas are fully caught up:
rs0:PRIMARY> rs.printSlaveReplicationInfo()
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
source: X.X.X.X:27017
syncedTo: Tue Jul 11 2017 00:21:11 GMT+0000 (UTC)
0 secs (0 hrs) behind the primary
Am I doing something wrong?
UPD: I've checked long-running operations before and during rs.stepDown, as suggested below, and it looks like this:
# Before rs.stepDown
$ watch "mongo --quiet --eval 'JSON.stringify(db.currentOp())' | jq -r '.inprog[] | \"\(.secs_running) \(.desc) \(.op)\"' | sort -rnk1"
984287 rsSync none
984287 ReplBatcher none
67 WT RecordStoreThread: local.oplog.rs none
null SyncSourceFeedback none
null NoopWriter none
0 conn615153 command
0 conn614948 update
0 conn614748 getmore
...
# During rs.stepDown
984329 rsSync none
984329 ReplBatcher none
108 WT RecordStoreThread: local.oplog.rs none
16 conn615138 command
16 conn615136 command
16 conn615085 update
16 conn615079 insert
...
Basically, long-running user operations seem to appear as a result of rs.stepDown(): secs_running becomes nonzero once the PRIMARY attempts to switch over and keeps growing until stepDown fails, after which everything goes back to normal.
Any ideas on why this happens and whether that's normal at all?
I have used the command below to step down to secondary:
db.adminCommand( { replSetStepDown: 120, secondaryCatchUpPeriodSecs: 15, force: true } )
You can find this in the official MongoDB documentation:
https://docs.mongodb.com/manual/reference/command/replSetStepDown/
To close the loop on this question, it was determined that the failed stepdown was due to time going backward on the host.
MongoDB 3.4.6 is more resilient to time issues on the host, and upgrading the deployment fixes the stalling issues.
Before stepping down, rs.stepDown() will attempt to terminate long-running user operations that would block the primary from stepping down, such as an index build, a write operation, or a map-reduce job.
Do you have any long-running jobs going on? Check the result of db.currentOp().
You can also try giving the secondaries a longer catch-up period, for example rs.stepDown(60, 360).
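Building on the watch one-liner shown above, a rough way to see exactly which operations would block the step-down is to list only those that have been running for more than a few seconds, and kill them by opid once you have confirmed they are safe to interrupt (the 10-second threshold is arbitrary, and <opid> is a placeholder for a value printed by the first command):
# list user operations that have been running longer than 10 seconds
mongo --quiet --eval 'db.currentOp().inprog.forEach(function(op){
  if (op.secs_running > 10) { print(op.opid, op.secs_running + "s", op.op, op.ns); }
});'
# kill one of them by opid, only after confirming it is safe to interrupt
mongo --quiet --eval 'db.killOp(<opid>)'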
Quoting an answer from https://jira.mongodb.org/browse/SERVER-27015:
This is most likely due to the fact that by default the shutdown command will only succeed on a primary if the secondaries are fully caught up at the exact moment that the shutdown command is executed.
I faced a similar issue and tried the db.shutdownServer() command several times; it only worked when the secondary was 0 seconds behind the primary.
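In other words, the shutdown has to catch the secondaries at a moment when they are not lagging at all. One way to make that less of a lottery is to check the lag first and then give the shutdown command an explicit catch-up window via timeoutSecs (the 60 seconds here is just an example value):
# show how far each secondary is behind the primary
mongo --quiet --eval 'rs.printSlaveReplicationInfo()'
# ask the primary to wait up to 60 seconds for secondaries to catch up before shutting down
mongo --quiet --eval 'db.adminCommand({ shutdown: 1, timeoutSecs: 60 })'
Expect the shell to report a disconnect when the shutdown actually succeeds; that is normal.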
Can anyone say whether there is any practical limit on the number of databases in MongoDB? I've started to have serious problems since I passed 120 databases. Simple things like:
> show dbs
Mon Feb 10 16:35:32 DBClientCursor::init call() failed
Mon Feb 10 16:35:32 query failed : admin.$cmd { listDatabases: 1.0 } to: 127.0.0.1:27017
Mon Feb 10 16:35:32 Error: error doing query: failed src/mongo/shell/collection.js:155
Mon Feb 10 16:35:32 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:35:32 reconnect 127.0.0.1:27017 failed couldn't connect to server 127.0.0.1:27017
>
Mon Feb 10 16:36:01 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:36:01 reconnect 127.0.0.1:27017 failed couldn't connect to server 127.0.0.1:27017
>
Mon Feb 10 16:37:01 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:37:01 reconnect 127.0.0.1:27017 ok
and
> getMemInfo()
{ "virtual" : 32, "resident" : 7 }
Mon Feb 10 16:39:00 DBClientCursor::init call() failed
Mon Feb 10 16:39:00 query failed : admin.$cmd { replSetGetStatus: 1.0, forShell: 1.0 } to: 127.0.0.1:27017
> shell
Mon Feb 10 16:39:38 ReferenceError: shell is not defined (shell):1
Mon Feb 10 16:39:38 trying reconnect to 127.0.0.1:27017
Mon Feb 10 16:39:38 reconnect 127.0.0.1:27017 ok
Yet the log file stayed enigmatic.
What version of MongoDB are you running, and on what host?
Here is a test on CentOS 6.5, MongoDB 2.2 x86_64, straight from EPEL.
Here is a sample Python script that creates 1000 databases:
from pymongo import MongoClient

# insert one document into each of 1000 separate databases
mc = MongoClient()
for i in range(1000):
    print i
    mc['db%s' % i].test.insert({"test": True})
output:
...snip...
506
Traceback (most recent call last):
File "overload_mongo.py", line 6, in <module>
mc['db%s'%(i)].test.insert({"test":True})
File "/usr/lib64/python2.6/site-packages/pymongo/collection.py", line 357, in insert
continue_on_error, self.__uuid_subtype), safe)
File "/usr/lib64/python2.6/site-packages/pymongo/mongo_client.py", line 929, in _send_message
raise AutoReconnect(str(e))
pymongo.errors.AutoReconnect: [Errno 104] Connection reset by peer
There it is. Looking at the log:
ERROR: Uncaught std::exception: boost::filesystem::basic_directory_iterator constructor: Too many open files: "/index/bauman/db/_tmp/esort.1392056635.506/", terminating
The good ole too many open files problem
If you are on an enterprise Linux platform, you can drop the following into /etc/security/limits.d/mongodb.conf and start a new session:
mongodb hard nofile 99999
mongodb soft nofile 99999
mongodb hard nproc 99999
mongodb soft nproc 99999
I don't know how to achieve a similar result on Windows.
The 'problem' is that MongoDB wants to memory-map every single database file, so you need your host OS to allow it to do so.
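A quick way to see whether a running mongod is actually bumping against its file-descriptor ceiling (Linux only; these are standard procfs locations):
# effective limits of the running mongod process
cat /proc/$(pidof mongod)/limits | grep -i 'open files'
# number of files it currently holds open
ls /proc/$(pidof mongod)/fd | wc -l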
Same code as above
python overload_mongo.py
Output
...snip...
995
996
997
998
999
All better
I got the following error when I tried to shut down MongoDB in my Ubuntu VM.
I am running an Ubuntu 12.10 headless server.
The current MongoDB shell version is 2.0.6.
use admin
switched to db admin
> db.shutdownServer()
Tue Dec 10 14:17:03 DBClientCursor::init call() failed
Tue Dec 10 14:17:03 query failed : admin.$cmd { shutdown: 1.0 } to: 127.0.0.1
server should be down...
Tue Dec 10 14:17:03 trying reconnect to 127.0.0.1
Tue Dec 10 14:17:03 reconnect 127.0.0.1 ok
Tue Dec 10 14:17:03 Socket recv() errno:104 Connection reset by peer 127.0.0.1:27017
Tue Dec 10 14:17:03 SocketException: remote: 127.0.0.1:27017 error: 9001 socket exception [1] server [127.0.0.1:27017]
Tue Dec 10 14:17:03 DBClientCursor::init call() failed
Tue Dec 10 14:17:03 query failed : admin.$cmd { getlasterror: 1.0, w: 1.0 } to: 127.0.0.1
Tue Dec 10 14:17:03 Error: error doing query: failed shell/collection.js:151
What should I do?
My reason for trying to shut it down is that I want to upgrade to MongoDB 2.2.
Please advise.
Although the messaging is confusing, this is actually expected behaviour if you shut down via the mongo shell. Since you ran the db.shutdownServer() command through the mongo shell, the shell can no longer connect to the server, and the errors essentially indicate that it has been disconnected.
The mongo shell tries to automatically reconnect when you hit enter, which results in messages like "trying reconnect ...".
There is an open issue to improve this behaviour/messaging if you'd like to upvote/watch it: SERVER-5467.
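If you would rather avoid the reconnect noise entirely, you can stop the server from the operating system instead of from the shell. On a stock Ubuntu package install, something like the following usually works (the service name and dbpath are assumptions; check your init scripts and mongodb.conf):
# via the init script installed by the package
sudo service mongodb stop
# or directly, pointing --dbpath at the directory the server was started with
sudo mongod --shutdown --dbpath /var/lib/mongodb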
I am trying to configure a standalone MongoDB replica set with 3 instances. I seem to have gotten into a funky state: two of my instances went down, and I was left with all secondary nodes. I tried to follow this: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
I got this error though:
rs0:SECONDARY> rs.reconfig(cfg, {force : true})
{
"errmsg" : "exception: need most members up to reconfigure, not ok : obfuscated_hostname:27019",
"code" : 13144,
"ok" : 0
}
When I look at the logs I see this:
Fri Aug 2 20:45:11.895 [initandlisten] options: { config: "/etc/mongodb1.conf",
dbpath: "/var/lib/mongodb1", logappend: "true", logpath: "/var/log/mongodb/mongodb1.log",
port: 27018, replSet: "rs0" }
Fri Aug 2 20:45:11.897 [initandlisten] journal dir=/var/lib/mongodb1/journal
Fri Aug 2 20:45:11.897 [initandlisten] recover begin
Fri Aug 2 20:45:11.897 [initandlisten] recover lsn: 0
Fri Aug 2 20:45:11.897 [initandlisten] recover /var/lib/mongodb1/journal/j._0
Fri Aug 2 20:45:11.899 [initandlisten] recover cleaning up
Fri Aug 2 20:45:11.899 [initandlisten] removeJournalFiles
Fri Aug 2 20:45:11.899 [initandlisten] recover done
Fri Aug 2 20:45:11.923 [initandlisten] waiting for connections on port 27018
Fri Aug 2 20:45:11.925 [websvr] admin web console waiting for connections on port 28018
Fri Aug 2 20:45:11.927 [rsStart] replSet I am hostname_obfuscated:27018
Fri Aug 2 20:45:11.927 [rsStart] replSet STARTUP2
Fri Aug 2 20:45:11.929 [rsHealthPoll] replset info hostname_obf:27017 thinks that we are down
Fri Aug 2 20:45:11.929 [rsHealthPoll] replSet member hostname_obf:27017 is up
Fri Aug 2 20:45:11.929 [rsHealthPoll] replSet member hostname_obf:27017 is now in state SECONDARY
Fri Aug 2 20:45:12.587 [initandlisten] connection accepted from ip_obf:52446 #1 (1 connection now open)
Fri Aug 2 20:45:12.587 [initandlisten] connection accepted from ip_obf:52447 #2 (2 connections now open)
Fri Aug 2 20:45:12.588 [conn1] end connection ip_obf:52446 (1 connection now open)
Fri Aug 2 20:45:12.928 [rsSync] replSet SECONDARY
I'm unable to connect to the mongo instances, even though the logs say they are up and running. Any ideas on what to do here?
You did not mention which version of MongoDB you are using, but I assume it is post-2.0.
I think the problem with your forced reconfiguration is that, even after it, you still need the minimum number of nodes for a functioning replica set, i.e. 3. Since you originally had 3 members and lost 2, there is no way to turn the single surviving node into a functioning replica set.
Your only option for recovery would be to bring up the surviving node as a stand-alone server, back up the database, and then create a new 3-node replica set with that data.
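A rough outline of that recovery, assuming the surviving member uses /etc/mongodb1.conf and /var/lib/mongodb1 as in the log above (adjust paths, port and the new member list to your environment):
# 1. stop the surviving mongod, comment out the 'replSet' line in /etc/mongodb1.conf,
#    then start it again as a standalone server
mongod --config /etc/mongodb1.conf
# 2. take a backup while it is running standalone
mongodump --port 27018 --out /backup/rs0-survivor
# 3. start three fresh mongods with --replSet rs0, run rs.initiate() on one of them,
#    restore the dump with mongorestore, then rs.add() the remaining two members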
Yes, you can promote a single secondary replica to primary if the secondary server is running fine. Follow the simple steps below:
Step 1: Connect to member and check the current configuration
rs.conf()
Step 2: Save the current configuration to another variable.
x = rs.conf()
Step 3: Keep only the _id, host and port of the member that is to be made primary.
x.members = [{"_id":1,"host" : "localhost.localdomain:27017"}]
Step 4: Reconfigure the new replica set by force.
rs.reconfig(x, {force:true})
Now the desired member will be promoted to primary.
I have a sharded MongoDB environment. Everything was OK, but recently I noticed that the chunk counts differ significantly between shards:
chunks:
ProductionShardC 939
ProductionShardB 986
ProductionShardA 855
edPrimaryShard 1204
The balancer is running, and I can also see it in the locks:
db.locks.find( { _id : "balancer" } ).pretty()
{
"_id" : "balancer",
"process" : "ip-10-0-0-100:27017:1371132087:1804289383",
"state" : 2,
"ts" : ObjectId("51e1e5d75e1777de5f007ea5"),
"when" : ISODate("2013-07-13T23:42:15.660Z"),
"who" : "ip-10-0-0-100:27017:1371132087:1804289383:Balancer:846930886",
"why" : "doing balance round"
}
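For what it is worth, the same thing can be cross-checked directly from a mongos shell rather than by reading the locks collection; both helpers exist in the 2.4-era shell:
# is balancing enabled, and is a balancing round in progress right now?
mongo --quiet --eval 'print("enabled: " + sh.getBalancerState()); print("running: " + sh.isBalancerRunning());'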
Here is /var/log/mongo/mongos.log from the mongos:
cat mongos.log
Sun Aug 4 15:33:29.859 [mongosMain] MongoS version 2.4.4 starting: pid=8520 port=27017 64-bit host=ip-10-0-0-100 (--help for usage)
Sun Aug 4 15:33:29.859 [mongosMain] git version: 4ec1fb96702c9d4c57b1e06dd34eb73a16e407d2
Sun Aug 4 15:33:29.859 [mongosMain] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
Sun Aug 4 15:33:29.859 [mongosMain] options: { configdb: "10.0.1.200:27019,10.0.1.201:27019,10.0.1.202:27019", keyFile: "/media/Data/db/mongoKeyFile", logpath: "/var/log/mongo/mongos.log" }
Sun Aug 4 15:33:30.078 [mongosMain] SyncClusterConnection connecting to [10.0.1.200:27019]
Sun Aug 4 15:33:30.079 [mongosMain] SyncClusterConnection connecting to [10.0.1.201:27019]
Sun Aug 4 15:33:30.080 [mongosMain] SyncClusterConnection connecting to [10.0.1.202:27019]
Sun Aug 4 15:33:30.092 [mongosMain] SyncClusterConnection connecting to [10.0.1.200:27019]
Sun Aug 4 15:33:30.093 [mongosMain] SyncClusterConnection connecting to [10.0.1.201:27019]
Sun Aug 4 15:33:30.093 [mongosMain] SyncClusterConnection connecting to [10.0.1.202:27019]
Sun Aug 4 15:33:30.809 [mongosMain] waiting for connections on port 27017
Sun Aug 4 15:33:30.809 [Balancer] about to contact config servers and shards
Sun Aug 4 15:33:30.810 [websvr] admin web console waiting for connections on port 28017
Sun Aug 4 15:33:30.810 [Balancer] starting new replica set monitor for replica set edPrimaryShard with seed of 10.0.1.150:27017,10.0.1.151:27017,10.0.1.152:27017
Sun Aug 4 15:33:30.811 [Balancer] successfully connected to seed 10.0.1.150:27017 for replica set edPrimaryShard
Sun Aug 4 15:33:30.811 [Balancer] changing hosts to { 0: "10.0.1.150:27017", 1: "10.0.1.152:27017", 2: "10.0.1.151:27017" } from edPrimaryShard/
Sun Aug 4 15:33:30.811 [Balancer] trying to add new host 10.0.1.150:27017 to replica set edPrimaryShard
Sun Aug 4 15:33:30.812 [Balancer] successfully connected to new host 10.0.1.150:27017 in replica set edPrimaryShard
Sun Aug 4 15:33:30.812 [Balancer] trying to add new host 10.0.1.151:27017 to replica set edPrimaryShard
Sun Aug 4 15:33:30.813 [Balancer] successfully connected to new host 10.0.1.151:27017 in replica set edPrimaryShard
Sun Aug 4 15:33:30.813 [Balancer] trying to add new host 10.0.1.152:27017 to replica set edPrimaryShard
Sun Aug 4 15:33:30.813 [Balancer] successfully connected to new host 10.0.1.152:27017 in replica set edPrimaryShard
Sun Aug 4 15:33:31.013 [Balancer] Primary for replica set edPrimaryShard changed to 10.0.1.150:27017
Sun Aug 4 15:33:31.019 [Balancer] replica set monitor for replica set edPrimaryShard started, address is edPrimaryShard/10.0.1.150:27017,10.0.1.151:27017,10.0.1.152:27017
Sun Aug 4 15:33:31.019 [ReplicaSetMonitorWatcher] starting
Sun Aug 4 15:33:31.021 [Balancer] starting new replica set monitor for replica set ProductionShardA with seed of 10.0.1.160:27017,10.0.1.161:27017,10.0.1.162:27017
Sun Aug 4 15:33:31.021 [Balancer] successfully connected to seed 10.0.1.160:27017 for replica set ProductionShardA
Sun Aug 4 15:33:31.022 [Balancer] changing hosts to { 0: "10.0.1.160:27017", 1: "10.0.1.162:27017", 2: "10.0.1.161:27017" } from ProductionShardA/
Sun Aug 4 15:33:31.022 [Balancer] trying to add new host 10.0.1.160:27017 to replica set ProductionShardA
Sun Aug 4 15:33:31.022 [Balancer] successfully connected to new host 10.0.1.160:27017 in replica set ProductionShardA
Sun Aug 4 15:33:31.022 [Balancer] trying to add new host 10.0.1.161:27017 to replica set ProductionShardA
Sun Aug 4 15:33:31.023 [Balancer] successfully connected to new host 10.0.1.161:27017 in replica set ProductionShardA
Sun Aug 4 15:33:31.023 [Balancer] trying to add new host 10.0.1.162:27017 to replica set ProductionShardA
Sun Aug 4 15:33:31.024 [Balancer] successfully connected to new host 10.0.1.162:27017 in replica set ProductionShardA
Sun Aug 4 15:33:31.187 [Balancer] Primary for replica set ProductionShardA changed to 10.0.1.160:27017
Sun Aug 4 15:33:31.232 [Balancer] replica set monitor for replica set ProductionShardA started, address is ProductionShardA/10.0.1.160:27017,10.0.1.161:27017,10.0.1.162:27017
Sun Aug 4 15:33:31.234 [Balancer] starting new replica set monitor for replica set ProductionShardB with seed of 10.0.1.170:27017,10.0.1.171:27017,10.0.1.172:27017
Sun Aug 4 15:33:31.235 [Balancer] successfully connected to seed 10.0.1.170:27017 for replica set ProductionShardB
Sun Aug 4 15:33:31.237 [Balancer] changing hosts to { 0: "10.0.1.170:27017", 1: "10.0.1.172:27017", 2: "10.0.1.171:27017" } from ProductionShardB/
Sun Aug 4 15:33:31.237 [Balancer] trying to add new host 10.0.1.170:27017 to replica set ProductionShardB
Sun Aug 4 15:33:31.237 [Balancer] successfully connected to new host 10.0.1.170:27017 in replica set ProductionShardB
Sun Aug 4 15:33:31.237 [Balancer] trying to add new host 10.0.1.171:27017 to replica set ProductionShardB
Sun Aug 4 15:33:31.238 [Balancer] successfully connected to new host 10.0.1.171:27017 in replica set ProductionShardB
Sun Aug 4 15:33:31.238 [Balancer] trying to add new host 10.0.1.172:27017 to replica set ProductionShardB
Sun Aug 4 15:33:31.238 [Balancer] successfully connected to new host 10.0.1.172:27017 in replica set ProductionShardB
Sun Aug 4 15:33:31.361 [Balancer] Primary for replica set ProductionShardB changed to 10.0.1.170:27017
Sun Aug 4 15:33:31.379 [Balancer] replica set monitor for replica set ProductionShardB started, address is ProductionShardB/10.0.1.170:27017,10.0.1.171:27017,10.0.1.172:27017
Sun Aug 4 15:33:31.383 [Balancer] starting new replica set monitor for replica set ProductionShardC with seed of 10.0.1.180:27017,10.0.1.181:27017,10.0.1.182:27017
Sun Aug 4 15:33:31.383 [Balancer] successfully connected to seed 10.0.1.180:27017 for replica set ProductionShardC
Sun Aug 4 15:33:31.384 [Balancer] changing hosts to { 0: "10.0.1.180:27017", 1: "10.0.1.182:27017", 2: "10.0.1.181:27017" } from ProductionShardC/
Sun Aug 4 15:33:31.384 [Balancer] trying to add new host 10.0.1.180:27017 to replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] successfully connected to new host 10.0.1.180:27017 in replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] trying to add new host 10.0.1.181:27017 to replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] successfully connected to new host 10.0.1.181:27017 in replica set ProductionShardC
Sun Aug 4 15:33:31.385 [Balancer] trying to add new host 10.0.1.182:27017 to replica set ProductionShardC
Sun Aug 4 15:33:31.386 [Balancer] successfully connected to new host 10.0.1.182:27017 in replica set ProductionShardC
Sun Aug 4 15:33:31.499 [Balancer] Primary for replica set ProductionShardC changed to 10.0.1.180:27017
Sun Aug 4 15:33:31.510 [Balancer] replica set monitor for replica set ProductionShardC started, address is ProductionShardC/10.0.1.180:27017,10.0.1.181:27017,10.0.1.182:27017
Sun Aug 4 15:33:31.513 [Balancer] config servers and shards contacted successfully
Sun Aug 4 15:33:31.513 [Balancer] balancer id: ip-10-0-0-100:27017 started at Aug 4 15:33:31
Sun Aug 4 15:33:31.513 [Balancer] SyncClusterConnection connecting to [10.0.1.200:27019]
Sun Aug 4 15:33:31.514 [Balancer] SyncClusterConnection connecting to [10.0.1.201:27019]
Sun Aug 4 15:33:31.514 [Balancer] SyncClusterConnection connecting to [10.0.1.202:27019]
Sun Aug 4 15:33:31.537 [LockPinger] creating distributed lock ping thread for 10.0.1.200:27019,10.0.1.201:27019,10.0.1.202:27019 and process ip-10-0-0-100:27017:1375619611:1804289383 (sleeping for 30000ms)
Sun Aug 4 15:33:35.777 [mongosMain] connection accepted from 84.108.44.142:50916 #1 (1 connection now open)
Sun Aug 4 15:33:35.963 [conn1] authenticate db: admin { authenticate: 1, user: "root", nonce: "50c90ba9496d0a2d", key: "52390c478fffe89d03b776dd14e7c0d6" }
Sun Aug 4 15:33:37.704 [conn1] ChunkManager: time to load chunks for profiles.devices: 104ms sequenceNumber: 2 version: 2898|1177||51bb0e3a5e1777de5ffbf898 based on: (empty)
Sun Aug 4 15:33:37.712 [conn1] ChunkManager: time to load chunks for profiles.user_devices: 4ms sequenceNumber: 3 version: 92|25||51bb10be5e1777de5ffbf8d5 based on: (empty)
Sun Aug 4 15:33:37.715 [conn1] creating WriteBackListener for: 10.0.1.150:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.715 [conn1] creating WriteBackListener for: 10.0.1.151:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.715 [conn1] creating WriteBackListener for: 10.0.1.152:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.718 [conn1] creating WriteBackListener for: 10.0.1.160:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.718 [conn1] creating WriteBackListener for: 10.0.1.161:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.718 [conn1] creating WriteBackListener for: 10.0.1.162:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.722 [conn1] creating WriteBackListener for: 10.0.1.170:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.722 [conn1] creating WriteBackListener for: 10.0.1.171:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.722 [conn1] creating WriteBackListener for: 10.0.1.172:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.725 [conn1] creating WriteBackListener for: 10.0.1.180:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.725 [conn1] creating WriteBackListener for: 10.0.1.181:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:37.725 [conn1] creating WriteBackListener for: 10.0.1.182:27017 serverID: 51fe4a1a309fab9136fcd24a
Sun Aug 4 15:33:39.468 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.468 [conn1] warning: mongos collstats doesn't know about: userFlags
Sun Aug 4 15:33:39.469 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.469 [conn1] warning: mongos collstats doesn't know about: userFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: userFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: systemFlags
Sun Aug 4 15:33:39.470 [conn1] warning: mongos collstats doesn't know about: userFlags
Why is there such a big difference? One shard has 855 chunks and another has 1204.
How can I fix it?