When I use mongostat, it shows insert, query, etc.:
insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time
27985 *0 *0 *0 217 440|0 4.3% 79.9% 0 2.47G 598M 0|0 1|1 4.69m 8.01m 20 wh PRI Dec 17 14:41:23.423
23750 *0 *0 *0 193 393|0 5.5% 78.2% 0 2.47G 599M 0|0 1|1 4.05m 7.01m 20 wh PRI Dec 17 14:41:24.417
26240 *0 *0 *0 208 433|0 3.6% 79.2% 0 2.47G 591M 0|0 1|1 4.38m 7.60m 20 wh PRI Dec 17 14:41:25.417
27490 *0 *0 *0 227 451|0 5.7% 82.7% 0 2.47G 602M 0|0 2|0 4.56m 8.01m 20 wh PRI Dec 17 14:41:26.418
*0 *0 *0 *0 0 5|0 3.2% 79.2% 0 2.47G 612M 0|0 1|0 621b 16.9m 20 wh PRI Dec 17 14:41:27.418
*0 *0 *0 *0 0 6|0 3.2% 79.2% 0 2.47G 612M 0|0 1|0 1.15k 71.7k 20 wh PRI Dec 17 14:41:28.419
20994 2 3 *0 169 353|0 4.8% 80.7% 0 2.47G 613M 0|0 1|0 3.52m 5.98m 20 wh PRI Dec 17 14:41:29.419
28202 *0 *0 *0 250 501|0 6.4% 81.1% 0 2.47G 599M 0|0 1|0 4.76m 8.33m 20 wh PRI Dec 17 14:41:30.417
29650 *0 *0 *0 227 453|0 2.7% 78.2% 0 2.47G 596M 0|0 1|0 4.94m 8.57m 20 wh PRI Dec 17 14:41:31.436
26487 *0 *0 *0 213 440|0 4.8% 80.3% 0 2.47G 593M 0|0 1|0 4.44m 9.37m 20 wh PRI Dec 17 14:41:32.441
What do these parameters represent?
What are the two parts separated by | in command, qrw, and arw?
This is explained in the docs, but only implicitly. After a bit of digging I came to the following:
The fields qrw and arw are combinations of qr + qw and ar + aw respectively, with the following meanings:
qr: The length of the queue of clients waiting to read data from the MongoDB instance.
qw: The length of the queue of clients waiting to write data to the MongoDB instance.
ar: The number of active clients performing read operations.
aw: The number of active clients performing write operations.
All fields are described in the official docs: https://docs.mongodb.com/manual/reference/program/mongostat/#fields
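If you want to read these counters without mongostat, the same numbers are exposed by the serverStatus command, under globalLock.currentQueue (qr/qw) and globalLock.activeClients (ar/aw). Below is a minimal sketch using the Java driver (3.x+ assumed; the host and port are placeholders):

import com.mongodb.MongoClient;
import org.bson.Document;

public class QueueStats {
    public static void main(String[] args) {
        // Placeholder host/port; point this at your own mongod.
        try (MongoClient client = new MongoClient("localhost", 27017)) {
            Document status = client.getDatabase("admin")
                    .runCommand(new Document("serverStatus", 1));
            Document globalLock = (Document) status.get("globalLock");
            Document queue = (Document) globalLock.get("currentQueue");   // qr / qw
            Document active = (Document) globalLock.get("activeClients"); // ar / aw
            System.out.printf("qr=%s qw=%s ar=%s aw=%s%n",
                    queue.get("readers"), queue.get("writers"),
                    active.get("readers"), active.get("writers"));
        }
    }
}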
I have a collection with 2 billion documents, split across 3 shards, and I have allotted 20 GB of memory to each shard. Writes are very slow now: monitoring with mongostat shows only about 200 document inserts per second on average, while everything else looks normal. I do not have any indexes, and I don't know what to do.
insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time
127 *0 *0 *0 9 9|0 2.8% 76.5% 0 12.0G 9.59G 0|0 1|6 224k 263k 80 shard1 PRI Mar 11 16:08:38.710
115 *0 *0 *0 6 13|0 2.7% 76.6% 0 12.0G 9.60G 0|1 1|3 185k 166k 80 shard1 PRI Mar 11 16:08:39.625
137 *0 *0 *0 11 8|0 2.7% 76.7% 0 12.0G 9.60G 0|0 1|3 231k 369k 80 shard1 PRI Mar 11 16:08:40.625
135 *0 *0 *0 21 27|0 2.8% 76.7% 0 12.0G 9.60G 0|0 1|2 263k 340k 80 shard1 PRI Mar 11 16:08:41.625
119 *0 *0 *0 9 15|0 2.8% 76.7% 0 12.0G 9.60G 0|0 1|6 220k 212k 80 shard1 PRI Mar 11 16:08:42.702
111 *0 *0 *0 4 9|0 2.8% 76.7% 0 12.0G 9.60G 0|0 1|8 195k 206k 80 shard1 PRI Mar 11 16:08:44.115
154 *0 *0 *0 15 19|0 2.9% 76.8% 0 12.0G 9.60G 0|0 1|3 282k 441k 80 shard1 PRI Mar 11 16:08:44.625
150 *0 *0 *0 13 20|0 2.9% 76.8% 0 12.0G 9.60G 0|0 1|1 258k 355k 80 shard1 PRI Mar 11 16:08:45.625
127 *0 *0 *0 15 17|0 2.9% 76.8% 0 12.0G 9.60G 0|0 1|4 231k 311k 79 shard1 PRI Mar 11 16:08:46.625
113 *0 *0 *0 11 11|0 3.0% 76.9% 0 12.0G 9.60G 0|0 1|5 215k 278k 79 shard1 PRI Mar 11 16:08:47.738
and here is the IOPS data (iostat output):
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 161.00 299.50 3.99 4.86 39.34 8.55 18.47 49.90 1.
Options:
1) Improve storage performance:
A. Add better disks: HDD (bad choice) -> SSD (better) -> NVMe SSD (excellent).
B. Tweak the operating system/file system (mount with noatime, split across physical volumes for better IOPS).
C. Mount the journal files on a different volume.
2) Improvements from MongoDB:
A. Pre-split the collection.
B. Add more shards.
C. Switch off FTDC.
D. Optimize the writeConcern.
E. You say you don't have any indexes, yet you have sharded into 3 shards? (I think you need at least one index, on the shard key, to be able to shard a collection, so you may want to evaluate whether the shard key index is the correct one.)
F. Reduce the number of replica-set members during the initial load, if possible. (A majority write concern with 7 members requires each insert to be confirmed by 4 members.)
3) Improvements from the application:
A. Reduce the writeConcern if possible.
B. Insert in parallel batches instead of single sequential inserts (see the sketch after this list).
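As a rough illustration of options 3A and 3B, here is a hedged sketch using the Java driver: inserts are grouped into batches and fanned out over a small thread pool, with an explicit, relaxed write concern. The host, port, database, and collection names are placeholders, and the batch size and thread count are arbitrary starting points to tune, not recommendations:

import com.mongodb.MongoClient;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelLoader {
    public static void main(String[] args) throws InterruptedException {
        MongoClient client = new MongoClient("mongos-host", 20000);   // placeholder
        MongoCollection<Document> coll = client
                .getDatabase("mydb").getCollection("mycoll")          // placeholders
                .withWriteConcern(WriteConcern.W1);                   // w:1 instead of majority

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                List<Document> batch = new ArrayList<>(1000);
                for (int i = 0; i < 1000; i++) {
                    batch.add(new Document("value", Math.random()));  // dummy payload
                }
                coll.insertMany(batch);  // one round trip per 1000 documents
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        client.close();
    }
}

With w:1 each insert only waits for the primary, which avoids the multi-member confirmation described in option 2F.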
Please share some more details, like the schema, shard distribution, etc.
I have a Spark Streaming application that fetches log records from Kafka and inserts all the access-log records into MongoDB. The application runs normally for the first few batches, but after some batches there appear tasks in a job that take quite a long time to insert data into MongoDB. I guess it may be a problem with my MongoDB connection pool configuration, but I have tried changing quite a lot with no improvement.
Here are the results from the web UI:
[screenshot: time taken for each job]
[screenshot: time taken for abnormal tasks]
Spark: version 1.5.1 on YARN (this may really be too old).
MongoDB: version 3.4.4 running on four machines with 12 shards. Each machine has 160 GB+ of RAM and 40 CPUs.
Code for the MongoDB connection pool:
// Excerpt from a singleton: mongoClient is a shared static field, and
// ip1..ip4 (defined elsewhere) are the four server addresses.
private MongoManager() {
    if (mongoClient == null) {
        MongoClientOptions.Builder build = new MongoClientOptions.Builder();
        build.connectionsPerHost(200);
        build.socketTimeout(1000);                 // 1 s socket timeout
        build.threadsAllowedToBlockForConnectionMultiplier(200);
        build.maxWaitTime(1000 * 60 * 2);          // 2 min wait for a pooled connection
        build.connectTimeout(1000 * 60 * 1);       // 1 min connect timeout
        build.writeConcern(WriteConcern.UNACKNOWLEDGED);  // fire-and-forget writes
        MongoClientOptions myOptions = build.build();
        try {
            ServerAddress serverAddress1 = new ServerAddress(ip1, 20000);
            ServerAddress serverAddress2 = new ServerAddress(ip2, 20000);
            ServerAddress serverAddress3 = new ServerAddress(ip3, 20000);
            ServerAddress serverAddress4 = new ServerAddress(ip4, 20000);
            List<ServerAddress> lists = new ArrayList<>(8);
            lists.add(serverAddress1);
            lists.add(serverAddress2);
            lists.add(serverAddress3);
            lists.add(serverAddress4);
            mongoClient = new MongoClient(lists, myOptions);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public void inSertBatch(String dbName, String collectionName, List<DBObject> jsons) {
    if (jsons == null || jsons.isEmpty()) {
        return;
    }
    DB db = mongoClient.getDB(dbName);
    DBCollection dbCollection = db.getCollection(collectionName);
    dbCollection.insert(jsons);  // legacy API; unacknowledged per the options above
}
And the Spark Streaming code is as below:
referDstream.foreachRDD(rdd => {
  rdd.foreachPartition(partition => {
    val records = partition.map(x => {
      val data = x._1.split("_")
      val dbObject: DBObject = new BasicDBObject()
      dbObject.put("xxx", "xxx")
      ...
      dbObject
    }).toList
    val mg: MongoManager = MongoManager.getInstance()
    mg.inSertBatch("dbname", "colname", records.asJava)
  })
})
Script to submit application:
nohup ${SPARK_HOME}/bin/spark-submit --name ${jobname} --driver-cores 2 --driver-memory 8g \
  --num-executors 20 --executor-memory 16g --executor-cores 4 \
  --conf "spark.executor.extraJavaOptions=-XX:+UseConcMarkSweepGC" --conf "spark.shuffle.manager=hash" \
  --conf "spark.shuffle.consolidateFiles=true" --driver-java-options "-XX:+UseConcMarkSweepGC" \
  --master ${master} --class ${APP_MAIN} --jars ${jars_path:1} ${APP_HOME}/${MAINJAR} ${sparkconf} &
Data obtained from mongostat and the mongos shell:
$ mongostat -h xxx.xxx.xxx.xxx:20000
insert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn time
*0 *0 *0 *0 0 14|0 0 0B 1.17G 514M 0 0|0 0|0 985b 19.4k 58 Dec 7 03:10:52.949
2999 *0 *0 *0 0 8|0 0 0B 1.17G 514M 0 0|0 0|0 517b 17.6k 58 Dec 7 03:10:53.950
15000 *0 *0 *0 0 19|0 0 0B 1.17G 514M 0 0|0 0|0 402b 17.2k 58 Dec 7 03:10:54.950
17799 *0 *0 *0 0 22|0 0 0B 1.17G 514M 0 0|0 0|0 30.5m 16.9k 58 Dec 7 03:10:55.950
15996 *0 *0 *0 0 18|0 0 0B 1.17G 514M 0 0|0 0|0 343b 16.9k 58 Dec 7 03:10:56.950
12003 *0 *0 *0 0 26|0 0 0B 1.17G 514M 0 0|0 0|0 982b 19.3k 58 Dec 7 03:10:57.949
*0 *0 *0 *0 0 6|0 0 0B 1.17G 514M 0 0|0 0|0 518b 17.6k 58 Dec 7 03:10:58.949
4704 *0 *0 *0 0 8|0 0 0B 1.17G 514M 0 0|0 0|0 10.2m 17.1k 58 Dec 7 03:10:59.950
34600 *0 *0 *0 0 64|0 0 0B 1.17G 526M 0 0|0 0|0 26.9m 16.9k 58 Dec 7 03:11:00.951
33129 *0 *0 *0 0 36|0 0 0B 1.17G 526M 0 0|0 0|0 344b 17.0k 58 Dec 7 03:11:01.949
mongos> db.serverStatus().connections
{ "current" : 57, "available" : 19943, "totalCreated" : 2707 }
Thanks for any suggestions about how to solve this problem.
Can someone explain which DB is represented by "." in the locked db field of the mongostat output? Does it mean the global lock? Also, it shows up roughly every 2 seconds; is there any specific reason why it happens every 2 seconds? Does it have something to do with replication within the replica set?
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn set repl time
*0 *0 58 *0 191 63|0 0 4.39g 9.02g 241m 376 local:0.5% 0 0|0 0|0 28k 66k 55 rs-lol PRI 11:28:21
*0 *0 25 *0 93 30|0 0 4.39g 9.02g 241m 335 local:0.7% 0 0|0 0|0 13k 34k 55 rs-lol PRI 11:28:22
*0 *0 19 *0 49 26|0 1 4.39g 9.02g 241m 150 .:23.1% 0 0|0 0|0 9k 27k 55 rs-lol PRI 11:28:23
*0 *0 20 *0 67 25|0 0 4.39g 9.02g 241m 139 local:0.2% 0 0|0 0|0 10k 28k 55 rs-lol PRI 11:28:24
*0 *0 28 *0 102 30|0 0 4.39g 9.02g 241m 392 local:0.7% 0 0|0 0|0 14k 37k 55 rs-lol PRI 11:28:25
*0 *0 38 *0 133 41|0 0 4.39g 9.02g 241m 424 local:0.9% 0 0|0 0|0 19k 46k 55 rs-lol PRI 11:28:26
*0 *0 40 *0 144 45|0 0 4.39g 9.02g 241m 284 local:0.4% 0 0|0 0|0 20k 49k 55 rs-lol PRI 11:28:27
*0 *0 39 *0 138 43|0 0 4.39g 9.02g 241m 333 local:0.7% 0 0|0 0|0 19k 48k 55 rs-lol PRI 11:28:28
*0 *0 44 *0 159 49|0 0 4.39g 9.02g 241m 522 local:0.8% 0 0|0 0|0 22k 53k 55 rs-lol PRI 11:28:29
*0 *0 35 *0 128 37|0 0 4.39g 9.02g 241m 391 local:0.7% 0 0|0 0|0 17k 43k 55 rs-lol PRI 11:28:30
The locked db field you are referring to shows per-database locks as of version 2.2; prior to version 2.2 it referred to the global write lock. A "." in that column stands for the global lock rather than any single database; note that in your sample the "." row is the one where a flush occurred (flushes = 1).
I have a MongoDB replica set, and now there is a problem. When I use mongostat, the primary (master) node looks as follows:
insert query update delete getmore command flushes mapped vsize res faults
*0 *0 148 *0 260 96|0 0 942g 1888g 28.2g 439
*0 *0 351 *0 350 200|0 0 942g 1888g 28.2g 672
*0 *0 350 *0 593 257|0 0 942g 1888g 28.2g 1319
*0 *0 196 *0 328 99|0 0 942g 1888g 28.2g 679
*0 *0 159 *0 255 105|0 0 942g 1888g 28.2g 551
*0 *0 264 *0 329 184|0 0 942g 1888g 28.2g 466
*0 *0 168 *0 312 96|0 0 942g 1888g 28.2g 588
*0 *0 120 *0 227 70|0 0 942g 1888g 28.2g 396
*0 *0 132 *0 236 72|0 0 942g 1888g 28.2g 441
and the other nodes (secondaries):
insert query update delete getmore command flushes mapped vsize res faults
*364 659 *1353 *0 0 169|0 1 944g 1892g 28.7g 6852
*854 614 *1709 *0 0 169|0 0 944g 1892g 28.5g 6208
*603 484 *1320 *0 0 158|0 0 944g 1892g 28.6g 5383
*463 473 *1147 *0 0 152|0 0 944g 1892g 28.6g 5012
*343 488 *852 *0 0 130|0 0 944g 1892g 28.7g 4496
*614 654 *1187 *0 0 176|0 1 944g 1892g 28.7g 5058
*445 659 *1146 *0 0 183|0 0 944g 1892g 28.7g 5775
*58 43 *104 *0 0 18|0 0 944g 1892g 28.7g 299
*629 502 *1382 *0 0 152|0 0 944g 1892g 28.8g 4379
*94 42 *38 *0 0 11|0 0 944g 1892g 28.7g 1430
insert query update delete getmore command flushes mapped vsize res faults
*28 44 *10 *0 0 11|0 0 944g 1892g 27.6g 2419
*1520 1807 *3443 *0 0 592|0 0 944g 1892g 27.6g 14749
*168 67 *8 *0 0 15|0 0 944g 1892g 27.5g 1660
*1354 1920 *3471 *0 0 570|0 1 944g 1892g 27.5g 13817
*14 53 *22 *0 0 21|0 0 944g 1892g 27.5g 2167
*1526 2038 *3439 *0 0 575|0 1 944g 1892g 27.6g 14971
*256 74 *42 *0 0 13|0 0 944g 1892g 27.4g 1385
*1276 1796 *3427 *0 0 574|0 0 944g 1892g 27.5g 14610
*94 52 *62 *0 0 13|0 0 944g 1892g 27.5g 1657
*1544 1766 *3301 *0 0 507|0 1 944g 1892g 27.5g 11683
Read/write separation has been set up, but there is not much insert activity on the primary, while there are a lot of inserts and updates on the secondary nodes. Why?
I am inserting 100 million records into MongoDB using the Java API (about 50% of the fields are indexed; no bulk insert, due to business logic).
Collection and index structure:
db.gm_std_measurements.findOne();
{
    "_id" : ObjectId("530b6340e4b033fabd61fb99"),
    "fkDataSeriesId" : 421,
    "measDateUtc" : "2014-10-10 12:00:00",
    "measDateSite" : "2014-03-15 12:00:00",
    "project_id" : 379,
    "measvalue" : 597.516583008608,
    "refMeas" : false,
    "reliability" : 1
}
db.gm_std_measurements.getIndexes();
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "testdb.gm_std_measurements",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "fkDataSeriesId" : 1,
            "measDateUtc" : 1,
            "measDateSite" : 1,
            "project_id" : 1
        },
        "ns" : "testdb.gm_std_measurements",
        "name" : "default_mongodb_test_index"
    }
]
At the beginning mongostat shows that the speed is quite good, at about 20-30k inserts per second. But after a while the performance drops really quickly, with a system load of 5-10. What could be the reason?
Also, quite often mongostat seems to be frozen (or mongod is frozen): there are no inserts at all, and the "locked db" value is 0.0%. Is that normal?
Thanks a lot!
Below is some output from mongostat:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
39520 *0 *0 *0 0 1|0 0 160m 506m 67m 3 testdb:61.6% 0 0|0 0|1 9m 2k 4 15:58:26
36010 *0 *0 *0 0 1|0 0 160m 506m 83m 1 testdb:55.9% 0 0|0 0|0 8m 2k 4 15:58:27
33793 *0 *0 *0 0 1|0 0 288m 762m 92m 3 testdb:57.8% 0 0|0 0|0 7m 2k 4 15:58:28
32061 *0 *0 *0 0 1|0 0 288m 762m 113m 0 testdb:55.9% 0 0|0 0|0 7m 2k 4 15:58:29
32302 *0 *0 *0 0 1|0 0 288m 762m 110m 1 testdb:60.2% 0 0|0 0|1 7m 2k 4 15:58:30
31283 *0 *0 *0 0 1|0 0 288m 762m 138m 0 testdb:57.1% 0 0|0 0|1 7m 2k 4 15:58:31
1126 *0 *0 *0 0 1|0 0 544m 1.25g 367m 0 testdb:3.4% 0 0|0 0|1 258k 2k 4 15:58:55
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
18330 *0 *0 *0 0 1|0 0 544m 1.25g 369m 1 testdb:40.8% 0 0|0 0|1 4m 2k 4 15:58:56
4235 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:7.3% 0 0|0 0|1 974k 2k 4 15:58:57
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:58:58
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:58:59
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:00
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:01
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:02
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:04
20083 *0 *0 *0 0 1|0 0 544m 1.25g 378m 0 .:23.4% 0 0|0 0|1 4m 2k 4 15:59:05
28595 *0 *0 *0 0 1|0 0 544m 1.25g 404m 0 testdb:60.0% 0 0|0 0|0 6m 2k 4 15:59:06
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
26415 *0 *0 *0 0 1|0 0 544m 1.25g 381m 0 testdb:60.8% 0 0|0 0|1 6m 2k 4 15:59:07
27161 *0 *0 *0 0 1|0 0 544m 1.25g 411m 0 testdb:59.5% 0 0|0 0|1 6m 2k 4 15:59:08
25550 *0 *0 *0 0 1|0 0 544m 1.25g 397m 0 testdb:56.6% 0 0|0 0|1 5m 2k 4 15:59:09
26245 *0 *0 *0 0 1|0 0 544m 1.25g 429m 0 testdb:60.0% 0 0|0 0|1 6m 2k 4 15:59:10
27836 *0 *0 *0 0 1|0 0 544m 1.25g 444m 0 testdb:60.0% 0 0|0 0|1 6m 2k 4 15:59:11
27041 *0 *0 *0 0 1|0 0 544m 1.25g 422m 0 testdb:62.2% 0 0|0 0|1 6m 2k 4 15:59:12
26522 *0 *0 *0 0 1|0 0 544m 1.25g 463m 0 testdb:58.4% 0 0|0 0|1 6m 2k 4 15:59:13
27195 *0 *0 *0 0 1|0 0 544m 1.25g 475m 0 testdb:60.1% 0 0|0 0|1 6m 2k 4 15:59:14
25610 *0 *0 *0 0 1|0 0 1.03g 2.25g 500m 1 testdb:57.6% 0 0|0 0|1 5m 2k 4 15:59:15
25501 *0 *0 *0 0 1|0 0 1.03g 2.25g 474m 0 testdb:64.7% 0 0|0 0|1 5m 2k 4 15:59:16
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
27446 *0 *0 *0 0 1|0 0 1.03g 2.25g 489m 0 testdb:58.2% 0 0|1 0|1 6m 2k 4 15:59:17
27113 *0 *0 *0 0 1|0 0 1.03g 2.25g 515m 1 testdb:57.2% 0 0|1 0|1 6m 2k 4 15:59:18
25383 *0 *0 *0 0 1|0 0 1.03g 2.25g 524m 0 testdb:59.9% 0 0|0 0|1 5m 2k 4 15:59:19
27506 *0 *0 *0 0 1|0 0 1.03g 2.25g 546m 1 testdb:61.3% 0 0|0 0|1 6m 2k 4 15:59:20
14901 2 *0 *0 0 1|0 0 1.03g 2.25g 498m 0 testdb:32.8% 0 0|1 0|1 3m 2k 4 15:59:21
9026 *0 *0 *0 0 1|0 0 1.03g 2.25g 501m 0 .:62.5% 0 0|1 0|1 2m 2k 4 15:59:24
16834 *0 *0 *0 0 1|0 1 1.03g 2.25g 506m 0 .:73.9% 0 0|1 0|1 3m 3k 4 15:59:25
25975 *0 *0 *0 0 1|0 0 1.03g 2.25g 521m 0 testdb:60.8% 0 0|0 0|1 5m 2k 4 15:59:26
23389 *0 *0 *0 0 1|0 0 1.03g 2.25g 525m 0 testdb:58.4% 0 0|0 0|1 5m 2k 4 15:59:27
27226 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:55.0% 0 0|0 0|1 6m 2k 4 15:59:28
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
26362 *0 *0 *0 0 1|0 0 1.03g 2.25g 541m 0 testdb:56.3% 0 0|1 0|1 6m 2k 4 15:59:31
2658 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 .:64.2% 0 0|0 0|1 611k 3k 4 15:59:32
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:34
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:35
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:36
2777 *0 *0 *0 0 1|0 0 1.03g 2.25g 583m 0 testdb:4.8% 0 0|0 0|1 638k 2k 4 15:59:37
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:38
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:39
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:40
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:41
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:42
19823 *0 *0 *0 0 1|0 0 1.03g 2.25g 549m 0 testdb:57.8% 0 0|1 0|1 4m 2k 4 15:59:43
25267 *0 *0 *0 0 1|0 0 1.03g 2.25g 561m 0 testdb:60.4% 0 0|0 0|1 5m 2k 4 15:59:44
26489 *0 *0 *0 0 1|0 0 1.03g 2.25g 601m 0 testdb:58.8% 0 0|0 0|1 6m 2k 4 15:59:45
26516 *0 *0 *0 0 1|0 0 1.03g 2.25g 604m 0 testdb:58.4% 0 0|0 0|1 1m 2k 4 16:00:26
*0 *0 *0 *0 0 1|0 0 2.03g 4.25g 868m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 16:00:27
*0 *0 *0 *0 0 1|0 0 2.03g 4.25g 868m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 16:00:33
2775 *0 *0 *0 0 1|0 0 2.03g 4.25g 845m 0 testdb:0.8% 0 0|1 0|1 638k 3k 4 16:00:34
3886 *0 *0 *0 0 1|0 0 2.03g 4.25g 879m 0 .:30.5% 0 0|0 0|1 893k 2k 4 16:00:35
You can try dropping the indexes, performing the insert, and then recreating the indexes after the insert is finished. I think this will be faster overall.
You can also build the indexes in the background:
db.collection.ensureIndex( { a: 1 }, { background: true } )
if you want to keep querying while the index builds, but that will make the index build itself slower.
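For completeness, here is a hedged Java-driver sketch of the same drop-load-recreate pattern against the collection above (assuming a 3.x+ driver, where createIndex is the modern name for ensureIndex; the host and port are placeholders):

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import org.bson.Document;

public class ReindexAfterLoad {
    public static void main(String[] args) {
        try (MongoClient client = new MongoClient("localhost", 27017)) { // placeholder
            MongoCollection<Document> coll = client
                    .getDatabase("testdb").getCollection("gm_std_measurements");

            // 1. Drop the secondary index (the _id index cannot be dropped).
            coll.dropIndex("default_mongodb_test_index");

            // 2. ... perform the bulk load here ...

            // 3. Recreate the index; background:true keeps the collection
            //    usable during the build at the cost of a slower build.
            coll.createIndex(
                    new Document("fkDataSeriesId", 1)
                            .append("measDateUtc", 1)
                            .append("measDateSite", 1)
                            .append("project_id", 1),
                    new IndexOptions().name("default_mongodb_test_index").background(true));
        }
    }
}

Dropping the secondary index first lets the bulk load skip an index update per document; rebuilding it once at the end is generally cheaper than maintaining it incrementally.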