How to improve MongoDB write performance in a big collection

I have a collection with 2 billion documents across 3 shards, and I have allotted 20 GB of memory to each shard. Writes are very slow now. I use mongostat to monitor it, and it only writes about 200 documents per second on average, while everything else looks normal. I don't have any indexes, and I don't know what to do.
insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time
127 *0 *0 *0 9 9|0 2.8% 76.5% 0 12.0G 9.59G 0|0 1|6 224k 263k 80 shard1 PRI Mar 11 16:08:38.710
115 *0 *0 *0 6 13|0 2.7% 76.6% 0 12.0G 9.60G 0|1 1|3 185k 166k 80 shard1 PRI Mar 11 16:08:39.625
137 *0 *0 *0 11 8|0 2.7% 76.7% 0 12.0G 9.60G 0|0 1|3 231k 369k 80 shard1 PRI Mar 11 16:08:40.625
135 *0 *0 *0 21 27|0 2.8% 76.7% 0 12.0G 9.60G 0|0 1|2 263k 340k 80 shard1 PRI Mar 11 16:08:41.625
119 *0 *0 *0 9 15|0 2.8% 76.7% 0 12.0G 9.60G 0|0 1|6 220k 212k 80 shard1 PRI Mar 11 16:08:42.702
111 *0 *0 *0 4 9|0 2.8% 76.7% 0 12.0G 9.60G 0|0 1|8 195k 206k 80 shard1 PRI Mar 11 16:08:44.115
154 *0 *0 *0 15 19|0 2.9% 76.8% 0 12.0G 9.60G 0|0 1|3 282k 441k 80 shard1 PRI Mar 11 16:08:44.625
150 *0 *0 *0 13 20|0 2.9% 76.8% 0 12.0G 9.60G 0|0 1|1 258k 355k 80 shard1 PRI Mar 11 16:08:45.625
127 *0 *0 *0 15 17|0 2.9% 76.8% 0 12.0G 9.60G 0|0 1|4 231k 311k 79 shard1 PRI Mar 11 16:08:46.625
113 *0 *0 *0 11 11|0 3.0% 76.9% 0 12.0G 9.60G 0|0 1|5 215k 278k 79 shard1 PRI Mar 11 16:08:47.738
And the IOPS (iostat output):
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 0.00 161.00 299.50 3.99 4.86 39.34 8.55 18.47 49.90 1.

Options:
1) Improve storage performance:
A. Add better disks: HDD (bad choice) -> SSD (better) -> NVMe SSD (excellent).
B. Tune the operating system/file system (mount with noatime, split across physical volumes for better IOPS).
C. Mount the journal files on a different volume.
2) Improvements from mongoDB:
A. Pre-split the collection (see the sketch at the end of this answer).
B. Add more shards.
C. Switch off FTDC.
D. Optimize writeConcern.
E. You say you don't have any indexes, yet the collection is sharded across 3 shards? A collection must have an index on its shard key to be sharded, so you should verify that this index exists and evaluate whether the shard key is the right one.
F. Reduce the number of replica-set members during the initial load if possible (a majority write concern with 7 members requires each insert to be confirmed by 4 members).
3) Improvements from application:
A. Reduce writeConcern if possible.
B. Insert in parallel batches instead of single sequential inserts (see the sketch right after this list).
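For options 2.D, 3.A, and 3.B, here is a minimal sketch (not the asker's actual setup) of parallel batched inserts with an explicit write concern, using the legacy Java driver that appears elsewhere on this page. The host, database/collection names, thread count, batch size, and document contents are all illustrative assumptions:
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.WriteConcern;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelBatchLoader {
    public static void main(String[] args) throws InterruptedException {
        MongoClient client = new MongoClient("mongos-host", 27017); // hypothetical mongos address
        DBCollection coll = client.getDB("mydb").getCollection("mycoll"); // assumed names

        ExecutorService pool = Executors.newFixedThreadPool(8); // several concurrent writers
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                List<DBObject> batch = new ArrayList<>(1000);
                for (int i = 0; i < 250_000; i++) {
                    batch.add(new BasicDBObject("value", Math.random())); // placeholder document
                    if (batch.size() == 1000) {
                        // One round trip per 1000 documents; ACKNOWLEDGED (w:1) means
                        // the primary confirms the write, but no replica acks are awaited.
                        coll.insert(batch, WriteConcern.ACKNOWLEDGED);
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) {
                    coll.insert(batch, WriteConcern.ACKNOWLEDGED); // flush the last partial batch
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        client.close();
    }
}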
Could you share some more details, like the schema, shard distribution, etc.?
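And for option 2.A above, a hedged sketch of pre-splitting through a mongos before the load starts. The shard key name "userId" and the split points are purely illustrative assumptions:
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class PreSplit {
    public static void main(String[] args) {
        MongoClient mongos = new MongoClient("mongos-host", 27017); // hypothetical mongos address
        DB admin = mongos.getDB("admin");
        // Create a split point every 100M along the assumed shard-key range,
        // so chunks exist (and can be balanced) before any data arrives.
        for (long point = 100_000_000L; point < 2_000_000_000L; point += 100_000_000L) {
            admin.command(new BasicDBObject("split", "mydb.mycoll")
                    .append("middle", new BasicDBObject("userId", point)));
        }
        mongos.close();
    }
}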

Related

How to understand the result of mongostat?

When I use mongostat, it shows insert, query, etc.:
insert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time
27985 *0 *0 *0 217 440|0 4.3% 79.9% 0 2.47G 598M 0|0 1|1 4.69m 8.01m 20 wh PRI Dec 17 14:41:23.423
23750 *0 *0 *0 193 393|0 5.5% 78.2% 0 2.47G 599M 0|0 1|1 4.05m 7.01m 20 wh PRI Dec 17 14:41:24.417
26240 *0 *0 *0 208 433|0 3.6% 79.2% 0 2.47G 591M 0|0 1|1 4.38m 7.60m 20 wh PRI Dec 17 14:41:25.417
27490 *0 *0 *0 227 451|0 5.7% 82.7% 0 2.47G 602M 0|0 2|0 4.56m 8.01m 20 wh PRI Dec 17 14:41:26.418
*0 *0 *0 *0 0 5|0 3.2% 79.2% 0 2.47G 612M 0|0 1|0 621b 16.9m 20 wh PRI Dec 17 14:41:27.418
*0 *0 *0 *0 0 6|0 3.2% 79.2% 0 2.47G 612M 0|0 1|0 1.15k 71.7k 20 wh PRI Dec 17 14:41:28.419
20994 2 3 *0 169 353|0 4.8% 80.7% 0 2.47G 613M 0|0 1|0 3.52m 5.98m 20 wh PRI Dec 17 14:41:29.419
28202 *0 *0 *0 250 501|0 6.4% 81.1% 0 2.47G 599M 0|0 1|0 4.76m 8.33m 20 wh PRI Dec 17 14:41:30.417
29650 *0 *0 *0 227 453|0 2.7% 78.2% 0 2.47G 596M 0|0 1|0 4.94m 8.57m 20 wh PRI Dec 17 14:41:31.436
26487 *0 *0 *0 213 440|0 4.8% 80.3% 0 2.47G 593M 0|0 1|0 4.44m 9.37m 20 wh PRI Dec 17 14:41:32.441
What do these parameters represent?
What are the two parts separated by | in the command, qrw, and arw fields?
This is explained in the docs, but only implicitly. After a bit of digging I came to this result:
The fields qrw and arw are a combination of qr + qw and ar + aw, with the following meanings:
qr: The length of the queue of clients waiting to read data from the MongoDB instance.
qw: The length of the queue of clients waiting to write data to the MongoDB instance.
ar: The number of active clients performing read operations.
aw: The number of active clients performing write operations.
For example, in the sample above, qrw 0|0 and arw 2|0 mean no queued readers or writers, 2 active readers, and no active writers.
All fields are described in the official docs: https://docs.mongodb.com/manual/reference/program/mongostat/#fields

Some tasks that insert data into MongoDB take a long time in Spark Streaming

I have a Spark Streaming application that fetches log records from Kafka and inserts all the access log records into MongoDB. The application runs normally for the first few batches, but after some batches a few tasks in a job take quite a long time to insert data into MongoDB. I guess it is a problem with my MongoDB connection pool configuration, but I have tried changing quite a lot of settings with no improvement.
Here are the results from the web UI:
(screenshot: time taken for each job)
(screenshot: time taken for abnormal tasks)
Spark: version 1.5.1 on YARN (well, this may really be too old).
MongoDB: version 3.4.4 running on four machines with 12 shards. Each machine has 160 GB+ of RAM and 40 CPUs.
Code for the MongoDB connection pool:
private MongoManager() {
    if (mongoClient == null) {
        MongoClientOptions.Builder build = new MongoClientOptions.Builder();
        build.connectionsPerHost(200);
        build.socketTimeout(1000);
        build.threadsAllowedToBlockForConnectionMultiplier(200);
        build.maxWaitTime(1000 * 60 * 2);
        build.connectTimeout(1000 * 60 * 1);
        build.writeConcern(WriteConcern.UNACKNOWLEDGED);
        MongoClientOptions myOptions = build.build();
        try {
            ServerAddress serverAddress1 = new ServerAddress(ip1, 20000);
            ServerAddress serverAddress2 = new ServerAddress(ip2, 20000);
            ServerAddress serverAddress3 = new ServerAddress(ip3, 20000);
            ServerAddress serverAddress4 = new ServerAddress(ip4, 20000);
            List<ServerAddress> lists = new ArrayList<>(8);
            lists.add(serverAddress1);
            lists.add(serverAddress2);
            lists.add(serverAddress3);
            lists.add(serverAddress4);
            mongoClient = new MongoClient(lists, myOptions);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public void inSertBatch(String dbName, String collectionName, List<DBObject> jsons) {
    if (jsons == null || jsons.isEmpty()) {
        return;
    }
    DB db = mongoClient.getDB(dbName);
    DBCollection dbCollection = db.getCollection(collectionName);
    dbCollection.insert(jsons);
}
And the Spark Streaming code is as below:
referDstream.foreachRDD(rdd => {
  rdd.foreachPartition(partition => {
    val records = partition.map(x => {
      val data = x._1.split("_")
      val dbObject: DBObject = new BasicDBObject()
      dbObject.put("xxx", "xxx")
      ...
      dbObject
    }).toList
    val mg: MongoManager = MongoManager.getInstance()
    mg.inSertBatch("dbname", "colname", records.asJava)
  })
})
Script to submit application:
nohup ${SPARK_HOME}/bin/spark-submit --name ${jobname} --driver-cores 2 --driver-memory 8g \
  --num-executors 20 --executor-memory 16g --executor-cores 4 \
  --conf "spark.executor.extraJavaOptions=-XX:+UseConcMarkSweepGC" --conf "spark.shuffle.manager=hash" \
  --conf "spark.shuffle.consolidateFiles=true" --driver-java-options "-XX:+UseConcMarkSweepGC" \
  --master ${master} --class ${APP_MAIN} --jars ${jars_path:1} ${APP_HOME}/${MAINJAR} ${sparkconf} &
Data obtained from mongostat and the mongo shell:
$ mongostat -h xxx.xxx.xxx.xxx:20000
insert query update delete getmore command flushes mapped vsize res faults qrw arw net_in net_out conn time
*0 *0 *0 *0 0 14|0 0 0B 1.17G 514M 0 0|0 0|0 985b 19.4k 58 Dec 7 03:10:52.949
2999 *0 *0 *0 0 8|0 0 0B 1.17G 514M 0 0|0 0|0 517b 17.6k 58 Dec 7 03:10:53.950
15000 *0 *0 *0 0 19|0 0 0B 1.17G 514M 0 0|0 0|0 402b 17.2k 58 Dec 7 03:10:54.950
17799 *0 *0 *0 0 22|0 0 0B 1.17G 514M 0 0|0 0|0 30.5m 16.9k 58 Dec 7 03:10:55.950
15996 *0 *0 *0 0 18|0 0 0B 1.17G 514M 0 0|0 0|0 343b 16.9k 58 Dec 7 03:10:56.950
12003 *0 *0 *0 0 26|0 0 0B 1.17G 514M 0 0|0 0|0 982b 19.3k 58 Dec 7 03:10:57.949
*0 *0 *0 *0 0 6|0 0 0B 1.17G 514M 0 0|0 0|0 518b 17.6k 58 Dec 7 03:10:58.949
4704 *0 *0 *0 0 8|0 0 0B 1.17G 514M 0 0|0 0|0 10.2m 17.1k 58 Dec 7 03:10:59.950
34600 *0 *0 *0 0 64|0 0 0B 1.17G 526M 0 0|0 0|0 26.9m 16.9k 58 Dec 7 03:11:00.951
33129 *0 *0 *0 0 36|0 0 0B 1.17G 526M 0 0|0 0|0 344b 17.0k 58 Dec 7 03:11:01.949
mongos> db.serverStatus().connections
{ "current" : 57, "available" : 19943, "totalCreated" : 2707 }
Thanks for any suggestions on how to solve this problem.

MongoDB Out of Memory while total DB size < available RAM

I have read what I could find on memory consumption for MongoDB, but the gist of what I understood was that everything is handled by the OS: if no memory is available, the data will be read from disk and will replace something else in memory.
I have a pretty small database
> db.stats()
{
"db" : "prod",
"collections" : 11,
"objects" : 2022,
"avgObjSize" : 43469.34915924827,
"dataSize" : 87895024,
"storageSize" : 113283072,
"numExtents" : 30,
"indexes" : 10,
"indexSize" : 4840192,
"fileSize" : 201326592,
"nsSizeMB" : 16,
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"dataFileVersion" : {
"major" : 4,
"minor" : 22
},
"ok" : 1
}
on a small server with 1 GB of RAM. Given the size of the DB (~100 MB), I would assume 1 GB of RAM should be plenty.
However, I have been getting Out of Memory errors for some time now: first infrequently (once every 2-3 weeks), and now almost twice a day.
I'm at a loss as to what could be causing these issues, and thought I might be missing something completely.
I ran whatever diagnostics I found on the net:
ulimit
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7826
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7826
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
mongod version is 3.0.12
OS info:
NAME="Amazon Linux AMI"
VERSION="2015.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2015.03"
PRETTY_NAME="Amazon Linux AMI 2015.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2015.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
Amazon Linux AMI release 2015.03
db.serverStatus() is on pastebin. Edit: looking at https://docs.mongodb.com/manual/reference/command/serverStatus/#memory-status
If mem.virtual value is significantly larger than mem.mapped (e.g. 3 or more times), this may indicate a memory leak.
So that may be something to look at.
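For reference, a minimal sketch of pulling that mem section programmatically with the legacy Java driver (host and port are assumptions); it returns the same data that db.serverStatus().mem shows in the shell:
import com.mongodb.BasicDBObject;
import com.mongodb.CommandResult;
import com.mongodb.MongoClient;

public class MemCheck {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017); // assumed address of the 3.0.12 mongod
        CommandResult status = client.getDB("admin").command(new BasicDBObject("serverStatus", 1));
        // Compare mem.virtual to mem.mapped: per the docs quoted above, a ratio
        // of 3x or more may indicate a memory leak.
        System.out.println(status.get("mem"));
        client.close();
    }
}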
Swap is enabled
free -m output before launching MongoDB:
$ free -m
total used free shared buffers cached
Mem: 996 60 935 0 5 19
-/+ buffers/cache: 36 959
Swap: 4095 9 4086
and right after launch (launching + running the command immediately)
$ free -m
total used free shared buffers cached
Mem: 996 925 71 0 5 834
-/+ buffers/cache: 84 911
Swap: 4095 9 4086
mongostat run
$ mongostat
insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 688.0M 7.6G 764.0M 0 0|0 0|0 79b 10k 9 07:04:55
*0 *0 *0 *0 0 1|0 0 688.0M 7.6G 764.0M 0 0|0 0|0 79b 10k 9 07:04:56
*0 *0 *0 *0 0 3|0 0 688.0M 7.6G 764.0M 0 0|0 0|0 196b 11k 9 07:04:57
*0 *0 *0 *0 0 1|0 0 688.0M 7.6G 764.0M 0 0|0 0|0 79b 10k 9 07:04:58
*0 *0 *0 *0 0 2|0 0 688.0M 7.6G 764.0M 0 0|0 0|0 133b 10k 9 07:04:59
*0 *0 *0 *0 0 1|0 0 688.0M 7.6G 764.0M 0 0|0 0|0 79b 10k 9 07:05:00
Running mongostat a few hours later shows an increase in res memory (Edit: re-running serverStatus() shows no increase in mem.resident, though):
$ mongostat
insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 688.0M 7.7G 856.0M 8 0|0 0|0 79b 10k 8 10:39:50
*0 *0 *0 *0 0 1|0 0 688.0M 7.7G 856.0M 0 0|0 0|0 79b 10k 8 10:39:51
*0 *0 *0 *0 0 1|0 0 688.0M 7.7G 856.0M 0 0|0 0|0 79b 10k 8 10:39:52
*0 *0 *0 *0 0 1|0 0 688.0M 7.7G 856.0M 0 0|0 0|0 79b 10k 8 10:39:53
*0 *0 *0 *0 0 4|0 0 688.0M 7.7G 856.0M 0 0|0 0|0 250b 11k 8 10:39:54
*0 2 *0 *0 0 1|0 0 688.0M 7.7G 856.0M 0 0|0 0|0 183b 11k 8 10:39:55
*0 1 *0 *0 0 1|0 0 688.0M 7.7G 856.0M 0 0|0 0|0 131b 11k 8 10:39:56
swapon -s
$ swapon -s
Filename Type Size Used Priority
/swapfile file 4194300 10344 -1
Edit: I've set up MongoDB Cloud Monitoring, and the issue just reoccurred. This is the report; the mongo process was killed at 02:29.
Do you have any idea what may be causing the issue? Or hints into where I should look?
Thanks for your help!
Seb

Mongostat locked db field

Can someone explain which DB is represented by "." in the locked db field of the mongostat output? Does it mean the global lock? Also, it is output every 2 seconds; is there a specific reason why it happens every 2 seconds? Is it something to do with replication happening within the replica set?
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn set repl time
*0 *0 58 *0 191 63|0 0 4.39g 9.02g 241m 376 local:0.5% 0 0|0 0|0 28k 66k 55 rs-lol PRI 11:28:21
*0 *0 25 *0 93 30|0 0 4.39g 9.02g 241m 335 local:0.7% 0 0|0 0|0 13k 34k 55 rs-lol PRI 11:28:22
*0 *0 19 *0 49 26|0 1 4.39g 9.02g 241m 150 .:23.1% 0 0|0 0|0 9k 27k 55 rs-lol PRI 11:28:23
*0 *0 20 *0 67 25|0 0 4.39g 9.02g 241m 139 local:0.2% 0 0|0 0|0 10k 28k 55 rs-lol PRI 11:28:24
*0 *0 28 *0 102 30|0 0 4.39g 9.02g 241m 392 local:0.7% 0 0|0 0|0 14k 37k 55 rs-lol PRI 11:28:25
*0 *0 38 *0 133 41|0 0 4.39g 9.02g 241m 424 local:0.9% 0 0|0 0|0 19k 46k 55 rs-lol PRI 11:28:26
*0 *0 40 *0 144 45|0 0 4.39g 9.02g 241m 284 local:0.4% 0 0|0 0|0 20k 49k 55 rs-lol PRI 11:28:27
*0 *0 39 *0 138 43|0 0 4.39g 9.02g 241m 333 local:0.7% 0 0|0 0|0 19k 48k 55 rs-lol PRI 11:28:28
*0 *0 44 *0 159 49|0 0 4.39g 9.02g 241m 522 local:0.8% 0 0|0 0|0 22k 53k 55 rs-lol PRI 11:28:29
*0 *0 35 *0 128 37|0 0 4.39g 9.02g 241m 391 local:0.7% 0 0|0 0|0 17k 43k 55 rs-lol PRI 11:28:30
The locked db field you are referring to in your example refers to per-DB locks as of version 2.2; prior to version 2.2, it referred to the global write lock. A "." as the database name generally means the reported lock time was attributed to the global lock rather than to any specific database.

Why does MongoDB insert performance become so slow after a while?

I am inserting 100 million records into MongoDB using the Java API (about 50% of the fields are indexed; no bulk inserts, due to business logic).
Document and index structure:
db.gm_std_measurements.findOne();
{
    "_id" : ObjectId("530b6340e4b033fabd61fb99"),
    "fkDataSeriesId" : 421,
    "measDateUtc" : "2014-10-10 12:00:00",
    "measDateSite" : "2014-03-15 12:00:00",
    "project_id" : 379,
    "measvalue" : 597.516583008608,
    "refMeas" : false,
    "reliability" : 1
}
db.gm_std_measurements.getIndexes();
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "testdb.gm_std_measurements",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "fkDataSeriesId" : 1,
            "measDateUtc" : 1,
            "measDateSite" : 1,
            "project_id" : 1
        },
        "ns" : "testdb.gm_std_measurements",
        "name" : "default_mongodb_test_index"
    }
]
At the beginning, mongostat reports quite a good speed, about 20-30k inserts per second. But after a while the performance drops very quickly, with a system load of 5-10. What could be the reason?
Also, quite often mongostat seems to be frozen (or mongod is frozen): there are no inserts at all, and the "locked db" trace is also 0.0%. Is that normal?
Thanks a lot!
Below is some output from mongostat:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
39520 *0 *0 *0 0 1|0 0 160m 506m 67m 3 testdb:61.6% 0 0|0 0|1 9m 2k 4 15:58:26
36010 *0 *0 *0 0 1|0 0 160m 506m 83m 1 testdb:55.9% 0 0|0 0|0 8m 2k 4 15:58:27
33793 *0 *0 *0 0 1|0 0 288m 762m 92m 3 testdb:57.8% 0 0|0 0|0 7m 2k 4 15:58:28
32061 *0 *0 *0 0 1|0 0 288m 762m 113m 0 testdb:55.9% 0 0|0 0|0 7m 2k 4 15:58:29
32302 *0 *0 *0 0 1|0 0 288m 762m 110m 1 testdb:60.2% 0 0|0 0|1 7m 2k 4 15:58:30
31283 *0 *0 *0 0 1|0 0 288m 762m 138m 0 testdb:57.1% 0 0|0 0|1 7m 2k 4 15:58:31
1126 *0 *0 *0 0 1|0 0 544m 1.25g 367m 0 testdb:3.4% 0 0|0 0|1 258k 2k 4 15:58:55
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
18330 *0 *0 *0 0 1|0 0 544m 1.25g 369m 1 testdb:40.8% 0 0|0 0|1 4m 2k 4 15:58:56
4235 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:7.3% 0 0|0 0|1 974k 2k 4 15:58:57
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:58:58
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:58:59
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:00
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:01
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:02
*0 *0 *0 *0 0 1|0 0 544m 1.25g 395m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:04
20083 *0 *0 *0 0 1|0 0 544m 1.25g 378m 0 .:23.4% 0 0|0 0|1 4m 2k 4 15:59:05
28595 *0 *0 *0 0 1|0 0 544m 1.25g 404m 0 testdb:60.0% 0 0|0 0|0 6m 2k 4 15:59:06
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
26415 *0 *0 *0 0 1|0 0 544m 1.25g 381m 0 testdb:60.8% 0 0|0 0|1 6m 2k 4 15:59:07
27161 *0 *0 *0 0 1|0 0 544m 1.25g 411m 0 testdb:59.5% 0 0|0 0|1 6m 2k 4 15:59:08
25550 *0 *0 *0 0 1|0 0 544m 1.25g 397m 0 testdb:56.6% 0 0|0 0|1 5m 2k 4 15:59:09
26245 *0 *0 *0 0 1|0 0 544m 1.25g 429m 0 testdb:60.0% 0 0|0 0|1 6m 2k 4 15:59:10
27836 *0 *0 *0 0 1|0 0 544m 1.25g 444m 0 testdb:60.0% 0 0|0 0|1 6m 2k 4 15:59:11
27041 *0 *0 *0 0 1|0 0 544m 1.25g 422m 0 testdb:62.2% 0 0|0 0|1 6m 2k 4 15:59:12
26522 *0 *0 *0 0 1|0 0 544m 1.25g 463m 0 testdb:58.4% 0 0|0 0|1 6m 2k 4 15:59:13
27195 *0 *0 *0 0 1|0 0 544m 1.25g 475m 0 testdb:60.1% 0 0|0 0|1 6m 2k 4 15:59:14
25610 *0 *0 *0 0 1|0 0 1.03g 2.25g 500m 1 testdb:57.6% 0 0|0 0|1 5m 2k 4 15:59:15
25501 *0 *0 *0 0 1|0 0 1.03g 2.25g 474m 0 testdb:64.7% 0 0|0 0|1 5m 2k 4 15:59:16
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
27446 *0 *0 *0 0 1|0 0 1.03g 2.25g 489m 0 testdb:58.2% 0 0|1 0|1 6m 2k 4 15:59:17
27113 *0 *0 *0 0 1|0 0 1.03g 2.25g 515m 1 testdb:57.2% 0 0|1 0|1 6m 2k 4 15:59:18
25383 *0 *0 *0 0 1|0 0 1.03g 2.25g 524m 0 testdb:59.9% 0 0|0 0|1 5m 2k 4 15:59:19
27506 *0 *0 *0 0 1|0 0 1.03g 2.25g 546m 1 testdb:61.3% 0 0|0 0|1 6m 2k 4 15:59:20
14901 2 *0 *0 0 1|0 0 1.03g 2.25g 498m 0 testdb:32.8% 0 0|1 0|1 3m 2k 4 15:59:21
9026 *0 *0 *0 0 1|0 0 1.03g 2.25g 501m 0 .:62.5% 0 0|1 0|1 2m 2k 4 15:59:24
16834 *0 *0 *0 0 1|0 1 1.03g 2.25g 506m 0 .:73.9% 0 0|1 0|1 3m 3k 4 15:59:25
25975 *0 *0 *0 0 1|0 0 1.03g 2.25g 521m 0 testdb:60.8% 0 0|0 0|1 5m 2k 4 15:59:26
23389 *0 *0 *0 0 1|0 0 1.03g 2.25g 525m 0 testdb:58.4% 0 0|0 0|1 5m 2k 4 15:59:27
27226 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:55.0% 0 0|0 0|1 6m 2k 4 15:59:28
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
26362 *0 *0 *0 0 1|0 0 1.03g 2.25g 541m 0 testdb:56.3% 0 0|1 0|1 6m 2k 4 15:59:31
2658 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 .:64.2% 0 0|0 0|1 611k 3k 4 15:59:32
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:34
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:35
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 564m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:36
2777 *0 *0 *0 0 1|0 0 1.03g 2.25g 583m 0 testdb:4.8% 0 0|0 0|1 638k 2k 4 15:59:37
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:38
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:39
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:40
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:41
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 1.03g 2.25g 584m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 15:59:42
19823 *0 *0 *0 0 1|0 0 1.03g 2.25g 549m 0 testdb:57.8% 0 0|1 0|1 4m 2k 4 15:59:43
25267 *0 *0 *0 0 1|0 0 1.03g 2.25g 561m 0 testdb:60.4% 0 0|0 0|1 5m 2k 4 15:59:44
26489 *0 *0 *0 0 1|0 0 1.03g 2.25g 601m 0 testdb:58.8% 0 0|0 0|1 6m 2k 4 15:59:45
26516 *0 *0 *0 0 1|0 0 1.03g 2.25g 604m 0 testdb:58.4% 0 0|0 0|1 1m 2k 4 16:00:26
*0 *0 *0 *0 0 1|0 0 2.03g 4.25g 868m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 16:00:27
*0 *0 *0 *0 0 1|0 0 2.03g 4.25g 868m 0 testdb:0.0% 0 0|0 0|1 62b 2k 4 16:00:33
2775 *0 *0 *0 0 1|0 0 2.03g 4.25g 845m 0 testdb:0.8% 0 0|1 0|1 638k 3k 4 16:00:34
3886 *0 *0 *0 0 1|0 0 2.03g 4.25g 879m 0 .:30.5% 0 0|0 0|1 893k 2k 4 16:00:35
You can try dropping the indexes, then performing the inserts, and recreating the indexes once the insert has finished. I think this will be faster overall.
You can also rebuild the indexes in the background:
db.collection.ensureIndex( { a: 1 }, { background: true } )
if you want to continue querying while the index builds, but that will make the index creation slower.
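As a rough sketch of that drop-then-recreate flow with the legacy Java driver the question uses (the index name and key pattern are taken from the question; the host and everything else are assumptions):
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class BulkLoad {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017); // assumed address
        DBCollection coll = client.getDB("testdb").getCollection("gm_std_measurements");

        // 1. Drop the secondary index before the bulk load (the _id index always stays).
        coll.dropIndex("default_mongodb_test_index");

        // 2. ... run the 100M-document insert here ...

        // 3. Recreate the index; background:true keeps the DB usable, but the build is slower.
        coll.createIndex(
                new BasicDBObject("fkDataSeriesId", 1)
                        .append("measDateUtc", 1)
                        .append("measDateSite", 1)
                        .append("project_id", 1),
                new BasicDBObject("background", true)
                        .append("name", "default_mongodb_test_index"));
        client.close();
    }
}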