I have a really simple Mongo query that should use the _id index.
The explain plan looks good:
> db.items.find({ deleted_at: null, _id: ObjectId('541fd8016d792e0804820100') }).sort({ positions: 1 }).explain()
{
"cursor" : "BtreeCursor _id_",
"isMultiKey" : false,
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"nscannedObjectsAllPlans" : 6,
"nscannedAllPlans" : 7,
"scanAndOrder" : true,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"_id" : [
[
ObjectId("541fd8016d792e0804820100"),
ObjectId("541fd8016d792e0804820100")
]
]
},
"server" : "mydbserver:27017",
"filterSet" : false
}
But when I actually execute the query, it takes 100-800 ms:
> db.items.find({ deleted_at: null, _id: ObjectId('541fd8016d792e0804820100') }).sort({ positions: 1 })
2014-09-26T12:34:00.279+0300 [conn38926] query mydb.items query: { query: { deleted_at: null, _id: ObjectId('541fd8016d792e0804820100') }, orderby: { positions: 1.0 } } planSummary: IXSCAN { positions: 1 } ntoreturn:0 ntoskip:0 nscanned:70043 nscannedObjects:70043 keyUpdates:0 numYields:1 locks(micros) r:1391012 nreturned:1 reslen:814 761ms
Why is it reporting nscanned:70043 and nscannedObjects:70043, and why is it so slow?
I am using MongoDB 2.6.4 on CentOS 6.
I tried repairing MongoDB and a full dump/import; neither helped.
Update 1
> db.items.find({deleted_at:null}).count()
67327
> db.items.find().count()
70043
I don't have an index on deleted_at, but I do have an index on _id.
Update 2 (2014-09-26 14:57 EET)
Adding an index on { _id, deleted_at } doesn't help; even explain() doesn't use that index :(
> db.items.ensureIndex({ _id: 1, deleted_at: 1 }, { unique: true })
> db.items.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "mydb.items"
},
{
"v" : 1,
"unique" : true,
"key" : {
"_id" : 1,
"deleted_at" : 1
},
"name" : "_id_1_deleted_at_1",
"ns" : "mydb.items"
}
]
> db.items.find({ deleted_at: null, _id: ObjectId('541fd8016d792e0804820100') }).sort({ positions: 1 }).explain()
{
"cursor" : "BtreeCursor _id_",
"isMultiKey" : false,
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"nscannedObjectsAllPlans" : 7,
"nscannedAllPlans" : 8,
"scanAndOrder" : true,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"_id" : [
[
ObjectId("541fd8016d792e0804820100"),
ObjectId("541fd8016d792e0804820100")
]
]
},
"server" : "myserver:27017",
"filterSet" : false
}
Update 3 (2014-09-26 15:03:32 EET)
Adding an index on { _id, deleted_at, positions } helped. But it still seems weird that the previous cases force a full collection scan.
> db.items.ensureIndex({ _id: 1, deleted_at: 1, positions: 1 })
> db.items.find({ deleted_at: null, _id: ObjectId('541fd8016d792e0804820100') }).sort({ positions: 1 }).explain()
{
"cursor" : "BtreeCursor _id_1_deleted_at_1_positions_1",
"isMultiKey" : false,
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"nscannedObjectsAllPlans" : 3,
"nscannedAllPlans" : 3,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"_id" : [
[
ObjectId("541fd8016d792e0804820100"),
ObjectId("541fd8016d792e0804820100")
]
],
"deleted_at" : [
[
null,
null
]
],
"positions" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "myserver:27017",
"filterSet" : false
}
This looks like a bug. The query planner should select the _id index, and the _id index should be all you need, as it must immediately reduce the result set to one document. The sort should be irrelevant since it's ordering a single document. It's a weird case because you are explicitly asking for one document with an _id match and then sorting it. As a workaround, you should be able to bypass Mongoid and drop the sort.
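As a minimal sketch of that workaround, using the query from the question, the sort can simply be dropped, since an _id match returns at most one document:
> db.items.find({ deleted_at: null, _id: ObjectId('541fd8016d792e0804820100') })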
.explain() does not ignore the sort. You can test that simply like so:
> for (var i = 0; i < 100; i++) { db.sort_test.insert({ "i" : i }) }
> db.sort_test.ensureIndex({ "i" : 1 })
> db.sort_test.find().sort({ "i" : 1 }).explain()
If MongoDB can't use an index for the sort, it will sort in memory. The scanAndOrder field in the explain output tells you whether MongoDB can use an index to sort the query results (scanAndOrder : false means it can use the index for the sort).
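For contrast, sorting on a field with no index should report scanAndOrder : true (a sketch; the field j is hypothetical and unindexed, and the output is abbreviated):
> db.sort_test.find().sort({ "j" : 1 }).explain()
{
"cursor" : "BasicCursor",
...
"scanAndOrder" : true,
...
}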
Could you please file a bug report in the MongoDB SERVER project? Perhaps the engineers will say it's working as designed, but the behavior looks wrong to me, and there have been a couple of query planner gotchas in 2.6.4 already. I may have missed it if it was said before, but does the presence/absence of deleted_at : null affect the problem?
Also, if you do file a ticket, please post a link to it in your question or as a comment on this answer so it's easy for others to follow. Thanks!
UPDATE: corrected my answer, which suggested using an (_id, deleted_at) compound index. Also, in the comments, there is more clarity on how explain() might not reflect the query planner in certain cases.
What we expect is that find() will filter the results and then sort() will apply to the filtered set. However, according to this doc, the query planner will use neither the index on _id nor the index (if any) on positions for this query. Now, if you have a compound index on (_id, positions), it should be able to use that index to process the query.
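A sketch of creating such a compound index on the question's items collection (using the actual field name positions):
> db.items.ensureIndex({ _id: 1, positions: 1 })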
The gist is that if you have an index that covers your query, you can be assured of your index being used by the query planner. In your case, the query definitely isn't covered, as indicated by indexOnly : false in the explain plan.
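To illustrate what a covered query looks like, here is a sketch with a hypothetical collection coll and an assumed index on { a: 1, b: 1 }; the projection must be restricted to the indexed fields, and explain reports indexOnly : true when the query is covered:
> db.coll.ensureIndex({ a: 1, b: 1 })
> db.coll.find({ a: 42 }, { _id: 0, a: 1, b: 1 }).explain()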
If this is by design, it definitely is counter-intuitive, and as wdberkely suggested, you should file a bug report so that the community gets a more detailed explanation of this peculiar behaviour.
I am guessing that you have 70043 objects with an _id of '541fd8016d792e0804820100'. Can you simply do a find on that _id and count() them? An index does not mean a 'unique' index -- if you have an index "bucket" with a certain value, once the bucket is reached, it scans within the bucket, looking at every deleted_at value to see if it's null. To get around this, use a compound index on (_id, deleted_at).
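A sketch of that check, using the _id from the question:
> db.items.find({ _id: ObjectId('541fd8016d792e0804820100') }).count()
A result of 1 would rule this guess out.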
Related
I have the following schema:
{
score : { type : Number},
edges : [{name : { type : String }, rank : { type : Number }}],
disable : {type : Boolean},
}
I have tried to build an index for the following query:
db.test.find({
disable: false,
edges: { $all: [
{ $elemMatch: { name: "GOOD" } },
{ $elemMatch: { name: "GREAT" } }
] }
}).sort({ "score": 1 })
.limit(40)
.explain()
First try
When I created the index named "score":
{
"edges.name" : 1,
"score" : 1
}
The 'explain' returned:
{
"cursor" : "BtreeCursor score",
....
}
Second try
When I changed "score" to:
{
"disable" : 1,
"edges.name" : 1,
"score" : 1
}
The 'explain' returned:
"clauses" : [
{
"cursor" : "BtreeCursor name",
"isMultiKey" : true,
"n" : 0,
"nscannedObjects" : 304,
"nscanned" : 304,
"scanAndOrder" : true,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"edges.name" : [
[
"GOOD",
"GOOD"
]
]
}
},
{
"cursor" : "BtreeCursor name",
"isMultiKey" : true,
"n" : 0,
"nscannedObjects" : 304,
"nscanned" : 304,
"scanAndOrder" : true,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"edges.name" : [
[
"GOOD",
"GOOD"
]
]
}
}
],
"cursor" : "QueryOptimizerCursor",
....
}
Where the 'name' index is:
{
'edges.name' : 1
}
Why does Mongo refuse to use the 'disable' field in the index?
(I've tried other fields besides 'disable' too, but got the same problem.)
It looks like the query optimiser is choosing the most efficient index, and the index on edges.name works best. I recreated your collection by inserting a couple of documents according to your schema. I then created the compound index below.
db.test.ensureIndex({
"disable" : 1,
"edges.name" : 1,
"score" : 1
});
When running an explain on the query you specified, the index was used.
> db.test.find({ ... }).sort({ ... }).explain()
{
"cursor" : "BtreeCursor disable_1_edges.name_1_score_1",
"isMultiKey" : true,
...
}
However, as soon as I added the index on edges.name, the query optimiser chose that index for the query.
> db.test.find({ ... }).sort({ ... }).explain()
{
"cursor" : "BtreeCursor edges.name_1",
"isMultiKey" : true,
...
}
You can still hint the other index though, if you want the query to use the compound index.
> db.test.find({ ... }).sort({ ... }).hint("disable_1_edges.name_1_score_1").explain()
{
"cursor" : "BtreeCursor disable_1_edges.name_1_score_1",
"isMultiKey" : true,
...
}
[EDIT: Added possible explanation related to the use of $all.]
Note that if you run the query without $all, the query optimiser uses the compound index.
> db.test.find({
"disable": false,
"edges": { "$elemMatch": { "name": "GOOD" }}})
.sort({"score" : 1}).explain();
{
"cursor" : "BtreeCursor disable_1_edges.name_1_score_1",
"isMultiKey" : true,
...
}
I believe the issue here is that you are using $all. As you can see in your explain output, there are clauses, each pertaining to one of the values you are searching for with $all. So the query has to find all pairs of disable and edges.name for each of the clauses. My intuition is that these two passes with the compound index make it slower than a query that looks directly at edges.name and then weeds out disable. This might be related to this issue and this issue, which you might want to look into.
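Since each $elemMatch here tests only the name field, one thing worth trying (a sketch, not verified against the planner) is expressing the condition directly on edges.name, which gives the planner a single clause instead of one per $all value:
> db.test.find({
disable: false,
"edges.name": { $all: ["GOOD", "GREAT"] }
}).sort({ "score": 1 }).limit(40).explain()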
My query to MongoDB is:
db.records.find({ from_4: { '$lte': 7495 }, to_4: { '$gte': 7495 } }).sort({ from_11: 1 }).skip(60000).limit(100).hint("from_4_1_to_4_-1_from_11_1").explain()
I expect it to use the index from_4_1_to_4_-1_from_11_1:
{
"from_4": 1,
"to_4": -1,
"from_11": 1
}
But I got this error:
error: {
"$err" : "Runner error: Overflow sort stage buffered data usage of 33555322 bytes exceeds internal limit of 33554432 bytes",
"code" : 17144
} at src/mongo/shell/query.js:131
How can I avoid this error?
Maybe I should create another index that better fits my query.
I tried an index with all ascending fields too ...
{
"from_4": 1,
"to_4": 1,
"from_11": 1
}
... but the same error.
P.S. I noticed that when I remove the skip command ...
> db.records.find({ from_4: { '$lte': 7495 }, to_4: { '$gte': 7495 } }).sort({ from_11: 1 }).limit(100).hint("from_4_1_to_4_-1_from_11_1").explain()
...it's OK, I get the explain output, but it says that I don't use the index: "indexOnly" : false
{
"clauses" : [
{
"cursor" : "BtreeCursor from_4_1_to_4_-1_from_11_1",
"isMultiKey" : false,
"n" : 100,
"nscannedObjects" : 61868,
"nscanned" : 61918,
"scanAndOrder" : true,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"from_4" : [
[
-Infinity,
7495
]
],
"to_4" : [
[
Infinity,
7495
]
],
"from_11" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
}
},
{
"cursor" : "BtreeCursor ",
"isMultiKey" : false,
"n" : 0,
"nscannedObjects" : 0,
"nscanned" : 0,
"scanAndOrder" : true,
"indexOnly" : false,
"nChunkSkips" : 0,
"indexBounds" : {
"from_4" : [
[
-Infinity,
7495
]
],
"to_4" : [
[
Infinity,
7495
]
],
"from_11" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
}
}
],
"cursor" : "QueryOptimizerCursor",
"n" : 100,
"nscannedObjects" : 61868,
"nscanned" : 61918,
"nscannedObjectsAllPlans" : 61868,
"nscannedAllPlans" : 61918,
"scanAndOrder" : false,
"nYields" : 832,
"nChunkSkips" : 0,
"millis" : 508,
"server" : "myMac:27026",
"filterSet" : false
}
P.P.S. I have read the MongoDB tutorial about sort indexes and think that I am doing everything right.
Update
According to @dark_shadow's advice, I created 2 more indexes:
db.records.ensureIndex({from_11: 1})
db.records.ensureIndex({from_11: 1, from_4: 1, to_4: 1})
and the index created by db.records.ensureIndex({from_11: 1}) turns out to be what I need:
db.records.find({ from_4: { '$lte': 7495 }, to_4: { '$gte': 7495 } }).sort({ from_11: 1 }).skip(60000).limit(100).explain()
{
"cursor" : "BtreeCursor from_11_1",
"isMultiKey" : false,
"n" : 100,
"nscannedObjects" : 90154,
"nscanned" : 90155,
"nscannedObjectsAllPlans" : 164328,
"nscannedAllPlans" : 164431,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 1284,
"nChunkSkips" : 0,
"millis" : 965,
"indexBounds" : {
"from_11" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "myMac:27025",
"filterSet" : false
}
When you use range queries (and you are), MongoDB doesn't use the index for sorting anyway. You can check this by looking at the "scanAndOrder" value in the explain() output when you test your query. If that value exists and is true, it means the result set is sorted in memory (scan and order) rather than read in order from the index. This is the reason why you are getting the error in your first query.
As the MongoDB documentation says,
For in-memory sorts that do not use an index, the sort() operation is significantly slower. The sort() operation will abort when it uses 32 megabytes of memory.
You can check the value of scanAndOrder for your first query by adding limit(100), which keeps the in-memory sort under that limit.
Your second query works because you have used limit: it only has to keep the top 100 documents while sorting, which can be done in memory.
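A common way to avoid both the deep skip and the sort buffer is to paginate on the sort field itself (a sketch; lastSeenFrom11 is a hypothetical placeholder holding the last from_11 value of the previous page):
> db.records.find({ from_4: { $lte: 7495 }, to_4: { $gte: 7495 }, from_11: { $gt: lastSeenFrom11 } }).sort({ from_11: 1 }).limit(100)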
Why "indexOnly" : false ?
This simply indicates that not all the fields you wish to return are in the index; the BtreeCursor indicates that an index was used for the query (a BasicCursor would mean one was not). For this to be an indexOnly query, you would need to return only those fields that are in the index (that is: { _id: 0, from_4: 1, to_4: 1, from_11: 1 }) in your projection. That would mean it never has to touch the documents themselves and can return everything you need from the index alone. You can check this with explain() once you have modified your query to return only the mentioned fields.
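A sketch of that covered form of the query, with the projection restricted to the indexed fields:
> db.records.find({ from_4: { $lte: 7495 }, to_4: { $gte: 7495 } }, { _id: 0, from_4: 1, to_4: 1, from_11: 1 }).sort({ from_11: 1 }).limit(100).explain()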
Now you may be confused: does it use the index or not? For sorting it won't use the index, but for matching it does. That's the reason you get a BtreeCursor (you should have seen your index name in it as well).
Now, to solve your problem, you can create two indexes:
{
"from_4": 1,
"to_4": 1,
}
{
"from_11" : 1
}
and then see whether it still gives the error, or whether it uses your index for sorting, by carefully observing the scanAndOrder value.
There is one more workaround:
Change the order of the compound index:
{
"FROM_11" : 1,
"from_4": 1,
"to_4": 1,
}
I'm not sure about this approach, but hopefully it should work.
Looking at what you are trying to get, you can also sort with {from_11: -1} and use limit(1868).
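A sketch of that idea: with roughly 61,868 matching documents and 60,000 skipped, sorting in the opposite direction and limiting to the remaining 1,868 reads the same page from the other end (the results then come back in reverse order):
> db.records.find({ from_4: { $lte: 7495 }, to_4: { $gte: 7495 } }).sort({ from_11: -1 }).limit(1868)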
I hope I have made things a bit clearer now. Please do some testing based on my suggestions. If you face any issues, please let me know and we can work on it.
Thanks
I have a MongoDB collection where documents contain the following fields (documents have a lot of fields, but I removed most of them for clarity):
{
"_id": ObjectId("53aad11444d0e2fd648b4567"),
"id": NumberLong(238790),
"rid": NumberLong(12),
"parent_id": {
"0": NumberLong(12),
"1": NumberLong(2)
},
"coid": NumberLong(3159),
"reid": NumberLong(4312),
"cid": NumberLong(4400)
}
When I run a query I get
> db.ads2.find({coid:3159, parent_id : 2}).sort({inserdate:1}).explain()
{
"cursor" : "BtreeCursor coid_1_parent_id_1_insertdate_-1",
"isMultiKey" : true,
"n" : 20444,
"nscannedObjects" : 20444,
"nscanned" : 20444,
"nscannedObjectsAllPlans" : 20444,
"nscannedAllPlans" : 20444,
"scanAndOrder" : true,
"indexOnly" : false,
"nYields" : 319,
"nChunkSkips" : 0,
"millis" : 274,
"indexBounds" : {
"coid" : [
[
3159,
3159
]
],
"parent_id" : [
[
2,
2
]
],
"insertdate" : [
[
{
"$maxElement" : 1
},
{
"$minElement" : 1
}
]
]
},
"server" : "myserver.com:27017",
"filterSet" : false
}
The question is: how should I change the index to make Mongo use it properly?
First of all, your query and the document structure you posted do not match. If you run this query on a collection that contains documents with that kind of structure, you will get no results.
But since your query yielded 20444 results, I guess that your structure actually looks like this:
{
...
"parent_id": [
"0": NumberLong(12),
"1": NumberLong(2)
],
...
}
This is the index you created:
db.x.ensureIndex({coid : 1, parent_id : 1, insertdate : -1});
From your explain() output you can see that MongoDB is using the index to find the documents, because the number of scanned documents is equal to the number of returned documents (n == nscanned). But "scanAndOrder" : true means that MongoDB is not using the index to sort your documents.
Your index should be used for matching and sorting if the field you're sorting on exists in the documents.
But the problem is that I can't see an insertdate field anywhere in your structure. If you're sorting by a field that doesn't exist in your documents, normally MongoDB can't use the index for sorting.
Edit
After your comment and an examination of your original question, I probably found what's causing your problem. You have a typo in the query you're executing: you're specifying the sort parameter as inserdate, while the name of the field that was indexed is insertdate.
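With the typo fixed, the query should be able to use the index for the sort as well:
> db.ads2.find({ coid: 3159, parent_id: 2 }).sort({ insertdate: 1 }).explain()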
I have a sharded collection "my_collection" with the following structure:
{
"CREATED_DATE" : ISODate(...),
"MESSAGE" : "Test Message",
"LOG_TYPE": "EVENT"
}
The MongoDB environment is sharded with 2 shards. The above collection is sharded using a hashed shard key on LOG_TYPE. There are 7 other possible values for the LOG_TYPE attribute.
I have 1 million documents in "my_collection" and I am trying to find the count of documents based on the LOG_TYPE using the following query:
db.my_collection.aggregate([
{ "$group" :{
"_id": "$LOG_TYPE",
"COUNT": { "$sum":1 }
}}
])
But this gets me the result in about 3 seconds. Is there any way to improve this? Also, when I run the explain command, it shows that no index has been used. Does the group command not use an index?
There are currently some limitations in what the aggregation framework can do to improve the performance of your query, but you can help it in the following way:
db.my_collection.aggregate([
{ "$sort" : { "LOG_TYPE" : 1 } },
{ "$group" :{
"_id": "$LOG_TYPE",
"COUNT": { "$sum":1 }
}}
])
By adding a sort on LOG_TYPE you will be "forcing" the optimizer to use an index on LOG_TYPE to get the documents in order. This will improve the performance in several ways, but differently depending on the version being used.
On real data, if the data coming into the $group stage is sorted, it will improve the efficiency of the accumulation of the totals. You can see the different query plans: with $sort, it will use the shard key index. The improvement this gives in actual performance will depend on the number of values in each "bucket" - in general, LOG_TYPE having only seven distinct values makes it an extremely poor shard key, but it does mean that in all likelihood the following code will be a lot faster than even the optimized aggregation:
db.my_collection.distinct("LOG_TYPE").forEach(function(lt) {
print(db.my_collection.count({ "LOG_TYPE": lt }));
});
There are a limited number of things you can do in MongoDB here; at the end of the day, this might be a physical problem that extends beyond MongoDB itself, such as latency causing the config servers to respond slowly, or results being brought back from the shards too slowly.
However, you might be able to solve some performance problems by using a covered query. Since you are in fact sharding on LOG_TYPE, you will already have an index on it (required before you can shard on it); not only that, but the aggregation framework will automatically add a projection, so that won't help either.
MongoDB likely has to contact every shard for the results, in what is otherwise called a scatter-gather operation.
$group on its own will not use an index.
These are my results on 2.4.9:
> db.t.ensureIndex({log_type:1})
> db.t.runCommand("aggregate", {pipeline: [{$group:{_id:'$log_type'}}], explain: true})
{
"serverPipeline" : [
{
"query" : {
},
"projection" : {
"log_type" : 1,
"_id" : 0
},
"cursor" : {
"cursor" : "BasicCursor",
"isMultiKey" : false,
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"nscannedObjectsAllPlans" : 1,
"nscannedAllPlans" : 1,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
},
"allPlans" : [
{
"cursor" : "BasicCursor",
"n" : 1,
"nscannedObjects" : 1,
"nscanned" : 1,
"indexBounds" : {
}
}
],
"server" : "ubuntu:27017"
}
},
{
"$group" : {
"_id" : "$log_type"
}
}
],
"ok" : 1
}
This is the result from 2.6:
> use gthtg
switched to db gthtg
> db.t.insert({log_type:"fdf"})
WriteResult({ "nInserted" : 1 })
> db.t.ensureIndex({log_type: 1})
{ "numIndexesBefore" : 2, "note" : "all indexes already exist", "ok" : 1 }
> db.t.runCommand("aggregate", {pipeline: [{$group:{_id:'$log_type'}}], explain: true})
{
"stages" : [
{
"$cursor" : {
"query" : {
},
"fields" : {
"log_type" : 1,
"_id" : 0
},
"plan" : {
"cursor" : "BasicCursor",
"isMultiKey" : false,
"scanAndOrder" : false,
"allPlans" : [
{
"cursor" : "BasicCursor",
"isMultiKey" : false,
"scanAndOrder" : false
}
]
}
}
},
{
"$group" : {
"_id" : "$log_type"
}
}
],
"ok" : 1
}
I have a collection of documents with a multikey index defined. However, query performance is pretty poor for just 43K documents. Is ~215ms for this query considered poor? Did I define the index correctly, given that nscanned is 43902 (which equals the total number of documents in the collection)?
Document:
{
"_id": {
"$oid": "50f7c95b31e4920008dc75dc"
},
"bank_accounts": [
{
"bank_id": {
"$oid": "50f7c95a31e4920009b5fc5d"
},
"account_id": [
"ff39089358c1e7bcb880d093e70eafdd",
"adaec507c755d6e6cf2984a5a897f1e2"
]
}
],
"created_date": "2013,01,17,09,50,19,274089",
}
Index:
{ "bank_accounts.bank_id" : 1 , "bank_accounts.account_id" : 1}
Query:
db.visitor.find({ "bank_accounts.account_id" : "ff39089358c1e7bcb880d093e70eafdd" , "bank_accounts.bank_id" : ObjectId("50f7c95a31e4920009b5fc5d")}).explain()
Explain:
{
"cursor" : "BtreeCursor bank_accounts.bank_id_1_bank_accounts.account_id_1",
"isMultiKey" : true,
"n" : 1,
"nscannedObjects" : 43902,
"nscanned" : 43902,
"nscannedObjectsAllPlans" : 43902,
"nscannedAllPlans" : 43902,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 213,
"indexBounds" : {
"bank_accounts.bank_id" : [
[
ObjectId("50f7c95a31e4920009b5fc5d"),
ObjectId("50f7c95a31e4920009b5fc5d")
]
],
"bank_accounts.account_id" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "Not_Important"
}
I see three factors in play.
First, for application purposes, make sure that $elemMatch isn't a more appropriate query for this use-case. http://docs.mongodb.org/manual/reference/operator/elemMatch/. It seems like it would be bad if the wrong results came back due to multiple subdocuments satisfying the query.
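A sketch of the $elemMatch form, using the same values as the question's query; this requires both conditions to hold on the same array element:
> db.visitor.find({ bank_accounts: { $elemMatch: { bank_id: ObjectId("50f7c95a31e4920009b5fc5d"), account_id: "ff39089358c1e7bcb880d093e70eafdd" } } }).explain()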
Second, I imagine the high nscanned value can be accounted for by querying on each of the field values independently: .find({ "bank_accounts.bank_id": X }) vs. .find({ "bank_accounts.account_id": Y }). You may see that nscanned for the full query is about equal to nscanned of the largest subquery. If the index key were being evaluated fully as a range, this would not be expected, but...
Third, the { "bank_accounts.account_id" : [[{"$minElement" : 1},{"$maxElement" : 1}]] } clause of the explain plan shows that no range is being applied to this portion of the key.
Not really sure why, but I suspect it has something to do with account_id's nature (an array within a subdocument within an array). 200ms seems about right for an nscanned that high.
A more performant document organization might be to denormalize the account_id -> bank_id relationship within the subdocument, and store:
{"bank_accounts": [
{
"bank_id": X,
"account_id: Y,
},
{
"bank_id": X,
"account_id: Z,
}
]}
instead of:
{"bank_accounts": [{
"bank_id": X,
"account_id: [Y, Z],
}]}
My tests below show that with this organization, the query optimizer gets back to work and applies a range on both keys:
> db.accounts.insert({"something": true, "blah": [{ a: "1", b: "2"} ] })
> db.accounts.ensureIndex({"blah.a": 1, "blah.b": 1})
> db.accounts.find({"blah.a": 1, "blah.b": "A RANGE"}).explain()
{
"cursor" : "BtreeCursor blah.a_1_blah.b_1",
"isMultiKey" : false,
"n" : 0,
"nscannedObjects" : 0,
"nscanned" : 0,
"nscannedObjectsAllPlans" : 0,
"nscannedAllPlans" : 0,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"blah.a" : [
[
1,
1
]
],
"blah.b" : [
[
"A RANGE",
"A RANGE"
]
]
}
}