I have a question about MongoDB sparse indexes.
I have a collection (post) with very few documents (6K at the most) that can embed a sub-document in this way:
{
  "a": "a-val",
  "b": "b-val",
  "meta": {
    "urls": [ "url1", "url2" ... ],
    "field1": "value1",
    ...
  }
}
The fields "a" and "b" are always present, but "meta.urls" may not exist!
Now, I have inserted just one document with a "meta.urls" value, and then I ran
db.post.ensureIndex({"a": 1, "b": 1, "meta.urls": 1}, {sparse: true});
post stats gives me a "strange" result: the index is about 97 MB!
How is that possible? Only one document contains "meta.urls", yet the index size is 97 MB?
So, I tried to create an index on "meta.urls" alone:
db.post.ensureIndex({"meta.urls": 1}, {sparse: true});
I now have a "meta.urls_1" index containing just 1 document.
But if I explain a simple query like this
db.post.find({"meta.urls": {$exists: true}}).hint("meta.urls_1").explain({verbose: true});
I have another "strange" result:
"n" : 1,
"nscannedObjects" : 5,
"nscanned" : 5,
Why does Mongo scan 5 docs, and not just the one in the index?
If I query for a precise match on "meta.urls", the single sparse index will work correctly.
Example:
db.post.find({"meta.urls": "url1"}).hint("meta.old_slugs_1") // 1 document
For your first question: you can use a compound index to search on a prefix of the keys it indexes. For example, your first index would be used if you searched on just a, or on both a and b. As a consequence, a sparse compound index skips a document only when all of the indexed fields are missing; since a and b are always present, every document still gets an index entry, which is why the compound index is so large.
I don't have an answer for your second question, but you should try updating MongoDB and trying again; it's moving pretty quickly, and sparse indexes have gotten better in the past few months.
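The prefix rule above can be sketched outside MongoDB. This is a toy check, not the real planner (which also handles unordered query fields and range predicates); it only illustrates why searching on a, or on a and b, can use the compound index, while searching on "meta.urls" alone cannot:

```javascript
// Toy model of the compound-index prefix rule (not MongoDB's real planner):
// a compound index can serve a query whose fields form a prefix of the
// index's key list.
function canUseIndex(indexKeys, queryFields) {
  // Every query field must match the index key at the same position.
  const prefix = indexKeys.slice(0, queryFields.length);
  return queryFields.every((field, i) => prefix[i] === field);
}

const idx = ["a", "b", "meta.urls"];
console.log(canUseIndex(idx, ["a"]));          // prefix -> usable
console.log(canUseIndex(idx, ["a", "b"]));     // prefix -> usable
console.log(canUseIndex(idx, ["meta.urls"]));  // not a prefix -> not usable
```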
Related
I'm using MongoDB version 4.2.0. I have a collection with the following indexes:
{uuid: 1},
{unique: true, name: "uuid_idx"}
and
{field1: 1, field2: 1, _id: 1},
{unique: true, name: "compound_idx"}
When executing this query
aggregate([
{"$match": {"uuid": <uuid_value>}}
])
the planner correctly selects uuid_idx.
When adding this sort clause
aggregate([
{"$match": {"uuid": <uuid_value>}},
{"$sort": {"field1": 1, "field2": 1, "_id": 1}}
])
the planner selects compound_idx, which makes the query slower.
I would expect the sort clause to not make a difference in this context. Why does Mongo not use the uuid_idx index in both cases?
EDIT:
A little clarification, I understand there are workarounds to use the correct index, but I'm looking for an explanation of why this does not happen automatically (if possible with links to the official documentation). Thanks!
Why is this happening?
Let's understand how Mongo chooses which index to use, as explained here.
If a query can be satisfied by multiple indexes defined in the collection ("satisfied" is used loosely here, as Mongo actually gathers all possibly relevant indexes), MongoDB will test all the applicable plans in parallel. The first index that returns 101 results is selected by the query planner.
Meaning that, for that particular query, that index actually wins the race.
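The trial run can be pictured roughly like this. This is a simplified sketch, not MongoDB's real plan ranking (which also scores work done and handles plans that finish early); it only shows how a plan that produces results faster during the trial can beat a more selective one:

```javascript
// Simplified sketch of the plan trial: candidate plans advance in
// round-robin; the first to accumulate 101 results wins.
function pickWinner(plans, target = 101) {
  const produced = plans.map(() => 0);
  while (true) {
    for (let i = 0; i < plans.length; i++) {
      // resultsPerStep: how many results this plan yields per unit of work
      produced[i] += plans[i].resultsPerStep;
      if (produced[i] >= target) return plans[i].name;
    }
  }
}

const winner = pickWinner([
  { name: "uuid_idx", resultsPerStep: 1 },     // selective, but slow to emit
  { name: "compound_idx", resultsPerStep: 5 }, // emits sorted results faster in the trial
]);
console.log(winner); // "compound_idx"
```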
What can we do?
We can use hint. A hint basically forces Mongo to use a specific index; however, this is not recommended, because if your data or indexes change, Mongo will not adapt and will keep using the hinted index.
The query:
aggregate(
[
{ $match : { uuid : "some_value" } },
{ $sort : { fld1: 1, fld2: 1, _id: 1 } }
],
)
doesn't use the index "uuid_idx".
There are a couple of options you can use to get indexes working for both the match and sort operations:
(1) Define a new compound index: { uuid: 1, fld1: 1, fld2: 1, _id: 1 }
Both the match and match+sort queries will use this index (for both the match and sort operations).
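Why option (1) works can be sketched with a toy check (not MongoDB's real planner, just an illustration of the rule): the equality field forms a prefix of the index, and the sort keys follow it immediately in index order, so the index returns the matching documents already sorted:

```javascript
// Toy check: an index supports an equality-match-then-sort query when the
// equality fields are a prefix of the index keys and the sort keys follow
// them immediately, in order.
function supportsMatchThenSort(indexKeys, equalityFields, sortKeys) {
  const prefix = indexKeys.slice(0, equalityFields.length);
  const next = indexKeys.slice(
    equalityFields.length,
    equalityFields.length + sortKeys.length
  );
  return (
    equalityFields.every((f, i) => prefix[i] === f) &&
    sortKeys.every((f, i) => next[i] === f)
  );
}

// The proposed index serves both the match on uuid and the sort:
console.log(
  supportsMatchThenSort(
    ["uuid", "fld1", "fld2", "_id"],
    ["uuid"],
    ["fld1", "fld2", "_id"]
  )
); // true
```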
(2) Use the hint on the uuid index (using existing indexes)
Both the match and match+sort queries will use this index (for both the match and sort operations).
aggregate(
[
{ $match : { uuid : "some_value" } },
{ $sort : { fld1: 1, fld2: 1, _id: 1 } }
],
{ hint: "uuid_idx"}
)
If you can use find instead of aggregate, it will use the right index. So this is still a problem in the aggregation pipeline.
I am querying a collection where for two array fields I have specific values and for two other fields I only want { $exists : true }.
i.e.
db.collection.aggregate([
  { $match: { array: { $elemMatch: { field1: 'value1', field2: 'value2', field3: { $exists: true }, field4: { $exists: true } } } } },
  { $unwind: "$array" },
  { $project: ... }
])
I have created three indexes:
index 1: { field1: 1, field2: 1 }
index 2: { field1: 1, field2: 1, field3: 1 }
index 3: { field1: 1, field2: 1, field3: 1, field4: 1 }
When I try the explain() method on the query, the winning plan always picks index 1.
Can I create a compound index where all four fields are included to speed up my query? (I have tried partial indexes on fields 3 and 4, but it made no difference.)
This is a question about how to create efficient indexes when a query contains $or. Without $or, I know how to create an efficient index.
This is my query.
db.collection.find({
  'msg.sendTime': { $gt: 1 },
  'msg.msgType': { $in: ["chat", "g_card"] },
  $or: [ { 'msg.recvId': { $in: ['xm80049258'] } }, { 'msg.userId': 'xm80049258' } ]
}).sort({ 'msg.sendTime': -1 })
After reading some articles, I created two single-field indexes, on msg.recvId and msg.userId, and that makes sense.
I want to know: when MongoDB executes the $or, does it split the documents into the two clauses at the very first step, and then apply msg.sendTime and msg.msgType?
How do I create efficient indexes in this case? Should I create the indexes (msg.sendTime: 1, msg.msgType: 1, msg.recvId: 1) and
(msg.sendTime: 1, msg.msgType: 1, msg.userId: 1)?
Thanks very much.
Paraphrasing from $or Clauses and Indexes:
When evaluating the clauses in the $or expression, MongoDB either performs a collection scan or, if all the clauses are supported by indexes, MongoDB performs index scans. That is, for MongoDB to use indexes to evaluate an $or expression, all the clauses in the $or expression must be supported by indexes.
Also from Indexing Strategies:
Generally, MongoDB only uses one index to fulfill most queries. However, each clause of an $or query may use a different index
What those paragraphs mean for $or queries is:
In a find() query, only one index can be used. Therefore it's best to create an index that aligns with the fields in your query. Otherwise, MongoDB will do a collection scan.
Except when the query is an $or query, where MongoDB can use one index per $or term
In combination, if you have $or in your query, it's best to put the $or term as the top-level term, and create an index for each term separately
So to answer your question:
I want to know when mongodb execute "or", Is it divides all documents at very first ,then use msg.sendTime and msg.msgType ?
If your query has a top-level $or clause, MongoDB can use one index per clause. Otherwise, it will do a collection scan, or a semi-collection scan. For example, if you have an index:
db.collection.createIndex({a: 1, b: 1})
There are two general types of query you can create:
1. $or NOT on the top level of the query
This query can use the index, but will not be performant:
db.collection.find({a: 1, $or: [{b: 1}, {b: 2}]})
since the explain() output of the query is:
> db.collection.explain().find({a: 1, $or: [{b: 1}, {b: 2}]})
{
"queryPlanner": {
...
"indexBounds": {
"a": [
"[1.0, 1.0]"
],
"b": [
"[MinKey, MaxKey]"
]
...
Note that the query planner cannot use a tight bound for the b field and is effectively doing a semi-collection scan: it searches for b from MinKey to MaxKey, i.e. everything. The query planner result above is basically saying: "Find documents where a = 1, and scan all of them for b having a value of 1 or 2."
2. $or on the top level of the query
However, pulling the $or clause to the top-level:
db.collection.find({$or: [{a: 1, b: 1}, {a: 1, b: 2}]})
will result in this query plan:
> db.test.explain().find({$or: [{a: 1, b: 1}, {a: 1, b: 2}]})
{
"queryPlanner": {
...
"winningPlan": {
"stage": "SUBPLAN",
...
"inputStages": [
{
"stage": "IXSCAN",
...
"indexBounds": {
"a": [
"[1.0, 1.0]"
],
"b": [
"[1.0, 1.0]"
]
}
},
{
"stage": "IXSCAN",
...
"indexBounds": {
"a": [
"[1.0, 1.0]"
],
"b": [
"[2.0, 2.0]"
]
Note that each term of the $or is treated as a separate query, each with a tight boundary. As such, the query plan above is saying: "Find documents where a = 1, b = 1 or a = 1, b = 2". As you can imagine, this query will be much more performant compared to the earlier query.
For your second question:
How to create efficient indexes in this case? Should I create indexes (msg.sendTime:1,msg.msgType:1,msg.recvId:1) and (msg.sendTime:1,msg.msgType:1,msg.userId:1)
As explained above, you need to combine the proper query with the proper index to achieve the best result. MongoDB will be able to use the two indexes you proposed, and they will work best if you rearrange your query so that the $or is at the top level.
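Concretely, the rearranged filter (using the field names and example value from your question) repeats the shared predicates inside each $or branch, so each branch can be answered by one of the two proposed indexes on its own. A sketch of the filter document:

```javascript
// Shared predicates duplicated into each branch of the top-level $or.
const shared = {
  "msg.sendTime": { $gt: 1 },
  "msg.msgType": { $in: ["chat", "g_card"] },
};

const rewrittenFilter = {
  $or: [
    // can use the (msg.sendTime, msg.msgType, msg.recvId) index
    { ...shared, "msg.recvId": { $in: ["xm80049258"] } },
    // can use the (msg.sendTime, msg.msgType, msg.userId) index
    { ...shared, "msg.userId": "xm80049258" },
  ],
};

console.log(rewrittenFilter.$or.length); // 2
```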
I encourage you to understand the explain() output of MongoDB, since it's the best tool to find out if your queries are using the proper indexes or not.
Relevant resources that you may find useful are:
Explain Results
Create Indexes to Support Your Queries
Indexing Strategies
I have two collections, customSchemas, and customdata. Besides the default _id index, I've added the following indexes
db.customData.createIndex( { "orgId": 1, "contentType": 1 });
db.customSchemas.createIndex( { "orgId": 1, "contentType": 1 }, { unique: true });
I've decided to enforce orgId on all calls, so in my service layer, every query has an orgId in it, even the ones with ids, e.g.
db.customData.find({"_id" : ObjectId("557f30402598f1243c14403c"), orgId: 1});
Should I add an index that has both _id and orgId in it? Do the indexes I have currently help at all when I'm searching by both _id and orgId?
MongoDB 2.6+ provides an index intersection feature that covers your case by intersecting the _id index { _id: 1 } with the orgId prefix of { "orgId": 1, "contentType": 1 }.
So your query { "_id": ObjectId("557f30402598f1243c14403c"), orgId: 1 } should already be covered by your existing indexes.
However, index intersection is less performant than a compound index on { _id: 1, orgId: 1 }, as it comes with an extra step (intersecting the two candidate sets). Hence, if this is a query that you use most of the time, creating the compound index on it is a good idea.
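The extra step can be pictured with a toy model (not MongoDB internals): each index scan yields a set of candidate document ids, and the intersection is an additional pass over both sets, which a single compound index lookup avoids:

```javascript
// Toy model of index intersection: combine candidate ids from two separate
// index scans. A compound index on { _id: 1, orgId: 1 } would locate the
// document in one scan, with no intersection pass.
function intersectIds(fromIdIndex, fromOrgIdIndex) {
  const seen = new Set(fromOrgIdIndex);
  return fromIdIndex.filter((id) => seen.has(id)); // the extra step
}

console.log(intersectIds(["doc1"], ["doc1", "doc7", "doc9"])); // [ 'doc1' ]
```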
Let's say I have the following document structure:
{ _id: 1,
  items: [ { n: "Name", v: "Kevin" }, ..., { n: "Age", v: 100 } ],
  records: [ { n: "Environment", v: "Linux" }, ..., { n: "RecordNumber", v: 555 } ]
}
If I create 2 compound indexes, on items.n/items.v and records.n/records.v, I can perform an $all query:
db.collection.find( {"items" : {$all : [{ $elemMatch: {n: "Name", v: "Kevin"} },
{$elemMatch: {n: "Age", v: 100} } ]}})
I could also perform a similar search on records.
db.collection.find( {"records" : {$all : [{ $elemMatch: {n: "Environment", v: "Linux"} },
{$elemMatch: {n: "RecordNumber", v: 555} } ]}})
Can I somehow perform a query that uses the index(es) to search for a document based on the items and records field?
find all documents where items.n = "Name" and items.v = "Kevin" AND records.n = "RecordNumber" and records.v = 555
I'm not sure that this is possible using $all.
You can use an index to query on one array, but not both. Per the documentation: "While you can create multikey compound indexes, at most one field in a compound index may hold an array."
Practically:
You can use a Compound index to index multiple fields.
You can use a Multikey index to index all the elements of an array.
You can use a Multikey index as one element of a compound index
You CANNOT index more than one array field in a single compound index
The documentation lays out the reason for this pretty clearly:
MongoDB does not index parallel arrays because they require the index to include each value in the Cartesian product of the compound keys, which could quickly result in incredibly large and difficult to maintain indexes.
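That blow-up is easy to quantify with a back-of-the-envelope sketch (the array lengths below are hypothetical, chosen just for illustration): a compound multikey index over both arrays would need one entry per pair of elements from the two arrays.

```javascript
// One index entry per (items element, records element) pair: the Cartesian
// product the documentation warns about.
function parallelArrayEntries(itemsLength, recordsLength) {
  return itemsLength * recordsLength;
}

// e.g. 1,000 items and 500 records in a single document:
console.log(parallelArrayEntries(1000, 500)); // 500000 entries for ONE document
```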