I have about 1000000 documents in a collection (randomly generated).
Sample document:
{
"loc": {
"lat": 39.828475,
"lon": 116.273542
},
"phone": "",
"keywords": [
"big",
"small",
"biggest",
"smallest"
],
"prices": [
{
"category": "uRgpiamOVTEQ",
"list": [
{
"price": 29,
"name": "ehLYoPpntlil"
}
]
},
{
"category": "ozQNmdwpwhXPnabZ",
"list": [
{
"price": 96,
"name": "ITTaLHf"
},
{
"price": 88,
"name": "MXVgJFBgtwLYk"
}
]
},
{
"category": "EDkfKGZSou",
"list": [
{
"price": 86,
"name": "JluoCLnenOqDllaEX"
},
{
"price": 35,
"name": "HbmgMDfxCOk"
},
{
"price": 164,
"name": "BlrUD"
},
{
"price": 106,
"name": "LOUcsMDeaqVm"
},
{
"price": 14,
"name": "rDkwhN"
}
]
}
]
}
Search without indexes
> db.test1.find({"prices.list.price": { $gt: 190 } }).explain()
{
"cursor" : "BasicCursor",
"isMultiKey" : false,
"n" : 541098,
"nscannedObjects" : 1005584,
"nscanned" : 1005584,
"nscannedObjectsAllPlans" : 1005584,
"nscannedAllPlans" : 1005584,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 8115,
"nChunkSkips" : 0,
"millis" : 13803,
"server" : "localhost:27017",
"filterSet" : false
}
With indexes:
> db.test1.ensureIndex({"prices.list.price":1,"prices.list.name":1})
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
> db.test1.find({"prices.list.price": { $gt: 190 } }).explain()
{
"cursor" : "BtreeCursor prices.list.price_1_prices.list.name_1",
"isMultiKey" : true,
"n" : 541098,
"nscannedObjects" : 541098,
"nscanned" : 868547,
"nscannedObjectsAllPlans" : 541098,
"nscannedAllPlans" : 868547,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 16852,
"nChunkSkips" : 0,
"millis" : 66227,
"indexBounds" : {
"prices.list.price" : [
[
190,
Infinity
]
],
"prices.list.name" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "localhost:27017",
"filterSet" : false
}
Any idea why the indexed search is slower than the one without an index?
I will also use:
db.test1.find({ loc: { $near: [39.876045, 32.862245] } }) (needs a 2d index)
db.test1.find({ keywords: { $in: [ "small", "another" ] } }) (uses an index on keywords)
db.test1.find({ "prices.list.name": /.s./ }) (no need for an index because I will use a regex)
An index allows faster access to the locations of the documents that satisfy the query.
In your example, your query selects half of all the documents in the collection. So even though the index scan provides faster access to know which documents will match the query predicate, it actually creates a lot more work overall.
In a collection scan, the query scans all of the documents, checking the field you are querying on to see if it matches. Half the time it ends up selecting the document.
In an index scan, the query traverses half of all the index entries and then jumps from them directly to the documents that satisfy the query predicate. That's more operations in your case.
In addition, while doing this, the operations yield the read mutex whenever they need to wait for a document to be brought into RAM, or when there is a write waiting to go, and the index scan shows double the number of yields of the collection scan. If you don't have enough RAM for your working set, then adding an index will put more pressure on the existing resources and make things slower, rather than faster.
Try the same query with price compared to a much larger number, like 500 (or whatever would be a lot more selective in your data set). If the query is still slower with an index, then you are likely seeing a lot of page faulting on the system. But if there is enough RAM for the index, then the indexed query will be a lot faster while the unindexed query will be just as slow.
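As a rough sketch of that comparison in the shell (assuming the same test1 collection and index as above; the threshold 500 is arbitrary):

```javascript
// Compare the two plans on a more selective predicate (price > 500).
var withIndex = db.test1.find({ "prices.list.price": { $gt: 500 } }).explain();

// Force a full collection scan with a $natural hint for comparison.
var noIndex = db.test1.find({ "prices.list.price": { $gt: 500 } })
                      .hint({ $natural: 1 })
                      .explain();

print("indexed millis:   " + withIndex.millis);
print("unindexed millis: " + noIndex.millis);
```

If the predicate is selective enough for the touched pages to fit in RAM, the indexed plan should win clearly.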
First, as a suggestion: you will get faster results when querying arrays with $elemMatch.
http://docs.mongodb.org/manual/reference/operator/query/elemMatch/
In your case
db.test1.find({ "prices.list": { $elemMatch: { price: { $gte: 190 } } } })
The second thing is:
To index a field that holds an array value, MongoDB adds index items
for each item in the array. These multikey indexes allow MongoDB to
return documents from queries using the value of an array. MongoDB
automatically determines whether to create a multikey index if the
indexed field contains an array value; you do not need to explicitly
specify the multikey type.
Consider the following illustration of a multikey index:
Diagram of a multikey index on the addr.zip field. The addr field contains an array of
address documents. The address documents contain the zip field.
Multikey indexes support all operations supported by other MongoDB
indexes; however, applications may use multikey indexes to select
documents based on ranges of values for the value of an array.
Multikey indexes support arrays that hold both values (e.g. strings,
numbers) and nested documents.
from http://docs.mongodb.org/manual/core/index-multikey/
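As a minimal shell illustration of the quoted behavior (the addresses collection and its contents are made up for this example):

```javascript
// MongoDB creates one index key per array element automatically.
db.addresses.insert({ name: "Ann", addr: [ { zip: "10001" }, { zip: "94105" } ] });
db.addresses.ensureIndex({ "addr.zip": 1 });

// Either zip value reaches the document through the multikey index;
// explain() reports "isMultiKey" : true for this query.
db.addresses.find({ "addr.zip": "94105" }).explain();
```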
Related
I've read the docs but it does not work as expected. I have 64 million of these documents, all with different "millisecos":
{
"_id" : ObjectId("5396e85c12f43f5d1bafbd13"),
"author" : ObjectId("5396ca2b0fe95cf96599d881"),
"location" : {
"type": "Point",
"coordinates" : [
12.52891929999998,
16.620259
]
},
"name": "Jordan",
"description" : "aDescription"
}
And I have the following 3 indexes in Mongo:
{
// index on the _id
"location" : "2dsphere",
// compound index on location and time_1
}
How do I quickly query on the compound index to get distinct "names"? My query to get all applicable documents is this:
db.mycollection.find({"location": {"$geoWithin": {"$centerSphere": [[-83.3434343, 24.34343], 0.5]}}, "millisecos": {"$gt": 1399522511000, "$lt": 1399526111000}})
I am not sure how to get to the unique "names" of authors I want.
It is slow; is indexing supposed to help? How do I make it fast (without sharding)? If the query is bad, I am open to other recommendations. Does limit or skip help?
Query Plan:
{
"cursor": "S2Cursor",
"isMultiKey": true,
"n": 70394,
"nscannedObjects": 70394,
"nscanned": 10479843,
"nscannedObjectsAllPlans": 70394,
"scanAndOrder": false,
"nYields": 141,
"millis": 175250,
"indexBounds": { },
"nscanned": 10479843,
"matchTested": NumberLong(10024353),
"geoTested": NumberLong(60348),
"cellsInCover": NumberLong(26),
"server": "mongo-cluster.excelmicro:27017"
}
During my hands-on work with MongoDB I came to understand a problem with MongoDB indexes: they sometimes don't enforce both ends of a query's bounds. Here's one of the outputs I encountered while querying the database:
Query:
db.user.find({transaction:{$elemMatch:{product:"mobile", firstTransaction:{$gte:ISODate("2015-01-01"), $lt:ISODate("2015-01-02")}}}}).hint("transaction.product_1_transaction.firstTransaction_1").explain()
Output:
"cursor" : "BtreeCursor transaction.firstTransaction_1_transaction.product_1",
"isMultiKey" : true,
"n" : 622,
"nscannedObjects" : 350931,
"nscanned" : 6188185,
"nscannedObjectsAllPlans" : 350931,
"nscannedAllPlans" : 6188185,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 235851,
"nChunkSkips" : 0,
"millis" : 407579,
"indexBounds" : {
"transaction.firstTransaction" : [
[
true,
ISODate("2015-01-02T00:00:00Z")
]
],
"transaction.product" : [
[
"mobile",
"mobile"
]
]
},
As you can see in the example above, for the firstTransaction field one end of the bound is true instead of the date I specified. I found that the workaround for this is the min() and max() functions. I tried those, but they do not seem to work with embedded documents (transaction is an array of subdocuments containing fields like firstTransaction, product, etc.). I get the following error:
Query:
db.user.find({transaction:{$elemMatch:{product:'mobile'}}}).min({transaction:{$elemMatch:{firstTransaction:ISODate("2015-01-01")}}}).max({transaction:{$elemMatch:{firstTransaction:ISODate("2015-01-02")}}})
Output:
planner returned error: unable to find relevant index for max/min query
The firstTransaction field is indexed, as are product and their compound index. I don't know what is going wrong here.
Sample document:
{
_id: UUID (indexed by default),
name: string,
dob: ISODate,
addr: string,
createdAt: ISODate (indexed),
.
.
.,
transaction:[
{
firstTransaction: ISODate(indexed),
lastTransaction: ISODate(indexed),
amount: float,
product: string (indexed),
.
.
.
},...
],
other sub documents...
}
This is the correct behavior. You cannot always intersect the index bounds for $lte and $gte - sometimes it would give incorrect results. For example, consider the document
{ "x" : [{ "a" : [4, 6] }] }
This document matches the query
db.test.find({ "x" : { "$elemMatch" : { "a" : { "$gte" : 5, "$lte" : 5 } } } });
If we define an index on { "x.a" : 1 }, the two index bounds would be [5, infinity], and [-infinity, 5]. Intersecting them would give [5, 5] and using this index bound would not match the document - incorrectly!
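A throwaway shell sketch that reproduces the situation described above:

```javascript
// Hypothetical test collection demonstrating why the bounds
// [5, infinity] and [-infinity, 5] must not be intersected.
db.test.drop();
db.test.insert({ "x" : [ { "a" : [4, 6] } ] });
db.test.ensureIndex({ "x.a" : 1 });

// This matches: 6 satisfies $gte 5 and 4 satisfies $lte 5,
// even though no single array value equals 5.
db.test.find({ "x" : { "$elemMatch" : { "a" : { "$gte" : 5, "$lte" : 5 } } } });

// With intersected bounds [5, 5] the index keys 4 and 6 would both
// fall outside the scan, and the document would incorrectly be missed.
```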
Can you provide a sample document and tell us more about what you're trying to do with the query? With context, there may be another way to write the query that uses tighter index bounds.
I have a very large collection of documents like:
{ loc: [10.32, 24.34], relevance: 0.434 }
and want to be able to run a query like the following efficiently:
{ "loc": {"$geoWithin":{"$box":[[-103,10.1],[-80.43,30.232]]}} }
with arbitrary boxes.
Adding a 2d index on loc makes this very fast and efficient. However, I now also want to get just the most relevant documents:
.sort({ relevance: -1 })
Which causes everything to slow to a crawl (there can be a huge number of results in any particular box, and I just need the top 10 or so).
Any advice or help is greatly appreciated!
Have you tried using the aggregation framework?
A two stage pipeline might work:
a $match stage that uses your existing $geoWithin query.
a $sort stage that sorts by relevance: -1
Here's an example of what it might look like:
db.foo.aggregate(
{$match: { "loc": {"$geoWithin":{"$box":[[-103,10.1],[-80.43,30.232]]}} }},
{$sort: {relevance: -1}}
);
I'm not sure how it will perform. However, even if it's poor with MongoDB 2.4, it might be dramatically different in 2.6/2.5, as 2.6 will include improved aggregation sort performance.
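Since only the top 10 or so are needed, it may also be worth appending a $limit stage so less data has to move through the pipeline (an untested variation on the example above):

```javascript
db.foo.aggregate(
    {$match: { "loc": {"$geoWithin":{"$box":[[-103,10.1],[-80.43,30.232]]}} }},
    {$sort: {relevance: -1}},
    {$limit: 10}   // keep only the 10 most relevant matches
);
```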
When there is a huge result set matching a particular box, the sort operation is really expensive, so you definitely want to avoid it.
Try creating a separate index on the relevance field and using it (without the 2d index at all): that way the query will be executed much more efficiently - documents (already sorted by relevance) will be scanned one by one and matched against the given geo box condition. Once the top 10 are found, you're good.
It might not be that fast if the geo box matches only a small subset of the collection, though. In the worst case it will need to scan through the whole collection.
I suggest you create both indexes (loc and relevance) and run tests on queries that are common in your app (using mongo's hint to force the index you need).
Depending on your test results, you may even want to add some app logic so that if you know the box is huge you run the query with the relevance index, and otherwise use the loc 2d index. Just a thought.
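A sketch of the relevance-first strategy, reusing the box from the question (untested; the collection and field names are assumptions):

```javascript
// Descending index so the scan starts at the highest relevance.
db.collection.ensureIndex({ relevance: -1 });

// Walk documents in relevance order, filter by the box, stop at 10.
// $geoWithin does not require a geo index, so the hint is legal.
db.collection.find({
    "loc": { "$geoWithin": { "$box": [[-103, 10.1], [-80.43, 30.232]] } }
}).hint({ relevance: -1 }).limit(10);
```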
You cannot get scanAndOrder to be false when you are sorting on part of a compound key like this. Unfortunately, there is currently no solution to your problem, and this is not related to your use of a 2d index.
When you run an explain command on your query, the value of "scanAndOrder" shows whether a sorting phase was needed after collecting the results: if it is true, sorting was needed after the query; if it is false, no sorting was needed.
To test the situation I created a collection called t2 in a sample db this way:
db.createCollection('t2')
db.t2.ensureIndex({a:1})
db.t2.ensureIndex({b:1})
db.t2.ensureIndex({a:1,b:1})
db.t2.ensureIndex({b:1,a:1})
for(var i=0;i++<200;){db.t2.insert({a:i,b:i+2})}
Since you can use only one index to support a query, I ran the following tests, with the results included:
mongos> db.t2.find({a:{$gt:50}}).sort({b:1}).hint("b_1").explain()
{
"cursor" : "BtreeCursor b_1",
"isMultiKey" : false,
"n" : 150,
"nscannedObjects" : 200,
"nscanned" : 200,
"nscannedObjectsAllPlans" : 200,
"nscannedAllPlans" : 200,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"b" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "localhost:27418"
}
mongos> db.t2.find({a:{$gt:50}}).sort({b:1}).hint("a_1_b_1").explain()
{
"cursor" : "BtreeCursor a_1_b_1",
"isMultiKey" : false,
"n" : 150,
"nscannedObjects" : 150,
"nscanned" : 150,
"nscannedObjectsAllPlans" : 150,
"nscannedAllPlans" : 150,
"scanAndOrder" : true,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 1,
"indexBounds" : {
"a" : [
[
50,
1.7976931348623157e+308
]
],
"b" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
]
},
"server" : "localhost:27418"
}
mongos> db.t2.find({a:{$gt:50}}).sort({b:1}).hint("a_1").explain()
{
"cursor" : "BtreeCursor a_1",
"isMultiKey" : false,
"n" : 150,
"nscannedObjects" : 150,
"nscanned" : 150,
"nscannedObjectsAllPlans" : 150,
"nscannedAllPlans" : 150,
"scanAndOrder" : true,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 1,
"indexBounds" : {
"a" : [
[
50,
1.7976931348623157e+308
]
]
},
"server" : "localhost:27418"
}
mongos> db.t2.find({a:{$gt:50}}).sort({b:1}).hint("b_1_a_1").explain()
{
"cursor" : "BtreeCursor b_1_a_1",
"isMultiKey" : false,
"n" : 150,
"nscannedObjects" : 150,
"nscanned" : 198,
"nscannedObjectsAllPlans" : 150,
"nscannedAllPlans" : 198,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"b" : [
[
{
"$minElement" : 1
},
{
"$maxElement" : 1
}
]
],
"a" : [
[
50,
1.7976931348623157e+308
]
]
},
"server" : "localhost:27418"
}
The indexes on the individual fields do not help much, so a_1 (does not support the sorting) and b_1 (does not support the querying) are out. The a_1_b_1 index is also unfortunate: it performs worse than the single a_1, since the MongoDB engine does not exploit the fact that the entries for a single 'a' value are stored in sorted order within it.
What is worth trying is the compound index b_1_a_1, which in your case is relevance_1_loc_1: it returns the results in sorted order, so scanAndOrder is false. I have not tested this with a 2d index, but I assume it can exclude some documents from scanning based on the index value alone (that is why, in this test, nscanned is higher than nscannedObjects). Unfortunately the index will be huge, but still smaller than the documents.
This solution is valid if you need to search inside a box(rectangle).
The problem with a geospatial index is that you can only place it at the front of a compound index (at least that is so for mongo 3.2).
So I thought, why not create my own "geospatial" index? All I need is to create a compound index on Lat, Lng (X, Y) and put the sort field in the first position. Then I need to implement the logic of searching inside the box boundaries myself and specifically instruct mongo to use the index (hint).
Translating to your problem:
db.collection.createIndex({ "relevance": 1, "loc_x": 1, "loc_y": 1 }, { "background": true } )
Logic:
db.collection.find({
"loc_x": { "$gt": -103, "$lt": -80.43 },
"loc_y": { "$gt": 10.1, "$lt": 30.232 }
}).hint("relevance_1_loc_x_1_loc_y_1") // or whatever name you gave it
Use $gte and $lte if you need inclusive results.
And you don't need to use .sort(), since the results are already sorted; or you can do a reverse sort on relevance if you need to.
The only issue I encountered is when the box area is small: it took more time to find small areas than large ones. That is why I kept the geospatial index for small-area searches.
I have converted my old collection, which used a MongoDB "2d" index, to a collection with a GeoJSON-specification "2dsphere" index. The problem is that the query now takes about 11 seconds to execute on a collection of about 200,000 (2 lakh) objects. Previously it took about 100 ms per query. My document is as follows.
{
"_id": ObjectId("4f9c2aa2d142b9882f02a3b3"),
"geonameId": NumberInt(1106542),
"name": "Chitungwiza",
"feature code": "PPL",
"country code": "ZW",
"state": "Harare Province",
"population": NumberInt(340360),
"elevation": "",
"timezone": "Africa\/Harare",
"geolocation": {
"type": "Point",
"coordinates": {
"0": 31.07555,
"1": -18.01274
}
}
}
My explain query output is given below.
db.city_info.find({"geolocation":{'$near':{ '$geometry': { 'type':"Point",coordinates:[73,23] } }}}).explain()
{
"cursor" : "S2NearCursor",
"isMultiKey" : true,
"n" : 172980,
"nscannedObjects" : 172980,
"nscanned" : 1121804,
"nscannedObjectsAllPlans" : 172980,
"nscannedAllPlans" : 1121804,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 13,
"nChunkSkips" : 0,
"millis" : 13841,
"indexBounds" : {
},
"nscanned" : 1121804,
"matchTested" : NumberLong(191431),
"geoMatchTested" : NumberLong(191431),
"numShells" : NumberLong(373),
"keyGeoSkip" : NumberLong(930373),
"returnSkip" : NumberLong(933610),
"btreeDups" : NumberLong(0),
"inAnnulusTested" : NumberLong(191431),
"server" : "..."
}
Please let me know how can I correct the problem and reduce the query time.
The $near command does not require a $maxDistance argument for "2dsphere" indexes, as you suggest. Adding $maxDistance just specified a range that reduced the number of query results to a manageable number. The reason for the difference you experienced after changing from "2d" to "2dsphere" indexes is that "2d" indexes impose a default limit of 100 if none is specified. As you can see, the default query plan for 2dsphere indexes imposes no such limit, so the query scans the entire index ("nscannedObjects" : 172980). If you ran the same query on a "2d" index you would see that "n" and "nscannedObjects" are only 100, which explains the cost discrepancy.
If all of your items were within the $maxDistance range (try it with a $maxDistance of 20 million meters, for instance), you would see the query performance degrade back to where it was without it. In either case, it is very important to use limit() to tell the query planner to scan only the necessary results within the index and prevent runaway queries, especially with larger data sets.
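For instance, a hedged sketch combining both suggestions on the collection above (the 100 km radius and the limit are arbitrary values, not from the original posts):

```javascript
db.city_info.find({
    "geolocation": {
        "$near": {
            "$geometry": { "type": "Point", "coordinates": [73, 23] },
            "$maxDistance": 100000   // meters; pick what fits your use case
        }
    }
}).limit(100);   // cap the result set so the scan cannot run away
```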
I have solved the problem. The $near command requires a $maxDistance argument, as specified here: http://docs.mongodb.org/manual/applications/2dsphere/ . As soon as I supplied $maxDistance, the query time dropped to less than 100 ms.
In MongoDB, I have the following document
{
"_id": { "$oid" : "4FFD813FE4B0931BDAAB4F01" },
"concepts": {
"blabla": 20,
"blibli": 100,
"blublu": 250,
... (many more here)
}
}
And I would like to index it to be able to query on the "keys" of the "concepts" sub-document (I know it's not really a MongoDB array...):
db.things.find({concepts: "blabla"});
Is it possible with the above schema? Or shall I refactor my documents to something like
{
"_id": { "$oid" : "4FFD813FE4B0931BDAAB4F01" },
"concepts": ["blabla","blibli","blublu", ... (many more here)]
}
I'll answer your actual question: no, you cannot index on the field names given your current schema. $exists uses an index, but that is an existence check only.
There are a lot of problems with a schema like the one you're using, and I would suggest refactoring to:
{
"_id": { "$oid" : "4FFD813FE4B0931BDAAB4F01" },
"concepts": [
{name:"blabla", value: 20},
{name:"blibli", value: 100},
{name:"blublu", value: 250},
... (many more here)
]
}
then index {'concepts.name': 1} and you can actually query on the concept names rather than just checking for their existence.
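A short shell sketch of that refactored schema in action (the $elemMatch variant is an extra illustration, not something the answer requires):

```javascript
db.things.ensureIndex({ "concepts.name": 1 });

// Query by concept name alone...
db.things.find({ "concepts.name": "blabla" });

// ...or match name and value together within the same array element.
db.things.find({ "concepts": { "$elemMatch": { "name": "blabla", "value": { "$gte": 100 } } } });
```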
TL;DR : No you can't.
You can query for field presence with a specific query:
db.your_collection.find({"concept.yourfield": { $exists: true }})
(notice the $exists)
It will return all your documents where yourfield is a field of the concepts subdocument.
Edit:
this solution is only about querying. Indexes contain values, not field names.
MongoDB indexes each value of the array so you can query for individual items, as you can find here.
But for nested fields you need to tell MongoDB to index your sub-fields:
db.col1.ensureIndex({'concepts.blabla':1})
db.col1.ensureIndex({'concepts.blublu':1})
db.col1.find({'concepts.blabla': 20}).explain()
{
"cursor" : "BtreeCursor concepts.blabla_1",
"nscanned" : 1,
"nscannedObjects" : 1,
"n" : 1,
"millis" : 0,
"nYields" : 0,
"nChunkSkips" : 0,
"isMultiKey" : false,
"indexOnly" : false,
"indexBounds" : {
"concepts.blabla" : [
[
20,
20
]
]
}
}
After creating the index, the cursor type changes from BasicCursor to BtreeCursor.
If you create your document as you stated at the end of your question:
{
"_id": { "$oid" : "4FFD813FE4B0931BDAAB4F01" },
"concepts": ["blabla","blibli","blublu", ... (many more here)]
}
just indexing it will be enough, as below:
db.col1.ensureIndex({'concepts':1})