For example, I have documents with only three fields: user, date, status. Since I select by user and sort by date, I have a compound index on those two fields. That seems like the proper thing to do. However, since each date has only one status, I am essentially indexing everything. Is it okay to not index all fields in a query? Where do you draw the line?
What makes this question more difficult is that read-heavy and write-heavy collections call for completely opposite approaches to indexing. If yours is somewhere in between, how do you determine the proper approach when it comes to indexes?
Is it okay to not index all fields in a query?
Yes, but you'll want to avoid this for frequently used queries. Anything not indexed will imply a "table scan" (in MongoDB terms, a full collection scan). This means inspecting each possible document individually, which will be slow.
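As a quick check (a sketch; the collection name and values are hypothetical), explain() reveals whether a query uses an index or falls back to a collection scan:

db.mycollection.find({user: "alice"}).sort({date: 1}).explain()
// With an index on {user: 1, date: 1} the plan reports an index cursor;
// without it, a basic cursor, i.e. a full collection scan.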
Where do you draw the line?
Also note that if you sort by an un-indexed field, MongoDB will "yell at you" (abort the query) if you try to sort too much data in memory. So you have to have some awareness of how much data falls "outside of" the index.
If yours is somewhere in between, how do you determine the proper approach when it comes to indexes?
Monitoring, instrumenting, experimenting and experience.
There is no hard and fast rule here; it's all going to be about trade-offs: CPU vs. RAM vs. disk I/O vs. responsiveness, etc.
The perfect situation is to store everything in a single index. By everything I mean all the fields you query on, sort by, and retrieve. This will ensure you get maximum performance (provided the index fits in RAM).
This situation is not always possible, so you'll have to make choices.
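For example, using the user/date/status fields from the question, a fully covered setup might look like this (a sketch; the collection name is hypothetical):

db.orders.ensureIndex({user: 1, date: 1, status: 1})
// Filter, sort, and projection all stay inside the index, so the query is covered:
db.orders.find({user: "alice"}, {_id: 0, user: 1, date: 1, status: 1}).sort({date: 1})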
Here are three tips to keep the index size to a minimum:
Does each of your queries return many results or only a few? If only a few: you do not have to index all the fields you retrieve, only the query and sort fields (few results mean few disk accesses).
Are your query results often the same (i.e. your working set is small)? If so: don't index the fields you retrieve, because the results are cached by MongoDB anyway.
Is one query field more selective than another? If so: index only the more selective field (see the sketch after this list).
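A sketch of that last tip (collection and field names are hypothetical): if user is highly selective and status takes only a handful of values, an index on user alone keeps the index small, and MongoDB filters the few matching documents on status in memory:

db.orders.ensureIndex({user: 1})
db.orders.find({user: "alice", status: "open"})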
Related
I am designing a MongoDB collection that will have 50 million documents, and every field in the document will be searchable and sortable. The search and sort logic will be sent from the frontend, so there could be many combinations of search and sort fields. I've run some tests and concluded that when searching and sorting only on indexed fields the query runs very fast, but when searching or sorting on non-indexed fields the query runs very slow.
Given the large number of possible search/sort combinations, how can I build indexes on this collection to get better performance?
Indexing comes at the cost of extra memory space and possibly increased execution time of database write (insert and update) operations. However, as you rightly pointed out, indexing makes database reads (and sorting) super fast.
Creating indexes is easy and straightforward; however, you need to consider the trade-offs. Most of the time this comes down to the read-write ratio of the fields in your documents.
If you frequently read (or sort) documents in a very large collection (like the 50 million documents you mentioned), it makes a lot of sense to index all the fields you use to identify (or sort) your documents; you just need to ensure you don't run out of memory space in the DB. Not indexing the fields would be very frustrating: just imagine needing to get the last document by a field that is not indexed, you would have to scan through 49,999,999 documents to find it.
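A minimal sketch of that approach (the collection and field names are hypothetical), including a check that the indexes still fit in memory:

db.bigCollection.ensureIndex({name: 1})
db.bigCollection.ensureIndex({createdAt: 1})
// stats().indexSizes reports the size of each index in bytes:
db.bigCollection.stats().indexSizes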
I hope this helps.
I have a collection that I am updating to add a new field.
The document looks like:
{"A": "P145", "B":"adf", "C":[{"df":"14", "color":"blue"},{"df":17}],
"_id":ObjectID(....), "Synonyms":{"Synonym1": "value1",
"Synonym2": ["value1", "value2"]}}
In the update I am adding new elements to C.
I want to create an index on the fields A and B; there are 20206 unique A/B values. The queries to the database will be based on these fields.
The "_id" is set by default.
I plan to do it with collection.ensure_index([("A", 1), ("B", 1)], background=True) (in PyMongo, a compound index is specified as a list of (key, direction) pairs).
How much time could it take? And will queries using it be faster than with the default index on "_id"?
The amount of time it takes to add the index will depend on your hardware, but with 20206 records a simple index like the one you describe shouldn't take very long on most hardware.
Queries fully covered by the index (i.e. where you specify A and B, or just A, but not just B) will be much faster to retrieve results; indexes cover from left to right, so unless you include A in the query, the index can't be used. Unless you are searching by _id, the default index on _id won't help you at all: without your proposed index, queries on A and B will have to perform a full collection scan, which is orders of magnitude slower than an index scan.
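To illustrate the left-to-right (prefix) rule with the proposed {A: 1, B: 1} index (the values are hypothetical):

db.collection.find({"A": "P145"})               // can use the index (prefix on A)
db.collection.find({"A": "P145", "B": "adf"})   // can use the index fully
db.collection.find({"B": "adf"})                // cannot use the index (A missing)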
Inserts will be slightly slower as the index will need to be updated too, but again, with a relatively small number of total documents, this isn't likely to be a large overhead.
The updates that change the C array may well be faster if you are using A and B to identify which document to update, as they will benefit from the faster search; the update itself should not be impacted once the document is found, as the index should not need changing.
As absolute performance will be specific to your hardware, if you're concerned about it, the best thing to do is try it out on a copy of the data (on similar hardware) and measure whether the performance meets your needs. The output from explaining the query can be very informative in understanding how your indexes affect query performance.
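For example (a sketch; the values are hypothetical):

db.collection.find({"A": "P145", "B": "adf"}).explain()
// The explain output shows which index (if any) was used and how many
// documents were examined versus returned.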
Well, the time taken to create the index depends entirely on the hardware (system) you are using and the number of records. For ~20K records it should be quick; a few seconds at most in the worst case. A little off topic, but I see that you have passed the background=True option; it is probably not needed, as background builds are intended for creating indexes on very large data sets. Please consider a few things when creating indexes, not only for this question but in general:
When you create an index in the foreground, it blocks other operations and doesn't allow reads; that is the reason background=True is used. See http://docs.mongodb.org/v2.2/administration/indexes/
The good part of foreground index creation is that the resulting indexes are more compact than those built in the background; hence it should be preferred when the blocking is acceptable.
The good news is that over the long run, background and foreground index creation deliver the same query performance; it doesn't matter which way the indexes were created. ... Happy Mongoing.. ;-)
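In the mongo shell the two variants look like this (the collection name is hypothetical):

db.records.ensureIndex({"A": 1, "B": 1})                      // foreground: blocking, more compact
db.records.ensureIndex({"A": 1, "B": 1}, {background: true})  // background: non-blocking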
Scenario
I have a Mongo collection Items that holds dynamic item objects. Currently I have over 3 million records. I'm using C# with MongoSharp, but I don't think that has anything to do with my problem.
Here is an example Item (it has a lot more fields than just 3):
{
_id : "1234567890",
Code : 888596937,
RefNumber : "GHTZKL",
...
}
AFAIK there is no point in using TextSearch since these aren't really words, just codes, so it won't give me anything beneficial. I also cannot index them all, since then I would have to index every single field.
Problem
Right now, filtering the data takes about 1-3 seconds (on an SSD). Is there any way I can make it filter my items faster, or is this as fast as it gets?
You don't mention what field you want to search on, but it sounds like you want to search on any arbitrary attribute. This is a common design and borders on an antipattern for MongoDB. The only way to avoid the collection scan you're getting now is to index the fields you want to search on, but indexing every field when you don't know what the fields will be ahead of time isn't possible. The solution is to name only the common fields (and index them), then group the other fields into name/value pairs in an array in the document. You can then index that array to get your fast searches, as sketched below.
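A sketch of that pattern (the props/k/v names are hypothetical):

{
    _id: "1234567890",
    Code: 888596937,    // common field, indexed directly
    props: [
        {k: "RefNumber", v: "GHTZKL"},
        {k: "Color", v: "blue"}
    ]
}

One multikey index then serves searches on any attribute:

db.Items.ensureIndex({"props.k": 1, "props.v": 1})
db.Items.find({props: {$elemMatch: {k: "RefNumber", v: "GHTZKL"}}})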
A word of caution on NVP arrays: if your array gets very large (hundreds of attributes), your index size will blow up spectacularly. It's best to keep the array fairly small.
For more information on this design pattern, see Asya's great writeup.
I am doing a range query on _id and need to return only one particular field ("data") from the found documents. I would like to make this query indexOnly for optimal performance.
Here is the query:
db.collection.find({_id:{$gte:"c",$lte:"d"}},{_id:0,data:1})
This query is of course not indexOnly so I need to add another index:
db.collection.ensureIndex({_id:1,data:1})
and tell MongoDB to use that Index with:
db.collection.find({_id:{$gte:"c",$lte:"d"}},{_id:0,data:1}).hint({_id:1,data:1})
(The hint is needed because otherwise MongoDB will use the standard _id index for the query.)
This works as expected and makes the query indexOnly. However, one cannot delete the default _id index even though it is no longer needed, which leads to a lot of wasted space for the duplicated index. It is also annoying to be forced to always use hint() in the query.
So I am wondering if there is a smarter way to do this.
I don't believe that there is any way to do what you want. The _id index cannot be removed, and you need to have the second index in order to perform a covered (indexOnly) query on your data.
Do you really need to have only a single index? I suspect you actually have a requirement for either increased speed or reduced disk usage, but not both. If you truly require both, you may need to look for a different database solution, since all the techniques used to speed up MongoDB queries (indexes, covered queries, sharding, etc.) trade increased disk usage for the speed boost they provide.
EDIT:
Also, if the call to hint() is bugging you, you can probably leave it off, since MongoDB will eventually re-optimize its query plan, at which point it will switch over to your new index if it really is faster.
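To see whether the query stays covered once the hint is dropped, you can inspect the explain output (a sketch using the query from the question):

db.collection.find({_id: {$gte: "c", $lte: "d"}}, {_id: 0, data: 1}).explain()
// In the explain output, "indexOnly": true confirms a covered query,
// and "cursor" shows which index was chosen.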
In Mongo, suppose I have a collection mycollection that has fields a, b, and huge. I very frequently want to perform queries, mapreduce, updates, etc. on a and b, and only occasionally want to return huge in query results as well.
I know that db.mycollection.find() will scan the entire collection and result in Mongo attempting to add the whole collection to the working set, which may exceed the amount of RAM I have available.
If I instead call db.mycollection.find({}, { a : 1, b : 1 }), will this still result in the whole collection being added to the working set or only the terms of my projection?
MongoDB can use something called covered queries ( http://docs.mongodb.org/manual/applications/indexes/#create-indexes-that-support-covered-queries ): these allow you to load all the values from the index rather than from the documents themselves, whether those are on disk or in memory at the time.
Be warned that you cannot use covered queries on a full table scan; the condition, projection, and sort must all be within the index. For example:
db.col.ensureIndex({a: 1, b: 1});
db.col.find({a: 1}, {_id: 0, a: 1, b: 1}).sort({b: 1});
This would work (the sort is optional; the query is covered with or without it). You can add _id to your index if you intend to return it too.
Map Reduce does not support covered queries; there is no way to project only a subset of fields into the MR, as far as I know, though maybe there is some hack I do not know of. Map Reduce only supports a $match-like operator as its input query, with a separate parameter for sorting the incoming documents ( http://docs.mongodb.org/manual/applications/map-reduce/ ).
Note that for updates, I believe only atomic operations ( http://docs.mongodb.org/manual/tutorial/isolate-sequence-of-operations/ ), excluding findAndModify, avoid loading the document into your working set; however, "believe" is the keyword there.
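A sketch of such an atomic, in-place update, using the a and b fields from the question (and assuming b is numeric):

// Modifiers like $inc and $set are applied server-side, without
// returning the matched document to the client:
db.mycollection.update({a: 1}, {$inc: {b: 1}})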
Considering you need to do both MR and normal find-and-update on these records, I would strongly recommend you look into why you are paging in so much data and whether you really need to do it that often. It seems like you are trying to do too much processing in short, frequent bursts.
On the other hand, if this is a script that runs every night or so, then I would not worry too much about its excessive working set (e.g. a scoreboard recalculation script).