Do I need composite indices if each attribute is indexed in a MongoDB collection?

Suppose I have a collection in a mongo database with the following documents
{
"name" : "abc",
"email": "abc#xyz.com",
"phone" : "+91 1234567890"
}
The collection has a lot of objects (a million or so), and my application, apart from regularly adding objects to this collection, does a few different types of finds on this data.
One method does a find with all three attributes (name, email and phone), so I can create a composite index on those three fields to make sure this find works efficiently.
db.mycollection.ensureIndex({name:1,email:1,phone:1})
Now, I also have methods in my application which fetch all the objects with the same name (bad example, I know). So I need an index for the name field.
db.mycollection.ensureIndex({name:1})
Gradually, my application grows to a point where I have to index the other fields.
Now, my question. If I have each of the attributes indexed individually, does it still make sense to maintain composite indices for all three attributes (or 2 of the attributes)?
Obviously, this is a bad example... If I were making a collection to store multiple contact info for a person, I'd use arrays. But, this question is purely about the indexes.

It depends on your queries.
If you are doing a query such as:
db.mycollection.find({"name": "abc", email: "abc@xyz.com", phone: "+91 1234567890"});
then a composite index would be the most efficient.

Just to answer my own question for the sake of completeness:
A compound index doesn't mean that each of the individual attributes is indexed; only a prefix of the compound index (and hence the first attribute on its own) can be used efficiently in a find. The idea is to strike a balance and optimize queries, since too many indexes increase disk storage and insertion time.
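To see the prefix rule in action, here is a minimal sketch (using the fields from the question; the exact explain() output varies by server version):
db.mycollection.ensureIndex({name:1, email:1, phone:1})
// served efficiently -- these queries are prefixes of the compound index:
db.mycollection.find({name: "abc"})
db.mycollection.find({name: "abc", email: "abc@xyz.com"})
// not a prefix -- this query cannot use the compound index efficiently:
db.mycollection.find({email: "abc@xyz.com"})
In particular, a separate {name:1} index is redundant next to the compound one, while separate indexes on email or phone are not.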

Related

What should the indexing strategy be to support queries that are a combination of different fields?

Let's say I have a User collection, where a document looks like this
{
"name": "Starlord",
"age": 24,
"gender": "Male",
"height": 180,
"weight": 230,
"hobbies": "Flying Spaceships"
}
Now, I want someone to be able to search for User based on one or more of these fields. So I add a compound index containing all these fields in the order above.
The issue is that MongoDB indexing works great when the query fields are a prefix of the indexed fields. For example, if I query by name, age and gender then the performance of the query is great. If I query by name, gender and weight, then the performance of the query is not so great (although it still uses the index and is faster than no-index).
What indexing strategy do you use when you have a use case like this?
The reason why your query by name, age and gender works great while the query by name, gender and weight does not is that the order of the fields matters significantly for compound indexes in MongoDB, specifically through the index's prefixes. As explained in this page of the documentation, a compound index can support queries on any prefix of its fields. So, assuming you created the index in the order you presented the fields, the query for name, age and gender is a prefix of your compound index, while name, gender and weight can only take advantage of the name part of the index.
Supporting all possible combinations of queries on these fields would require you to create enough compound indexes so that all possible queries are prefixes of your indexes. I would say that this is not something you would want to do. Since your question asks about indexing strategies for queries with multiple fields, I would suggest that you look into the specific data access patterns that are most useful for your data set and create a few compound indexes that support these, taking advantage of the prefixes concept and omitting certain fields with low cardinality from the index, such as gender.
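As an illustration, a hedged sketch of that strategy with the fields from this question (which access patterns are worth covering is an assumption here):
// covers queries on {name}, {name, age} and {name, age, height}:
db.users.ensureIndex({name:1, age:1, height:1})
// covers queries on {age} and {age, weight}:
db.users.ensureIndex({age:1, weight:1})
// gender is deliberately omitted (low cardinality) and filtered
// as a residual predicate on the documents the indexes return.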
If you need to be able to query for all combinations, the number of indexes required explodes quickly. The feature that comes to the rescue is called "index intersection".
Create a simple index on each field and trust the query optimizer to perform the correct index intersection. This feature is relatively new (introduced in 2.6) and not as feature-complete as in the well-known RDBMSes. It makes sense to track the Jira ticket for index intersections to know the limitations, because the limitations are quite severe. It usually makes sense to carefully mix simple indexes (which can be intersected) and compound indexes (for very common queries).
In your specific case, you can utilize the fact that many fields are numeric and the range of valid values is very limited (e.g., for age, height and weight). The gender field has low selectivity and shouldn't be indexed in any case. Filter the gender in the last step, because it will, on average, only double the amount of data that must be processed.
Creating n! compound indexes is almost certainly not an option for n > 3...
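A minimal sketch of the intersection approach (whether the optimizer actually chooses an intersection plan depends on the server version and the data, so verify with explain()):
db.users.ensureIndex({age:1})
db.users.ensureIndex({height:1})
db.users.ensureIndex({weight:1})
// an AND_SORTED or AND_HASH stage in the explain() output indicates
// that two single-field indexes were intersected for this query:
db.users.find({age: 24, height: 180}).explain()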

MongoDB index on many (nested) fields/attributes

In e-commerce application I have documents like this:
{ category:'A', ..., price:122,
attr:{ width:6, height:4, hasLCD:true, lcdType:'some text', ..., a36:null }
}
I.e. every product has many attributes of various simple types.
Now I want to filter products by dynamic queries containing top level fields plus some attributes. For example:
find({category:'A', price:{$lt:200}, ...,
'attr.height':{$lt:6}, 'attr.hasLCD':true, 'attr.lcdType':{$in:[...]}, ...})
And I'd like this to perform fast.
Trying to index on all possible 'attr.*' variants gives me an error (too many compound keys). I also suspect that if I index it that way and then omit one of the attrs in a query, the index won't work.
Trying to index on 'attr' as a whole does not help either.
What is the proper way to model this under MongoDB?
Update
I have tried this approach (also mentioned here), i.e. store attributes as an array of key-value pairs:
attr2: [ {tag:'lcdType', value:'some text'}, ...
And index it like this:
ensureIndex({ 'attr2.tag':1, 'attr2.value':1 })
And query like this:
find({attr2:{$all:[
{$elemMatch:{tag:'bestseller',value:true}},
{$elemMatch:{tag:'weight',value:{$lte:100}}}
]}})
Now explain() says that it is using "BtreeCursor attr2.tag_1_attr2.value_1", but still "nscanned" : 31607, and the whole execution time has actually increased (compared to the non-indexed scenario).
Something is wrong here.
Sub-question
What if I select some (fewer than 31) of the most frequently queried attributes and try to index on those? If I put all of them in a single compound index:
ensureIndex({'attr.a1':1, 'attr.a2':1, ...})
According to the docs, this index won't be used for queries missing the attr.a1 attribute.
How to define index in this case?
If you really have to allow a lot of filters, combinations and possibly even sorts, MongoDB is not a good fit because it uses only one index per query. The number of indexes then grows way too fast, because compound keys are somewhat inflexible (that should answer the sub-question), and they become a performance hog.
Use a search database like ElasticSearch, Solr, etc. instead, which comes with the features you need. You can then use a $in on the ids that the search server returned if you want to keep the base information in MongoDB (it's usually a good idea to have the search database simply replicate the information of the primary data store so you don't need to sync changes two-way, which would be a nightmare).
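A hedged sketch of that two-step pattern (the ids array stands in for whatever your search server returns, and the collection name is illustrative):
// step 1: the faceted/full-text query goes to ElasticSearch/Solr,
// which returns the matching MongoDB _ids
var idsFromSearch = [ObjectId("52ffc33cd85242f436000001"), ObjectId("52ffc4a5d85242602e000000")];
// step 2: fetch the authoritative documents from MongoDB
db.products.find({_id: {$in: idsFromSearch}})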

Mongo _id for subdocument array

I wish to add an _id as a property for objects in a mongo array.
Is this good practice?
Are there any problems with indexing?
I wish to add an _id as a property for objects in a mongo array.
I assume:
{
g: [
{ _id: ObjectId(), property: '' },
// next
]
}
as the type of structure for this question.
Is this good practice?
Not normally. _ids are unique identifiers for entities. As such, if you are looking to add an _id within a sub-document object, then you might not have normalised your data very well, and it could be a sign of a fundamental flaw in your schema design.
Sub-documents are designed to contain repeating data for that document, i.e. the addresses of a user or something.
That being said, an _id is not always a bad thing to add. Take the example I just stated with addresses. Imagine you were to have a shopping cart system and (for some reason) you didn't replicate the address to the order document; then you would use an _id or some other identifier to get that sub-document out.
Also, you have to take linking documents into consideration. If that _id describes another document, and the properties are custom attributes for that document in relation to the linked document, then that's okay too.
Are there any problems with indexing?
An ObjectId is still quite sizeable, so that is something to weigh against a smaller, less unique id, or not using an _id at all for sub-documents.
For indexes it doesn't really work any differently from the standard _id field on the document itself, and a unique index across the field should work across the collection (scenario dependent; test your queries).
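For completeness, a small sketch of adding your own sub-document _ids and indexing them (field names are illustrative, and someAddressId stands for an ObjectId captured earlier):
db.users.insert({
name: "abc",
addresses: [
{_id: ObjectId(), street: "1 Main St"},
{_id: ObjectId(), street: "2 Side St"}
]
})
// index the embedded _ids so a single address can be fetched quickly:
db.users.ensureIndex({"addresses._id": 1})
db.users.find({"addresses._id": someAddressId}, {"addresses.$": 1})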
NB: MongoDB will not add an _id to sub-documents for you.

MongoDB: Speed of field ("inside record") search in comparison with speed of search in "global scope"

My question may not be very well formulated because I haven't worked with MongoDB yet, so I'd like to know one thing.
I have an object (record/document/anything else) in my database, in the global scope.
And this object contains a really huge array of other objects.
So, how does the speed of a search in the global scope compare with a search "inside" an object? Is it possible to index all the "inner" records?
Thanks in advance.
So, like this
users: {
..
user_maria:
{
age: "18",
best_comments :
{
goodnight:"23rr",
sleeptired:"dsf3"
..
}
}
user_ben:
{
age: "18",
best_comments :
{
one:"23rr",
two:"dsf3"
..
}
}
So, how can I make it fast to find user_maria->best_comments->goodnight (index context of collections "best_comment") ?
First of all, your example schema is very questionable. If you want to embed comments (which is a big if), you'd want to store them in an array for appropriate indexing. Also, post your schema in JSON format so we don't have to parse the whole name/value thing:
db.users {
name:"maria",
age: 18,
best_comments: [
{
title: "goodnight",
comment: "23rr"
},
{
title: "sleeptired",
comment: "dsf3"
}
]
}
With that schema in mind you can put an index on name and 'best_comments.title', for example like so:
db.users.ensureIndex({name:1, 'best_comments.title':1})
Then, when you want the query you mentioned, simply do
db.users.find({name:"maria", 'best_comments.title':"first"})
And the database will hit the index and will return this document very fast.
Now, all that said, your schema is very questionable. You mention you want to query specific comments, but that requires either keeping comments in a separate collection or filtering the comments array app-side. Additionally, having huge, ever-growing embedded arrays in documents can become a problem. Documents have a 16MB limit, and if documents increase in size all the time, mongo will have to continuously move them on disk.
My advice:
Put comments in a separate collection
Either do a document per comment or make comment bucket documents (say, 100 comments per document); a sketch of the bucket idea follows this list
Read up on Mongo/NoSQL schema design. You always query for root documents, so if you end up needing a small part of a large embedded structure you need to re-examine your schema, or you'll be pumping huge documents over the connection and require app-side filtering.
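A hedged sketch of the bucket idea (collection and field names are illustrative):
// one bucket document holds up to 100 comments for one user:
db.comment_buckets.insert({
owner: "maria",
count: 2,
comments: [
{title: "goodnight", comment: "23rr"},
{title: "sleeptired", comment: "dsf3"}
]
})
// append to a bucket that still has room; if this matches nothing,
// the application inserts a fresh bucket instead:
db.comment_buckets.update(
{owner: "maria", count: {$lt: 100}},
{$push: {comments: {title: "one", comment: "xyz"}}, $inc: {count: 1}}
)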
I'm not sure I understand your question but it sounds like you have one record with many attributes.
record = {'attr1':1, 'attr2':2, etc.}
You can create an index on any single attribute or any combination of attributes. Also, you can create any number of indices on a single collection (MongoDB collection == MySQL table), whether or not each record in the collection has the attributes being indexed on.
edit: I don't know what you mean by 'global scope' within MongoDB. To insert any data, you must define a database and collection to insert that data into.
Database 'Example':
  Collection 'table1':
    records: {a:1, b:1, c:1}
             {a:1, b:2, d:1}
             {a:1, c:1, d:1}
    indices: ensureIndex({a:1, d:1}) <- this will index on a, then by d; the fact that record 1 doesn't have an attribute 'd' doesn't matter, and this will increase query performance
edit 2:
Well, first of all, in your table here you are assigning multiple values to the attributes "name" and "value". MongoDB will ignore/overwrite the original instantiations of them, so only the final ones will be included in the collection.
I think you need to reconsider your schema here. You're trying to use it as a series of key value pairs, and it is not specifically suited for this (if you really want key value pairs, check out Redis).
Check out: http://www.jonathanhui.com/mongodb-query

How to deal with Many-to-Many relations in MongoDB when Embedding is not the answer?

Here's the deal. Let's suppose we have the following data schema in MongoDB:
items: a collection with large documents that hold some data (it's absolutely irrelevant what it actually is).
item_groups: a collection with documents that contain a list of items._id called item_groups.items plus some extra data.
So, these two are tied together with a Many-to-Many relationship. But there's one tricky thing: for a certain reason I cannot store items within item groups, so -- just as the title says -- embedding is not the answer.
The query I'm really worried about is intended to find particular groups that contain particular items (i.e. I've got a set of criteria for each collection). In fact it also has to say how many items within each found group fit the criteria (no items means the group is not found).
The only viable solution I came up with this far is to use a Map/Reduce approach with a dummy reduce function:
function map () {
// imagine that item_criteria came from the scope.
// it's a mongodb query object.
item_criteria._id = {$in: this.items};
var group_size = db.items.count(item_criteria);
// this group holds no relevant items, skip it
if (group_size == 0) return;
var key = this._id.str;
var value = {size: group_size, ...};
emit(key, value);
}
function reduce (key, values) {
// since the map function emits each group just once,
// values will always be a list with length=1
return values[0];
}
db.runCommand({
mapreduce: item_groups,
map: map,
reduce: reduce,
query: item_groups_criteria,
scope: {item_criteria: item_criteria},
});
The problem line is:
item_criteria._id = {$in: this.items};
What if this.items.length == 5000 or even more? My RDBMS background cries out loud:
SELECT ... FROM ... WHERE whatever_id IN (over 9000 comma-separated IDs)
is definitely not a good way to go.
Thank you sooo much for your time, guys!
I hope the best answer will be something like "you're stupid, stop thinking in RDBMS style, use $its_a_kind_of_magicSphere from the latest release of MongoDB" :)
I think you are struggling with the separation of domain/object modeling from database schema modeling. I too struggled with this when trying out MongoDb.
For the sake of semantics and clarity, I'm going to substitute Groups with the word Categories.
Essentially your theoretical model is a "many to many" relationship in that each Item can belong to many Categories, and each Category can then possess many Items.
This is best handled in your domain object modeling, not in DB schema, especially when implementing a document database (NoSQL). In your MongoDb schema you "fake" a "many to many" relationship, by using a combination of top-level document models, and embedding.
Embedding is hard to swallow for folks coming from SQL persistence back-ends, but it is an essential part of the answer. The trick is deciding whether or not it is shallow or deep, one-way or two-way, etc.
Top Level Document Models
Because your Category documents contain some data of their own and are heavily referenced by a vast number of Items, I agree with you that fully embedding them inside each Item is unwise.
Instead, treat both Item and Category objects as top-level documents. Ensure that your MongoDb schema allots a collection for each one so that each document has its own ObjectId.
The next step is to decide where and how much to embed... there is no right answer as it all depends on how you use it and what your scaling ambitions are...
Embedding Decisions
1. Items
At minimum, your Item objects should have a collection property for their categories. At the very least this collection should contain the ObjectId of each Category.
My suggestion would be to add to this collection the data you use most often when interacting with the Item...
For example, say I want to list a bunch of items on my web page in a grid and show the names of the categories they are part of. It is obvious that I don't need to know everything about the Category, but if I only have the ObjectId embedded, a second query would be necessary to get any detail about it at all.
Instead, what would make most sense is to embed the Category's Name property in the collection along with the ObjectId, so that pulling back an Item can now display its category names without another query.
The biggest thing to remember is that the key/value objects embedded in your Item that "represent" a Category do not have to match the real Category document model... It is not OOP or relational database modeling.
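A hedged sketch of this one-way embedding (collection and field names are illustrative; catId stands for the Category document's ObjectId):
db.items.insert({
name: "Widget",
categories: [
{_id: catId, name: "Hardware"}
]
})
// listing items with their category names needs no second query, and
// finding every Item in one Category is a single indexed lookup:
db.items.ensureIndex({"categories._id": 1})
db.items.find({"categories._id": catId})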
2. Categories
In reverse, you might choose to leave the embedding one-way and not have any Item info in your Category documents... or you might choose to add a collection for Item data much like above (ObjectId, or ObjectId + Name)...
In this direction, I would personally lean toward having nothing embedded... more than likely, if I want Item information for my category, I want lots of it, more than just a name... and deep-embedding a top-level document (Item) makes no sense. I would simply resign myself to querying the database for an Items collection where each one possessed the ObjectId of my Category in its collection of Categories.
Phew... confusing for sure. The point is, you will have some data duplication and you will have to tweak your models to your usage for best performance. The good news is that this is what MongoDb and other document databases are good at...
Why not use the opposite design?
You are storing items and item_groups. If your first idea was to store items in item_group entries, then maybe the opposite is not a bad idea :-)
Let me explain:
in each item you store the groups it belongs to. (You are in NoSQL; data duplication is OK!)
for example, let's say you store in item entries a list called groups, and your items look like:
{ _id : ....
, name : ....
, groups : [ ObjectId(...), ObjectId(...),ObjectId(...)]
}
Then the idea of map/reduce gains a lot of power:
map = function() {
    var item = this; // 'this' is rebound inside the forEach callback
    item.groups.forEach( function(groupKey) {
        emit(groupKey, [item]);
    });
};
reduce = function(key, values) {
    // flatten the per-item arrays emitted for this group
    return Array.prototype.concat.apply([], values);
};
db.runCommand({
    mapreduce : "items",
    map : map,
    reduce : reduce,
    out : { inline : 1 }, // an 'out' clause is required; inline is the simplest
    query : {_id : {$in : [...,....,.....] }} // put your item ids here
})
You can add some parameters (finalize, for instance, to modify the output of the map/reduce), but this might help you.
Of course, you need another collection where you store the details of the item_groups if you need them, but in some cases (if this information about item_groups does not exist, or doesn't change, or you don't care that you don't have the most up-to-date version of it) you don't need them at all!
Does that give you a hint about a solution to your problem ?