MongoDB schema design for an unboundedly growing table

I'm practicing MongoDB through a small personal project, in which I may need to store some intermediate data abstracted as an unboundedly growing table. Both rows and columns would grow without bound.
The usage of this abstract table is that I want to be able to
know the corresponding column for each entry in a row
know the corresponding row for each entry in a column
Or, in other words, know the index of each table entry.
Hence two choices come up for modeling the table:
Make two collections:
one holds each row as a document which embeds a growing structure as row entries to have reference to the corresponding columns;
and similarly, another collection holds each column as a document embedding a growing structure to reference to the corresponding rows.
Make a single separate collection that holds each table entry as a document. Hence each document size is fixed.
The first model has a problem with document growth (in fact, in my application, the table grows rather asymmetrically, so only one of the two collections would run into the document growth issue). The second model seems fine to me. Is there some pitfall or other issue I should be aware of? And what is the common practice to deal with such a problem?
UPDATE: explaining things in more detail
I am trying to do automatic summarization of an ongoing conversation. The input is a corpus of sentences, and terms are extracted from each sentence. For example, English terms are stemmed, and sentences in CJK languages are segmented. This yields a term-sentence matrix. One of the methods then needs to compute the (sparse) SVD of that term-sentence matrix.
The sentences and extracted terms would be stored in the database, but the term-sentence matrix would grow without bound.
(Or one can think of the problem of storing a mapping between tweets and hashtags)
Two choices of draft schema came to my mind:
choice one (hold two-way linkages between sentences and terms)
{ // sentence collection doc
"_id" : // generated by timestamp
, "text" : //
, "contained_terms" : [
// an array of "_id"s in term collection
]
}
{ // term collection doc
"_id" : // use term name
, "in_sentences" : [
// an array of "_id"s in sentence collection
]
}
choice two (make linkages into a separate collection)
{ // linkage collection doc (as matrix entries)
"_id" : // generated by timestamp
, "term" : // an "_id" in term collection
, "in_sentence" : // an "_id" in sentence collection
}
{ // sentence collection doc
"_id" : // generated by time stamp
, "text" : //
}
{ // term collection doc
"_id" : // use term name
}
Choice one runs into the document growth problem, because the "in_sentences" array of a term doc is very likely to grow beyond the limit as sentences keep coming in nonstop.
Choice two extracts the linkage between terms and sentences into a separate collection, and hence avoids document growth. Querying "which sentences contain the term" costs more, but in the end it seems I don't actually need that operation much.
Currently, I'm thinking that choice two suits my needs better. The linkage collection seems to match the input format of a sparse SVD. To speed up computation, very-high-frequency terms can be filtered out if a term frequency field is added to each term doc (or kept in a separate collection when there is more than one conversation). This filtering seems fine in the case of automatic summarization.
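For reference, a minimal sketch of what writing into the choice-two schema might look like in the shell (the collection names, the timestamp-based id and the sample values are only placeholders, not part of the actual design):
// sketch only: "sentences", "terms" and "linkages" are illustrative collection names
var sentenceId = new Date().getTime();   // stands in for the timestamp-generated _id
db.sentences.insert({ _id: sentenceId, text: "some sentence text" });
["term1", "term2"].forEach(function (term) {
    // upsert the term doc and pre-aggregate its frequency
    db.terms.update({ _id: term }, { $inc: { count: 1 } }, { upsert: true });
    // one linkage doc per (term, sentence) pair, i.e. one matrix entry
    db.linkages.insert({ term: term, in_sentence: sentenceId });
});
// indexes so both "terms of a sentence" and "sentences of a term" stay cheap
db.linkages.ensureIndex({ in_sentence: 1, term: 1 });
db.linkages.ensureIndex({ term: 1, in_sentence: 1 });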
But I still wonder:
Are there issues or pitfalls I should be aware of?
What is the common practice for a similar situation?

My understanding of MongoDB is that you need to design your schema around your queries. How you store your data depends heavily on what data you will be querying, so even for the same set of data, your schema can vary depending on the actual use case.
Additionally, data redundancy is quite common in NoSQL database design. If you are going to need some data again and again, there is no point in keeping it only in a separate collection. You can duplicate it in two collections, and that's a fair enough cost for faster querying. Memory is cheap, processing isn't!
Pre-aggregation also helps in Mongo for huge data sets. Your queries will work fine for a decent number of documents, but once you go into the realm of millions of records you may face problems with a certain class of queries, like counts and aggregations. Pre-aggregation helps keep things real time, though it may carry a higher write/insertion overhead. Always avoid a full collection scan whenever you can.
Above are some broad-level concepts that I find relevant to your question. I'll try to explain them in your context with some examples (as I am not sure what data you will eventually need, or which queries you will run).
Let's say you are going to need terms per sentence frequently, to highlight them. In that case the recommended schema will be:
{ "_id" : // sentence id - you will query on this
, "text" : // sentence text
, "terms" : ["term1", "term2", "term3"]
}
So for each new sentence, you extract all the terms and save them (not their ids) along with the sentence. The advantage is that you will not need to query for the terms separately: you can get all the terms for a given sentence in a single query. Additionally, the document size doesn't grow, and hence there is no document relocation.
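A minimal sketch of that insert, assuming a collection called sentences (the sample values are made up):
db.sentences.insert({
    _id: ObjectId(),                     // or whatever sentence id you generate
    text: "the quick brown fox",
    terms: ["quick", "brown", "fox"]     // the extracted terms, saved by value
});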
Let's say you also want to have a unique list of terms and some per-term metadata. You can have a separate terms collection which holds all the unique terms:
{ "_id": ,
, "term": //term
, "meaning":
, "metadata""
, "count": 1
}
You can have a unique index on term. Each time you extract terms from a sentence, you look each one up in this collection and insert it if it isn't there yet. Now let's say you also want to maintain a count of term appearances. Each time you find a term in a sentence and look it up in the terms collection, you can atomically increment the count as well - pre-aggregation. If you add an index on count, you can easily get the top 100 terms, etc. on the fly.
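A rough sketch of that lookup-and-increment, assuming a collection called terms (field names are illustrative):
db.terms.ensureIndex({ term: 1 }, { unique: true });   // unique list of terms
db.terms.ensureIndex({ count: -1 });                   // supports "top terms" queries
// atomically bump the count, inserting the term doc if it doesn't exist yet
db.terms.update({ term: "term1" }, { $inc: { count: 1 } }, { upsert: true });
// top 100 terms on the fly, straight off the count index
db.terms.find().sort({ count: -1 }).limit(100);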
Now let's say you want to query/count all the sentences with a given term. You can add an index on terms array and directly look up for all the sentences with a given term:
Sentence.where(:term => "term1").count //mongoid query
Again, you are achieving this with a single query, as opposed to getting a term id first in your case, and then the sentences.
Other than this it's always advisable to ensure that your working set and indexes fit into RAM for best performance.
So again, there are no right and wrong answers for schema design and it definitely depends on the queries you will be doing. I would also advise you to unlearn some of your relational DB concepts when trying to design for NoSQL databases. I learned it the hard way =) Hope some of this helps you in coming up with an efficient schema for your use case.

If you are trying to model a matrix with the whole collection representing the matrix, I think the go-to model should be to have each entry (row i, column j) as a document. If you put in a field like "index" : { "row" : i, "column" : j} and appropriate indices then it's easy and fast to do fun things like
get the entry at (i, j)
get row i
get column j
The matrix is represented sparsely so if row i only has 10 columns with values, row i is just 10 documents. If the rows/columns really do grow unboundedly to very large sizes then modeling a document as a row or column or something of "1 dimension" could hit the hard 16MB BSON document size limit.
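A sketch of what that could look like, assuming a collection called matrix (names and values are illustrative):
// one document per non-zero entry
db.matrix.insert({ index: { row: 3, column: 7 }, value: 0.42 });
// one compound index per access direction
db.matrix.ensureIndex({ "index.row": 1, "index.column": 1 });
db.matrix.ensureIndex({ "index.column": 1, "index.row": 1 });
db.matrix.findOne({ "index.row": 3, "index.column": 7 });   // the entry at (i, j)
db.matrix.find({ "index.row": 3 });                         // row i
db.matrix.find({ "index.column": 7 });                      // column j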
I'm thinking the biggest drawback could be large index sizes given that every entry is its own document.

Related

What is the correct way to index in MongoDB when a big combination of fields exists

Consider that I have a search panel that includes multiple filter options.
I'm working with Mongo and created a compound index on 3-4 properties in a specific order.
But when I run different combinations of searches, I see a different execution plan each time (explain()). Sometimes it falls back to a collection scan (bad), and sometimes it hits the index correctly (IXSCAN).
The selective fields that should be handled by Mongo indexes are: brand, types, status, warehouse, carries, search (only by id).
My question is:
Do I have to create every combination of the fields in every order (that could be 10-20 compound indexes), or 1-3 big compound indexes (which again would not solve the ordering problem)?
What is the best strategy to deal with a large variety of field combinations?
I use the same query structure with different combinations of fields:
// Example Query.
// fields could be different every time, according to the user's selection (and order)!!
db.getCollection("orders").find({
'$and': [
{
'status': {
'$in': [
'XXX',
'YYY'
]
}
},
{
'searchId': {
'$in': [
'3859447'
]
}
},
{
'origin.brand': {
'$in': [
'aaaa',
'bbbb',
'cccc',
'ddd',
'eee',
'bundle'
]
}
},
{
'$or': [
{
'origin.carries': 'YYY'
},
{
'origin.carries': 'ZZZ'
},
{
'origin.carries': 'WWWW'
}
]
}
]
}).sort({"timestamp":1})
// My compound index is:
{ status: 1, searchId: -1, "origin.brand": 1, "origin.carries": 1, timestamp: 1 }
but that is only one combination... there could be plenty, like:
a. { status: 1 }
b. { status: 1, searchId: -1 }
c. { status: 1, searchId: -1, "origin.brand": 1 }
d. { status: 1, searchId: -1, "origin.brand": 1, "origin.carries": 1 }
........
Additionally, what will happen to write/read performance? I think writes will degrade relative to reads...
The query patterns are:
1. find(...) with '$and'/'$or' + sort
2. aggregation with $match/$sort
Thanks
Generally, indexes are only useful if they are over a selective field. This means the number of documents that have a particular value is small relative to the overall number of documents.
What "small" means varies on the data set and the query. A 1% selectivity is pretty safe when deciding whether an index makes sense. If an particular value exists in, say, 10% of documents, performing a table scan may be more efficient than using an index over the respective field.
With that in mind, some of your fields will be selective and some will not be. For example, I suspect filtering by "OK" will not be very selective. You can eliminate non-selective fields from indexing considerations - if someone wants all orders which are "OK" with no other conditions they'll end up doing a table scan. If someone wants orders which are "OK" and have other conditions, whatever index is applicable to other conditions will be used.
Now that you are left with selective (or at least somewhat selective) fields, consider what queries are both popular and selective. For example, perhaps brand+type would be such a combination. You could add compound indexes that match popular queries which you expect to be selective.
Now, what happens if someone filters by brand only? This could be selective or not depending on the data. If you already have a compound index on brand+type, you'd leave it up to the database to determine whether a brand only query is more efficient to fulfill via the brand+type index or via a collection scan.
Continue in this manner with other popular queries and fields.
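For illustration only, a brand+type index on the orders collection from the question might look like this (the exact field names are an assumption, since the full schema isn't shown):
db.orders.ensureIndex({ "origin.brand": 1, type: 1 });
// a brand-only query may still use the index prefix, or the planner may prefer a scan
db.orders.find({ "origin.brand": "aaaa", type: "retail" }).explain();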
So you have subdocuments, ranged queries, and sorting by one field only. That eliminates most of the possible permutations, assuming there are no other surprises.
D. SM already covered selectivity - you should really listen to what the man says and at least upvote.
The other thing to consider is the order of the fields in the compound index:
fields that have direct match like $eq
fields you sort on
fields with ranged queries: $in, $lt, $or etc
These are common rules for all b-trees. Now things that are specific to mongo:
A compound index can include at most one multikey field - that is, a field indexed through an array, like "origin.brand" when origin is an array of subdocuments. Again, I assume origins are embedded docs, so the document's shape is like this:
{
    _id: ...,
    status: ...,
    timestamp: ....,
    origin: [
        { brand: ..., carries: ... },
        { brand: ..., carries: ... },
        { brand: ..., carries: ... }
    ]
}
For your query the best index would be
{
    searchId: 1,
    timestamp: 1,
    status: 1,              /** only if it is selective enough **/
    "origin.carries": 1     /** or brand, depending on data **/
}
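If it helps, creating that index in the shell would be roughly (assuming the orders collection from the question):
db.orders.ensureIndex({
    searchId: 1,
    timestamp: 1,
    status: 1,
    "origin.carries": 1
});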
Regarding the number of indexes - it depends on data size. Ensure all indexes fit into RAM otherwise it will be really slow.
Last but not least - indexing is not a one-off job but a lifestyle. Data changes over time, and so do queries. If you care about performance and have finite resources, you should keep an eye on the database: check slow queries to add new indexes, and collect stats from users' queries to remove unused indexes and free up some room. Basically, apply common sense.
I noticed this one-year-old topic because I am more or less struggling with a similar issue: users can request queries with an unpredictable set of fields, which makes it nearly impossible to decide (or change) how indexes should be defined.
Even worse: the user should indicate some value (or range) for the fields that make up the sharding key, otherwise we cannot help MongoDB limit its search to only a few shards (or chunks, for that matter).
When the user needs the liberty to search on other fields that are not necessarily the ones making up the sharding key, we're stuck with a full-database search. Our database is some tens of TB in size...
Indexes should fit in RAM? That can only be achieved with small databases, meaning some hundreds of GB at most. What about my 37 TB database? Its indexes won't fit in RAM.
So I am trying out a POC inspired by the UNIX filesystem structures where we have inodes pointing to data blocks:
we have a cluster with 108 shards, each containing 100 chunks
at insert time, we take some fields that we know yield a good cardinality for the data, and we compute the sharding key from those fields; the document goes into the main collection (call it "Main_col") on that computed shard, i.e. with a certain chunk-number (which equals our computed sharding-key value)
from the original document, we take a few 'crucial' fields (the list of such fields can evolve as your needs change) and store a small extra document in another collection (call these "Crucial_col_A", "Crucial_col_B", etc., one for each such field): that document contains the value of this crucial field, plus an array with the chunk-numbers where full documents with this value have been stored in the 'big' collection "Main_col"; consider this a 'pointer' to the chunks in collection "Main_col" where such documents exist. These "Crucial_col_X" collections are sharded based on the value of the 'crucial' field.
when we insert another document that has the same value for some 'crucial' field "A", then the array of chunk-numbers in "Crucial_col_A" will be updated (with a 'merge') to contain the (different or same) chunk-number of this next full document from "Main_col"
a user can now define queries with criteria for at least one of those 'crucial' fields, plus (optionally) any other criteria on other fields in the documents; the first criterion, for the crucial field (say field "B"), will run very quickly (because the collection is sharded on the value of "B") and return the small document from "Crucial_col_B", which contains the array of chunk-numbers in "Main_col" where any document exists that has field "B" equal to the given criterion. Then we run a second set of parallel queries, one per shardkey-value=chunk-number (or one per shard, to be decided) found in that array. We combine the results of those parallel subqueries, and then apply further filtering if the user gave additional criteria.
Thus this involves 2 query-steps: first in the "Crucial_col_X" collection to obtain the array with chunk-numbers where the full documents exist, and then the second query on those specific chunks in "Main_col".
The first query is done with a precise value for the 'crucial' field, so the exact shard/chunk is known, thus this query goes very fast.
The second (set of) queries are done with precise values for the sharding-keys (= the chunk numbers), so these are expected to go also very fast.
This way of working would eliminate the burden of defining many index combinations.
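A very rough sketch of the two steps, purely to illustrate the idea (the field names value, chunks and shardKey are assumptions, and the second set of queries runs sequentially here rather than in parallel):
// step 1: pointer lookup on the crucial field "B" - hits exactly one shard/chunk
var pointer = db.Crucial_col_B.findOne({ value: "some B value" });
// step 2: one query per chunk-number found, results merged and filtered afterwards
var results = [];
pointer.chunks.forEach(function (chunkNo) {
    db.Main_col.find({ shardKey: chunkNo, B: "some B value" /*, extra criteria */ })
        .forEach(function (doc) { results.push(doc); });
});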

MongoDB index on many (nested) fields/attributes

In e-commerce application I have documents like this:
{ category:'A', ..., price:122,
attr:{ width:6, height:4, hasLCD:true, lcdType:'some text', ..., a36:null }
}
I.e. every product has many attributes of various simple types.
Now I want to filter products by dynamic queries containing top level fields plus some attributes. For example:
find({category:'A', price:{$lt:200}, ...,
'attr.height':{$lt:6}, 'attr.hasLCD':true, 'attr.lcdType':{$in:[...]}, ...})
And I'd like this to perform fast.
Trying to index on all possible 'attr.*' variants gives me an error (too many compound keys). I also suspect that if I indexed it that way and then omitted one of the attrs in a query, the index wouldn't be used.
Trying to index on 'attr' as a whole does not help either.
What is the proper way to model this under MongoDB?
Update
I have tried this approach (also mentioned here). I.e. store attributes as array of key-value pairs:
attr2: [ {tag:'lcdType', value:'some text'}, ...
And index it like this:
ensureIndex({ 'attr2.tag':1, 'attr2.value':1 })
And query like this:
find({attr2:{$all:[
{$elemMatch:{tag:'bestseller',value:true}},
{$elemMatch:{tag:'weight',value:{$lte:100}}}
]}})
Now explain() says that it is using "BtreeCursor attr2.tag_1_attr2.value_1", but still "nscanned" : 31607, and the whole execution time has actually increased (compared to the non-indexed scenario).
Something is wrong here.
Sub-question
What if I select some (fewer than 31) of the most frequently queried attributes and try to index on those? If I put all of them in a single compound index:
ensureIndex({'attr.a1':1, 'attr.a2':1, ...})
According to the docs this index won't be used for queries missing attr.a1 attribute.
How to define index in this case?
If you really have to allow a lot of filters, combinations and possibly even sorts, MongoDB is not a good fit because it uses only one index per query. The number of indexes then grows way too fast, because compound keys are somewhat inflexible (that should answer the sub-question), and this becomes a performance hog.
Instead, use a search database like Elasticsearch, Solr, etc. that comes with the features you need. You can then use a $in on the ids that the search server returned if you want to keep the base information in MongoDB (it's usually a good idea to have the search database simply replicate the information of the primary data store, so you don't need to sync changes two-way, which would be a nightmare).
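A minimal sketch of that hand-off, assuming a products collection in MongoDB (names are illustrative):
// _ids returned by Elasticsearch / Solr for the current filter combination
var idsFromSearch = [ /* ids handed back by the search server */ ];
db.products.find({ _id: { $in: idsFromSearch } });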

MongoDB vs Columnar

Is MongoDB a good fit when there are many combinations of columns used for querying, so that creating indexes on all of the columns is not feasible? How does MongoDB perform when, say, you have no index on a column and you have millions of entries for that column?
If you have no index, a table scan is performed, as with any database system.
If the documents are in memory this will still be relatively fast, but it will still take an amount of time proportional to the number of documents in the collection, as the database must look at each one: O(n).
Is the problem that you have a small set of varying keys per document, or a large number of keys that every document must have?
Column-oriented datastores must store a large number of columns to model varying attributes, but MongoDB is more flexible because of its document data model.
If you have documents with a small number of varying attributes (out of a large set of possible attributes), this is indexable and will be O(log n).
Your documents would look like this:
{
    "name": "some name",
    "attrs": [
        {"n": "subject", "v": "the subject"},
        {"n": "description", "v": "Some amazing description"},
        {"n": "comments", "v": "Comments on this thing"}
    ]
}
and be indexable like this:
db.mycollection.ensureIndex({"attrs.n":1, "attrs.v":1})
and be queryable like this:
db.mycollection.find({attrs: {$elemMatch: {n: "subject", v: "the subject"}}})

MongoDB: Speed of field ("inside record") search in comparison with speed of search in "global scope"

My question may not be very well formulated because I haven't worked with MongoDB yet, so I just want to know one thing.
I have an object (record/document/whatever) in my database - in the global scope.
And I have a really huge array of other objects inside this object.
So, what about the speed of a search in the global scope vs a search "inside" the object? Is it possible to index all the "inner" records?
Thanks in advance.
So, like this
users: {
    ..
    user_maria: {
        age: "18",
        best_comments: {
            goodnight: "23rr",
            sleeptired: "dsf3"
            ..
        }
    },
    user_ben: {
        age: "18",
        best_comments: {
            one: "23rr",
            two: "dsf3"
            ..
        }
    }
}
So, how can I make it fast to find user_maria->best_comments->goodnight (i.e. index the contents of the "best_comments" sub-objects)?
First of all, your example schema is very questionable. If you want to embed comments (which is a big if), you'd want to store them in an array for appropriate indexing. Also, post your schema in JSON format so we don't have to parse the whole name/value thing :
db.users {
    name: "maria",
    age: 18,
    best_comments: [
        {
            title: "goodnight",
            comment: "23rr"
        },
        {
            title: "sleeptired",
            comment: "dsf3"
        }
    ]
}
With that schema in mind you can put an index on name and best_comments.title for example like so :
db.users.ensureIndex({name: 1, 'best_comments.title': 1})
Then, when you want the query you mentioned, simply do
db.users.find({name:"maria", 'best_comments.title':"first"})
And the database will hit the index and will return this document very fast.
Now, all that said, your schema is very questionable. You mention you want to query specific comments, but that requires either keeping comments in a separate collection or filtering the comments array app-side. Additionally, huge, ever-growing embedded arrays in documents can become a problem. Documents have a 16 MB limit, and if documents keep increasing in size, Mongo will have to continuously move them on disk.
My advice:
Put comments in a separate collection.
Either do one document per comment, or make comment bucket documents (say, 100 comments per document; a sketch follows below).
Read up on Mongo/NoSQL schema design. You always query for root documents so if you end up needing a small part of a large embedded structure you need to reexamine your schema or you'll be pumping huge documents over the connection and require app-side filtering.
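A minimal sketch of the bucket variant mentioned above (collection name and bucket size are illustrative):
// push into the user's current bucket; once a bucket holds 100 comments,
// the upsert creates a fresh one
db.comment_buckets.update(
    { name: "maria", count: { $lt: 100 } },
    {
        $push: { comments: { title: "goodnight", comment: "23rr" } },
        $inc: { count: 1 }
    },
    { upsert: true }
);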
I'm not sure I understand your question but it sounds like you have one record with many attributes.
record = {'attr1':1, 'attr2':2, etc.}
You can create an index on any single attribute or any combination of attributes. Also, you can create any number of indices on a single collection (MongoDB collection == MySQL table), whether or not each record in the collection has the attributes being indexed on.
edit: I don't know what you mean by 'global scope' within MongoDB. To insert any data, you must define a database and collection to insert that data into.
Database 'Example':
Collection 'table1':
records: {a:1,b:1,c:1}
{a:1,b:2,d:1}
{a:1,c:1,d:1}
indices:
ensureIndex({a:ascending, d:ascending}) <- this will index on a, then by d; the fact that record 1 doesn't have an attribute 'd' doesn't matter, and this will increase query performance
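In actual shell syntax, against the table1 collection above, that would be roughly:
db.table1.ensureIndex({ a: 1, d: 1 });   // 1 means ascending
db.table1.find({ a: 1, d: 1 });          // this query can be served by the index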
edit 2:
Well, first of all, in your table here you are assigning multiple values to the attributes "name" and "value". MongoDB will ignore/overwrite the earlier instantiations of them, so only the final ones will be included in the collection.
I think you need to reconsider your schema here. You're trying to use it as a series of key value pairs, and it is not specifically suited for this (if you really want key value pairs, check out Redis).
Check out: http://www.jonathanhui.com/mongodb-query

How to deal with Many-to-Many relations in MongoDB when Embedding is not the answer?

Here's the deal. Let's suppose we have the following data schema in MongoDB:
items: a collection with large documents that hold some data (it's absolutely irrelevant what it actually is).
item_groups: a collection with documents that contain a list of items._id called item_groups.items plus some extra data.
So, these two are tied together with a Many-to-Many relationship. But there's one tricky thing: for a certain reason I cannot store items within item groups, so -- just as the title says -- embedding is not the answer.
The query I'm really worried about is intended to find some particular groups that contain some particular items (i.e. I've got a set of criteria for each collection). In fact, it also has to say how many items within each found group fit the criteria (no matching items means the group is not found).
The only viable solution I came up with this far is to use a Map/Reduce approach with a dummy reduce function:
function map () {
    // imagine that item_criteria came from the scope.
    // it's a mongodb query object.
    item_criteria._id = {$in: this.items};
    var group_size = db.items.count(item_criteria);
    // this group holds no relevant items, skip it
    if (group_size == 0) return;
    var key = this._id.str;
    var value = {size: group_size, ...};
    emit(key, value);
}

function reduce (key, values) {
    // since the map function emits each group just once,
    // values will always be a list with length=1
    return values[0];
}

db.runCommand({
    mapreduce: item_groups,
    map: map,
    reduce: reduce,
    query: item_groups_criteria,
    scope: {item_criteria: item_criteria},
});
The problem line is:
item_criteria._id = {$in: this.items};
What if this.items.length == 5000 or even more? My RDBMS background cries out loud:
SELECT ... FROM ... WHERE whatever_id IN (over 9000 comma-separated IDs)
is definitely not a good way to go.
Thank you sooo much for your time, guys!
I hope the best answer will be something like "you're stupid, stop thinking in RDBMS style, use $its_a_kind_of_magicSphere from the latest release of MongoDB" :)
I think you are struggling with the separation of domain/object modeling from database schema modeling. I too struggled with this when trying out MongoDb.
For the sake of semantics and clarity, I'm going to substitute Groups with the word Categories
Essentially your theoretical model is a "many to many" relationship, in that each Item can belong to many Categories, and each Category can then possess many Items.
This is best handled in your domain object modeling, not in DB schema, especially when implementing a document database (NoSQL). In your MongoDb schema you "fake" a "many to many" relationship, by using a combination of top-level document models, and embedding.
Embedding is hard to swallow for folks coming from SQL persistence back-ends, but it is an essential part of the answer. The trick is deciding whether or not it is shallow or deep, one-way or two-way, etc.
Top Level Document Models
Because your Category documents contain some data of their own and are heavily referenced by a vast number of Items, I agree with you that fully embedding them inside each Item is unwise.
Instead, treat both Item and Category objects as top-level documents. Ensure that your MongoDb schema allots a collection for each one, so that each document has its own ObjectId.
The next step is to decide where and how much to embed... there is no right answer as it all depends on how you use it and what your scaling ambitions are...
Embedding Decisions
1. Items
At minimum, your Item objects should have a collection property for its categories. At the very least this collection should contain the ObjectId for each Category.
My suggestion would be to add to this collection, the data you use when interacting with the Item most often...
For example, say I want to list a bunch of items on my web page in a grid and show the names of the categories they are part of. Obviously I don't need to know everything about the Category, but if I only have the ObjectId embedded, a second query would be necessary to get any detail about it at all.
Instead what would make most sense is to embed the Category's Name property in the collection along with the ObjectId, so that pulling back an Item can now display its category names without another query.
The biggest thing to remember is that the key/value objects embedded in your Item that "represent" a Category do not have to match the real Category document model... It is not OOP or relational database modeling.
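A sketch of such an Item document (field names are illustrative, not taken from the question):
db.items.insert({
    title: "Some item",
    categories: [
        // just enough Category data for display: its ObjectId plus the Name
        { category_id: ObjectId(), name: "Electronics" }   // in practice, the real Category's _id
    ]
});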
2. Categories
In reverse you might choose to leave embedding one-way, and not have any Item info in your Category documents... or you might choose to add a collection for Item data much like above (ObjectId, or ObjectId + Name)...
In this direction, I would personally lean toward having nothing embedded... more than likely, if I want Item information for my category, I want lots of it, more than just a name... and deep-embedding a top-level document (Item) makes no sense. I would simply resign myself to querying the database for an Items collection where each one possessed the ObjectId of my Category in its collection of Categories.
Phew... confusing for sure. The point is, you will have some data duplication and you will have to tweak your models to your usage for best performance. The good news is that that is what MongoDb and other document databases are good at...
Why not use the opposite design?
You are storing items and item_groups. If your first idea was to store items in item_group entries, then maybe the opposite is not a bad idea :-)
Let me explain:
in each item you store the groups it belongs to. (You are in NoSQL, data duplication is OK!)
For example, let's say you store a list called groups in each item entry, and your items look like:
{ _id : ....
, name : ....
, groups : [ ObjectId(...), ObjectId(...),ObjectId(...)]
}
Then the idea of map-reduce gains a lot of power:
map = function() {
    var doc = this;   // "this" is rebound inside the callback, so keep a reference
    doc.groups.forEach(function(groupKey) {
        emit(groupKey, [doc]);   // or emit just [doc._id] to keep values small
    });
};
reduce = function(key, values) {
    // flatten the emitted arrays into one list per group
    var all = [];
    values.forEach(function(v) { all = all.concat(v); });
    return all;
};
db.runCommand({
    mapreduce: "items",
    map: map,
    reduce: reduce,
    query: { _id: { $in: [ /* put your item ids here */ ] } },
    out: { inline: 1 }   // inline output keeps the example self-contained
});
You can add some parameters (finalize, for instance, to modify the output of the map-reduce), but this might help you.
Of course you need another collection where you store the details of the item_groups if you need them, but in some cases (if this information about item_groups does not exist, or doesn't change, or you don't care about not having the most up-to-date version of it) you don't need it at all!
Does that give you a hint about a solution to your problem ?