MongoDB database design: are redundancies important? - mongodb

I have a question about MongoDB database design.
As far as I know (I'm not sure I'm correct), there is no need to use relationships between collections. For example, I have a collection for Users with their emails, and there are email templates that I want to send to the users.
Should I use my old paradigm of avoiding redundancies and design 3 collections like this:
Users: ID,Name,Email
Templates: ID,Contents
EmailSent: UserID,TemplateID
Or should I use the NoSQL paradigm like this:
Users: ID,Name,Email
Templates: ID,Contents
EmailSent: UserID,Contents
The difference is only in the EmailSent collection. I'm looking for a clear answer according to MongoDB design architecture, not personal opinions.

In this special case, I would not reference the used template from the sent emails, because a sent email is sent and cannot be changed anymore. When you change the template after sending an email, the email already in the inbox of the receiver would not change. But when you look at the email in your application, it would appear with the new template, even though that's not the template that was active when the email was generated. That would present your users with misleading information.
In the more general case, there is no by-the-book solution for the question embedding vs. referencing. While MongoDB generally prefers embedding over referencing because of the lack of on-database JOINs, embedding causes problems when many documents embed copies of the same data and that data changes. In that case you either have to leave the data as-is (which can make sense in some cases, like here for example) or update all documents when you update the embedded data. This would be an expensive operation.
You won't have that costly mass-update operation with referencing instead of embedding. However, it would make retrieval of the complete documents more expensive, because you would have to perform multiple subsequent queries.
Which option you choose depends on your expected usual use-case:
When you expect that requesting with the sub-document is a frequent operation and updating the subdocument is a rare operation, you would choose embedding.
When the sub-document changes very frequently and requests are rare, referencing would be the smarter strategy.
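The trade-off above can be made concrete with a small plain-JavaScript sketch of the two EmailSent shapes (collection and field names follow the question; the template text is made up for illustration). After the template changes, the referenced version resolves to the new text, while the embedded copy still shows what was actually sent:

```javascript
// Referencing: the sent email points at the template document by id.
const templates = [{ _id: "t1", contents: "Hello {{name}}, welcome!" }];
const emailSentRef = { userId: "u1", templateId: "t1" };

// Embedding: the sent email carries a frozen copy of the rendered contents.
const emailSentEmbed = { userId: "u1", contents: "Hello Alice, welcome!" };

// The template changes after the email was sent:
templates[0].contents = "Hi {{name}}!";

// Resolving the reference now shows the NEW template text
// (misleading for an already-sent email)...
const resolved = templates.find(t => t._id === emailSentRef.templateId).contents;

// ...while the embedded copy still matches what the receiver has in their inbox.
console.log(resolved);                // "Hi {{name}}!"
console.log(emailSentEmbed.contents); // "Hello Alice, welcome!"
```

This is why, for immutable events like sent emails, embedding the contents is the safer choice even though it duplicates data.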

Related

NoSQL Modeling fields visibility for certain users

I'm new to NoSQL modeling and I am currently confronted with a problem that I do not know how to solve.
Say I have a calendar and some people are allowed to see certain events. These people are categorised into 3 groups. In SQL, I would've given each event an integer and made a bit-wise comparison. In NoSQL (Firestore in this case), I need to specify certain rules, but somehow I can't forbid someone to view a certain entry in a document. I have an idea on how to solve this, but it seems very... inefficient. Namely: make a collection where all the events are stored (only accessible by the admin) and, based on the entries, update 3 documents in which the events are stored as well.
Is there a better method? I'm a bit new to this but it feels very bad.
Reads in Cloud Firestore are performed at the document level, meaning you either retrieve the full document, or you retrieve nothing. There is no way to retrieve a partial document. You cannot rely solely on the security rules to prevent users from reading a specific field.
If you want certain fields to be hidden from some users, then you have to put them in a separate document. You might consider creating a document in a private subcollection. And then write security rules that have different levels of access for the two collections.
You can refer to Control Access to Specific Fields for more information and an example.
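The split-document layout the answer describes can be sketched without Firestore itself; the paths and field names below are illustrative, and the `read` function only mimics what security rules enforce (reads are all-or-nothing per document, gated per path):

```javascript
// Public data and restricted data live in separate documents, because
// Firestore rules can only allow or deny a WHOLE document read.
const db = {
  "events/e1": { title: "Team offsite", date: "2024-05-01" },     // readable by all
  "events/e1/private/details": { location: "HQ", budget: 5000 },  // admin only
};

// Stand-in for the rules engine: deny the private subcollection to non-admins.
function read(path, userGroups) {
  const isPrivate = path.includes("/private/");
  if (isPrivate && !userGroups.includes("admin")) return null; // whole doc denied
  return db[path] ?? null;
}

const publicView = read("events/e1", ["group1"]);               // allowed
const denied = read("events/e1/private/details", ["group1"]);   // null
const adminView = read("events/e1/private/details", ["admin"]); // allowed
```

The real enforcement would of course live in security rules on the two paths, not in application code; the sketch only shows why the fields must be physically separated.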

Is it ok to use Meteor publish composite for dozens of subscriptions?

Currently, our system is not entirely normalized, and we use meteor-publish-composite to obtain the normalized data in MongoDB. Some models have very few dependencies, but others have arrays of objects (i.e. sub-documents) with a few foreign keys that we subscribe to when fetching each model.
An example would be a Post containing a list of Comment sub-documents, where each comment has a userId field.
My question is, while I know it would be faster to use collection hooks and update the collection with data denormalization, how does Meteor handle multiple subscriptions on the same collection?
Does having a hundred subscriptions on the same collection affect the application speed (significantly)? What about a thousand, and so on?
This may not fully answer your question, however after spending countless hours tuning the performance of a large meteor app, I thought I would share some of the things that I have learned.
In Meteor, when you define a publication, you are setting up a reactive query that continues to push data to subscribed clients when changes to the underlying mongo data causes the result of the query to change. In other words, it sets up a query that will continually push data to clients as the data is inserted, updated, or removed. The mechanism by which it does this is by creating an observer on the query.
When an observer is initialized (e.g. when a publication is subscribed to), it will query MongoDB for the initial dataset to send down and then use the oplog to detect changes going forward. Fortunately, Meteor is able to re-use an existing observer for a new subscription if the query is for the same collection, with the same selectors and the same options.
This means that you could create hundreds of subscriptions against many different publications, but if they hit the same collection and use the same query selectors, then you effectively only have 1 observer in play. For more details, I highly recommend reading this article from kadira.io (from which I acquired the information I used in this answer).
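The reuse rule can be illustrated with a minimal sketch (not Meteor's actual implementation): observers are cached under a key built from the collection, selector, and options, so identical subscriptions share one observer:

```javascript
// Cache of observers keyed on (collection, selector, options).
const observers = new Map();
let created = 0; // counts observers that actually hit mongod

function subscribe(collection, selector, options = {}) {
  const key = JSON.stringify([collection, selector, options]);
  if (!observers.has(key)) {
    created += 1; // a genuinely new query shape -> new observer
    observers.set(key, { subscribers: 0 });
  }
  const obs = observers.get(key);
  obs.subscribers += 1; // existing observer is shared
  return obs;
}

// 100 subscriptions to the same query -> still only 1 observer in play.
for (let i = 0; i < 100; i++) subscribe("posts", { authorId: "u1" }, { limit: 10 });
subscribe("posts", { authorId: "u2" }); // different selector -> a second observer
```

Real Meteor compares the query shape more carefully than a JSON string, but the effect is the same: the cost scales with distinct query shapes, not with subscription count.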
In addition to this, Meteor is also able to deal with multiple publications publishing the same document, and when this occurs, the documents will be merged into one. See this for more detail.
Lastly, because of Meteor's MergeBox component, it will minimize the data being sent over the wire across all your subscriptions by keeping track of what data changed vs. what is already on the client.
Therefore, in your specific example, it sounds like you will be running several different subscriptions on effectively the same query (since you are just trying to de-normalize your data) and dataset. Because of all the optimizations that I described above, I would guess that you won't be plagued by performance issues by taking this approach.
I have done similar things in one of my apps and have never had an issue.

Meteor publishing and subscribing to a large Collection

So let's take this scenario, in an e-commerce application, a user searches for "wrist watches".
Is it advisable for me to publish and subscribe to the entire Products collection? Because that collection may grow a lot in size. Is it possible for me to fetch from a collection without subscribing to it?
Also, in Meteor 1.3, which is the best place to define collections ? From what I read, it has to be in /imports/api, but some light on it might be helpful.
Thanks,
When you want to get data to your meteor client, you have three options - choose your own adventure.
option 1: publish the whole collection
pros: easy to implement, fast to use/filter on the client once the data has arrived, publication can be reused on the server for all clients
cons: doesn't scale well / doesn't work past a couple of thousand documents, may be a lot to transmit to the client
use when: you have a small size-bounded collection and the client needs all of it for filtering / searching / selecting
option 2: use a method
You can have a meteor method deliver the filtered documents to the client instead of publishing them. I.e. the user searches for "wrist watches", and the method delivers only those documents. See this section of the guide for more details. You can stuff the documents into a local collection if you like, but it isn't required.
pros: performance, scalability, data isolation (you don't have to worry that some subset of the documents were added by another subscription)
cons: it's more work to set up and manage than a subscription
use when: you have an unbounded collection and you need a subset in the most performant way
option 3: use a reactive subscription
This is very similar to (2) except you'll be re-subscribing in an autorun after changing your search parameters. See this section for more details.
pros: easier to implement than (2)
cons: more computationally expensive and a bit slower than (2), with the possible exception that publications could be reused on the server (unlikely in the case of a search)
use when: you have an unbounded collection and you need a subset with the least amount of effort/code
Without knowing more about your particular use case, I'd go with (2).
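Option 2 amounts to a server-side search function that returns only the matching, size-bounded subset; here is a plain-JS stand-in for such a method (the product data and function name are made up for illustration, and a real Meteor method would run a `find()` against MongoDB instead of filtering an array):

```javascript
// Illustrative product data; in the real app this lives in the Products collection.
const products = [
  { _id: 1, name: "Leather wrist watch", price: 120 },
  { _id: 2, name: "Digital wrist watch", price: 45 },
  { _id: 3, name: "Wall clock", price: 30 },
];

// Equivalent in spirit to a Meteor method doing a server-side find():
// only the matching subset ever travels to the client.
function searchProducts(query, limit = 20) {
  const q = query.toLowerCase();
  return products
    .filter(p => p.name.toLowerCase().includes(q))
    .slice(0, limit); // bound the result set sent over the wire
}

const hits = searchProducts("wrist watch");
console.log(hits.length); // 2
```

On the client you would call this method when the search term changes and render the returned documents, optionally stuffing them into a local collection as the guide suggests.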
As for where to define your collections, see this section and the todos app for examples. The recommendation is to use imports/api as you mentioned. For an explanation of why, see this question. If you need more detail, I'd recommend opening a separate question.
Generally speaking, we don't post all fetched data onto a page at once. It is too lengthy for customers in terms of user experience. A common approach is pagination plus sorting.
As for Meteor, collections on the server are different from collections on the client. In short, a collection on the client is a subset of the server collection. The data in that subset is determined by Meteor's publication-subscription mechanism: data is published on the server and you subscribe to it on the client; this is how you derive the subset. Moreover, you can define filtering, sorting, counts, etc. to shape the derived subset based on what you want it to contain and how it will be used on the client. The documentation contains a pretty decent guide with details about Meteor collections.
The place to define collections is really flexible in Meteor. It doesn't have to be /imports/api. It can be any location that can be accessed by both the server and the client, because in typical use cases the server needs to see the data and define methods for manipulating the collection, and the client needs to see it as well in order to render data on web pages. But, as said, it is flexible and depends on how you implement and structure your application. It can be a location accessible by both the server and the client, but it needs not be. In some cases collections are defined on the server only, and the client fetches the data through indirect means; Meteor methods are one such mechanism, and a RESTful API is another, to name a few. It's case by case, and you should do what works best for you. That is where the fun comes from. Subscription is common and convenient, but it is not the only option.
Meteor defines special rules for folder access on the server and the client respectively, and Meteor 1.3 introduces new rules for modules. I enjoy reading the Meteor documentation and find it really useful; this guide, for example, helps develop solid knowledge of the aforementioned rules.

MongoDB design Questions - 2 way references

I am new to MongoDB so I apologize if these questions are simple.
I am developing an application that will track specific user interactions and put information about the user and the interactions into a MongoDB. There are several types of interactions that will all collect different information from the user.
My first question is: Should all of these interactions be in the same collection, or should I separate them out by type (as you would do in an RDBMS)?
Additionally I would like to be able to look up:
All the interactions a specific user has made
All the users that have made a specific interaction
I was thinking of putting a Manual reference to an interaction document for each interaction a user performs in his document and a manual reference to the user that performed the interaction in each interaction document.
My second questions is: Does this "doubling up" of Manual references make sense or is there a better way to do this?
Any thoughts would be greatly appreciated.
Thank you!
My first question is: Should all of these interactions be in the same collection, or should I separate them out by type (as you would do in an RDBMS)?
Without knowing too much about your data size, write volume, read volume, querying needs, etc., I would say: yes, all in one collection.
I am not sure that separating them out is how I would design this in an RDBMS either.
"Does this "doubling up" of Manual references make sense or is there a better way to do this?"
No, it doesn't sound like good database design to me.
Putting a user_id on the interaction collection document sounds good enough.
So when you want to get all of a user's interactions, you just query the interactions collection by user_id.
When you want to do it the other way around, you query for all interactions that fit your criteria, pull out those user_ids, and then use a $in clause on the users collection.
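Both lookups can be sketched in plain JavaScript with a single user_id reference on the interaction documents (field names follow the answer, the data is illustrative); the second lookup simulates what a $in query on the users collection would do:

```javascript
const users = [
  { _id: "u1", name: "Ann" },
  { _id: "u2", name: "Bob" },
];
const interactions = [
  { _id: "i1", type: "click", user_id: "u1" },
  { _id: "i2", type: "share", user_id: "u2" },
  { _id: "i3", type: "click", user_id: "u2" },
];

// 1) All interactions of a specific user: query interactions by user_id,
//    i.e. db.interactions.find({ user_id: "u1" }).
const byAnn = interactions.filter(i => i.user_id === "u1");

// 2) All users who made a specific interaction: collect the user_ids, then
//    the equivalent of db.users.find({ _id: { $in: ids } }).
const ids = interactions.filter(i => i.type === "click").map(i => i.user_id);
const clickers = users.filter(u => ids.includes(u._id));
```

No back-reference array on the user document is needed; one indexed user_id field on interactions serves both directions.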
My first question is: Should all of these interactions be in the same collection, or should I separate them out by type (as you would do in an RDBMS)?
The greatest advantage of a document store over a relational database is precisely that you can do that. Put all different interactions into one collection and don't be afraid to give them different sets of fields.
Additionally I would like to be able to look up:
All the interactions a specific user has made
I was thinking of putting a Manual reference to an interaction document for each interaction a user performs in his document and a manual reference to the user that performed the interaction in each interaction document.
Note that it's usually not a good idea to have documents which grow indefinitely. MongoDB has an upper limit on document size (by default: 16 MB). MongoDB isn't good at handling large documents, because documents are loaded completely into the RAM cache; when you have many large objects, not much will fit into the cache. Also, when documents grow, they sometimes need to be moved to another location on disk, which slows down updates (it also breaks natural ordering, but you shouldn't rely on that anyway).
All the users that have made a specific interaction
Are you referring to a specific interaction instance (assuming that multiple users can be part of one interaction), or to all users who already performed a specific interaction type?
In the latter case, I would add an array of performed interaction types to the user document, because otherwise you would have to perform a join-like operation, which would require either a MapReduce or some application-side logic.
In the first case I would, contrary to what Sammaye suggests, recommend using not the _id field of the users collection but rather the username. When you use an index with the unique flag on user.username, it's just as fast as searching by user._id, and uniqueness is guaranteed.
The reason is that when you search for the interactions of a specific user, it's more likely that you know the username and not the id. When you only have the username and you are referencing the user by id, you first have to search the users collection to get the _id for that username, which is an additional database query.
This of course assumes that you don't always have the user._id at hand. When you do, you can of course use _id as reference.

Relations in Document-oriented database?

I'm interested in document-oriented databases, and I'd like to play with MongoDB. So I started a fairly simple project (an issue tracker), but am having a hard time thinking in a non-relational way.
My problems:
I have two objects that relate to each other (e.g. issue = {code:"asdf-11", title:"asdf", reporter:{username:"qwer", role:"manager"}} - here I have a user related to the issue). Should I create another document 'user' and reference it in 'issue' document by its id (like in relational databases), or should I leave all the user's data in the subdocument?
If I have objects (subdocuments) in a document, can I update them all in a single query?
I'm totally new to document-oriented databases, and right now I'm trying to develop sort of a CMS using node.js and mongodb so I'm facing the same problems as you.
By trial and error I found this rule of thumb: I make a collection for every entity that may be a "subject" for my queries, while embedding the rest inside other objects.
For example, comments in a blog entry can be embedded, because usually they're bound to the entry itself and I can't think of a useful query made globally on all comments. On the other hand, tags attached to a post might deserve their own collection, because even if they're bound to the post, you might want to reason globally about all the tags (for example making a list of trending topics).
In my mind this is actually pretty simple. Embedded documents can only be accessed via their master document. If you can envision a need to query an object outside the context of the master document, then don't embed it. Use a ref.
For your example
issue = {code:"asdf-11", title:"asdf", reporter:{username:"qwer", role:"manager"}}
I would make issue and reporter each their own document, and reference the reporter in the issue. You could also reference a list of issues in reporter. This way you won't duplicate reporters in issues, you can query them each separately, you can query reporter by issue, and you can query issues by reporter. If you embed reporter in issue, you can only query the one way, reporter by issue.
If you embed documents, you can update them all in a single query, but you have to repeat the update in each master document. This is another good reason to use reference documents.
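The referenced design this answer recommends can be sketched in plain JavaScript (names follow the question's example; reporterId is an illustrative field name). A role change touches exactly one document, and both query directions remain available:

```javascript
// Reporter lives in its own collection; issues point at it by id.
const reporters = [{ _id: "r1", username: "qwer", role: "manager" }];
const issues = [
  { code: "asdf-11", title: "asdf", reporterId: "r1" },
  { code: "asdf-12", title: "ghjk", reporterId: "r1" },
];

// A role change updates ONE document, not every issue that embeds a copy.
reporters.find(r => r._id === "r1").role = "director";

// Query reporter by issue...
const issue = issues.find(i => i.code === "asdf-11");
const reporterOfIssue = reporters.find(r => r._id === issue.reporterId);

// ...and issues by reporter.
const issuesByReporter = issues.filter(i => i.reporterId === "r1");
```

With embedding, the same role change would have required rewriting the reporter sub-document inside every one of that user's issues.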
The beauty of MongoDB and other "NoSQL" products is that there isn't any schema to design. I use MongoDB and I love it, not having to write SQL queries and awful JOIN queries! So, to answer your two questions:
1 - If you create multiple documents, you'll need to make two calls to the DB. I'm not saying it's a bad thing, but if you can throw everything into one document, why not? I recall that when I used MySQL, I would create a "blog" table and a "comments" table. Now, I append the comments to the record in the same collection (aka table) and keep building on it.
2 - Yes ...
The schema design in document-oriented DBs can seem difficult at first, but building my startup with Symfony2 and MongoDB I've found that 80% of the time it is just like with a relational DB.
At first, think of it like a normal DB:
To start, just create your schema as you would with a relational Db:
Each entity should have its own collection, especially if you'll need to paginate the documents in it.
(in Mongo you can somewhat paginate nested document arrays, but the capabilities are limited)
Then just remove overly complicated normalization:
do I need a separate category table? (simply write the category in a column/property as a string or embedded doc)
Can I store comments count directly as an Int in the Author collection? (then update the count with an event, for example in Doctrine ODM)
Embedded documents:
Use embedded documents only for:
clarity (nested documents like addressInfo and billingInfo in the User collection)
to store tags/categories (eg: [ name: "Sport", parent: "Hobby", page: "/sport" ])
to store simple multiple values (for eg. in User collection: list of specialties, list of personal websites)
Don't use them when:
the parent Document will grow too large
when you need to paginate them
when you feel the entity is important enough to deserve his own collection
Duplicate values across collections and precompute counts:
Duplicate some column/attribute values from one collection to another if you need to use those values in the where conditions of a query. (Remember, there are no joins.)
eg: In the Ticket collection, also put the author name (not only the ID)
Also, if you need a counter (number of tickets opened by user, by category, etc.), precompute it.
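The two tips above, duplicating the author name onto tickets and precomputing a per-user counter at write time, can be sketched as follows (all names are illustrative; in the real app the increment would be an update with $inc fired from an event, e.g. a Doctrine ODM listener):

```javascript
const authors = [{ _id: "a1", name: "Mario", openTickets: 0 }];
const tickets = [];

function openTicket(author, title) {
  // Duplicate the author name so listing tickets needs no join-like lookup.
  tickets.push({ title, authorId: author._id, authorName: author.name });
  // Precompute the count: no counting query needed at read time.
  author.openTickets += 1;
}

const mario = authors[0];
openTicket(mario, "Broken login");
openTicket(mario, "Slow search");
```

The cost is that a rename of the author would have to be propagated to their tickets, which is the usual price of denormalization.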
Embed references:
When you have a One-to-Many or Many-to-Many reference, use an embedded array with the list of the referenced document ids (see MongoDB DB Ref).
You'll need to use an event again to remove an id if the referenced document gets deleted.
(There is an extension for Doctrine ODM if you use it: Reference Integrity)
This kind of reference is directly managed by Doctrine ODM: Reference Many
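A minimal sketch of such an embedded id array, including the delete-time cleanup the answer recommends doing in an event (names are illustrative; Doctrine ODM's Reference Integrity extension automates this step):

```javascript
// Many-to-many kept as an embedded array of referenced ids.
let tags = [{ _id: "t1", name: "Sport" }, { _id: "t2", name: "Hobby" }];
const posts = [{ _id: "p1", tagIds: ["t1", "t2"] }];

function deleteTag(tagId) {
  tags = tags.filter(t => t._id !== tagId);
  // Reference-integrity step: strip the dangling id from every referencing doc.
  for (const post of posts) {
    post.tagIds = post.tagIds.filter(id => id !== tagId);
  }
}

deleteTag("t2"); // post p1 now references only "t1"
```

Without the cleanup step, posts would keep dangling ids that resolve to nothing, which is exactly the failure mode the extension guards against.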
It's easy to fix errors:
If you find out late that you have made a mistake in the schema design, it's quite simple to fix with a few lines of JavaScript run directly in the Mongo console.
(stored procedures made easy: no need of complex migration scripts)
Warning: don't use Doctrine ODM Migrations; you'll regret it later.
I redid this answer because the original took the relation the wrong way round due to a misreading.
issue = {code:"asdf-11", title:"asdf", reporter:{username:"qwer", role:"manager"}}
As to whether embedding some important information about the user (creator) of the ticket is a wise decision or not depends upon the system specifics.
Are you giving these users the ability to login and report issues they find? If so then it is likely you might want to factor that relation off to a user collection.
On the other hand, if that is not the case then you could easily get away with this schema. The one problem I see here is if you wish to contact the reporter and their job role has changed, that's somewhat awkward; however, that is a real world dilemma, not one for the database.
Since the subdocument represents a single one-to-one relation to a reporter you also should not suffer fragmentation problems mentioned in my original answer.
There is one glaring problem with this schema, and that is the duplication of changing, repeating data (normal-form stuff).
Let's take an example. Imagine you hit the real world dilemma I spoke about earlier and a user called Nigel wants his role to reflect his new job position from now on. This means you have to update all rows where Nigel is the reporter and change his role to that new position. This can be a lengthy and resource consuming query for MongoDB.
To contradict myself again: if you were to have only maybe 100 tickets (i.e. something manageable) per user, then the update operation would likely not be too bad and would, in fact, be manageable for the database quite easily; plus, due to the (hopefully) lack of document movement, this would be a completely in-place update.
So whether this should be embedded or not depends heavily upon your querying, your documents, etc.; however, I would say this schema isn't a good idea, specifically due to the duplication of changing data across many root documents. Technically, yes, you could get away with it, but I would not try.
I would instead split the two out.
If I have objects (subdocuments) in a document, can I update them all in a single query?
Just like the relation style in my original answer, yes and easily.
For example, let's update the role of Nigel to MD (as hinted earlier) and change the ticket status to completed:
db.tickets.update({'reporter.username':'Nigel'},{$set:{'reporter.role':'MD', status: 'completed'}})
So a single document schema does make CRUD easier in this case.
One thing to note, given your wording: you cannot use the positional operator to update all subdocuments under a root document. Instead, it will update only the first one found.
Again hopefully that makes sense and I haven't left anything out. HTH
Original Answer
here I have a user related to the issue). Should I create another document 'user' and reference it in 'issue' document by its id (like in relational databases), or should I leave all the user's data in the subdocument?
This is a considerable question and requires some background knowledge before continuing.
The first thing to consider is the size of an issue:
issue = {code:"asdf-11", title:"asdf", reporter:{username:"qwer", role:"manager"}}
It is not very big, and since you would no longer need the reporter information (that would be on the root document) it could be smaller; however, issues are never that simple. If you take a look at the MongoDB JIRA, for example https://jira.mongodb.org/browse/SERVER-9548 (a random page that proves my point), the contents of a "ticket" can actually be quite considerable.
The only way you would gain a true benefit from embedding the tickets would be if you could store ALL of a user's information in a single 16 MB block of contiguous storage, which is the maximum size of a BSON document (as currently imposed by mongod).
I don't think you would be able to store all tickets under a single user.
Even if you were to shrink a ticket down to, say, a code, title and description, you could still suffer from the "swiss cheese" problem caused by regular updates and changes to documents in MongoDB; as ever, http://www.10gen.com/presentations/storage-engine-internals is a good reference for what I mean.
You would typically witness this problem as users add multiple tickets to their root user document. The tickets themselves will change as well but maybe not in a drastic or frequent manner.
You can, of course, remedy this problem a bit by using power-of-2 size allocation: http://docs.mongodb.org/manual/reference/command/collMod/#usePowerOf2Sizes which will do exactly what it says on the tin.
Ok, hypothetically, if you were to have only a code and title, then yes, you could store the tickets as subdocuments in the root user document without too many problems; however, this comes down to specifics that the bounty assignee has not mentioned.
If I have objects (subdocuments) in a document, can I update them all in a single query?
Yes, quite easily. This is one thing that becomes easier with embedding. You could use a query like:
db.users.update({user_id:uid,'tickets.code':'asdf-1'}, {$set:{'tickets.$.title':'Oh NOES'}})
Note, however, that you can only update ONE subdocument at a time using the positional operator. This means you cannot, in a single atomic operation, update all ticket dates on a single user to 5 days in the future.
As for adding a new ticket, that is quite simple:
db.users.update({user_id:uid},{$push:{tickets:{code:"asdf-1",title:"Whoop"}}})
So yes, you can quite simply, depending on your queries, update the entire users data in a single call.
That was quite a long answer so hopefully I haven't missed anything out, hope it helps.
I like MongoDB, but I have to say that I will use it a lot more soberly in my next project.
Specifically, I have not had as much luck with the Embedded Document facility as people promise.
Embedded documents seem to be useful for composition (see UML Composition), but not for aggregation. Leaf nodes are great; anything in the middle of your object graph should not be an embedded document. It will make searching and validating your data more of a struggle than you'd want.
One thing that is absolutely better in MongoDB is many-to-X relationships. You can do a many-to-many with only two collections, and it's possible to represent a many-to-one relationship on either side. That is, you can either put 1 key in N documents, or N keys in 1 document, or both. Notably, queries that accomplish set operations (intersection, union, disjoint set, etc.) are actually comprehensible to your coworkers. I have never been satisfied with these queries in SQL; I often have to settle for "two other people will understand this".
If you've ever had your data get really big, you know that inserts and updates can be constrained by how much the indexes cost. You need fewer indexes in MongoDB: an index on A-B-C can be used to query for A, A & B, or A & B & C (but not B, C, B & C, or A & C). Plus, the ability to invert a relationship lets you move some indexes to secondary collections. My data hasn't gotten big enough to try, but I'm hoping that will help.
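The prefix rule for compound indexes can be sketched as follows; this is a deliberate simplification (it treats the queried fields as an ordered list, while a real query planner matches field sets against index prefixes regardless of selector order), but it captures which combinations an A-B-C index can serve:

```javascript
// Compound index on (A, B, C).
const indexFields = ["A", "B", "C"];

// The index serves an equality query iff the queried fields form a
// leading prefix of the index definition.
function indexServes(queryFields) {
  return queryFields.every((f, i) => indexFields[i] === f);
}

const served = [["A"], ["A", "B"], ["A", "B", "C"]].map(indexServes);
const notServed = [["B"], ["C"], ["B", "C"], ["A", "C"]].map(indexServes);
```

This is why one three-field index replaces three single-field indexes on A, A-B, and A-B-C shaped queries, reducing the write-time index cost the answer mentions.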