Consider an application in which we have some docs (I use "doc" instead of "document" to differentiate it from MongoDB's document) and modifications are performed on them. The only requirement we have is that changes to multiple docs are applied atomically (either all of them are applied, or none). There are two ways to implement this:
A transaction is started and all the changes to the docs are performed inside it. Then it is committed. Whenever we need a doc, we retrieve it by its ID.
A new document is added to MongoDB that includes all the changes to the docs (example below). Since a document is inserted atomically, there is no need for a transaction. We put an index on changes.docId, and whenever we want to retrieve a doc we find all of its changes (via the index), aggregate them, and produce the doc.
{
  _id: ...,
  changes: [
    {docId: 1, change: ...},
    {docId: 10, change: ...},
    {docId: 5, change: ...},
    ...
  ]
}
Note that since we need the history of changes, even in the first solution we keep the changed values inside the doc. Thus, in terms of storage space the two solutions are not very different (leaving aside indexes, etc.).
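To make the comparison concrete, here is a rough mongosh sketch of both solutions; the database name (app), the collection names (docs, changes) and the $push-based history field are my own placeholders, not part of the question:

// Sketch only -- names and the history field are illustrative assumptions.

// Solution 1: a multi-document transaction (requires a replica set or sharded cluster).
var session = db.getMongo().startSession();
session.startTransaction();
try {
    var docs = session.getDatabase("app").docs;
    docs.updateOne({_id: 1},  {$push: {history: {change: "new value for doc 1"}}});
    docs.updateOne({_id: 10}, {$push: {history: {change: "new value for doc 10"}}});
    session.commitTransaction();
} catch (e) {
    session.abortTransaction();
    throw e;
} finally {
    session.endSession();
}

// Solution 2: one change-set document per batch, plus an index for reads.
db.changes.createIndex({"changes.docId": 1});
db.changes.insertOne({
    changes: [
        {docId: 1,  change: "new value for doc 1"},
        {docId: 10, change: "new value for doc 10"}
    ]
});

// Reconstructing doc 1 means finding every change-set that touches it and folding the changes together.
db.changes.aggregate([
    {$match: {"changes.docId": 1}},
    {$unwind: "$changes"},
    {$match: {"changes.docId": 1}},
    {$group: {_id: "$changes.docId", history: {$push: "$changes.change"}}}
]);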
The question is: which of these solutions is better?
Some of my own thoughts on this question:
The second solution may be faster for writes (it does not need transaction handling across different documents and shards).
The first solution may be faster for reads (the second solution needs to look up all the changes to a doc via the index, and those changes may be spread across different documents or even shards).
Assuming that reads are more prevalent than writes (although not by much): if satisfying ACID across multiple documents (and shards) in MongoDB is very efficient and low-cost, the first solution may be better. But if transaction handling adds a lot of overhead and requires a tremendous amount of coordination among shards, the second solution may be better.
This is MongoDB's API:
db.foo.watch([{$match: {"bar.baz": "qux" }}])
Let's say that collection foo contains millions of documents. The arguments passed into watch indicate that, for every single document that gets updated, the system will filter for the ones that $match the query (but behind the scenes it will still be triggered by any document change).
The problem is that as my application scales, my listeners will also scale and my intuition is that I will end up having n^2 complexity with this approach.
I think that as I add more listeners, database performance will deteriorate due to changes to documents that are not part of the $match query. There are other ways to deal with this (web sockets & rooms), but before prematurely optimizing the system, I would like to know if my intuition is correct.
Actual Question:
Can I attach a listener to a single document, such that watch's performance isn't affected by sibling documents?
When I do collection.watch([$matchQuery]), does the MongoDB driver listen to all documents and then filter out the relevant ones? (This is what I am trying to avoid.)
The code collection.watch([$matchQuery]) actually means "watch the change stream for that collection", rather than watching the collection directly.
As far as I know, there is no way to add a listener to a single document. Since I do not know of one, I will give you a couple of tips on how to avoid scalability problems with the approach you have chosen. Your code appears to be using change streams; they should not cause problems unless you open too many of them.
There are two ways to accomplish this task by watching the entire collection from an outside process, neither of which will lead to a deterioration in database performance.
If you use change streams, you can open only a single change stream with logic that checks for all the conditions you need to filter on over time. The mistake people often make is opening many change streams for single-document filtering tasks, and that is when they have problems.
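For example (a sketch only; the list of interesting _id values is hypothetical), a single change stream can carry the per-document filtering in its $match stage, so the filtering happens server-side:

// One change stream for the whole collection; "interestingIds" stands in for whatever
// set of document _ids your application actually cares about.
var interestingIds = [1, 5, 42];

var cursor = db.foo.watch([
    {$match: {
        operationType: {$in: ["insert", "update", "replace"]},
        "documentKey._id": {$in: interestingIds}
    }}
]);

while (!cursor.isClosed()) {
    if (cursor.hasNext()) {
        var event = cursor.next();
        // dispatch to whichever in-process listener registered for this _id
        printjson(event.documentKey);
    }
}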
The simpler way, since you mentioned Atlas, is to use Triggers. You can use something called a match expression in your Trigger configuration so that the trigger only fires when the match expression evaluates to true. With the match expression below, for example, the trigger function will not execute unless the field status is updated to "blocked"; many other match expressions are possible:
{
  "updateDescription.updatedFields": {
    "status": "blocked"
  }
}
I hope this helps. If not, I can keep digging. I think with change streams or Triggers, you are ok if you want to write a bit of code. :)
Every user in our system (like Facebook and Twitter) has an option to add other users to his predefined lists, such as "Favorites", "Follow", "Blocked", "Closed Friends". Then we want to allow him to search the lists, filter them, and see cumulative data from all of the above lists. For example:
UserA {
  IsFollow: 1,
  IsFavorite: 0,
  ...
  IsBlocked: 0
}
We also want to keep some additional information, such as addingDate, when a user adds another user to one of the above lists.
Option One is to manage different collections like "Favorites", "Follow", "Blocked", "Closed Friends".
Option Two is to manage one collection like "Relations" and keep all the data in that collection, without needing to use $lookup.
Option Three is to use Option One but also create a flat collection with all the relevant data from each of those collections, kept in sync via RabbitMQ, transactional updates, etc.
Since I'm new to MongoDB (I'm migrating the system from MS SQL), I'm wondering what the best approach is for a high-scale system.
I would suggest you go with option 2, where all the keys will be present in one document.
MongoDB recommends a schema design where all the data is embedded into a single document. They claim that this will lead to fewer read/write operations against the DB and faster CRUD operations compared to the relational-mapping approach.
But, there is a catch here. The data should be embedded in a single document only if the relations are One-to-One, One-to-Few, or One-to-Many.
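For example, a minimal sketch of Option Two from the question; the collection name relations and the ObjectId values are placeholders. One document per (owner, target) pair holds all the flags plus addingDate, so reading a user's lists never needs a $lookup:

// Sketch only -- collection name and ObjectId values are placeholders.
db.relations.insertOne({
    ownerId: ObjectId("64d2a8f1c2a4b5e6f7890123"),   // the user who owns the lists
    targetId: ObjectId("64d2a8f1c2a4b5e6f7890456"),  // the user being added to a list
    isFollow: true,
    isFavorite: false,
    isBlocked: false,
    addingDate: new Date()
});

// One compound index answers "who does this user follow?", newest first.
db.relations.createIndex({ownerId: 1, isFollow: 1, addingDate: -1});
db.relations.find({ownerId: ObjectId("64d2a8f1c2a4b5e6f7890123"), isFollow: true})
            .sort({addingDate: -1});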
DO NOT GO WITH THE DOCUMENT-EMBEDDING APPROACH IF YOUR DATA-MAPPING RELATION IS One-to-Squillions. I recommend you read this article.
The reason I am not recommending Option 1 (a separate collection per list) is that you will have to make more requests to the DB for each and every collection linkage. Although the $lookup stage is fast, it is not as efficient as the embedding approach.
As far as Option 3 goes, it's a viable approach (if you use transactions properly and effectively), but it adds complexity on the coding side.
I have personally used both the Option 1 and Option 2 approaches, and Option 1 has always pushed the AWS EC2 instance running MongoDB to higher CPU and RAM usage. As far as Option 2 goes, I have a collection with almost 1000 array elements (with the key indexed) and 15K keys in each record (I am not joking), and MongoDB had no issues processing it. Just make sure that you use a projection on the returned documents everywhere.
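Continuing the hypothetical relations sketch above, a projection returns only the fields the list screen actually needs, instead of whole documents:

// Sketch only -- the ObjectId value is the same placeholder as above.
db.relations.find(
    {ownerId: ObjectId("64d2a8f1c2a4b5e6f7890123"), isFollow: true},
    {targetId: 1, addingDate: 1, _id: 0}
);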
So, go for Option-2 as a standard approach and Option-3 for One-to-Squillions relation mapping.
For referencing two or more collections, make sure that you use the MongoDB-generated ObjectId instead of your own custom references, since I have seen a minor performance impact when multi-document relation mapping uses anything other than ObjectId (even if that particular key is indexed).
Hope this helps. Reach out to me if you have additional queries.
I have two document formats, and I can't decide which is the Mongo way of doing things. Are the two examples equivalent? The idea is to search by userId, with userId indexed. It seems to me the performance will be equal for either schema.
multiple bookmarks as separate documents in a collection:
{
userId: 123,
bookmarkName: "google",
bookmarkUrl: "www.google.com"
},
{
userId: 123,
bookmarkName: "yahoo",
bookmarkUrl: "www.yahoo.com"
},
{
userId: 456,
bookmarkName: "google",
bookmarkUrl: "www.google.com"
}
multiple bookmarks within one document per user.
{
  userId: 123,
  bookmarks: [
    {
      bookmarkName: "google",
      bookmarkUrl: "www.google.com"
    },
    {
      bookmarkName: "yahoo",
      bookmarkUrl: "www.yahoo.com"
    }
  ]
},
{
  userId: 456,
  bookmarks: [
    {
      bookmarkName: "google",
      bookmarkUrl: "www.google.com"
    }
  ]
}
The problem with the second option is that it causes growing documents. Growing documents are bad for write performance, because the database will have to constantly move them around the database files.
To improve write performance, MongoDB writes each document as a contiguous sequence of bytes in the database files, with a little padding after each document. When a document is changed and the change makes it grow beyond its current padding, the document needs to be deleted and moved to the end of the current file. This is a quite slow operation.
Also, MongoDB has a hardcoded limit of 16MB per document (mostly to discourage growing documents). In your illustrated use-case this might not be a problem, but I assume that this is just a simplified example and your actual data will have a lot more fields per bookmark entry. When you store a lot of meta-data with each entry, that 16MB limit could become a problem.
So I would recommend you pick the first option.
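If it helps, here is a minimal sketch of the first option, assuming a collection named bookmarks shaped like the example documents above (the "bing" entry is just illustrative):

// Sketch only -- collection name and the new entry are assumptions.
db.bookmarks.createIndex({userId: 1});

// Reading all of a user's bookmarks is one indexed query over small documents:
db.bookmarks.find({userId: 123});

// Adding a bookmark inserts a new document instead of growing an existing one:
db.bookmarks.insertOne({userId: 123, bookmarkName: "bing", bookmarkUrl: "www.bing.com"});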
I would go with option 2 (multiple bookmarks within one document per user) because this schema takes advantage of MongoDB's rich documents, also known as "denormalized" models.
Embedded data models allow applications to store related pieces of information in the same database record. As a result, applications may need to issue fewer queries and updates to complete common operations. Link
There are two tools that allow applications to represent these relationships: references and embedded documents.
When designing data models, always consider the application usage of the data (i.e. queries, updates, and processing of the data) as well as the inherent structure of the data itself.
The second type of structure is the embedded type.
Generally, the embedded type of structure should be chosen when our application needs:
a) better performance for read operations;
b) the ability to request and retrieve related data in a single database operation;
c) data consistency, i.e. to update related data in a single atomic write operation. In MongoDB, operations are atomic at the document level. No single write operation can change more than one document. Operations that modify more than a single document in a collection still operate on one document at a time. Ensure that your application stores all fields with atomic dependency requirements in the same document. If the application can tolerate non-atomic updates for two pieces of data, you can store these data in separate documents. A data model that embeds related data in a single document facilitates these kinds of atomic operations;
d) to issue fewer queries and updates to complete common operations.
When not to choose:
Embedding related data in documents may lead to situations where documents grow after creation. Document growth can impact write performance and lead to data fragmentation (limit of 16MB per document).
Now let's compare the structures from a developer's perspective:
Say I want to see all the bookmarks of a particular user:
The first type would require an aggregation to be applied on all the documents.
The minimum set of stages required to get the aggregated result is $match and $group (with the $push operator):
db.collection.aggregate([{$match: {"userId": 123}}, {$group: {"_id": "$userId", "bookmarkNames": {$push: "$bookmarkName"}, "bookmarkUrls": {$push: "$bookmarkUrl"}}}])
or a find() which returns multiple documents to be iterated.
Whereas the embedded type allows us to fetch everything with a single find query on userId:
db.collection.find({"userId":123});
This just indicates the added overhead from the developer's point of view. We could view the first type as an unwound form of the embedded document.
The first type, multiple bookmarks as separate documents in a collection, is normally used in cases like logging, where the log entries are huge and have a TTL (time to live). The collections in that case would be capped collections (bounded in size) or have a TTL index, so that old documents are removed automatically.
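For completeness, a small sketch of those two expiry mechanisms (the collection names are made up); note that a capped collection is bounded by size, while a TTL index lives on a regular, non-capped collection and removes documents by age:

// A capped collection discards the oldest documents once a fixed size is reached:
db.createCollection("bookmarkLog", {capped: true, size: 100 * 1024 * 1024});

// A TTL index on a regular collection removes documents after a period of time (30 days here):
db.bookmarkEvents.createIndex({createdAt: 1}, {expireAfterSeconds: 60 * 60 * 24 * 30});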
Bottom line: if your document size will not grow beyond 16 MB at any particular time, opt for the embedded type. It will save development effort as well.
See Also: MongoDB relationships: embed or reference?
Quick question.
I have a number of update commands to run on my MongoDB database. These happen after a user has completed a number of tasks and wants to push all updates to the server. I will update several documents in several collections.
If I want to ensure that these updates are atomic and that no other simultaneous queries or commands from other users can interfere, can I separate my queries with ;?
Simplified example:
db.cities.find({"asciiname":"Zamin Sukhteh"});db.cities.find({"asciiname":"Konab-e Vasat"})
Will the above result in two separate and atomic queries?
While you can't use a delimiter to separate commands in the shell to introduce atomicity, you can use db.eval.
If you're only using the shell (which you said in comments), you can use the db.eval function to perform a database-wide lock while executing a block of JavaScript code. It's not something you'd normally want to do (as it blocks all writes and reads by default), but in the case that you're describing above (again, the comments), it sounds like it would fit your needs.
db.eval(function() {
    var one = db.cities.find({"asciiname": "Zamin Sukhteh"});
    var two = db.cities.find({"asciiname": "Konab-e Vasat"});
    // other work ...
});
Update (to address a comment):
If you want efficient atomic(-like) updates in MongoDB, there are a few options:
Put everything in a single document. This is guaranteed to be atomic in MongoDB. However, that often doesn't work for complex document models (or large documents).
If there are dependencies on a document, consider writing new "versions" of the dependent documents first and, only after those are all in place, putting the final document that links them together into the DB (see the sketch after this list). Without the final "link", older documents shouldn't see the new versions. Depending on how the data is consumed, you could likely remove the older versions quite rapidly (or periodically if desired).
Accept that there will occasionally be mismatches and detect them (and then rerun the query to get fresh data). You might be able to use a timestamp or version to identify these cases as you traverse through your document structure.
Cache the data elsewhere as a full structure for common queries.
Decide MongoDB isn't a good fit for your requirements.
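Here is a rough sketch of the versions-plus-link option above; the collection names (orderItems, orders) and all the fields are hypothetical:

// 1. Write the new versions of the dependent documents first.
var itemIds = [ObjectId(), ObjectId()];
db.orderItems.insertMany([
    {_id: itemIds[0], sku: "A1", qty: 2, version: 2},
    {_id: itemIds[1], sku: "B7", qty: 1, version: 2}
]);

// 2. Only once they are all in place, write the single document that links to them.
//    Readers only follow links, so they never observe a half-written version 2.
db.orders.insertOne({
    orderNo: 1042,
    version: 2,
    itemIds: itemIds,
    createdAt: new Date()
});

// 3. Older versions can be cleaned up afterwards, immediately or by a periodic job.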
I have two collections, one (A) containing items to be processed (relatively small) and one (B) with those already processed (fairly large, with extra result fields).
Items are read from A, get processed and save()'d to B, then remove()'d from A.
The rationale is that indices can be different across these, and that the "incoming" collection can be kept very small and fast this way.
I've run into two issues with this:
if either remove() or save() time out or otherwise fail under load, I lose the item completely, or process it twice
if both fail, the side effects happen but there is no record of that
I can sidestep the double-failure case with findAndModify locks (not needed otherwise, we have a process-level lock) but then we have stale lock issues and partial failures can still happen. There's no way to atomically remove+save to different collections, as far as I can tell (maybe by design?)
Is there a Best Practice for this situation?
There's no way to atomically remove+save to different collections, as far as I can tell (maybe by design?)
Yes, this is by design. MongoDB explicitly does not provide joins or transactions. Remove + save is a form of transaction.
Is there a Best Practice for this situation?
You really have two low-complexity options here, and both involve findAndModify.
Option #1: a single collection
Based on your description, you are basically building a queue with some extra features. If you use a single collection, then you use findAndModify to update the status of each item as it is processed.
Unfortunately, that means you will lose this: ...that the "incoming" collection can be kept very small and fast this way.
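A minimal sketch of what Option #1 could look like; the items collection and the status/startedAt fields are my own names, not yours:

// Atomically claim one pending item so no other worker picks it up.
var item = db.items.findAndModify({
    query: {status: "pending"},
    sort: {createdAt: 1},
    update: {$set: {status: "processing", startedAt: new Date()}},
    new: true
});

if (item !== null) {
    // ... process the item ...
    db.items.updateOne({_id: item._id}, {$set: {status: "done"}});
}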
Option #2: two collections
The other option is basically a two phase commit, leveraging findAndModify.
Take a look at the docs for this here.
Once an item is processed in A, you set a field to flag it for deletion. You then copy that item over to B. Once it has been copied to B, you can remove the item from A.
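A rough sketch of that sequence, using your A and B collections; the state and result field names are made up, and each step is individually retryable:

// 1. Flag the item in A so a crash between steps can be detected later.
var item = db.A.findAndModify({
    query: {state: {$ne: "migrating"}},
    update: {$set: {state: "migrating", migrateStartedAt: new Date()}},
    new: true
});

if (item !== null) {
    // 2. Upsert into B keyed on the original _id, so a retried copy cannot create duplicates.
    db.B.updateOne({_id: item._id}, {$set: {source: item, processedAt: new Date()}}, {upsert: true});

    // 3. Only after the copy has succeeded, remove the item from A.
    db.A.deleteOne({_id: item._id, state: "migrating"});
}

// A periodic job can re-drive or roll back items stuck in "migrating" for too long.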
I've not tried this myself yet, but the new book 50 Tips and Tricks for MongoDB Developers mentions a few times using cron jobs (or services/schedulers) to clean up data like this. You could leave the documents in collection A flagged for deletion and run a daily job to clear them out, reducing the scope of the original "transaction".
From what I've learned so far, I'd never leave the database in a state where I rely on the next database action succeeding unless it is the last action (journalling will resend the last DB action upon recovery). For example, I have a three-phase account registration process where I create a user in CollectionA and then add another related document to CollectionB. When I create the user, I embed the details of the CollectionB document in CollectionA in case the second write fails. Later I will write a process that removes the embedded data from CollectionA if the document in CollectionB exists.
Not having transactions does cause pain points like this, but I think in some cases there are new ways of thinking about it. In my case, time will tell as I progress with my app.