If I create a single table (or document, in document databases) per aggregate type, I can merge or shard databases whenever I refactor the write side's microservices. As a result the application becomes more scalable, and loading events gets faster.
Are there any side effects I should be aware of when designing the event store this way?
Edit:
I'm currently using MongoDB.
What if I create a collection per aggregate id?
Or a database per aggregate type, and a collection per aggregate id...?
Is that problematic in terms of performance, ease of data administration, maintainability, or future scalability?
I haven't seen any authoritative discussion of that design.
There was a discussion in the event sourcing community about having a separate table for each type of aggregate. You can find that discussion here. Executive summary: the more experienced practitioners seemed to be startled that anybody would do that on purpose.
One thing that you should keep in mind is that while events are real (they describe something of interest to the business), aggregates are artificial. You are probably going to be unhappy if redesigning your aggregate boundaries requires that you move your events all over the place.
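One way to keep events independent of aggregate boundaries is a single events collection keyed by a stream (aggregate) id rather than partitioned by aggregate type. The following is only a minimal mongosh sketch; the collection and field names (events, streamId, version, type, payload) are assumptions, not a prescribed schema.

// One "events" collection for every aggregate type; the stream id plus a
// per-stream version number uniquely identifies each event.
db.events.createIndex({ streamId: 1, version: 1 }, { unique: true });

// Appending needs only the stream id, never the aggregate type, so events
// stay put when aggregate boundaries are redrawn.
db.events.insertOne({
  streamId: "order-42",
  version: 7,
  type: "OrderShipped",
  payload: { carrier: "DHL" },
  occurredAt: new Date()
});

// Rehydrating an aggregate is a single ordered range read.
db.events.find({ streamId: "order-42" }).sort({ version: 1 });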
The following may be helpful:
https://github.com/NEventStore/NEventStore.Persistence.MongoDB
http://www.slideshare.net/dbellettini/cqrs-and-event-sourcing-with-mongodb-and-php
http://blingcode.blogspot.com/2010/12/cqrs-building-transactional-event-store.html
I'm new to NoSQL and I'm trying to figure out the best way to model my database. I'll be using ArangoDB in the project but I think this question also stands if using MongoDB.
The database will store 12 categories of products. Each category is expected to hold hundreds or thousands of products. Products will also be added / removed constantly.
There will be a number of common fields across all products, but each category will also have unique fields and different restrictions on the data.
Keep in mind that there are instances where I'd need to query all the categories at the same time, for example to search a product across all categories, and other instances where I'll only need to query one category.
Should I create a single collection "Product" and use a field to indicate the category, or create a separate collection for each category?
I've read many questions related to this idea (1 collection vs many) but I haven't been able to reach a conclusion, other than "it depends".
So my question is: in this specific use case, which option is better in terms of performance and speed, multiple collections or a single collection plus sharding?
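For concreteness, the single-collection option I have in mind would look roughly like this; the field names are just placeholders, not my actual schema:

// Common fields live at the top level; category-specific fields sit in a
// sub-document, with a discriminator field to tell categories apart.
db.products.insertOne({
  name: "Example product",
  price: 19.99,
  category: "electronics",
  attributes: { voltage: "230V", warrantyMonths: 24 }
});

// An index on the discriminator so single-category queries stay cheap.
db.products.createIndex({ category: 1 });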
Any help would be appreciated.
As you mentioned, you need to experiment with your data and use case; that will give you a better picture.
Some decisions are required up front:
Decide how many documents you will have in the near future. If you expect 1M documents within a year, then test with at least 3M.
Decide how many indexes you will need.
Decide the number of writes and reads per second.
Decide the size of the documents per category.
Decide the query patterns.
Some inputs based on those requirements:
If you have many writes and many indexes, a single monolithic collection will be slower because every index has to be updated on each write.
Since you have a different set of fields per category, you could try multiple collections.
There is $unionWith to combine data from multiple collections (see the sketch after these points), but do check its performance; it depends heavily on the decisions above. Note this open issue as well.
If you decide to go with a monolithic collection, defer sharding; implement it once you find that queries are getting slower.
If you have many writes to the same document, those writes are executed sequentially, which will slow down your reads as well.
Think about reclaiming disk space when a lot of data is deleted from the collections; multiple collections do better here.
The point that pushes me toward a monolithic collection is your need to query all the categories at the same time. You may need to add more categories later, and combining results from many collections into a single response would not perform as well.
Since you don't really have a join use case as in an RDBMS, you can go with a single monolithic collection from a modelling point of view; I doubt you even have a join key.
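As a rough illustration of the $unionWith option mentioned above, assuming one collection per category (the collection names and the search filter are made up):

// Search one category collection, then union in the others; with 12
// categories this means 11 $unionWith stages per cross-category search.
db.electronics.aggregate([
  { $match: { name: /phone/i } },
  { $unionWith: { coll: "clothing", pipeline: [ { $match: { name: /phone/i } } ] } },
  { $unionWith: { coll: "books",    pipeline: [ { $match: { name: /phone/i } } ] } }
]);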
If any of my points are incorrect, please let me know.
To SQL or to NoSQL?
I think that before you implement this in NoSQL, you should ask yourself why you are doing that. I quite like NoSQL but some data is definitely a better fit to that model than others.
The data you are describing is a classic case for a relational SQL DB. That's fine if it's a hobby project and you want to try NoSQL, but if this is for a production environment or client, you are likely making the situation more difficult for them.
Relational or non-relational?
You mention common fields across all products. If you wish to update these fields and have those updates reflected in all products, then you have relational data.
Background
It may be worth reading Sarah Mei's 2013 article about this. Skip to the section "How MongoDB Stores Data" and read from there. Warning: the article is called "Why You Should Never Use MongoDB" and is (perhaps intentionally) somewhat biased against Mongo, so it's important to read it through the correct lens. The message you should take from the article is that MongoDB is not a good fit for every kind of data.
Two strategies for handling relational data in Mongo (both sketched after this list):
every time you update one of these common fields, update every product's document with the new common field data. This is generally only OK if either updates are rare or documents are few; it breaks down when you have many of both.
use references and do joins.
In Mongo, joins typically happen code-side (multiple db calls)
In Arango (and in other graph dbs, as well as some key-value stores), the joins happen db-side (single db call)
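A rough mongosh sketch of both strategies; the collection names, field names, and the someProductId variable are invented for illustration:

// Strategy 1: duplicate the common data and fan the update out to every product.
db.products.updateMany(
  { brandName: "Acme" },
  { $set: { brandLogoUrl: "https://example.com/new-logo.png" } }
);

// Strategy 2: store a reference and join in application code (two round trips).
const product = db.products.findOne({ _id: someProductId });  // someProductId assumed known
const brand = db.brands.findOne({ _id: product.brandId });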
Decisions
These are important factors to consider when deciding which DB to use and how to model your data:
I've used MongoDB, ArangoDB and Neo4j.
Mongo definitely has the best tooling and it's easy to find help, but I don't believe it's a good fit in this case
Arango is quite pleasant to work with, but doesn't yet have the adoption that it deserves
I wouldn't recommend Neo4j to anyone looking for a NoSQL solution, as its nodes and relations only support flat properties (no nesting, so not real documents)
It may also be worth considering MariaDB or Postgres
I'm trying to log various contact activities into an 'Activities' collection. Email activities will be looked up / updated using the email address (contactId isn't known), but non-email activities will only be able to rely on the contactId as the key (as email isn't necessarily available in that data).
Commingling simplifies the database design, but isn't worth any significant performance hit, as scalability is a concern too.
Edit (for clarification, compared to the similar question "Mongodb: multiple collections or one big collection w/ index"):
Specifically, I'm trying to compare the performance impact of running 2 queries (one for each activity type, each based on a different key) against a single collection vs running the same 2 queries against 2 different collections. Rather than general data-modeling concerns, I'm interested in understanding whether query performance is significantly impacted when a good portion of the queried data has to be ignored by the db engine because it lacks the key.
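For concreteness, the single-collection variant would look roughly like this; the field names, values, and the someContactId variable are only placeholders:

// One index per lookup key. Marking them sparse keeps documents that lack
// the key out of that index entirely, so the other activity type
// doesn't bloat the index used by each query.
db.activities.createIndex({ email: 1 }, { sparse: true });
db.activities.createIndex({ contactId: 1 }, { sparse: true });

// Email activities are looked up / updated by address...
db.activities.find({ email: "jane@example.com" });
// ...while non-email activities rely on contactId as the key.
db.activities.find({ contactId: someContactId });  // someContactId assumed known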
Your thoughts appreciated!
Some good considerations were mentioned in the comments above... I'm erring on the side of safety by going with separate collections, since that will be easier to index and manage from a DBA perspective.
I have a highly normalized data model. Currently I'm using manual referencing: I store the _id and run sequential queries to fetch details from the deepest collection.
The referencing is one-way and the flow spans around 5-6 collections. For one particular use case, I have to query down to the deepest collection by looking up each subsequent "_id" in the higher-level collections. So technically I'm hitting the database every time I run a
db.collection_name.find({ _id: **** }).
My primary goal is to optimize reads without hugely affecting the atomicity of the other collections. I have read about denormalization, but it does not appeal to me because I want to keep the option of changing the cardinality down the line, and hence want to maintain separate collections.
I was initially thinking of using MapReduce to aggregate the data back into a collection built primarily for this particular use case, but even that does not sound great.
In a relational DB I would break the query into sub-queries and perform a join to get the intersecting data sets from the initial results. Since MongoDB does not support joins, I'm having a tough time figuring out an approach.
Please help if you have faced anything like this earlier or have any idea how to resolve it.
Denormalize your data.
MongoDB does not do JOINs the way a relational database does.
Apart from the aggregation pipeline's limited $lookup and $unionWith stages, there is no operation that gets data from more than one collection: not find(), not MapReduce. When you need to puzzle your data together from more than one collection, you generally have to do it at the application layer. For that reason you should organize your data in such a way that any common, performance-relevant query can be resolved by querying a single collection.
In order to do that you might have to create redundancies and transitive dependencies. This is normal in MongoDB.
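As a rough illustration of that kind of redundancy (the collections and fields are made up, not your actual model):

// Instead of walking orders -> customers -> regions with sequential finds,
// copy the handful of fields the hot query needs into the order itself.
db.orders.insertOne({
  _id: "order-1001",
  total: 250,
  customerId: "cust-7",       // reference kept for writes and integrity
  customerName: "Jane Doe",   // redundant copy, denormalized for reads
  customerRegion: "EMEA"      // transitive data pulled up from a third collection
});

// The performance-relevant read now touches a single collection.
db.orders.find({ customerRegion: "EMEA" });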
If this feels "dirty" to you, then you should either accept that your performance will be sub-optimal or use a different kind of database, such as a classic relational database or a graph database.
I am trying to pick MongoDB as my preferred database. I need help on the design of my table.
App background: an analytics app where contacts push their own events and related custom data. A contact can have many events, e.g. contact did this, did that, etc.
event_type, custom_data (json), epoch_time
e.g.:
event 1: event_type: page_visited, custom_data: {url: pricing, referrer: google}, current_time
event 2: event_type: video_watched, custom_data: {url: video_link}, current_time
event 3: event_type: paid, custom_data: {plan: lite, price: 35}
These events are custom and are defined by the user. Scalability is a concern.
These are the common use cases:
give me a list of users who have come to pricing page in the last 7 days
give me a list of users who watched the video and paid more than 50
give me a list of users who have visited pricing, watched video but NOT paid at least 20
What's the best way to design my table?
Is it a good idea to use embedded events in this case?
In Mongo they are called collections and not tables, since the data is not rows/columns :)
(1) I'd make an Event collection and a Users collection.
(2) I'd do one document per event, each carrying a userId (see the sketch after this list).
(3) If you need real-time data you will want an index on whatever you query by (i.e. never do a scan over the whole collection).
(4) If there are things which are needed for reporting only, I'd recommend making a reporting node (i.e. a different Mongo instance) and using replication to copy data to that instance. You can put additional indexes for reporting on that node; that way the additional indexes and any expensive queries won't affect production performance.
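A minimal sketch of points (2) and (3); the field names follow the example events above, and the exact shape of custom_data is up to you:

// One document per event, carrying the userId (point 2).
db.events.insertOne({
  userId: "user-123",                         // illustrative id
  event_type: "page_visited",
  custom_data: { url: "pricing", referrer: "google" },
  epoch_time: Math.floor(Date.now() / 1000)   // epoch seconds
});

// An index that supports "which users did X in the last N days" (point 3).
db.events.createIndex({ event_type: 1, epoch_time: -1 });

// "Give me a list of users who have come to the pricing page in the last 7 days":
const sevenDaysAgo = Math.floor(Date.now() / 1000) - 7 * 24 * 60 * 60;
db.events.distinct("userId", {
  event_type: "page_visited",
  "custom_data.url": "pricing",
  epoch_time: { $gte: sevenDaysAgo }
});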
Notes on sharding
If your events collection is going to become large, you may need to consider sharding, perhaps by userId. However, I'd treat that as a longer-term solution and not dive into it until you need it.
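Very roughly, sharding by userId would look like this once you get there (the database and collection names are placeholders):

// Run against a sharded cluster through mongos.
sh.enableSharding("analytics");
// A hashed key spreads each user's events evenly across chunks; a ranged
// key ({ userId: 1 }) would instead keep each user's events together.
sh.shardCollection("analytics.events", { userId: "hashed" });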
One thing to note is that Mongo currently (2.6) uses database-level write locking, which means only one write can execute at a time (it allows many concurrent reads). So if you want a write-heavy system AND have a lot of users, you will need to look into sharding at some point. However, in my experience so far, one primary node with a secondary (and a reporting node) is administratively easier to set up. We can currently handle around 10,000 operations per second with that setup.
However, we have had issues with spikes of users coming into the system. You'll want to make sure you have enough memory for your indexes, and SSDs are recommended too, as a surge in users can result in cache misses (i.e. an index not in memory), which forces reads from the hard disk.
One final note: there are a lot of NoSQL databases and they all have their pros and cons. I've personally found that high-write, low-read workloads with real-time analysis of lots of data are not really Mongo's strength, so it does depend on what you are doing. It sounds like you are still learning the fundamentals; it might be worth reading up on the available types to pick the right tool for the job.
I'm planning to port from Entity Framework 4.0 to MongoDB. What are the best practices that can minimize the impact? The project has social networking functionality and therefore maintains a complex relational database; as a result, performance would be a concern if we kept using a relational database.
We have used a domain layer (using POCOs), the repository pattern, and DTO mapping in the project.
Also, what are the advantages and disadvantages of this decision, and how does it affect my domain layer implementation?
If you want to 'minimize impact' you'll want to create a database in MongoDB that mirrors the one you have in SQL. Since there are no joins in the database, you'll need to do multiple reads to complete your queries. In itself that's not too bad because MongoDB is really fast, but obviously it has other issues (concurrency, etc.).
If, however, you want to move over fully to the NoSQL way of doing things, you'll likely not be able to 'minimize impact': you'll need to make substantial changes to the way you store content, the way you access it and the way you update it.
Storage: You'll likely create documents in your database that are denormalized and much closer to 'ViewModels' than 'Models'. You might for example store a count of child records in a parent record so that you can display it without having to load them or count them.
Access: You might end up using Map-Reduce for some queries to your database which is a very different mind-set from a traditional query.
Updates: In all likelihood your approach to updating will be different in order to take advantage of the many fine-grained MongoDB update features like $inc. Instead of posting back some large view model, applying it to your model and then updating the database, you might instead provide a much finer-grained Ajax call that updates a single value. Take a look at CQRS for more ideas on how to think about models for updates vs queries.
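For instance, the 'count of child records in a parent record' idea from the Storage point pairs naturally with $inc; a rough sketch in which the collections and field names are made up:

// The parent stores a denormalized counter so it can be displayed
// without loading or counting the child documents.
db.posts.insertOne({ _id: "post-1", title: "Hello", commentCount: 0 });

// Adding a child record and bumping the counter with a fine-grained update:
db.comments.insertOne({ postId: "post-1", text: "Nice post!" });
db.posts.updateOne({ _id: "post-1" }, { $inc: { commentCount: 1 } });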