Storing entities sharing common properties - mongodb

I am a newbie in MongoDB.
I am currently working on a project using the MEAN stack. I am taking a schema-less approach to storing data, i.e. using the native MongoDB driver rather than Mongoose.
From my research on the internet I found that a schema-less database performs better (in terms of speed) than a schema-based NoSQL setup, and hence decided on the schema-less approach.
Scenario I am facing:
I have a few entities that share common properties. Say entity A has name, location, and phone number, and entity B has those same properties plus some of its own.
Suppose my application will scale up to 1 billion users.
Questions for the scenario above:
1) Is it better to store the different types in separate collections, or to prefer an inheritance-style approach for storing those entities? If inheritance is preferred, how is it done in MongoDB? (A sketch of what I understand by that is at the end of this question.)
A few other general questions (considering large scale):
1) Is my schema-less approach the right choice?
2) Is it better to use an ODM tool, or to write code directly in my DAO layer that accesses the database without an object-mapping layer?
Many may feel this is out of scope for MongoDB itself; I am asking basically from a design and performance perspective.
So I need advice from experts who have really worked on large-scale application development using MongoDB.
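To make the inheritance option concrete: as far as I understand it, the usual pattern in MongoDB is a single collection with a type discriminator field. A minimal sketch with the native Node.js driver (all names are invented):

```typescript
import { MongoClient } from "mongodb";

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const parties = client.db("app").collection("parties");

  // Entity A: only the common properties, tagged with its type.
  await parties.insertOne({
    type: "A",
    name: "Alice",
    location: "Pune",
    phone: "+91-00000-00000",
  });

  // Entity B: the common properties plus its own extras.
  await parties.insertOne({
    type: "B",
    name: "Bob's Store",
    location: "Mumbai",
    phone: "+91-11111-11111",
    openingHours: "9-18", // a B-specific field
  });

  // Query one subtype; an index on { type: 1 } keeps this cheap at scale.
  const bs = await parties.find({ type: "B" }).toArray();
  console.log(bs);

  await client.close();
}

main().catch(console.error);
```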

Related

Can we use a Database per aggregate in DDD

I am designing an enterprise application, and a big question for me is whether it is OK to use one database per aggregate in Domain-Driven Design and apply CQRS to them.
For example, I have one domain that contains several Bounded Contexts, and each BC has two or more aggregates. Can I use a relational database like MSSQL and a NoSQL database like MongoDB for one or more aggregates?
Domain-Driven Design as a concept does not prescribe exact implementations, so it does not limit your choice of database. Go ahead with what you are trying if it suits your use case. The only caveat is to do some planning ahead for the design, which can change if required later. I would also recommend having event sourcing in the blend; it really helps with denormalization in the mix with CQRS.
The main concern is to make sure commands reflect state consistently across all the databases. For example, if you have one aggregate root whose entity and value objects are spread over multiple databases, make sure all the adapters behave similarly, so that the domain has no concern over how the data is stored (separated) across databases. If that is achieved neatly, the domain is free to contain only domain logic. I mean this in terms of how the interfaces are designed for the multiple databases: if the NoSQL interface exposes methods that speak in documents and the SQL interface shows that it works on tables, the domain will definitely take a hit switching between documents and tables. Abstract that logic away (maybe using a hexagonal architecture) and you are in a good position to work with multiple databases.
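To make "abstract that logic" concrete, here is a minimal sketch of such a port-and-adapter split in TypeScript (all names are invented), with one adapter shown for MongoDB; an SQL adapter would implement the same port over tables:

```typescript
import { Collection, Document } from "mongodb";

interface Order {
  id: string;
  lines: { sku: string; qty: number }[];
}

// Port: the domain sees only this interface, never documents or tables.
interface OrderRepository {
  byId(id: string): Promise<Order | null>;
  save(order: Order): Promise<void>;
}

// Adapter for MongoDB (hypothetical mapping: the whole aggregate is one document).
class MongoOrderRepository implements OrderRepository {
  constructor(private coll: Collection<Document>) {}

  async byId(id: string): Promise<Order | null> {
    const doc = await this.coll.findOne({ orderId: id });
    return doc ? { id: doc.orderId, lines: doc.lines } : null;
  }

  async save(order: Order): Promise<void> {
    await this.coll.replaceOne(
      { orderId: order.id },
      { orderId: order.id, lines: order.lines },
      { upsert: true }
    );
  }
}

// The application layer is wired against the port only and cannot tell
// which database (or how many) sits behind it.
async function shipOrder(repo: OrderRepository, id: string): Promise<void> {
  const order = await repo.byId(id);
  if (!order) throw new Error(`unknown order ${id}`);
  // ...pure domain logic here...
  await repo.save(order);
}
```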

Using a relational-database-like schema in NoSQL data

I am trying to create a Redis-based datastore with multiple fields that can be used to fetch an entity by value. The data would look something like this:
Person <entity>
    Name
    Address
    Purchases <another entity>
    Reviews <list of another entity>
The same structure will also exist in the other entities, since this is a many-to-many relationship between the different entities.
I am not considering traditional databases because I am looking for scalability and fault tolerance in this example.
What I am creating is the following:
A hash per entity id, mapped to the entity object.
Sets holding the associations, say one for Person to Purchases and another for Purchases to Person, and so on: one set for each side of a many-to-many relationship.
Since this design involves a lot of overhead, I suspect there is some flaw in keeping it this denormalized. As for the choice of a memory store over a database: I consider query response time to be of critical value. I am looking for suggestions on my design, as I am implementing this example to learn how to handle big-data challenges.
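Concretely, my current sketch looks roughly like this (using the node-redis client; key names are just examples):

```typescript
import { createClient } from "redis";

async function main() {
  const r = createClient();
  await r.connect();

  // A hash per entity id, holding the entity's fields.
  await r.hSet("person:1", { name: "Ann", address: "12 Main St" });
  await r.hSet("purchase:9", { item: "book", price: "12.50" });

  // One set per side of the many-to-many association.
  await r.sAdd("person:1:purchases", "purchase:9");
  await r.sAdd("purchase:9:persons", "person:1");

  // Fetch a person's purchases: read the set, then each hash.
  const ids = await r.sMembers("person:1:purchases");
  const purchases = await Promise.all(ids.map((id) => r.hGetAll(id)));
  console.log(purchases);

  await r.quit();
}

main().catch(console.error);
```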
I am looking for suggestions on my design, as I am implementing this example to learn how to handle big-data challenges.
On what basis do you believe your challenges are Big Data? How much data are we talking about? You need to ask yourself that question before discounting relational databases, which may well meet your needs.
I am not considering traditional databases because I am looking for scalability and fault tolerance in this example.
Redis and relational databases share the same scalability issue: they don't scale well horizontally unless you implement or adopt a sharding technique. Redis Cluster is meant to address this, but it is a work in progress and not yet production-ready; in the meantime you can use twemproxy. Developed by Twitter, it is a proxying solution that distributes keys across a cluster of Redis servers.
I am trying to create a Redis-based datastore with multiple fields that can be used to fetch an entity by value.
Redis is not designed to query based on values, period; read up on this and this to better understand why.

Neo4j instead of relational database

I am implementing a Sinatra/Rails-based web portal that might eventually have a few many-to-many relationships between tables/models. This is a one-man-team, part-time, but real-world app.
I discussed my entities with someone and was advised to try Neo4j. Coming from the real, 'non-sexy' enterprise world, my inclination is to use a relational DB until it stops scaling or becomes a nightmare because of sharding etc., and only then think about anything else.
HOWEVER,
I am using Postgres for the first time in this project, along with DataMapper, and it is taking me time to get going.
I am just trying out a few things and building more use cases, so I constantly have to update my schema (prototyping the idea and acting on beta feedback). I would not have to do this in Neo4j (apart from changing my queries).
It seems very easy to set up search using Neo4j, but Postgres can do full-text search as well.
Postgres recently announced support for JSON and JavaScript. I am wondering if I should just stick with PG and invest more time learning PG (which has a good community) instead of Neo4j.
I am looking for use cases where Neo4j is better, especially in the prototyping/initial phase of a project. I understand that if the website grows I might end up with multiple persistence technologies, like S3, relational (PG), Mongo, etc.
It would also be good to know how it plays with the Rails/Ruby ecosystem.
Update 1:
I got a lot of good answers, and it seems the right thing to do is stick with Postgres for now (especially since I deploy to Heroku).
However, the idea of being schema-less is tempting. Basically I am thinking of an approach where you don't define a data model until you have, say, 100-150 users and have figured out a good schema (the business use cases) for your product, while you are just demoing the concept and getting feedback from a limited set of signups. Then you settle on a schema and move to relational.
It would be nice to know if there is an easy-to-use schema-less persistence option (judged by ease of setup for a new user) that might give up, say, scaling in exchange.
Graph databases should be considered when you have a really chaotic data model. They exist to express highly complex relationships between entities: to do that, they store relationships at the data level, whereas an RDBMS takes a declarative approach. Storing relationships only makes sense if those relationships are very different from one another; otherwise you just end up duplicating data over and over, taking a lot of space for nothing.
To need that much variety in relationships, you would have to handle a huge amount of data. This is where graph databases shine, because instead of doing tons of joins they just pick a record and follow its relationships. To support this statement: you will notice that every use case on Neo4j's website deals with very complex data.
In brief, if what I said above does not apply to you, I think you should use another technology. If this is just about scaling, schemalessness, or getting a project started fast, look at other NoSQL solutions (more specifically, column- or document-oriented databases). Otherwise, stick with PostgreSQL. You could also, as you said, consider polyglot persistence.
Regarding your update, you might consider hstore. I think it fits your requirements: it is a PostgreSQL module, and it works on Heroku as well.
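For a quick illustration of what hstore buys you (schema-less key/value pairs inside an ordinary relational table), here is a sketch using node-postgres; the table and keys are invented:

```typescript
import { Client } from "pg";

async function main() {
  const db = new Client(); // connection settings come from PG* env vars
  await db.connect();

  // hstore is a contrib module: enable it once per database.
  await db.query("CREATE EXTENSION IF NOT EXISTS hstore");
  await db.query(
    "CREATE TABLE IF NOT EXISTS items (id serial PRIMARY KEY, attrs hstore)"
  );

  // Each row can carry a different set of keys, so no migration is needed
  // while the schema is still being figured out.
  await db.query("INSERT INTO items (attrs) VALUES ($1::hstore)", [
    '"color"=>"red", "size"=>"L"',
  ]);

  // Query by a key that only some rows have (? is hstore's "has key" operator).
  const { rows } = await db.query(
    "SELECT id, attrs -> 'color' AS color FROM items WHERE attrs ? 'color'"
  );
  console.log(rows);

  await db.end();
}

main().catch(console.error);
```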
I don't think I agree that you should only use a graph database when your data model is very complex; I'm sure they can handle simple data models and relationships as well.
If you have no prior experience with Neo4j or Postgres, then most likely both will take quite a bit of time to learn well.
Some things to keep in mind when picking:
It's not just about development against a database technology. You should consider deployment as well. How easy is it to deploy and scale Postgres/Neo4j?
Consider the community and tools around each technology. Is there a data mapper for Neo4j like there is for Postgres?
Consider that the data models are considerably different between the two. If you can already think relationally, then I'd probably stick with Postgres. If you go with Neo4j, you're going to spend several months making data-model mistakes.
Over time I've learned to keep it simple when I can. Postgres might be the boring choice compared to Neo4j, but boring doesn't keep you up at night. =)
Also, I never see anyone mention it, but you should look at Riak (http://basho.com/riak/) too. It's a document database that also provides relationships (links) between objects. It's not as mature as a graph database in that respect, but it can connect a few entities quickly.
The most appropriate choice depends on what problem you are trying to solve.
If you just have a few many-to-many tables, a relational database can be fine. In general there is better OR-mapper support for relational databases, as they are much older and have a standardized interface and row-column structure. They have also been refined for a long time, so they are stable and optimized for what they do.
A graph database is better if, for example, your problem is more about the connections between entities, especially if you need longer-range connections such as "detect cycles (of unspecified length)" or "what do friends-of-a-friend like". Things like that get unwieldy when restricted to SQL joins; a problem-specific language like Cypher, in the case of Neo4j, makes them much more concise. On the downside, mappers between graph DBs and objects exist, but not for every framework and language under the sun.
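For a flavor of that conciseness, here is a hypothetical friends-of-a-friend query in Cypher, run through the official neo4j-driver (labels and property names are invented):

```typescript
import neo4j from "neo4j-driver";

async function main() {
  const driver = neo4j.driver(
    "bolt://localhost:7687",
    neo4j.auth.basic("neo4j", "password")
  );
  const session = driver.session();
  try {
    // "What do friends-of-a-friend like?": two FRIEND hops plus a LIKES
    // edge, which would take three self-joins in SQL.
    const result = await session.run(
      `MATCH (me:Person {name: $name})-[:FRIEND]-()-[:FRIEND]-(foaf)-[:LIKES]->(thing)
       WHERE foaf <> me
       RETURN DISTINCT thing.name AS liked`,
      { name: "Ann" }
    );
    console.log(result.records.map((r) => r.get("liked")));
  } finally {
    await session.close();
    await driver.close();
  }
}

main().catch(console.error);
```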
I recently implemented a system prototype using Neo4j, and it was very useful to be able to talk about the structure and connections of our data and model them one-to-one in the data store. Adding new connections between data points was also easy, Neo4j being schema-less storage. We ended up switching to MongoDB because of trouble with write performance, but I don't think we could have finished the prototype as quickly with it.
Other NoSQL datastores (document-based, column-oriented, key-value) also cover specific use cases. Polyglot persistence is definitely something to look at, so keep your choice of backend reasonably separated from your business logic, to allow you to change the technology later if you learn something new.

Recommendations for a DBMS for an EAV system with mostly insert and select operations on a .NET stack

In the project I have been working on, the data modeling requirements are:
The system consists of N clients, each having N events. An event is an entity with a required name and a timestamp at which it occurs. Optionally, an event may carry N properties (key/value pairs) defining attributes that a client wants to store with a particular instance of that event.
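For illustration, a single event instance could look like this (field names are invented):

```typescript
// One event as a self-contained record: fixed required fields plus an
// open-ended bag of client-defined properties (the EAV part).
const event = {
  clientId: "client-42",                       // owning client
  name: "checkout_completed",                  // required
  timestamp: new Date("2013-05-01T10:15:00Z"), // required
  properties: {                                // optional, varies per client
    cartValue: 99.95,
    couponUsed: true,
  },
};
```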
The system will have mostly:
inserts – events are logged but never updated.
selects – reports/actions will be generated/executed based on any possible combination of events and properties.
The requirements reflect an entity-attribute-value (EAV) data model. After researching for some time, I feel that a relational DBMS like SQL Server might not be a good fit for this (correct me if I'm wrong!).
So I'm leaning toward a NoSQL option like MongoDB, CouchDB, RavenDB, etc.
My questions are:
Which of the available NoSQL solutions is the best fit, keeping in view my system's heavy insert/select needs?
I'm also open to the relational option if these requirements can be translated into a relational schema. I personally doubt it, but after reading performance-DBA answers (like the one referenced here) I got curious. However, I couldn't work out an optimal relational model for my requirements myself, perhaps because the system is rather generic.
thanks!
MongoDB really shines when you write unstructured data to it (like your events), and it can sustain a pretty heavy write load. However, it's not very good for reporting, at least not reporting in the traditional sense.
So, if your reporting needs are simple, you might get away with some simple map-reduce jobs. Otherwise you can export data to a relational database (nightly job, for example) and report the hell out of it.
Such a hybrid solution is pretty common (in my experience).
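For a sense of what "simple" can cover: a per-event-type count for one client is a few lines with MongoDB's aggregation pipeline (which newer MongoDB versions generally favor over classic map-reduce; field names follow the event sketch in the question):

```typescript
import { MongoClient } from "mongodb";

async function report() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const events = client.db("app").collection("events");

  // Count events per name for one client, computed inside the database.
  const counts = await events
    .aggregate([
      { $match: { clientId: "client-42" } },
      { $group: { _id: "$name", total: { $sum: 1 } } },
      { $sort: { total: -1 } },
    ])
    .toArray();

  console.log(counts);
  await client.close();
}

report().catch(console.error);
```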

Using Multiple Database Types to Model Data in a single application

Does it make sense to split the data model of an application across different database systems? For example, the application stores all user data and relationships in a graph database (ideal for storing relationships), while storing other data in a document database such as CouchDB or MongoDB. This would require the graph database to reference unique ids in the document database and vice versa.
Is this overcomplicating the data model and the application? Or is it putting both types of database system to their best use in order to scale the application?
It can definitely make sense, and it depends entirely on the requirements of your application: use other database systems for the things they are really good at.
Take full-text search as an example. Of course you can do more or less complex full-text searches with a relational database like MySQL, but there are systems such as Lucene/Solr that are optimized for exactly this and can search millions of documents fast. So you can use such a system for its special task (here: a nifty full-text search), have it return the identifiers of the matches, and then load the relational, structured data from the RDBMS.
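A sketch of that two-step flow (hypothetical Solr core, field, and table names; Node 18+ for the built-in fetch):

```typescript
import { Client } from "pg";

async function search(term: string) {
  // 1. Ask Solr for matching document ids via its standard /select endpoint.
  const resp = await fetch(
    "http://localhost:8983/solr/articles/select?q=body:" +
      encodeURIComponent(term) +
      "&fl=id&wt=json"
  );
  const ids: string[] = (await resp.json()).response.docs.map(
    (d: { id: string }) => d.id
  );

  // 2. Load the structured data for those ids from the RDBMS.
  const db = new Client();
  await db.connect();
  const { rows } = await db.query("SELECT * FROM articles WHERE id = ANY($1)", [
    ids,
  ]);
  await db.end();
  return rows;
}
```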
Or CouchDB: I use CouchDB in some projects as a caching system, in combination with a relational database. Of course I need to take care of consistency, but it is definitely worth the effort. It boosted performance in those projects a lot, decreasing server load from 2 to 0.2, for example. :)
Something like this is known as cross-store persistence. As you mentioned, you would store certain data in your relational database, social relationships in a graph DB, user-generated data (documents) in a document DB, and user-provided multimedia files (pictures, audio, video) in a blob store like S3.
It is mainly about looking at the use cases and making sure that, from wherever you need it, you can reach the "primary" or index key of each store (in both directions). You can encapsulate the actual lookup in your domain or DAO layer.
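A sketch of what such an encapsulated lookup might look like (hypothetical ids and labels, with a graph DB holding the relationships and MongoDB holding the documents):

```typescript
import { Collection, Document } from "mongodb";
import { Driver } from "neo4j-driver";

// One DAO method hides the fact that friendships live in the graph store
// while profiles live in the document store; ids are shared between both.
class UserDao {
  constructor(
    private graph: Driver,
    private profiles: Collection<Document>
  ) {}

  async friendsWithProfiles(userId: string) {
    const session = this.graph.session();
    try {
      // 1. Relationship query in the graph database...
      const res = await session.run(
        "MATCH (:User {id: $id})-[:FRIEND]-(f:User) RETURN f.id AS id",
        { id: userId }
      );
      const ids = res.records.map((r) => r.get("id") as string);

      // 2. ...then bulk-load the matching documents from MongoDB.
      return this.profiles.find({ userId: { $in: ids } }).toArray();
    } finally {
      await session.close();
    }
  }
}
```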
Some frameworks, like the Spring Data projects, provide an initial kind of cross-store persistence out of the box, mostly integrating JPA with a different NoSQL datastore. For instance, Spring Data Graph lets you store your entities in JPA, add social graphs or other highly interconnected data as a secondary concern, and leverage a graph DB for the typical traversals and other graph operations (e.g. ranking, suggestions).
Another term for this is polyglot persistence.
Here are two contrary positions on the question:
Pro:
"Contrary to that, I’m a big fan of polyglot persistence. This simply means using the right storage backend for each of your usecases. For example file storages, SQL, graph databases, data ware houses, in-memory databases, network caches, NoSQL. Today there are mostly two storages used, files and SQL databases. Both are not optimal for every usecase."
http://codemonkeyism.com/nosql-polyglott-persistence/
Con:
"I don’t think I need to say that I’m a proponent of polyglot persistence. And that I believe in Unix tools philosophy. But while adding more components to your system, you should realize that such a system complexity is “exploding” and so will operational costs grow too (nb: do you remember why Twitter started to into using Cassandra?) . Not to mention that the more components your system has the more attention and care must be invested figuring out critical aspects like overall system availability, latency, throughput, and consistency."
http://nosql.mypopescu.com/post/1529816758/why-redis-and-memcached-cassandra-lucene